Job Location: Pune
This role is for a developer with strong core application or systems programming skills in Scala and good exposure to Spark and the broader Hadoop ecosystem. Risk, Finance & Treasury covers a variety of existing systems and green-field projects.
Key Responsibilities
- Work with our clients to deliver value through high-quality software within an agile development lifecycle.
- Define and evolve the architecture of the components you are working on and contribute to architectural decisions at a department and bank-wide level.
- Bring deep industry knowledge to the squad to understand problems; leverage design patterns and automation to support a CI/CD pipeline to production; and support emergent design within the agreed domain target architecture.
- Contribute to wider domain goals, ensuring flow and consistent standards and approaches to software development while designing to a common shared framework.
- Apply robust engineering practices.
- Strong ability to articulate design ideas and collaborate across teams.
- Strong belief in continuous innovation – endeavour to incrementally improve existing systems through adoption of new tools and technologies.
Engineering Experience:
- Strong experience in designing and developing distributed computing applications using Scala/Java and Apache Spark.
- Experience in building streaming (near real-time) and/or batch-oriented systems that integrate with middleware such as Solace and Kafka.
- Strong experience in tuning Hadoop applications to improve throughput and latency.
- Strong experience in working with the file formats common in the Hadoop ecosystem – Avro, Parquet, ORC – and their appropriate use cases.
- Proven experience in delivering high-volume, distributed computing applications all the way through to production.
- Good working knowledge of JIRA and defect-management tools.
- Good working knowledge of modern, distributed version control systems such as Git.
- Strong belief in and understanding of agile testing practices such as test-driven development and behaviour-driven development.
- Working knowledge of testing tools relevant to the big data landscape and of CI/CD pipelines for automated integration and deployment of code.
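The test-driven development practice named above can be illustrated with a minimal, hypothetical sketch: the assertion is written first, then the smallest implementation that satisfies it. The posting names ScalaTest as the likely tool; plain `assert` and the invented `Vat.withVat` helper are used here only to keep the sketch dependency-free.

```scala
// Hypothetical TDD sketch (invented example, not a bank requirement):
// the failing assertion below is written first, then this minimal
// implementation is added to make it pass.
object Vat {
  // Add 20% VAT, rounded half-up to 2 decimal places.
  def withVat(net: BigDecimal): BigDecimal =
    (net * BigDecimal("1.20")).setScale(2, BigDecimal.RoundingMode.HALF_UP)
}

// The "test first" step: this assertion drove the implementation above.
assert(Vat.withVat(BigDecimal("10.00")) == BigDecimal("12.00"))
```

In ScalaTest the same check would live in a spec class (e.g. an `AnyFunSuite`), with coverage tracked by a separate tool, but the red-green rhythm is the same.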
Technical Experience:
- Strong programming experience in Scala is preferred; expert-level Java programmers with a basic understanding of functional programming will also be considered.
- Strong knowledge of Spark APIs for processing large volumes of data.
- Strong knowledge of Spark SQL, DAGs, the Catalyst optimizer, and join optimizations.
- Experience in application-level compute resource management – pools, YARN queues.
- Working knowledge of query engines such as Impala and Hive.
- Working knowledge of event streaming platforms such as Apache Kafka and brokers such as Solace or RabbitMQ.
- Working knowledge of unit testing and code-coverage monitoring with libraries such as ScalaTest or equivalent.
- Foundational knowledge of all key Hadoop infrastructure components and their deployment topology.
- Continuous integration and code-quality management using TeamCity or Jenkins and SonarQube.
- Experience working with an agile delivery approach – ideally Scrum or LeSS.
- Strong experience in software development processes, models, lifecycles and methodologies.
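The functional-programming and Spark API skills listed above share one shape: pipelines of immutable transformations. As a rough, hypothetical sketch (the `WordStats.topWords` helper is invented for illustration), the same `flatMap`/`filter`/`groupBy` pipeline on plain Scala collections mirrors what a candidate would write against Spark's Dataset API, where each step would instead run distributed across a cluster:

```scala
// Hypothetical sketch: a word-count pipeline on plain Scala collections.
// The same transformation shape carries over to Spark's Dataset API.
object WordStats {
  def topWords(lines: Seq[String], n: Int): Seq[(String, Int)] =
    lines
      .flatMap(_.toLowerCase.split("\\W+")) // tokenize each line
      .filter(_.nonEmpty)                   // drop empty tokens
      .groupBy(identity)                    // word -> all occurrences
      .map { case (w, ws) => (w, ws.size) } // word -> count
      .toSeq
      .sortBy { case (w, c) => (-c, w) }    // most frequent first, then alphabetical
      .take(n)
}
```

In Spark the `groupBy`/`map` pair would typically become `groupBy`/`count` on a `Dataset`, with Catalyst planning the shuffle and join stages that the tuning bullets above refer to.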
Desirable skills
- Experience in building real-time event processing software using Spark Streaming, Kafka or Flume.
- Experience in migrating on-premises, managed Hadoop applications to a public cloud platform such as GCP.
- Experience in data analytics using Python libraries.
- Experience in open-source container orchestration systems such as Kubernetes.
- Banking experience, particularly in Risk / Finance / Treasury.

