Job Location: Bangalore/Bengaluru
Roles and Responsibilities
Enterprise Technology & Services (ETS) delivers shared technology services for the Firm supporting all business applications and end users. ETS provides capabilities for all stages of the Firm’s software development lifecycle, enabling productive coding, functional and integration testing, application releases, and ongoing monitoring and support for over 3,000 production applications.
ETS also delivers all workplace technologies (desktop, mobile, voice, video, productivity, intranet/internet) in integrated configurations that boost the personal productivity of our employees. Application and end user services are delivered on a scalable, secure, and reliable infrastructure composed of seamlessly integrated datacenter, network, compute, cloud, storage, and database services.
Organizational Description
The Data Engineering & Analytics group within Core Infrastructure provides the technologies and platforms required to model, provision, transform, analyze, report, visualize, store, and protect enterprise data on-premises and in the public cloud. The team is responsible for the delivery and operation of these products.
Job Description
We are seeking skilled, enthusiastic, and experienced engineers to join the team responsible for the delivery and operation of big data, caching, and messaging products such as MongoDB, Kafka, Azure Databricks, Redis, and Snowflake. The role also involves integrating internal systems with the public cloud, with a heavy focus on automation and DevOps. Other responsibilities include troubleshooting and helping development teams with best practices, onboarding, monitoring, optimization, and tuning. After a period of onboarding and training, new team members will take ownership of these products, working with global counterparts and customers to prioritize and execute enhancements, extensions, and remediation of critical components of our infrastructure.
Desired Candidate Profile
Required Skills:
– 5+ years of experience in data engineering, with an emphasis on automation, performance, query optimization, and troubleshooting
– 3+ years of hands-on experience with Java/Python/Scala and Linux is a must
– 2+ years of experience designing and building solutions using cloud, big data, and messaging products such as Azure, Azure SQL, Databricks/Spark, MongoDB, and Kafka
– Good knowledge of network and security protocols such as TCP/IP, HTTP(S), TLS, DNS, and OIDC/OAuth, as well as proxies and load balancers
– Experience working with Docker and Kubernetes
– Strong fundamentals in distributed system design, development, and deployment using Agile/DevOps practices
– Experience with Agile development methodology and CI/CD
– Experience with tools such as Git, Jira, and Bitbucket
– A self-starter with the ability to work effectively in teams
– Good communication skills and a strong track record of teamwork
Desired Skills:
– Deep knowledge of JVM internals
– Knowledge of Ansible and/or Terraform
– Experience with system performance analysis and tuning
– Contributor or committer to open-source projects