Job Location: Chennai
- As a Data Engineer – Cloud Native, you are expected to be functionally knowledgeable, with expertise in areas such as data integration and data extraction, ingestion, and loading (ETL) into SaaS platforms (Snowflake/Panoply) and hyperscaler PaaS services on AWS, Azure, and GCP. You should have good knowledge of database technologies (NoSQL) and hands-on experience in either data management or open-source programming.
- As a Data Engineer in our Data and AI team, you will provide support in data management for one or more projects; assist in finalizing scope and support sizing of work; and participate in Proof of Concept development.
- You will support data engineering solutions designed by Architects/Senior Data Engineers for the business problem at hand, integrate data with third-party services, and design and develop complex data pipelines for clients' business needs.
- You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in pre-sales and various pursuits focused on our clients' business needs.
- You will also contribute in a variety of roles spanning thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation.
- You will sharpen your leading-edge solution, consultative, and business skills through the diversity of work across multiple industry domains.
Responsibilities:
- Develop and maintain complex data pipelines, ETL processes, data models, and standards for various data integration and data warehousing projects, from sources to sinks, on SaaS/PaaS platforms – Snowflake, Databricks, Azure Synapse, Redshift, BigQuery, etc.
- Develop scalable, secure, and optimized data transformation pipelines and integrate them with downstream sinks
- Support technical solutions from a data-flow design and architecture perspective, ensure the right direction, and propose resolutions to potential data pipeline problems.
- Participate in developing proofs of concept (PoCs) of key technology components and presenting them to project stakeholders
- Develop scalable, reusable frameworks for ingesting geospatial data sets
- Develop connectors to extract data from sources, and use event/streaming services to persist and process data into sinks
- Collaborate with other members of the project team (Architects, Senior Data Engineers) to support delivery of additional project components (such as API interfaces, search, and visualization)
- Participate in evaluating data integration tools in the market and creating points of view (PoVs) on their performance against customer requirements
- Work within an Agile delivery/DevOps methodology to deliver proofs of concept and production implementations in iterative sprints.
Data Engineer with 7 years of experience and the following skills:
- Must have 2 years of experience working as a Data Engineer on cloud transformation projects involving data management solutions – AWS, Azure, GCP, Snowflake, Databricks, etc.
- Must have hands-on experience with at least one end-to-end implementation of a data lake/warehouse project using PaaS and SaaS – such as Snowflake, Databricks, Redshift, Synapse, BigQuery, etc. – and two on-premise data warehouse/data lake implementations
- Experience in data pipeline development, ETL/ELT, implementing complex stored procedures, and applying standard DWH and ETL concepts
- Experience in setting up resource monitors, RBAC controls, warehouse sizing, query performance tuning, IAM policies, and cloud networking (VPC, Virtual Network, etc.)
- Experience in data migration from on-premise RDBMSs to cloud data warehouses
- Good understanding of relational as well as NoSQL data stores, and of modelling methods and approaches (star and snowflake schemas, dimensional modelling)
- Hands-on programming experience in Python and PySpark for data integration projects
- Good experience with cloud data storage (Blob, S3, object stores, MinIO), data pipeline services, data integration services, and data visualization
- Ability to help resolve an extensive range of complicated data pipeline problems, both proactively and as issues surface
Required Skills:
- AWS / Azure / GCP (any one of the three) native data engineering/architecture; Apache Kafka, ELK, Grafana
- Snowflake / Databricks / Redshift
- Hadoop
- Knowledge of NoSQL/key-value stores and Data Vault 2.0 concepts
- Cloud Certification – Data Engineering
- Experience in Agile development methodologies
- Understanding of cloud networking, security, data security, data access controls, and related design aspects