Job Location: Kolkata
JD: Sr. Data Engineer
Specifics
1. Minimum 8 years of experience in Data Engineering.
2. Minimum 8 years of experience in at least one programming language (e.g., Python, Scala, Java).
3. Expert in writing SQL queries.
4. Good knowledge of NoSQL databases.
5. Experience with Big Data technologies (any of Hadoop, Hive, Spark, or Distributed Real-time Systems would be beneficial).
6. Understanding of AWS.
7. Good communication skills.
We are looking for a highly skilled Data Engineer who can work with the Developers on the Data Warehouse team to build a next-generation data pipeline using Big Data technologies, enabling faster analytical processing and serving both batch and near-real-time analytics needs.
Note: The expectation for this position is a Data Engineer who is well versed in one or more programming languages used in Data Engineering (for example Python or Scala), as well as SQL, to build next-generation data pipelines using Big Data technologies, programmatic frameworks, and cloud environments. While knowledge of SQL and declarative frameworks (for example SSIS or Informatica) could help, the expectation for this position is an engineer who can code.
Essential Responsibilities:
- Design, build, and launch extremely efficient and reliable data pipelines to move data from a variety of sources (SQL, NoSQL, streams, etc.) to targets (Data Warehouses, Data Lakes, etc.).
- Working with DW Developers and DW subject-area experts, architect pipelines that deliver the data models required for both batch and real-time analytics.
- Develop data pipelines that can scale to massive datasets and large clusters of machines.
- Leverage expert coding skills in several languages (for example Python, Scala, Java) and modern technologies to build pipelines that are fault tolerant, catch data quality issues, are easy to troubleshoot, and have auditing capabilities.
- Design and develop new systems and tools to enable end users to consume and understand data faster.
- Work across multiple teams in high-visibility roles and own the solution end-to-end.
Job Requirements:
- At least 8 years of demonstrable experience as a Data Engineer building data pipelines.
- At least 8 years of hands-on experience in one or more programming languages (for example Python, Scala, Java), applying those skills to large-scale data and analytics processing that interfaces with Data Warehouses and Data Lakes.
- Experience in one or more Big Data technologies (distributed computing platforms such as Hadoop and Spark, NoSQL databases, Distributed Real-time Systems, BigQuery, Apache Hive).
- Experience in one or more AWS technologies (for example AWS EMR, AWS S3, AWS Lambda).
- Proficiency in writing complex SQL queries.
- Proficient in working with NoSQL databases.
- Analytical mind with a problem-solving aptitude.
- Proven abilities to take initiative and provide innovative solutions.
- Good written and verbal communication skills; an effective communicator and a team player with strong empathy for our internal as well as external customers.
- Self-learner with an aptitude for quickly picking up new technologies.
Preferred Skills and Abilities:
- Experience in building data pipelines using programmatic frameworks (for example Apache Airflow or Apache Spark) in addition to declarative frameworks.
- Experience in reviewing existing data pipelines, identifying bottlenecks, and making noticeable architectural improvements to them.
- Experience in building real-time analytics, leveraging streaming technologies (for example Apache Kafka).
- Experience in Object-Oriented software development.
- Ability to identify and adopt Open-Source Libraries and integrate them with existing systems, based on requirements.
Note: This position
- Does not require travel.
- Requires frequent computer use.

