Podium Systems | Hiring | Senior Data Engineer | BigDataKB.com | 2022-09-16


Job Location: Pune, Gurgaon/Gurugram, Bangalore/Bengaluru

The key accountabilities for this role include, but are not limited to:

  • Lead and mentor a team of Data Engineers building complex data pipelines for a variety of consumers.
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Ensure that the deliverables for each Sprint are clearly understood by the Agile team(s).
  • Ensure that the Agile team(s) deliver working software of sufficient quality to ship to clients at the end of each development sprint.
  • Ensure that source control repositories are appropriately managed and that the Agile team receives sufficient resourcing to complete its objectives.

Desired Candidate Profile

• Strong experience in Azure, ADF, PySpark, Scala, Databricks, and SQL.

• Experience with ETLs, JSON, and Hop or other ETL orchestration tools.

• Experience working with Event Hubs and streaming data.

• Understanding of ML models and experience building ML pipelines with MLflow and Airflow.

• Experience with big data tools: Hadoop, Spark, Kafka, etc.

• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

• Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

• Experience with stream-processing systems: Storm, Spark Streaming, etc.

• Understanding of graph data; Neo4j is a plus.

• Strong knowledge of Azure-based services.

• Strong understanding of RDBMS data structures, Azure Tables, Blob Storage, and other data sources.

• Experience with test-driven development.

• Experience with Power BI or other reporting tools.

• Understanding of Jenkins and CI/CD processes using ADF and Databricks preferred.

Perks and Benefits

Apply Here


