Job Location: India
Section: D&AI – New Product Introduction
Job posted on: Oct 13, 2022
Employee Type: White Collar
Experience range: 3 – 5 years
Group Company: TVS Motor Company
Designation: Data Engineer II (Data Engineer II_D&AI – New Product Introduction), Senior Data Engineer (Senior Data Engineer_D&AI – DM&G, Adj Biz, COE&Corp – Adj Biz)
Office Location: Electronic City (Territory)
Position description:
Responsibilities:
- Ingest data from a variety of sources, such as CSVs, PDFs, documents, storage containers, streams and databases, and process it to derive meaningful insights.
- Develop programs in Python notebooks/.py files and SQL for data extraction, cleaning, transformation and processing.
- Develop REST APIs for sharing data.
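The ingest → clean → transform flow described above can be sketched in a few lines. This is a minimal, stdlib-only illustration; the column names (`vehicle_id`, `units_sold`) are invented for the example and not a real TVS schema.

```python
import csv
import io
import json

# Illustrative raw CSV input; in the role this would come from storage
# containers, streams or databases rather than an inline string.
RAW_CSV = """vehicle_id,units_sold
V100, 12
V200,
V100,8
"""

def ingest(raw: str) -> list[dict]:
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def clean(rows: list[dict]) -> list[dict]:
    """Drop rows with missing values and strip stray whitespace."""
    out = []
    for row in rows:
        if all((v or "").strip() for v in row.values()):
            out.append({k: v.strip() for k, v in row.items()})
    return out

def transform(rows: list[dict]) -> dict:
    """Aggregate units sold per vehicle_id."""
    totals: dict[str, int] = {}
    for row in rows:
        totals[row["vehicle_id"]] = totals.get(row["vehicle_id"], 0) + int(row["units_sold"])
    return totals

totals = transform(clean(ingest(RAW_CSV)))
# The aggregated dict is the kind of payload a REST endpoint would serve.
print(json.dumps(totals))
```

At production scale the same three stages would typically run on Databricks/Spark rather than in-memory Python, but the shape of the pipeline is the same.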
Required skills and experience:
- Overall 4–8 years of experience in Data Engineering.
- Strong understanding of and familiarity with Azure / Big Data ecosystem components.
- In-depth knowledge and understanding of Apache Kafka for handling streaming data.
- In-depth knowledge and hands-on experience with Azure Databricks / Spark, Spark Streaming and core Spark components such as GraphX and MLlib.
- Design and develop data ingestion programs to process large data sets in batch mode using Python, Databricks and Spark.
- Strong understanding of underlying Azure architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Python and OOP concepts.
- Hands-on experience with orchestration and ingestion tools such as Airflow / NiFi / ADF.
- Experience working with SQL (MySQL/PostgreSQL/SQL Server), NoSQL (Cassandra/MongoDB/Cosmos DB) and graph databases (Neo4j).
- Advanced working SQL knowledge to create complex queries.
- Hands-on experience with visualization tools such as Grafana / Kibana / Power BI.
- Good understanding of Linux commands, shell scripting and monitoring of Linux-based production machines. Experience working with Azure Cloud services (IaaS, PaaS).
- Hands-on experience with Microsoft Azure services such as ADLS, Event Hubs, scale sets, Load Balancers, Azure Functions, Logic Apps, Azure Data Factory, etc.
- Experience with common Big Data file formats such as Avro, ORC, Parquet and nested JSON.
- Experience with data migration and deployment from on-prem to cloud environments and vice versa.
- Individual contributor with strong problem-solving skills.
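The streaming requirement above (Kafka plus Spark Streaming) boils down to consuming an event stream and aggregating it in micro-batches. The toy sketch below simulates that pattern in plain Python so it runs anywhere; in the actual role this would be Spark Structured Streaming reading from Kafka, and the event field (`model`) and values are made up for illustration.

```python
from collections import Counter
from itertools import islice
from typing import Iterable, Iterator

def micro_batches(events: Iterable[dict], size: int) -> Iterator[list[dict]]:
    """Yield fixed-size micro-batches from an event stream,
    analogous to Spark's micro-batch triggers."""
    it = iter(events)
    while batch := list(islice(it, size)):
        yield batch

def aggregate(batch: list[dict]) -> Counter:
    """Count events per model within one micro-batch."""
    return Counter(e["model"] for e in batch)

# Simulated event stream; a real pipeline would consume a Kafka topic.
stream = [{"model": m} for m in
          ["apache", "jupiter", "apache", "ntorq", "jupiter", "apache"]]

running = Counter()
for batch in micro_batches(stream, size=2):
    running += aggregate(batch)  # merge batch results into running totals

print(dict(running))
```

The per-batch aggregate plus running-state merge mirrors how stateful streaming aggregations accumulate results across triggers.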
Educational qualifications preferred:
- Category: Bachelor’s Degree, Master’s Degree
- Field specialization: Computer and Information Sciences and Support Services, Computer Engineering, Computer Programming, Specific Applications, Information Technology, Information and Communication Technology
- Degree: Bachelor of Engineering – BE, Master of Computer Applications – MCA, Master of Engineering – MEng, Bachelor of Computer Applications – BCA