Job Location: Nagpur
Responsibilities:
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Create and maintain optimal data pipelines.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Keep our data separated and secure.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Required Skills:
- 4+ years of experience developing Big Data applications using Spark, Hive, Sqoop, Kafka, or MapReduce (hands-on experience with at least one of these tools is mandatory).
- Experience with object-oriented/functional scripting languages such as Python and Scala.
- At least 2 years of experience creating IAM roles and policies, and securing AWS resources such as EMR clusters, EC2 instances, and S3 buckets.
- Strong experience in Python, with the ability to orchestrate AWS pipelines using the AWS SDK for Python (boto3); a minimal sketch of this kind of orchestration follows this list.
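For reference, here is a minimal sketch of the kind of boto3 orchestration described above: launching a transient EMR cluster that runs one Spark step and writes logs to S3. The bucket names, script path, region, and cluster sizing are illustrative placeholders, and the IAM roles shown are the EMR defaults; a real pipeline would use the role and policy setup described in the previous requirement.

```python
import boto3

# Hypothetical sketch: region, bucket names, and script paths are placeholders.
emr = boto3.client("emr", region_name="ap-south-1")

response = emr.run_job_flow(
    Name="daily-etl",                          # hypothetical pipeline name
    ReleaseLabel="emr-6.9.0",
    Applications=[{"Name": "Spark"}],
    LogUri="s3://example-logs-bucket/emr/",    # placeholder log bucket
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate once the step finishes
    },
    Steps=[
        {
            "Name": "spark-etl-step",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://example-code-bucket/etl_job.py",  # placeholder Spark job
                ],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",  # default EC2 instance profile
    ServiceRole="EMR_DefaultRole",      # default EMR service role
    VisibleToAllUsers=True,
)
print("Launched cluster:", response["JobFlowId"])
```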
Desired Candidate Profile
Experience: 4+ years
Qualification: BE/BTech
Location: Pune / Nagpur / Bangalore