Job Location: Chennai
Roles and Responsibilities
1. Develop and deploy batch and streaming data pipelines in a cloud ecosystem.
2. Automate manual processes and tune the performance of existing pipelines.
3. Load and process data from multiple source locations into the data lake, data mart, and data warehouse while keeping cost, performance, and security in mind.
4. Automate and develop analytics tools, and occasionally be involved in visualization setup processes.
5. Develop processes for migrating on-premises data to the cloud environment.
Desired Candidate Profile
1. 4+ years of experience in IT programming and application/product development.
2. At least 2 years' experience working in any of the big data cloud ecosystems, such as AWS, GCP, PCF, or Azure.
3. Strong in SQL/RDBMS and at least one programming language, such as Java, Python, or Scala.
4. Experience with one or more big data tools, such as Hadoop, Kafka, Spark, or Beam.
5. Good experience with AWS services like EC2, EMR, and Redshift, or equivalent GCP services like Compute Engine, BigQuery, and Dataflow.
6. Good knowledge of data structures.
7. Experience working with multiple operating systems (Windows, Linux, and Unix) and good scripting knowledge, including shell/Bash.
8. Basic knowledge of web and server-side frameworks, such as AngularJS, ReactJS, Node.js, or Django.
9. Good knowledge of NoSQL database concepts.
10. Very good communication and teamwork skills.