Clairvoyant India Pvt. Ltd | Hiring | Data Engineer (Python, Spark, AWS) | BigDataKB.com | 08-04-2022

Job Location: Pune

 

Must-Have:

Tech-savvy engineer – willing and able to learn new skills and track industry trends

3 to 8 years of total experience, with strong data engineering experience, especially in an open-source, data-intensive, distributed environment, and a minimum of 2 years of experience with Big Data technologies like Spark, PySpark, Airflow, Hive, etc.

Good to have: experience migrating data to the AWS or GCP cloud

Programming background – Python preferred

Experience in building ETL pipelines

Experience in building data pipelines in AWS (S3, EC2, EMR, Athena, Redshift) or in GCP (BigQuery, Dataproc, Data Fusion, Dataflow)

Self-starter with a resourceful personality and the ability to handle pressure situations

Exposure to Scrum and Agile Development Best Practices

Experience working with geographically distributed teams

Role Responsibilities:

Build data and ETL pipelines in AWS or GCP

Support migration of data to the cloud using Big Data technologies like Spark, Hive, and Python
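The pipeline and migration responsibilities above boil down to the classic extract-transform-load pattern. A minimal sketch in plain Python (stdlib only; all names and data here are hypothetical illustrations — a production pipeline for this role would use Spark/PySpark against cloud storage such as S3 or GCS):

```python
# Hypothetical minimal ETL sketch; a real pipeline would use PySpark
# DataFrames and write to a warehouse (e.g. Redshift or BigQuery).
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and drop malformed records."""
    out = []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
        except (KeyError, ValueError):
            continue  # skip rows with a missing or non-numeric amount
        out.append(row)
    return out

def load(rows: list[dict]) -> dict:
    """Load: aggregate per customer (stands in for a warehouse write)."""
    totals: dict = {}
    for row in rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals

raw = "customer,amount\nacme,10.5\nacme,4.5\nglobex,7\nbad,notanumber\n"
print(load(transform(extract(raw))))  # → {'acme': 15.0, 'globex': 7.0}
```

In Spark the same three stages map onto `spark.read` (extract), DataFrame transformations (transform), and `DataFrame.write` (load), with Airflow typically orchestrating the steps.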

Interact with customers on a daily basis to ensure smooth engagement

Responsible for timely and quality deliveries.

Fulfill organizational responsibilities – sharing knowledge and experience with other groups in the organization, conducting various technical sessions and trainings.

Key Skills: Spark, Hive, Python, AWS

Education: BE/B.Tech from a reputed

Apply Here


๐Ÿ” Explore All Related ITSM Jobs Below! ๐Ÿš€ โœ… Select your preferred “Job Category” in the Job Category Filter ๐ŸŽฏ ๐Ÿ”Ž Hit “Search” to find matching jobs ๐Ÿ”ฅ โž• Click the “+” icon that appears just before the company name to see the Job Detail & Apply Link ๐Ÿ“๐Ÿ’ผ
