Rudhra Info Solutions | Hiring | AWS Data Engineer – Python/Spark/Redshift | BigDataKB.com | 2022-09-27

Job Location: Chennai, Bangalore/Bengaluru

– The AWS tech stack required for the project is listed below.

– Primary focus is on AWS Glue, S3, Python, Spark, Redshift, EMR, Athena, and Lambda.

– Additional skills that would add value are AWS Step Functions, a NoSQL database such as DynamoDB, and AWS Database Migration Service, in that order of priority.

– The data engineer should have 3 to 8 years of relevant experience with the AWS services mentioned above. Experience in data security or governance and in performance tuning is an added benefit.

– The focus is exclusively on AWS services and the AWS tech stack.

– Experience building data pipelines and applications that stream and process datasets at low latency.
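Low-latency stream processing usually means consuming records in small micro-batches rather than waiting for a full dataset. A minimal pure-Python sketch of that idea (the `micro_batches` helper and the batch size are illustrative assumptions, not part of the posting):

```python
from typing import Dict, Iterable, Iterator, List

def micro_batches(records: Iterable[Dict], batch_size: int = 3) -> Iterator[List[Dict]]:
    """Group an incoming record stream into small batches for low-latency processing."""
    batch: List[Dict] = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch so no records are dropped
        yield batch

# Process a toy event stream batch by batch
events = [{"id": i} for i in range(7)]
batches = list(micro_batches(events, batch_size=3))
print([len(b) for b in batches])  # → [3, 3, 1]
```

In a real pipeline the same pattern appears as trigger intervals in Spark Structured Streaming or batch windows when reading from a Kinesis/Kafka source.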

– Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving the discoverability of data.
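A data-quality check often boils down to counting rows that violate simple rules, such as missing required fields. A small sketch, assuming hypothetical field names (`order_id`, `customer`) purely for illustration:

```python
from typing import Dict, List

def quality_report(rows: List[Dict], required: List[str]) -> Dict[str, int]:
    """Count, per required field, how many rows are missing it or have it set to null."""
    report = {field: 0 for field in required}
    for row in rows:
        for field in required:
            if row.get(field) is None:
                report[field] += 1
    return report

rows = [
    {"order_id": 1, "customer": "a"},
    {"order_id": 2, "customer": None},
    {"order_id": None, "customer": "c"},
]
print(quality_report(rows, ["order_id", "customer"]))  # → {'order_id': 1, 'customer': 1}
```

The same rule-based counts map directly onto Glue Data Quality rulesets or column-level null checks in Spark.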

– Sound knowledge of distributed systems and data architecture (e.g. the Lambda architecture): designing and implementing batch and stream data processing pipelines, and knowing how to optimize the distribution, partitioning, and MPP layout of high-level data structures.
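The distribution/partitioning point can be sketched in a few lines: an MPP store such as Redshift spreads rows across slices by hashing a distribution key, so choosing a high-cardinality key yields even load. A minimal pure-Python illustration (the partition count and `customer_id` key are assumptions for the example):

```python
import hashlib
from collections import defaultdict
from typing import Dict, List

def partition(rows: List[Dict], key: str, num_partitions: int = 4) -> Dict[int, List[Dict]]:
    """Assign each row to a partition by hashing its distribution key,
    mimicking how an MPP engine (e.g. a Redshift DISTKEY) spreads rows across slices."""
    parts: Dict[int, List[Dict]] = defaultdict(list)
    for row in rows:
        digest = hashlib.md5(str(row[key]).encode()).hexdigest()
        parts[int(digest, 16) % num_partitions].append(row)
    return dict(parts)

rows = [{"customer_id": i} for i in range(100)]
parts = partition(rows, "customer_id")
assert sum(len(v) for v in parts.values()) == 100  # every row lands in exactly one partition
```

A low-cardinality key (say, a boolean flag) would collapse the rows onto one or two partitions, which is exactly the skew this requirement asks candidates to reason about.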

– Knowledge of engineering and operational excellence using standard methodologies.

Apply Here

