Rudhra Info Solutions | Hiring | AWS Data Engineer – Python/Spark/Redshift | BigDataKB.com | 2022-09-27

Job Location: Chennai, Bangalore/Bengaluru

– The AWS tech stack required for the project is listed in the table below

– Primary focus is on AWS Glue, S3, Python, Spark, Redshift, EMR, Athena, and Lambda.

– Additional skills that would add value are AWS Step Functions, a NoSQL database such as DynamoDB, and AWS Database Migration Service, in that order of priority.

– The data engineer should have 3 to 8 years of relevant experience with the AWS services mentioned above. Experience in data security, data governance, or performance improvement is an added benefit.

– The focus is exclusively on AWS services and the AWS tech stack.

– Experience building data pipelines and applications to stream and process datasets at low latency.

– Demonstrate efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.

– Sound knowledge of distributed systems and data architecture (e.g., the lambda architecture): design and implement batch and stream data processing pipelines, and know how to optimize the distribution, partitioning, and MPP handling of large data structures.
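The partitioning and MPP point above can be illustrated with a minimal sketch: hash-partitioning records on a distribution key is the same idea Spark uses for shuffles and Redshift uses for KEY distribution across slices. All names here (`events`, `user_id`) are illustrative, not from the posting.

```python
# Hypothetical sketch of hash partitioning for MPP-style distribution.
# A stable hash of a distribution key maps each record to a partition,
# so equal keys always land on the same worker/slice.
import hashlib
from collections import defaultdict

def partition_key(key: str, num_partitions: int) -> int:
    """Map a distribution key to a partition via a stable hash."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

def partition_records(records, key_field, num_partitions):
    """Bucket records the way an MPP engine spreads rows over slices."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[partition_key(rec[key_field], num_partitions)].append(rec)
    return dict(partitions)

events = [{"user_id": f"u{i}", "value": i} for i in range(100)]
parts = partition_records(events, "user_id", 4)
# A high-cardinality, evenly spread key keeps partitions roughly
# balanced; a skewed key would bottleneck a single worker.
sizes = {p: len(rows) for p, rows in parts.items()}
```

The practical takeaway is the design choice: picking a high-cardinality distribution key avoids skew, which is the usual cause of one slow Redshift slice or Spark task dominating a job.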

– Knowledge of Engineering and Operational Excellence using standard methodologies.

Apply Here


๐Ÿ” Explore All Related ITSM Jobs Below! ๐Ÿš€ โœ… Select your preferred “Job Category” in the Job Category Filter ๐ŸŽฏ ๐Ÿ”Ž Hit “Search” to find matching jobs ๐Ÿ”ฅ โž• Click the “+” icon that appears just before the company name to see the Job Detail & Apply Link ๐Ÿ“๐Ÿ’ผ
