Net | Jobs | Hadoop Developer | BigDataKB.com | 31-03-22


Job Location: Bangalore/Bengaluru

Roles and Responsibilities

– Design, build and operationalise large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Spark, EMR, DynamoDB, Redshift, Lambda, Glue, Snowflake.
– Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation, for streaming and batch ETLs and RESTful APIs.
– Assemble large, complex data sets that meet functional / non-functional business requirements.
– Identify, design, and implement internal process improvements: automating manual processes, re-designing infrastructure for greater scalability, etc.
– Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, PySpark.
– Design and implement data engineering, ingestion and curation functions on AWS cloud using AWS native or custom programming.
– Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
– Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
– Good to have: experience with data pipeline and workflow management tools such as Airflow.
– Mentor junior team members.
– Experience with Linux command-line tools.
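The "ingestion to consumption" pipeline work described above follows the usual extract–transform–load pattern. A minimal stdlib-only Python sketch of that pattern is below; in practice the extract and load steps would target Spark/EMR and a warehouse such as Redshift or Snowflake, and the record fields (`order_id`, `amount`, `status`) and the cents conversion are illustrative assumptions, not part of this role's actual schema.

```python
# Toy batch ETL sketch: extract -> transform -> load.
# Stands in for a Spark/EMR-to-warehouse pipeline; all field
# names and business rules here are illustrative assumptions.
import json

def extract(raw_lines):
    """Parse newline-delimited JSON records, skipping malformed rows."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # drop bad rows instead of failing the batch
    return records

def transform(records):
    """Keep completed orders and normalise the amount to integer cents."""
    return [
        {"order_id": r["order_id"],
         "amount_cents": int(round(r["amount"] * 100))}
        for r in records
        if r.get("status") == "completed"
    ]

def load(rows):
    """Stand-in for a warehouse write (e.g. a Redshift/Snowflake COPY)."""
    return {row["order_id"]: row["amount_cents"] for row in rows}

raw = [
    '{"order_id": "a1", "amount": 19.99, "status": "completed"}',
    '{"order_id": "a2", "amount": 5.00, "status": "cancelled"}',
    'not json',
]
warehouse = load(transform(extract(raw)))
print(warehouse)  # {'a1': 1999}
```

The three stages are deliberately kept as separate functions so each can be unit-tested and swapped out independently, which mirrors how production pipelines isolate ingestion, curation, and serving layers.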

Apply Here


๐Ÿ” Explore All Related ITSM Jobs Below! ๐Ÿš€ โœ… Select your preferred “Job Category” in the Job Category Filter ๐ŸŽฏ ๐Ÿ”Ž Hit “Search” to find matching jobs ๐Ÿ”ฅ โž• Click the “+” icon that appears just before the company name to see the Job Detail & Apply Link ๐Ÿ“๐Ÿ’ผ
