emorphis | Jobs | Data Engineer – Spark/Hadoop | BigDataKB.com | 30-03-22


Job Location: Madhya Pradesh

Location – Permanent remote

Preferred – Immediate joiner or notice period (NP) of 15 days

Skills :

– Spark (Scala or Java), AWS.

– Ability to develop, deploy, and run Spark jobs on an AWS EMR cluster

– Ability to set up data ingestion from various sources

– Experience with AWS or GCP is a must

– Around 2 years of experience with production grade applications
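The skills above center on writing and running Spark jobs on EMR. A minimal sketch of such a job in Scala is shown below; the S3 bucket, paths, and object name are hypothetical placeholders, and the code assumes a Spark runtime is available (as it would be on an EMR cluster).

```scala
// Minimal sketch of a Spark batch job in Scala (placeholder paths/names).
import org.apache.spark.sql.SparkSession

object SampleIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SampleIngestJob")
      .getOrCreate()

    // Ingest raw CSV data from S3 (path is a placeholder)
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/input/")

    // Basic cleaning: drop rows with nulls, remove duplicates
    val cleaned = raw.na.drop().dropDuplicates()

    // Write Parquet output for downstream processing
    cleaned.write.mode("overwrite").parquet("s3://example-bucket/output/")

    spark.stop()
  }
}
```

On EMR, a packaged version of such a job would typically be submitted with `spark-submit --deploy-mode cluster --class SampleIngestJob <jar>` or as a cluster step via the AWS CLI.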

Responsibilities :

– You will be involved in designing data solutions using Hadoop-based technologies and Azure, with Scala programming.

– Ingest data from files, streams, and databases, and process it with Hadoop, Scala, SQL databases, and Spark.

– Develop programs in Scala and Python as part of data cleaning and processing

– Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems

– Develop efficient software code leveraging Python and Big Data technologies for the various use cases built on the platform

– Maintain operational excellence, ensuring high availability and platform stability

Apply Here


๐Ÿ” Explore All Related ITSM Jobs Below! ๐Ÿš€ โœ… Select your preferred “Job Category” in the Job Category Filter ๐ŸŽฏ ๐Ÿ”Ž Hit “Search” to find matching jobs ๐Ÿ”ฅ โž• Click the “+” icon that appears just before the company name to see the Job Detail & Apply Link ๐Ÿ“๐Ÿ’ผ
