Larsen & Toubro Limited | Hadoop Developer | Bhagya Nagar | Bharat | BigDataKB.com | 2023-03-09


Job Location: Bhagya Nagar

Job Detail:

Notice Period – Immediate to 30 days

Mandatory Skills: Hive, Hadoop, HDFS, Spark

Optional: NiFi, PySpark, ETL, Grafana

Job description:

Experience with distributed, scalable systems and the ability to design high-performance applications and services.
Experience as a professional software developer with technical expertise in all phases of the software development life cycle (SDLC), specializing in Big Data technologies such as Spark and the Hadoop ecosystem.
Experience in Big Data analytics and data manipulation using the Hadoop ecosystem: MapReduce, HDFS, YARN/MRv2, Hive, Spark, Oozie, Sqoop, and ZooKeeper.
Excellent programming skills at a high level of abstraction using Scala.
Strong experience in real-time data analytics using Spark Streaming (see the sketch after this list).
Hands-on experience with GUI-based Hive interaction tools such as Hue and Hive View for querying data.
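To make the Spark Streaming requirement concrete, here is a minimal sketch of a real-time aggregation with Spark Structured Streaming. It uses the built-in rate source so it runs without any external system; the window length and console sink are illustrative assumptions, not part of the job description.

```python
# Minimal sketch of real-time analytics with Spark Structured Streaming.
# The "rate" source generates (timestamp, value) rows locally, so nothing
# external is needed; the 10-second window is an illustrative choice.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, count

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Count events per 10-second tumbling window on the event timestamp.
windowed = (events
            .groupBy(window(events.timestamp, "10 seconds"))
            .agg(count("*").alias("events")))

query = (windowed.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```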

Experience working with snowflake-schema data warehousing
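As a small illustration of what snowflake-schema querying looks like in practice, the sketch below joins a hypothetical fact table to a dimension and a normalized sub-dimension. All table and column names are made up for the example.

```python
# Illustrative snowflake-schema query: a fact table joins to a dimension,
# which joins in turn to a normalized sub-dimension. All names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-sketch").getOrCreate()

sales = spark.createDataFrame([(1, 101, 2, 9.99)], ["sale_id", "product_id", "qty", "amount"])
products = spark.createDataFrame([(101, 11, "Widget")], ["product_id", "category_id", "name"])
categories = spark.createDataFrame([(11, "Hardware")], ["category_id", "category_name"])

for name, df in [("sales", sales), ("products", products), ("categories", categories)]:
    df.createOrReplaceTempView(name)

# Dimensions are normalized, so the query walks fact -> dim -> sub-dim.
spark.sql("""
    SELECT c.category_name, SUM(s.amount) AS revenue
    FROM sales s
    JOIN products p   ON s.product_id = p.product_id
    JOIN categories c ON p.category_id = c.category_id
    GROUP BY c.category_name
""").show()
```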

Experience checking cluster status using Cloudera Manager and Ambari.

Ability to work with onsite and offshore teams

Experience writing shell scripts on Unix/Linux

Good experience with use-case development and methodologies like Agile and Waterfall

Proven ability to manage all stages of project development, with strong problem-solving and analytical skills and the ability to make balanced, independent decisions

Expert experience in designing batch and ETL jobs

Expert experience building services using Apache Spark and Kafka.
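A hedged sketch of the Spark-plus-Kafka pattern this item names: a Structured Streaming job that consumes a Kafka topic. The broker address, topic name, and checkpoint path are assumptions, and running it requires the matching spark-sql-kafka connector package on the classpath.

```python
# Sketch of a Spark service consuming from Kafka with Structured Streaming.
# Assumed: broker at broker:9092, topic "events", checkpoint path below.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-sketch").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed address
          .option("subscribe", "events")                     # assumed topic
          .load())

# Kafka delivers key/value as binary; cast to strings before processing.
messages = stream.select(col("key").cast("string"), col("value").cast("string"))

query = (messages.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/kafka-sketch-ckpt")  # assumed path
         .start())
query.awaitTermination()
```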

Experience or familiarity with RDBMSs or other SQL technologies.

Development and implementation of various methods to load Hive tables from HDFS and the local file system
Developed end-to-end Hive queries to parse raw data, populate external and internal tables, and store the refined data in partitioned external tables
Improving tuning using Hive features such as partitioning and bucketing (sketched below)
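The sketch below illustrates the Hive patterns just listed: loading a table from HDFS versus the local file system, and storing refined data in a partitioned external table. Database objects, paths, and the partition value are placeholders, and the session needs Hive support enabled.

```python
# Sketch of the Hive patterns named above. Paths, table names, and the
# partition value are placeholders; requires a Hive-enabled Spark session.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-sketch")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("CREATE TABLE IF NOT EXISTS raw_events (id INT, payload STRING)")

# Load from HDFS (moves the file) vs. the local file system (copies it).
spark.sql("LOAD DATA INPATH '/data/raw/events.txt' INTO TABLE raw_events")
spark.sql("LOAD DATA LOCAL INPATH '/tmp/events.txt' INTO TABLE raw_events")

# Refined data lands in a partitioned external table, so dropping the
# table leaves the underlying HDFS files intact.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS refined_events (id INT, payload STRING)
    PARTITIONED BY (dt STRING)
    LOCATION '/warehouse/refined_events'
""")
spark.sql("""
    INSERT OVERWRITE TABLE refined_events PARTITION (dt = '2023-03-09')
    SELECT id, payload FROM raw_events
""")
```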

Must have experience building data pipelines.

Writing optimized code for Spark processing (see the sketch below)
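As one example of what "optimized Spark code" can mean, the sketch below broadcasts a small dimension table into a join, avoiding a shuffle, and caches a DataFrame that feeds several downstream actions. The tables are synthetic.

```python
# Two common Spark optimizations: broadcast joins and caching.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("opt-sketch").getOrCreate()

facts = spark.range(1_000_000).withColumnRenamed("id", "user_id")  # large side
dims = spark.createDataFrame([(0, "IN"), (1, "US")], ["user_id", "country"])

# broadcast() ships the small side to every executor instead of shuffling
# the large side across the cluster.
joined = facts.join(broadcast(dims), "user_id", "left")

# cache() keeps the result in memory when it is reused by several actions.
joined.cache()
print(joined.count())
```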

Working knowledge of building NiFi pipelines.

Must have expertise in writing Spark jobs using Python or Scala.
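A minimal skeleton of a Spark batch job in Python, one of the two languages the posting names. The input and output paths are placeholders, as is the status column being filtered on.

```python
# Skeleton of a Spark batch job. Paths and the "status" column are placeholders.
from pyspark.sql import SparkSession

def main():
    spark = SparkSession.builder.appName("batch-job-sketch").getOrCreate()
    df = spark.read.option("header", "true").csv("hdfs:///data/in")  # assumed path
    df.filter(df["status"] == "OK").write.mode("overwrite").parquet("hdfs:///data/out")
    spark.stop()

if __name__ == "__main__":
    main()
```

A job like this would typically be launched with spark-submit job.py, with cluster options such as executor memory supplied on the command line.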

Writing optimized Hive or SQL queries over large datasets.
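Continuing the partitioned-table example above, an optimized Hive query on a large dataset usually starts with a filter on the partition column so that only the matching partitions are scanned. The table and partition value below are the placeholders from the earlier sketch.

```python
# Partition pruning: filtering on the partition column (dt) means only that
# partition's files are read; selecting only needed columns reduces I/O.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("query-sketch").enableHiveSupport().getOrCreate()

spark.sql("""
    SELECT id, COUNT(*) AS n
    FROM refined_events
    WHERE dt = '2023-03-09'   -- partition filter: prunes all other partitions
    GROUP BY id
""").show()
```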

Apply Here
