NTT DATA | Jobs | Data Engineer | BigDataKB.com | 31-03-22

    Job Location: Hyderabad/Secunderabad

     

    This position is for a member of a central compute team in C2MA. The role is for a data engineer working primarily on building data pipelines to load the T2 analytical enterprise model.

    The Data Engineer should be able to understand business, functional, and technical requirements and translate them effectively using the Spark framework.

    Should be able to extract data from source systems (open systems/mainframes) into the Data Lake (Hive/Parquet), understand complex transformation logic, and translate it into Hive/Spark-SQL queries that load the Enterprise Data Domain tables.
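
    As a rough illustration only (not part of the job description): a minimal Spark batch job in Java that reads landed source files, applies a Spark-SQL transformation, and loads a Hive/Parquet enterprise table might look like the sketch below. All paths, table names, and columns are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class BatchLoadJob {
    public static void main(String[] args) {
        // Hive-enabled session; cluster settings come from spark-submit configuration.
        SparkSession spark = SparkSession.builder()
                .appName("t2-enterprise-load")        // hypothetical app name
                .enableHiveSupport()
                .getOrCreate();

        // Extract: raw files landed from the source system (path is hypothetical).
        Dataset<Row> source = spark.read().parquet("/datalake/raw/accounts/dt=2022-03-31");
        source.createOrReplaceTempView("raw_accounts");

        // Transform: business logic expressed as a Spark-SQL query (illustrative only).
        Dataset<Row> curated = spark.sql(
                "SELECT account_id, customer_id, CAST(balance AS DECIMAL(18,2)) AS balance, "
                + "current_date() AS load_dt "
                + "FROM raw_accounts WHERE status = 'ACTIVE'");

        // Load: append into a Hive table in the Enterprise Data Domain (name is hypothetical).
        curated.write().mode(SaveMode.Append).format("parquet").saveAsTable("edd.account_balance");

        spark.stop();
    }
}
```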

    Hands-on experience dealing with live streaming datasets using Kafka/Spark Streaming.
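
    For the streaming piece, a minimal sketch of a Spark Structured Streaming job that consumes a Kafka topic and writes Parquet files to the data lake is shown below. The broker address, topic, and paths are placeholders rather than details from this posting, and the job assumes the spark-sql-kafka connector is on the classpath.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.Trigger;

public class KafkaIngestJob {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-to-datalake")         // hypothetical app name
                .getOrCreate();

        // Subscribe to a Kafka topic (broker and topic names are placeholders).
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "txn-events")
                .option("startingOffsets", "latest")
                .load();

        // Kafka records arrive as binary key/value; keep the value as a string payload.
        Dataset<Row> payload = events.selectExpr("CAST(value AS STRING) AS payload", "timestamp");

        // Micro-batch write to Parquet; the checkpoint directory tracks progress for recovery.
        StreamingQuery query = payload.writeStream()
                .format("parquet")
                .option("path", "/datalake/streaming/txn_events")
                .option("checkpointLocation", "/datalake/checkpoints/txn_events")
                .trigger(Trigger.ProcessingTime("1 minute"))
                .start();

        query.awaitTermination();
    }
}
```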

    Excellent oral and written communication skills, on-time delivery, and a good team player.

    Roles & Responsibilities

    3 to 9 years of hands-on experience with Spark Core, Spark SQL, Java programming, and streaming datasets on a Big Data platform.

    Should have extensive working experience with Hive and other components of the Hadoop ecosystem (HBase, ZooKeeper, Kafka).

    Should be able to understand complex ETL transformation logic and translate it into Spark-SQL queries, with knowledge of source-to-target mapping and data modeling.

    Unix shell scripting and setting up cron jobs/Airflow.

    Should have worked with the Cloudera distribution and Bitbucket (or any other version control system).

    Prior experience in the Consumer Banking domain is an advantage.

    Prior experience with agile delivery methods is an advantage.

    Excellent understanding of technology life cycles and the concepts and practices required to build Big Data solutions.

    Able to write code, services, and components using Java, Apache Spark, and Hadoop.

    Responsible for systems analysis, design, coding, unit testing, CI/CD, and other SDLC activities.

    Hands-on experience with Java is essential, including writing code to efficiently handle files on Hadoop.

    Proven experience working with the Apache Spark streaming and batch frameworks.

    Proven experience in performance tuning of Java- and Spark-based applications is a must.

    Knowledge of working with different file formats such as JSON, Avro, and Parquet.

    Data warehouse experience working with databases such as Teradata and Oracle. Well versed in the concepts of change data capture (CDC) and SCD implementation (see the SCD Type 2 sketch after this list).

    Knowledge of S3 will be a plus.

    Ability to work proactively, independently and with global teams.

    Knowledge of API handling is a plus.

    Strong communication skills: should be able to communicate effectively with stakeholders and present the outcomes of analysis.

    Working experience in projects following Agile methodology is preferred.
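
    For the change data capture / SCD item above, one common way to express an SCD Type 2 load in plain Hive/Spark-SQL (without a MERGE-capable table format) is to rebuild the dimension into a staging table and then swap it into place. The sketch below is illustrative only; all table and column names are hypothetical.

```java
import org.apache.spark.sql.SparkSession;

public class ScdType2Load {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("scd2-customer-dim")         // hypothetical app name
                .enableHiveSupport()
                .getOrCreate();

        // Rebuild the dimension into a pre-created staging table: close out the current
        // version of every key present in today's CDC feed, then append a new current
        // version for each changed or new key. The staging table would then be swapped
        // into edd.customer_dim. All table and column names are hypothetical.
        spark.sql(
            "INSERT OVERWRITE TABLE edd.customer_dim_stage "
            + "SELECT d.customer_id, d.name, d.segment, d.start_dt, "
            + "       CASE WHEN c.customer_id IS NOT NULL AND d.is_current "
            + "            THEN current_date() ELSE d.end_dt END AS end_dt, "
            + "       CASE WHEN c.customer_id IS NOT NULL AND d.is_current "
            + "            THEN false ELSE d.is_current END AS is_current "
            + "FROM edd.customer_dim d "
            + "LEFT JOIN stage.customer_cdc c ON d.customer_id = c.customer_id "
            + "UNION ALL "
            + "SELECT c.customer_id, c.name, c.segment, current_date() AS start_dt, "
            + "       CAST('9999-12-31' AS DATE) AS end_dt, true AS is_current "
            + "FROM stage.customer_cdc c");

        spark.stop();
    }
}
```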

    Education / Preferred Qualifications (minimum and preferred qualifications required to perform all essential functions; preferred but not mandatory):

    GCE O Level / GCE A Level

    Degree (please specify if necessary)

    Post Graduate (optional) / MBA or equivalent

    Professional Qualification (please specify)



    Job Segment: Database, Consulting, Oracle, Data Warehouse, Business Intelligence, Technology

    Apply Here
