Verizon | Jobs | Engr II-Data Engineering | BigDataKB.com | 10-02-22


    Job Location: Hyderabad/Secunderabad

    What you'll be doing…

    As a Big Data/Cloud Engineering Developer, you will be responsible for ETL/ELT development using Hadoop ecosystem tools on Big Data/Cloud platforms, meeting business-process and application requirements for collecting, storing, processing, and analysing large data sets across the Verizon Consumer Business Data Warehouse and Big Data initiatives. The role requires an experienced Big Data engineer who can support and manage data processing operations using Hadoop ecosystem tools, Spark, XML, JSON, shell scripting, SQL, Python, CI/CD, DevOps, and Cloud services (GCP).


    You will be a Big Data/Cloud Engineering Developer in the Artificial Intelligence and Data Organization (AID), contributing to Data Enablement through ingestion for the Data Warehousing and Big Data initiatives. You will work with leads to understand ingestion requirements and develop data pipelines, using a varied tech stack, for batch and real-time data needs across platforms such as GCP, on-premises Hadoop clusters, and Teradata. You will primarily work on data engineering ingestion projects supporting workforce management for the best customer experience, billing use cases, churn reduction, Voice of the Customer, and customer upselling/cross-selling. Implementing these initiatives will involve fetching, loading, transforming, storing, and presenting data to business clients for reporting and to ML engineering teams for insights development.

    You will follow the Project Intake process to deliver the ETL solution: a data layer containing raw data and processed data, ready for curation and BI reporting, in various data formats.

    • Compiling and analysing data, and preparing it for building ETL pipelines.
    • Working with leads to understand requirements and implement them effectively in support of business processes, programs, and/or services.
    • Building and deploying new data pipelines within Big Data ecosystems handling large amounts of structured and unstructured data, including integrating data from multiple sources.
    • Documenting new/existing pipelines, data sources, and data sets, and improving existing data pipelines by simplifying them and increasing performance.
    • Performing a variety of ETL/ELT development activities with strong technical and functional skills, including solid experience in Cloud enablement using GCP, the Hadoop tool stack, UNIX shell scripting, Python, Spark processing, streaming tools such as Storm, Kafka, and NiFi, and CI/CD deployment, and being well versed in DevOps concepts and tools.
    • Understanding the ingestion automation frameworks and DataOps, and using them to deliver project objectives.
    • Having a good understanding of data lakes, data mesh, data warehousing, visualization and reporting, data/analytics engineering, and data analytics concepts.
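    As a rough illustration only (not part of the posting's requirements), the extract-transform-load pattern the responsibilities above describe can be sketched in plain Python; the record fields, table name, and values here are all hypothetical toy data:

    ```python
    import sqlite3

    # Hypothetical raw records, as might land in a raw/ingestion layer (e.g. parsed JSON).
    raw_records = [
        {"customer_id": 1, "plan": "unlimited", "monthly_bill": "70.00"},
        {"customer_id": 2, "plan": "prepaid", "monthly_bill": "35.50"},
    ]

    def transform(record):
        # Transform step: cast the bill amount to a number and normalize the plan name.
        return (record["customer_id"], record["plan"].upper(), float(record["monthly_bill"]))

    # Load step: write the processed rows into a table ready for BI queries
    # (an in-memory SQLite database stands in for the real warehouse target).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE billing (customer_id INTEGER, plan TEXT, monthly_bill REAL)")
    conn.executemany("INSERT INTO billing VALUES (?, ?, ?)",
                     [transform(r) for r in raw_records])

    # Downstream consumers (reporting, ML teams) query the processed layer.
    total = conn.execute("SELECT SUM(monthly_bill) FROM billing").fetchone()[0]
    print(total)  # 105.5
    ```

    In production the same three steps would run over Spark, Hadoop, or GCP services rather than SQLite, but the extract → transform → load shape is the same.
    
    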

    Apply Here

