Hadoop Developer | BigDataKB.com | 31-03-22


    Job Location: Bangalore/Bengaluru

    Roles and Responsibilities

    – Design, build and operationalise large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Spark, EMR, DynamoDB, Redshift, Lambda, Glue, Snowflake.
    – Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation, for streaming and batch ETLs and RESTful APIs.
    – Assemble large, complex data sets that meet functional / non-functional business requirements.
    – Identify, design, and implement internal process improvements: automating manual processes, re-designing infrastructure for greater scalability, etc.
    – Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, PySpark.
    – Design and implement data engineering, ingestion and curation functions on AWS cloud using AWS native or custom programming.
    – Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
    – Apply working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
    – Good to have: experience with data pipeline and workflow management tools such as Airflow.
    – Mentor junior team members.
    – Experience with Linux command-line tools.
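
    The "ingestion to consumption" pipeline work described above can be sketched in miniature. The following is a hypothetical plain-Python illustration of the extract → transform → load shape, not the employer's actual stack (a production version would use PySpark, Glue, and Redshift as the posting notes); all names (`raw_csv`, `orders`) are illustrative assumptions.

    ```python
    import csv
    import io
    import sqlite3

    # Hypothetical source data standing in for an ingestion feed.
    raw_csv = io.StringIO(
        "order_id,amount,currency\n"
        "1,19.99,USD\n"
        "2,oops,USD\n"   # malformed record: dropped during transformation
        "3,5.00,EUR\n"
    )

    def extract(fh):
        """Ingestion: yield raw records from a CSV source."""
        yield from csv.DictReader(fh)

    def transform(rows):
        """Curation: validate and normalise records, skipping bad ones."""
        for row in rows:
            try:
                amount = float(row["amount"])
            except ValueError:
                continue  # record fails validation; exclude from curated set
            yield (int(row["order_id"]), amount, row["currency"].upper())

    def load(records, conn):
        """Consumption: write curated records to the target store."""
        conn.execute(
            "CREATE TABLE orders (order_id INT, amount REAL, currency TEXT)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
        conn.commit()

    conn = sqlite3.connect(":memory:")
    load(transform(extract(raw_csv)), conn)
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    print(count)  # the malformed row is filtered out, leaving 2 curated rows
    ```

    The same three-stage shape scales up directly: in the role described, `extract` would read from S3 or a message queue, `transform` would be a Spark job, and `load` would target Redshift or Snowflake.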

    Apply Here


