Incept Data Solutions, Inc | Jobs | Sr. Data Engineer | 24-02-22



    Job Location: Remote


    • Architect, build, and maintain modern, scalable data architectures in the cloud, preferably AWS.
    • Clean, transform, and analyze vast amounts of raw data from various systems using Spark to provide ready-to-use data to our feature developers and business analysts.
    • This work involves both ad-hoc requests and data pipelines embedded in our production environment.
    • Design and develop frameworks that increase the overall efficiency of bringing data into the data lake and of processing and delivering data; encode best practices into reusable tools that can be shared across the team.
    • Automate the deployment of changes to production to improve data reliability and quality.
    • Work closely with upstream and downstream stakeholders to collect business requirements and develop data models to satisfy those requirements.
    • Design and deliver event-driven solutions using a streaming platform such as Kafka.
    • Mentor data engineers and other stakeholders on the best practices of managing a large-scale production data platform and developing a data-driven operations culture within the data engineering team.
    • Oversee the migration of data from legacy systems to new solutions.
    • Experience with Machine Learning or other leading-edge technologies.
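    The cleaning and transformation duties above usually come down to small, reusable functions applied across batches of raw records. As a minimal sketch in plain Python (rather than Spark, and with hypothetical field names chosen purely for illustration), one such reusable cleaning step might look like:

    ```python
    from datetime import datetime, timezone

    def clean_record(raw: dict) -> dict | None:
        """Normalize one raw event into an analysis-ready record.

        Returns None for records that fail basic validation, so the
        caller can filter them out of the pipeline.
        """
        # "user_id", "ts", and "amount" are hypothetical field names.
        if not raw.get("user_id") or "ts" not in raw:
            return None
        return {
            "user_id": str(raw["user_id"]).strip().lower(),
            # Normalize epoch seconds to an ISO-8601 UTC timestamp.
            "event_time": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
            "amount": round(float(raw.get("amount", 0.0)), 2),
        }

    def clean_batch(records: list[dict]) -> list[dict]:
        """Apply clean_record to a batch and drop invalid rows."""
        cleaned = (clean_record(r) for r in records)
        return [r for r in cleaned if r is not None]
    ```

    In a Spark pipeline the same logic would typically be expressed as DataFrame transformations or a UDF, but packaging it as a pure function like this is one way to keep it testable and shareable across the team.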

    Job requirements:

    • Exposure to at least two to three domains (such as Sales, Marketing, Supply Chain, Finance, Data Science, Analytics).
    • ETL pipeline experience (Hadoop ecosystem, Kafka, Spark).
    • Workflow management (Oozie, Airflow).
    • 5+ years of hands-on experience building large-scale data systems.
    • 5+ years of experience with architectural patterns, building APIs, microservices, event streams, and high throughput systems.
    • 5+ years of using AWS ecosystem tools preferred, including Redshift, S3, EMR, Lambda, Athena, Kinesis, ECS, Glue, and/or another cloud provider's equivalent stack.
    • Expert-level proficiency with Python and SQL. Professional experience with at least one other programming language (Java, Scala, Go, Ruby, JavaScript).
    • 5+ years of experience building ETL pipelines and working with cloud data warehouses (such as Redshift, Snowflake, Azure SQL Data Warehouse, BigQuery).
    • Experience operating very large data warehouses or data lakes.
    • Experience designing and implementing data models for enterprise data warehouses.
    • Demonstrated Kafka and streaming experience.
    • 5+ years of experience with ETL tools (Matillion, Informatica, DataStage, Talend).
    • Experience working with orchestration tools (Airflow, Luigi, Apache Nifi, Step Functions, etc.)
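    The orchestration tools listed above (Airflow, Luigi, Oozie, Step Functions) all model a pipeline as a directed acyclic graph of tasks. As an illustrative sketch of that dependency model in plain Python (hypothetical task names, standard-library `graphlib` rather than any orchestrator's API):

    ```python
    from graphlib import TopologicalSorter

    # Hypothetical pipeline: each task maps to the set of tasks it
    # depends on, mirroring how an Airflow DAG wires upstream and
    # downstream operators together.
    pipeline = {
        "extract": set(),
        "clean": {"extract"},
        "load_warehouse": {"clean"},
        "refresh_dashboard": {"load_warehouse"},
    }

    def run_order(dag: dict[str, set[str]]) -> list[str]:
        """Return an execution order that respects all dependencies."""
        return list(TopologicalSorter(dag).static_order())
    ```

    An orchestrator adds scheduling, retries, and monitoring on top of this core idea, but the dependency graph is the part a data engineer designs.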

    Apply Here


