Kimberly-Clark Lever Limited | Jobs | Senior Data Engineer | BigDataKB.com | 24-02-22

    Job Location: Bangalore/Bengaluru

    • We are looking for a hardworking, aspirational, and innovative engineering leader for the Senior Data Integration Engineer position in our AI engineering and innovation team.
    • The Senior Data Integration Engineer will play a diverse and far-reaching role across the organization, providing leadership and influencing the adoption of technical solutions, data processing approaches, and design patterns across multiple teams and partners within Kimberly-Clark.

    Responsibilities:

    • Work with Technical architects, Product Owners and Business teams to translate requirements into technical design for data modelling and data integration
    • Demonstrate deep background in data warehousing, data modelling and ETL/ELT data processing patterns
    • Design and develop ETL/ELT pipelines with reusable patterns and frameworks
    • Design and build efficient SQL to process and curate data sets in HANA, Azure and Snowflake
    • Design and review data ingestion frameworks leveraging Python, Spark, Azure Data Factory, Snowpipe, etc. (a minimal ingestion sketch follows this list)
    • Design and build Data Quality models and ABCR frameworks to ingest, validate, curate and prepare the data for consumption
    • Understand the functional domain and business needs, and proactively identify gaps in requirements before implementing solutions
    • Work with platform teams to design and build processes for automation in pipeline build, testing and code migrations
    • Demonstrate exceptional impact in delivering projects, products and/or platforms in terms of scalable data processing and application architectures, technical deliverables and delivery throughout the project lifecycle.
    • Provide design and guiding principles on building data models and semantic models in Snowflake – enabling true self-service
    • Responsible for ensuring the effectiveness of the ingestion and data delivery frameworks and patterns.
    • Build and maintain data development standards and principles, provide guidance and project specific recommendations as appropriate
    • Must be conversant with DevOps delivery approaches and tools, and have a track record of delivering products in an agile model.
    • Provide insight and direction on roles and responsibilities required for platform/ product operations
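
    To give a flavor of the ingestion and data-quality work described above, the following is a minimal, illustrative PySpark sketch of a metadata-driven ingestion step with a basic quality gate. The feed definition, paths, and checks are hypothetical placeholders, not Kimberly-Clark's actual framework.

        # Illustrative only: one metadata-driven ingestion step with a simple
        # audit/balance-style quality gate. All names and paths are hypothetical.
        from functools import reduce
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

        # Metadata record describing one source feed (in a real framework this
        # would be read from a control table, not hard-coded).
        feed = {
            "name": "sales_orders",
            "source_path": "abfss://raw@examplelake.dfs.core.windows.net/sales_orders/",
            "format": "csv",
            "key_columns": ["order_id"],
            "target_path": "abfss://curated@examplelake.dfs.core.windows.net/sales_orders/",
        }

        df = (spark.read.format(feed["format"])
              .option("header", "true")
              .load(feed["source_path"]))

        # Audit/balance checks: record count and null keys on the business keys.
        row_count = df.count()
        null_filter = reduce(lambda a, b: a | b,
                             [F.col(c).isNull() for c in feed["key_columns"]])
        null_keys = df.filter(null_filter).count()

        if row_count == 0 or null_keys > 0:
            raise ValueError(f"Quality gate failed for {feed['name']}: "
                             f"rows={row_count}, null_keys={null_keys}")

        # Curate and land the data for downstream consumption (e.g. a Snowflake stage).
        (df.dropDuplicates(feed["key_columns"])
           .withColumn("ingest_ts", F.current_timestamp())
           .write.mode("overwrite")
           .format("parquet")
           .save(feed["target_path"]))

    In a production framework the feed definition and thresholds would come from metadata/control tables and an ABCR-style configuration rather than being embedded in code.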

    Qualifications:

    • 10+ years of experience designing, developing, and building ETL/ELT pipelines, procedures, and SQL on MPP platforms such as HANA, Snowflake, and Teradata
    • Experience in designing and building metadata-driven data ingestion frameworks with SAP BO/Data Services, Azure Data Factory, SnowSQL, and Snowpipe, as well as building mini-batch, real-time, and event-driven data processing jobs (see the Snowflake load sketch after this list)
    • Proficient in distributed computing principles, modular application architecture, and various types of data processing patterns – real-time, batch, lambda, and other architectures
    • Experience with a broad range of data stores – Object stores (Azure ADLS, HDFS, GCP Cloud Storage), Row and Columnar databases (Azure SQL DW, SQL Server, Snowflake, Teradata, PostgreSQL, Oracle), NoSQL databases (CosmosDB, MongoDB, Cassandra), Elasticsearch, Redis, and Data processing platforms – Spark, Databricks, and SnowSQL
    • Hands-on experience with Docker, Kubernetes, and cloud platforms such as Azure, AWS, and GCP.
    • Experience with one or more programming languages such as Python, Java, and Scala is preferred
    • Familiarity with Azure Stream Analytics, Azure Analysis Services, Data Lake Analytics, HDInsight, HDP, Spark, Databricks, MapReduce, Pig, Hive, Tez, SSAS, Watson Analytics, and SPSS
    • Strong knowledge of source code management, configuration management, CI/CD, security, and performance.
    • Ability to look ahead to identify opportunities and thrive in a culture of innovation
    • Self-starter who can see the big picture and prioritize work to make the largest impact on the business and the customer's vision and requirements
    • Experience in building, testing, and deploying code to run on the Azure cloud data lake
    • Ability to lead, nurture, and mentor others on the team.
    • A can-do attitude in anticipating and resolving problems to help your team achieve its goals.
    • Must have experience in Agile development methods
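
    As a hypothetical illustration of the SnowSQL/Snowpipe and mini-batch loading experience listed above, the sketch below runs a Snowflake COPY INTO from an external stage using the Python connector. Account, stage, and table names are placeholders; a continuous feed would typically be defined as a Snowpipe (CREATE PIPE ... AUTO_INGEST = TRUE) instead of an ad hoc load.

        # Illustrative only: a mini-batch load into Snowflake via the Python connector.
        # Credentials come from environment variables; object names are hypothetical.
        import os
        import snowflake.connector

        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            warehouse="LOAD_WH",
            database="ANALYTICS",
            schema="CURATED",
        )

        copy_sql = """
            COPY INTO curated.sales_orders
            FROM @curated_stage/sales_orders/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """

        try:
            cur = conn.cursor()
            cur.execute(copy_sql)
            for row in cur.fetchall():
                print(row)  # per-file load status returned by COPY INTO
        finally:
            conn.close()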

    Apply Here
