Verisk Analytics | Jobs | Data Engineer (Senior Software Engineer I) | BigDataKB.com | 12-02-22


    Job Location: Bangalore/Bengaluru

    • Design and develop data ingestion and processing code in Python/PySpark on the AWS big data platform.
    • Create and update design specs and reference architecture documents to accelerate solution development.
    • Participate in testing and peer code reviews to identify bugs and ensure code reusability.
    • Automate deployment of solutions using shell scripts/Python and AWS services.
    • Work with the IT Change Management group to promote developed code/scripts from non-production to production environments.
    • Work with the Architecture teams (Application/Security/Infrastructure/Data) to obtain their approval of the designed solutions.

    Qualifications

    Minimum Personal Qualifications

    • 8-10 years of experience in the design, build, and deployment of Python-based applications.
    • Strong experience writing complex SQL queries.
    • Expertise in handling complex, large-scale big data environments (preferably 20 TB+).
    • Experience leading and mentoring a team.
    • Strong analytical capability and problem-solving skills.
    • Knowledge of data processing using Python.
    • Excellent communication, teamwork, and interpersonal skills.
    • Ability to write abstracted, reusable code components.
    • Able to quickly adapt and learn.
    • Able to step into an ambiguous situation and take the lead.
    • Able to communicate and coordinate across various teams.
    • Basic understanding of Spark architecture, with experience in PySpark applications, is good to have.
    • Experience in cloud technologies, preferably AWS, is good to have.
    • BE/B.Tech in Computer Science from an accredited college or university.

    Technical Qualification

    • Python, Hadoop ecosystem – Hive, HDFS, HBase
    • Strong SQL skills with exposure to any RDBMS.
    • Strong technical background in data modelling, database design, and query optimization.
    • Apache Spark – PySpark/Scala or Java would be good to have.
    • AWS – EMR, IAM roles, S3, Glue Catalog, Step Functions, serverless/Lambda, CloudFormation, CI/CD process development
    • Other software & tools – OpenStack, Docker, GitLab
    • Linux/Unix – shell scripting

    Apply Here
