Sisna Technology | Jobs | Senior Data Engineer – Hadoop/Spark/Kafka | BigDataKB.com | 30-03-22


    Job Location: Chennai

    Job Description:

    Role Expectations:

    – Outstanding collaboration and communication skills are essential

    – Logical thinking

    – Keen attention to detail

    – Document application processes for future maintenance and upgrades

    Project Responsibilities:

    – Create and maintain optimal data pipeline architecture

    – Assemble large, complex data sets that meet functional and non-functional business requirements.

    – Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

    – Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

    – Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

    – Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

    – Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

    – Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

    – Work with data and analytics experts to strive for greater functionality in our data systems.

    Qualifications:

    – Bachelor’s degree in Computer Science, Information Systems, or equivalent education or work experience

    – 6-10 years of experience in a Data Engineer role, with expertise in cloud platforms, Python, ETL, SQL, and Power BI

    – Experience with Power BI, DAX, and M queries is an added advantage.

    – Experience with big data tools: Hadoop, Spark, Kafka, etc.

    – Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

    – Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

    – Experience with AWS cloud services: EC2, EMR, RDS, Redshift

    – Experience with stream-processing systems: Storm, Spark-Streaming, etc.

    – Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.

    Apply Here


