innData Analytics | Jobs | Sr. Big Data Engineer – Cloudera/Hortonworks | BigDataKB.com | 31-03-22


    Location: Visakhapatnam

    Experience: 3-7 years

    Notice Period: 30 days

    Roles and Responsibilities:

    • Design and implement new components using emerging technologies in the Hadoop ecosystem, and ensure successful execution of various projects.
    • Integrate external data sources and create data lakes/data marts.
    • Integrate machine learning models with real-time input data streams.
    • Collaborate with various cross-functional teams: infrastructure, network, database.
    • Work with various teams to set up new Hadoop users, security, and platform governance, which must be PCI-DSS compliant.
    • Create and execute a capacity planning strategy for the Hadoop platform.
    • Monitor job performance, file system/disk-space usage, cluster and database connectivity, and log files; manage backups and security; and troubleshoot various user issues.
    • Design, implement, test, and document a performance benchmarking strategy for the platform as well as for each use case.
    • Drive customer communication during critical events and participate/lead various operational improvement initiatives.
    • Responsible for setup, administration, monitoring, tuning, optimizing, and governing large-scale Hadoop clusters and Hadoop components, on-premise or in the cloud, to meet high availability/uptime requirements.

    Requirements:

    • 2-4 years of relevant experience in Big Data.
    • Exposure to Cloudera/Hortonworks production implementations.
    • Knowledge of Linux and shell scripting is a must.
    • Sound knowledge of Python or Scala.
    • Sound knowledge of Spark and HDFS/Hive/HBase.
    • Thorough understanding of Hadoop, Spark, and ecosystem components.
    • Must be proficient with data ingestion tools such as Sqoop, Flume, Talend, and Kafka.
    • Candidates with knowledge of machine learning using Spark will be preferred.
    • Knowledge of Spark & Hadoop is a must.
    • Knowledge of AWS and Google Cloud Platform and their various components is preferable.

    Apply Here
