Clairvoyant India Pvt. Ltd | Hiring | Associate Engineer – Data & Cloud Ops | Pune | BigDataKB.com | 2022-09-28

Job Location: Pune

  • Must have 2+ years of relevant experience in Hadoop and system administration, with good knowledge of Linux/Unix-based operating systems

  • Experience in setting up and supporting Hadoop environments (cloud and on-premise)
  • Ability to work closely with infrastructure, networking, and development teams to ensure timely deliverables
  • Experience in setting up security for Hadoop clusters: Kerberos, Sentry/Ranger, TLS, etc.
  • Experience in setting up, configuring, and troubleshooting AD/LDAP authentication for Hadoop clusters
  • Experience in setting up services like YARN, HDFS, Zookeeper, Hive, Impala, HBase, etc
  • Should be able to configure high availability (HA) for services
  • Linux tuning – understands kernel parameter tuning and tweaking
  • Should be able to set up and configure Linux/Unix level prerequisites for Hadoop cluster installation and configuration
  • Should be willing to work in 24×7 rotating shifts, including weekends and public holidays
  • Should have a good handle on the Hadoop command-line interface
  • Should have a good understanding of installation, configuration, tuning, and troubleshooting of Hadoop ecosystem components like Impala, Hive, Kudu, HBase, Kafka, Spark, Oozie, Sqoop, Solr, etc.
  • Develop and document best practices
  • Should be able to perform disk management
  • Should be able to perform user management
  • Should be able to address Hadoop cluster-level issues with root cause analysis (RCA) and provide resolutions
  • Good troubleshooting skills
  • Ability to perform backup and recovery as applicable
  • Experience in large-scale enterprise apps that involve provisioning systems using virtualization on private clouds and using physical servers
  • The interest, ability, and curiosity to tinker and automate everything in a repeatable fashion with the least amount of manual work involved
  • Strong analytical skills and a “get it done” attitude
  • Ready to explore new technologies
  • Exposure to Agile methodologies
  • Should be able to mentor junior team members
  • Preferred to have understanding/experience in one of the following log aggregation/monitoring tools: ELK, Splunk, Logstash, Ganglia, Grafana, Nagios, etc.
  • Blue-green deployments – understanding and experience
  • Docker and Kubernetes – understanding and experience/exposure
  • 2 years of experience working with multiple clients and multiple clusters
  • Chef/Puppet/Ansible awareness
  • Experience in shell/Bash scripting, Python, Ruby, etc.
  • Any MongoDB, Cassandra, Hadoop, or Elasticsearch cluster administration experience
  • Experience in AWS/GCP/Azure (network, storage, compute)
  • Measuring availability, SLAs, 99th percentile, 99.9th percentile
  • Open-source technology background

Responsibilities:

  • Actively participate in all planned training
  • Work on escalated tickets
  • Provide 24/7 support to client clusters (working in shifts on a rotation basis; could be on-call)
  • Participate in the creation of weekly/monthly/quarterly client cluster reports
  • Ensure smooth transition of needed information to next-shift administrators
  • With the help of senior Hadoop administrators, ensure resolution of tickets within the defined SLA
  • Actively work with the senior administrator on cluster maintenance activities
  • Keep all related documentation of the assigned clusters up to date
  • Ensure Clairvoyant standards are met while providing the best customer service
  • Understand and ensure implementation of the set ticket workflow
  • Could act as the first point of contact for escalation in the assigned shift
  • Provide training and mentoring to junior members of the team
  • Attend client meetings as and when required (could be in the evening hours)
  • Participate in planning meetings
  • Responsible for implementation and ongoing administration of Hadoop infrastructure as part of our global Managed Services group
  • Ensuring high availability of the services in the cluster and that cluster resources are utilized optimally, reducing costs
  • Work closely with the India and US team with some overlap hours to ensure a proper handover resulting in optimal productivity
  • Install and configure monitoring tools
  • Diligently teaming with the infrastructure, network, database, application, and business teams to guarantee high data quality and availability
  • Troubleshoot Hadoop cluster and Hadoop service issues with RCA, and provide resolutions with quality technical updates
  • Managing the Hadoop File system and monitoring the services and performing backup and disaster recovery (when applicable) for Hadoop data
  • Coordinate root cause analysis (RCA) efforts to minimize future system issues
  • Provide 24*7 support on a rotational basis
  • Assist with developing and maintaining the system runbooks
  • Create and publish various production metrics including system performance and reliability information to systems owners and management
  • Work with big data developers designing scalable, supportable infrastructure

Key skills: Hadoop SysAdmin (Unix), Client/Cluster, YARN, HDFS, Hive, LLAP, Hue, Impala, HBase, Kudu, Spark, Sqoop, Flume, Oozie, Kafka, Chef/Puppet/Ansible, AWS/Azure/GCP DB

Education: Bachelor's (preferably BE/B.Tech) – Computer Science/IT or equivalent
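Much of the day-to-day described above comes down to scripting repeatable checks against the cluster. As a hedged illustration only (the sample report text, function names, and thresholds below are hypothetical and not part of this posting), a short Python sketch of the kind of automation expected – parsing `hdfs dfsadmin -report` output to compute DFS usage:

```python
import re

# Hypothetical sample of `hdfs dfsadmin -report` output; on a live
# cluster this text would come from running the command itself.
SAMPLE_REPORT = """\
Configured Capacity: 1000000000000 (1 TB)
Present Capacity: 900000000000 (900 GB)
DFS Remaining: 300000000000 (300 GB)
DFS Used: 600000000000 (600 GB)
DFS Used%: 66.67%
Live datanodes (3):
"""

def parse_dfs_report(report: str) -> dict:
    """Pull capacity figures (in bytes) out of dfsadmin -report text."""
    fields = {
        "configured": r"Configured Capacity:\s+(\d+)",
        "remaining": r"DFS Remaining:\s+(\d+)",
        "used": r"DFS Used:\s+(\d+)",
    }
    stats = {}
    for name, pattern in fields.items():
        match = re.search(pattern, report)
        if match:
            stats[name] = int(match.group(1))
    return stats

def used_percent(stats: dict) -> float:
    """DFS used as a percentage of usable (used + remaining) capacity."""
    return 100.0 * stats["used"] / (stats["used"] + stats["remaining"])

stats = parse_dfs_report(SAMPLE_REPORT)
print(f"DFS used: {used_percent(stats):.2f}%")  # 66.67% on the sample above
```

A check like this could feed a Grafana/Nagios alert or a weekly cluster report, in line with the monitoring and reporting duties listed above.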

Apply Here
