Job Location: Bangalore
Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 25,000 employees who make the world better every day, we know we have something special. Behind our customers – amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility – our people are energized problem solvers that take pride in how the work we do changes the world for the better.
We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that's you, we would love to have you join us!
Job Description
Job Responsibilities:
- Responsible for implementation and ongoing administration of Hadoop infrastructure.
- Completes root cause analysis of outages or incident trends (often working with a managed services partner) and recommends preventative action.
- Monitors a specific IT service or set of services for availability and performance; collates and analyzes data to develop recommendations to improve or change technology.
- Performance tuning of Hadoop clusters.
- Containerization of different applications, including Hadoop, using Docker/Kubernetes.
- Monitor Hadoop cluster connectivity and security.
- Manage and review Hadoop log files.
- File system management and monitoring.
- Develop CI/CD principles; review and modify them iteratively.
- Maintain CI/CD tools/platforms.

Educational Qualifications: Bachelor of Science in Computer Science, Computer Engineering, or another engineering discipline with a concentration in software; or equivalent knowledge in the areas of software engineering (software requirements analysis, software design, software testing) desired.
Experience Requirements: Minimum of 5 to 7 years of hands-on experience.

Knowledge/Skills:
- Over 5 years of experience managing build pipelines for CI and CD using Jenkins or similar platforms.
- Expert skills in Linux environments (installation, filesystem, configuration, networking, package installation, shell scripting, deployment, and troubleshooting).
- Fluent in installing web servers such as Apache Tomcat.
- Proficiency in configuring and managing Hadoop clusters, HDFS, and Apache Spark.
- Expert analytical skills for analyzing all kinds of logs to determine performance bottlenecks in various components (e.g., Hadoop cluster, Kafka, RabbitMQ, and others).
- Experience developing ETL pipelines.
- Experience using Spark SQL.
- Experience implementing and administering logging and monitoring tools such as Nagios.
- Fluent in Python and Java.
- Nice to have: experience with Cloudera or Ambari Hadoop distribution/management.
- Experience developing build and deployment automation.
- Experience managing source code in Git (GitHub operations, branching, merging, etc.) a big plus.
- Experience with configuration management and orchestration tools such as Ansible.
- Experience with ETL tools such as Airflow.
- Experience with SaaS operations.
- Experience with CI build tools such as Gradle, Maven, and Jenkins.
- Excellent problem-solving skills; proven technical leadership and communication skills.
- Ability to provide hardware architectural guidance, plan and estimate cluster capacity, and create roadmaps.
- Prior experience collaborating daily with global teams across the full software development lifecycle.
- Experience working within Agile Scrum delivery.
- Good interpersonal, verbal, and written communication skills.

Additional Plus Skills:
- Instrumentation experience using Grafana, Logstash, and Kibana is a big plus.
- Experience with Kubernetes, Docker, Mesos, HAProxy, MySQL, and NoSQL databases is a big plus.

