CrowdStrike | Sr. DevOps Engineer, Data Platform (Remote) | Dallas, TX | United States | BigDataKB.com | 11/14/2022


Job Location: Dallas, TX

About The Role:

CrowdStrike is looking to hire a Senior DevOps Engineer to join the Data Infrastructure team. This team is on a mission to create a hyperscale data lake that helps find bad actors and stop breaches. The team builds and operates systems to centralize all of the data the Falcon platform collects, making it easy for internal and external customers to transform and access the data for analytics, machine learning, and threat hunting.

As a Systems Engineer on this team, you will be responsible for building our Hadoop ecosystem in our data center, including HDFS, YARN, Hadoop cluster management, Spark, Hive, Presto, monitoring, and logging. We are looking for candidates who have set up petabyte-scale Hadoop clusters in a data center and are passionate about solving problems at high scale. This role involves training and mentoring junior engineers.


Responsibilities:

  • Perform and oversee a variety of functions to ensure that our Hadoop infrastructure is available, reliable, stable, and secure

  • Apply judgment in analyzing and selecting technologies, and install and maintain the software and hardware systems that allow multiple engineering and data science teams to interact with the platform

  • Manage deployment, monitoring, and alerting dashboards with the goal of establishing a fully automated DevOps model

  • Build and maintain the Hadoop infrastructure so that software engineers, data analysts, and data scientists can run jobs to gather data and insights


Desired Skills and Experience:

  • Hadoop certification is desirable

  • Understanding of Apache Hadoop ecosystem technologies (HDFS, YARN, Apache Spark, Kafka, ZooKeeper, Hive, Ranger, Presto)

  • Experience with large-scale business critical platforms (primarily Linux), server-grade machines, shell scripting

  • Solid understanding of databases (SQL, NoSQL, Hive, Cassandra, in-memory databases like Redis)

  • Prior experience with Grafana and Prometheus is desirable

  • Familiarity with Terraform and Chef is strongly preferred

  • Proficiency in a scripting language (Python and/or shell scripting)

  • Proven ability to work with both local and remote teams

  • Strong communication skills, both verbal and written

  • Bachelor’s degree in an applicable field, such as CS, CIS or Engineering





Apply Here
