Open Systems Inc. | Hiring | Big Data Engineer (ONLY W2) | United States | BigDataKB.com | 2022-09-28


Job Location: United States

Position: Big Data Engineer (Remote)

Location: Atlanta, Georgia (Remote)

Contract: 12+ Months

Job Description:

Our client is seeking an experienced Senior Data Engineer with strong Big Data engineering experience and a demonstrated affinity for working with others to create successful solutions. You will join a smart, highly skilled team with a passion for technology and work on state-of-the-art Big Data platforms (Cloudera). The candidate must be an excellent communicator, both written and verbal, with experience working with business areas to translate their data data needs and questions into project requirements. The candidate will participate in all phases of the Data Engineering life cycle and will independently and collaboratively write project requirements, architect solutions, and perform data ingestion development and support duties.

Skills and Experience:

Required:


  • Architect, design, and build Big Data applications to support business strategies and deliver business value
  • Hands-on experience in development, leading code reviews and testing, and working on proofs of concept using latest big data technologies
  • Advanced proficiency in SQL, data ingestion frameworks, data modeling, and working with big data
  • Mentor and inspire junior team members
  • Willingness to continually learn new technologies, and to work with and guide other team members
  • 7+ years of overall IT experience
  • 5+ years of experience as a Data Engineer or in a similar role
  • 3+ years of experience with Big Data tools/technologies such as Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, and HDFS, or with cloud platforms such as AWS and GCP
  • 3+ years of experience with high-velocity high-volume stream processing: Apache Kafka and Spark Streaming
  • Experience with real-time data processing and streaming techniques using Spark structured streaming and Kafka
  • Deep knowledge of troubleshooting and tuning Spark applications
  • 3+ years of experience with data ingestion from message queues (Tibco, IBM, etc.) and with different file formats such as JSON, XML, and CSV across different platforms
  • 3+ years of experience building, testing, and optimizing ‘Big Data’ data ingestion pipelines, architectures, and data sets
  • 2+ years of experience with Kudu and Impala
  • 2+ years of experience with Scala (and/or Python) and PySpark/Scala-Spark
  • 2+ years of experience with NoSQL databases, including HBase and/or Cassandra
  • Knowledge of Unix/Linux platform and shell scripting is a must
  • Strong analytical and problem-solving skills

Preferred (Not Required):

  • Experience with Cloudera/Hortonworks HDP and HDF platforms
  • Strong SQL skills, with the ability to write queries of intermediate complexity
  • Experience with Git version control
  • Experience with REST APIs and web services
  • Good business-analysis and requirements-gathering/writing skills

Education

  • Bachelor’s Degree required, preferably in Information Systems, Computer Science, Computer Information Systems, or a related field

Apply Here


