ConsumerTrack | Hiring | Staff Data Engineer | BigDataKB.com | 2022-09-26


Job Location: remote

How will you make an impact

  • Build and maintain multiple data pipelines to ingest new data sources (API and file-based) and support products used by external users and internal teams.
  • Build tools to evaluate and automatically monitor data quality, and develop automated scheduling, testing, and distribution of data feeds.
  • Work with data engineers, data scientists, and product managers to design, rapidly prototype, and productize new data product ideas and capabilities.
  • Design and build cloud-based data lakes and data warehouses.
  • Conquer complex problems by finding new ways to solve them with simple, efficient approaches, focusing on our platform's reliability, scalability, quality, and cost.
  • Collaborate with the team to perform root cause analysis and audit internal and external data and processes to help answer specific business questions.
What will you bring to us

  • Master's degree (or a B.S. degree with relevant industry experience) in math, statistics, computer science, or an equivalent technical field
  • Experience with dimensional data modeling and schema design in a database or data warehouse
  • Expertise with scripting languages such as Python and writing efficient and optimized SQL.
  • Working experience in building data warehouses and data lakes.
  • Experience working directly with data analysts to bridge business requirements with data engineering.
  • Experience with AWS infrastructure
  • Ability to operate in an agile, entrepreneurial start-up environment and prioritize
  • Excellent communication and teamwork, and a passion for learning
  • Curiosity and passion for data, visualization, and solving problems
  • Willingness to question the validity and accuracy of data and assumptions

Preferred Qualifications:

  • Experience building data warehouses, data lakes, and data pipelines using Snowflake/Redshift and other AWS technologies.
  • Experience with large-scale distributed systems with large datasets.
  • Hands-on experience with event streams and stream processing using modern tools such as Kafka, Pulsar, Spark, and Kinesis, including an understanding of when streaming vs. batch processing is appropriate and the tradeoffs in a given context.
  • Knowledge of advertising platforms.

Apply Here


