Job Location: Los Angeles, CA
Ready to shake things up? Splunk is looking for an individual to join its Global Business Operations (GBO) team to boost Splunk’s fast-growing business. The GBO team ensures data-driven decision-making throughout the company. Join us as we pursue our disruptive new vision to make machine data accessible, usable, and valuable to everyone. We continue to be on a tear while enjoying incredible growth year over year.
As a Cloud Data Engineer, you should be an expert in data warehousing technical components (e.g., ETL, ELT, cloud databases, and reporting), infrastructure (e.g., hardware and software), and their integration. You should have a deep understanding of the architecture of enterprise-level data lake solutions spanning multiple platforms (RDBMS, AWS, cloud). You should be an expert in the design, creation, management, and business use of extremely large datasets, and you should have excellent business and communication skills so you can work with business owners to develop and define key business questions and build datasets that answer them. You are expected to build efficient, flexible, extensible, and scalable ETL and reporting solutions.
What you’ll do: Yeah, I want to and can do that.
- As a Data Engineer, you will be responsible for engineering data pipelines for Splunk’s enterprise data platform, democratizing datasets, enabling advanced analytics capabilities, and integrating data from various systems and applications. You will work as part of an evolving Global Business Operations team to rapidly design, secure, build, test, and release new data enablement capabilities. You will collaborate closely with other specialists, Product Managers, and key stakeholders across the company.
- Build large-scale batch and real-time data pipelines using cloud data technologies such as Snowflake, Matillion, Kubernetes, FiveTran, Python, Apache Airflow, and Apache Kafka.
- Serve as a resource for data management implementations on other technology teams and collaborate with data owners, business owners, and leaders.
- Support the design and development of framework-based data integration and interoperability across multiple Splunk business applications.
- Apply advanced skills in Python, SQL, data integration, data modeling, and data architecture.
Requirements: I’ve already done that or have that!
- A minimum of 5 years of related experience.
- 1+ years of experience as a Data Warehouse Architect or Data Engineer.
- 1+ years of experience driving adoption and building automation of data management services and tools.
- 1+ years of experience with API-based ELT automation frameworks, data management, or interface design, development, and maintenance.
- Large-scale design, implementation, and operation of cloud data storage technologies such as AWS Redshift, Snowflake, Kubernetes, etc.
- 2+ years of experience with programming, scripting, and data science languages such as Python, SQL, etc.
- Experience building data models, including conceptual, logical, and physical models, for enterprise relational and dimensional databases.
- Advanced knowledge of Big Data concepts for organizing both structured and unstructured data.
Preferred knowledge and experience: These are a huge plus.
- Knowledge of Splunk products.
Education: Got it!
- Bachelor’s degree, preferably in Computer Science, Information Technology, or Management Information Systems, or equivalent years of industry experience.
We value diversity at our company. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or any other applicable legally protected characteristics in the location in which the candidate is applying.
For job positions in San Francisco, CA, and other locations where required, we will consider for employment qualified applicants with arrest and conviction records.

