Job Location: Houston, TX
Job Detail:
POSITION SUMMARY:
The Data Engineer will be responsible for operationalizing data and analytics initiatives for the company, including expanding and optimizing our data and data pipeline architecture and improving data flow and collection. The Data Engineer is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, data architects, and data analysts on data initiatives and will ensure that an optimal data delivery architecture is applied consistently across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
JOB RESPONSIBILITIES:
- Develop, construct, test, and maintain data architectures or data pipelines
- Ensure data architecture will support the requirements of the business
- Discover opportunities for data acquisition
- Develop data set processes for data modeling, mining, and production
- Employ a variety of languages and tools to marry systems together
- Recommend ways to improve data reliability, efficiency, and quality
- Leverage large volumes of data from internal and external sources to meet business demands
- Employ sophisticated analytics programs, machine learning, and statistical methods to prepare data for predictive and prescriptive modeling, while exploring and examining data to uncover hidden patterns
- Drive automation through effective metadata management, using modern tools, techniques, and architectures to partially or fully automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual, error-prone processes and improving productivity
- Propose appropriate (and innovative) data ingestion, preparation, integration, and operationalization techniques to optimally address data requirements
- Ensure, through data governance and compliance initiatives, that data users and consumers use the data provisioned to them responsibly
- Promote the available data and analytics capabilities and expertise to business unit leaders and educate them on leveraging these capabilities to achieve their business goals
MINIMUM ESSENTIAL QUALIFICATIONS:
- A bachelor’s or master’s degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field, or equivalent work experience
- At least five years of work experience in data management disciplines, including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks
- At least three years of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative
- Knowledge of and/or familiarity with the midstream services industry, and with the data generated in support of business activities related to gathering, compressing, treating, processing, and selling natural gas, NGLs and NGL products, and crude oil, will be strongly preferred
- Strong experience with advanced analytics tools for object-oriented/functional scripting using languages such as R, Python, Java, C++, Scala, and others
- Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management
- The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows
- Strong experience with database programming languages including SQL, PL/SQL, and others for relational databases, and knowledge of and/or certifications in emerging NoSQL/Hadoop-oriented databases such as MongoDB, Cassandra, and others for nonrelational databases
- Strong experience working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies
- Knowledge of and/or experience working with SQL-on-Hadoop tools and technologies, including Hive, Impala, Presto, and others on the open-source side, and Hortonworks Data Flow (HDF), Dremio, Informatica, Talend, and others on the commercial vendor side
- Experience working with both open-source and commercial message queuing technologies such as Kafka, JMS, Azure Service Bus, Amazon Simple Queue Service, and others, as well as stream data integration technologies such as Apache NiFi, Apache Beam, Apache Kafka Streams, Amazon Kinesis, and others
- Basic experience working with popular data discovery, analytics, and BI software tools such as Tableau, Qlik, Power BI, and others for semantic-layer-based data discovery
- Strong experience in working with data science teams in refining and optimizing data science and machine learning models and algorithms
- Basic experience working with data governance/data quality and data security teams, and specifically with data stewards and security resources, to move data pipelines into production with the appropriate data quality, governance, and security standards and certification
- Demonstrated ability to work across multiple deployment environments (cloud, on-premises, and hybrid), multiple operating systems, and containerization technologies such as Docker, Kubernetes, AWS Elastic Container Service, and others
- Familiarity with agile methodologies and the ability to apply DevOps, and increasingly DataOps, principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organization
- Strong written and verbal communication skills with an aptitude for problem-solving
- Must be able to independently resolve issues and efficiently self-direct work activities by capturing, organizing, and analyzing information
- Experience troubleshooting complicated issues across multiple systems and driving solutions
- Experience providing technical solutions to non-technical individuals
- Demonstrated team-building skills
- Ability to deal with internal employees and external business contacts while conveying a positive, service-oriented attitude
- Willingness to travel to company locations (up to 5%)
EQUAL EMPLOYMENT OPPORTUNITY:
Targa Resources provides equal employment opportunities based on merit, experience, and other work-related criteria and without regard to race, color, ethnicity, religion, national origin, sex, age, pregnancy, disability, veteran status, or any other status protected by applicable law. We also strive to provide reasonable accommodation to employees’ beliefs and practices that do not conflict with Targa’s policies and applicable law. We value the unique contributions that every employee brings to their role with Targa.