Job Location: United States
Overview
We are expanding our efforts into complementary data technologies for decision support, focused on ingesting and processing large data sets, including data commonly referred to as semi-structured or unstructured. Our interest is in enabling data science and search-based applications on large, low-latency data sets, with processing in both batch and streaming contexts. To that end, this role will engage with team counterparts to explore and deploy technologies for creating data sets through a combination of batch and streaming transformation processes. These data sets support offline and inline machine learning training and model execution, as well as search-engine-based analytics. Exploration and deployment activities include identifying opportunities that impact business strategy, collaborating on the selection of data solutions software, and contributing to the identification of hardware requirements based on business needs. Responsibilities also include coding, testing, and documenting new or modified scalable analytic data systems, including automation for deployment and monitoring. This role works with team counterparts to develop solutions in an end-to-end framework on a group of core data technologies.
Job Duties
Responsibilities
- Code, test, deploy, orchestrate, monitor, document, and troubleshoot cloud-based data engineering processes and associated automation in accordance with best practices and security standards throughout the development lifecycle
- Work closely with data scientists, data architects, ETL developers, other IT counterparts, and business partners to identify, capture, collect, and format data from external sources, internal systems, and the data warehouse to extract features of interest
- Contribute to evaluation, research, and experimentation efforts with batch and streaming data engineering technologies in a lab setting to keep pace with industry innovation
- Work with data engineering groups to showcase the capabilities of emerging technologies and to enable the adoption of these technologies and associated techniques
- Perform other duties as assigned
- Comply with all company policies and procedures
Reporting Relationship
Data Integration Manager, US
Qualifications
Knowledge
- Experience processing large data sets using Hadoop, HDFS, Spark, Kafka, Flume, or similar distributed systems
- Experience ingesting source data in formats such as JSON, Parquet, and SequenceFile, and from sources such as cloud databases, MQ, and relational databases such as Oracle
- Experience with cloud technologies (such as Azure, AWS, and GCP) and infrastructure-as-code toolsets such as Azure ARM templates, HashiCorp Terraform, and AWS CloudFormation
- Understanding of cloud computing technologies, business drivers, and emerging computing trends
- Thorough understanding of hybrid cloud computing: virtualization technologies; the Infrastructure as a Service, Platform as a Service, and Software as a Service cloud delivery models; and the current competitive landscape
- Experience with Azure cloud services including, but not limited to, Synapse Analytics, Data Factory, Databricks, and Delta Lake
- Working knowledge of object storage technologies including, but not limited to, Azure Data Lake Storage (ADLS) Gen2, S3, MinIO, and Ceph
- Experience with containerization including, but not limited to, Docker, Kubernetes, Spark on Kubernetes, and the Spark Operator
- Working knowledge of Agile development (SAFe, Scrum) and application lifecycle management
- Strong background with source control management systems (Git or Subversion); build systems (Maven, Gradle, Webpack); code quality tools (Sonar); artifact repository managers (Artifactory); and continuous integration/continuous deployment (Azure DevOps)
- Experience with NoSQL data stores such as Cosmos DB, MongoDB, Cassandra, Redis, or Riak, or with technologies that combine NoSQL with search, such as MarkLogic or Lily Enterprise
- Experience or familiarity with ETL/ELT and business intelligence technologies such as Informatica, DataStage, Ab Initio, Cognos, BusinessObjects, or Oracle Business Intelligence
Skills
- Ability to prototype quickly, perform critical analysis, and apply creative approaches to solving complex problems
- Excellent interpersonal, written, and verbal communication skills
- Excellent analytical and troubleshooting skills
- Ability to accept change and to adapt to shifting organizational challenges and priorities
Education
- High School Diploma or equivalent required
- Bachelor’s Degree in related field or equivalent work experience required
Experience
- 2-4 years of hands-on software engineering experience including, but not limited to, Spark, PySpark, Java, Scala, and/or Python required
- 2-4 years of hands-on experience building ETL/ELT data pipelines to process big data in data lake ecosystems, on premises and/or in the cloud, required
- 2-4 years of hands-on experience with SQL, data modeling, relational databases, and NoSQL databases required
Working Conditions
- Subject to stressful situations
- Long hours may be required
- Limited travel may be required to support business needs