Job Location: Saint Louis, MO
Job Detail:
Hello Team,
Immediate requirement:
Req:
Cloud Data Engineer
Client: Internal project at Technology Partners, with end client Washington University
Contract: 1+ year
Client Location: St. Louis, MO (Central Time)
Remote: fully remote; must be willing to join daily video calls and be recorded for client documentation purposes
Work Status: USC / Green Card
Excellent English/communication skills
Rate: W2 Role
NO ADDITIONAL LAYERS – WILL NOT HONOR
Required
- 5+ years of data engineering experience
- Java or Scala development experience
- Real-world use cases with data structures and algorithms
- Extensive experience with Apache Spark, Data Plan Storage, Delta Lake, Delta Pipelines, and Performance Engineering, in addition to standard database/ETL knowledge
- Building cloud data pipelines, architectures, and data sets with advanced knowledge of stream-based, API data extraction processes (bulk API is a must)
- Building data models and DB schemas for read and write performance optimization
- Supporting data transformation in serverless environments, and overseeing metadata automation and management in a multitenant cloud environment
Must fill out the grid with # of years of experience as a Business Analyst (BA), or the candidate will not be reviewed.
—————
For Submittal – please provide candidate info in the following format:
1/26/22
Candidate Name
C-Phone
C-Email
Years Business Analyst
GC/USC
Location
Rate W2
Your email
Role
The Data & Artificial Intelligence (DAI) practice is an exciting solution consulting group within the well-established Technology Partners organization. This growing team is obsessed with solving the hardest problems by enabling data teams, analysts, and business users. Our work leveraging delta technologies and AI allows our customers to focus on high-ceiling work critical to their overall mission (clinical cancer outcomes, education, biomedical research, precision shopper analytics, etc.).
Our team is well aligned with Microsoft and Databricks as channel partners; as a result, most technical work takes place within Azure and leverages Databricks' Apache Spark engine, along with our own proprietary enhancements. The responsibilities below require extensive knowledge of Apache Spark, Data Plan Storage, Delta Lake, Delta Pipelines, and Performance Engineering, in addition to standard database/ETL knowledge.
Requirements
- BS in Computer Science (preferably an MS or PhD in distributed data systems or a quantitative field)
- Understands the nature of a multi-year effort of iterative deliverables in both client and product environments
- Driven by delivering value and impact daily
- 3+ years of full-stack engineering experience (Java or Scala)
- 3+ years of experience in real-world use cases with data structures and algorithms
- 2+ years of experience in distributed systems, databases, and Spark
- 5+ years of experience in a data engineering role
- 5+ years of SQL experience working with relational databases, writing complex queries for a variety of databases
- 3+ years of experience building cloud data pipelines, architectures, and data sets with advanced knowledge of stream-based, API data extraction processes (bulk API is a must)
- 3+ years of experience building data models and DB schemas for read and write performance optimization
- 2+ years of experience supporting data transformation in serverless environments, and overseeing metadata automation and management in a multitenant cloud environment
- Experience with BI tools is a plus