Job Location: Texas, United States
Requirements
- Bachelor's degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics, or another STEM field
- 5+ years of professional data development experience
- 3+ years of development experience with Hadoop/HDFS
- 3+ years of development experience with Java or Python
- 3+ years of experience with Spark/PySpark
- Continuous Integration/Continuous Delivery (CI/CD) experience
- Full understanding of ETL concepts
- Exposure to version control systems (Git, SVN)
- Proficient with relational data modeling
Key Responsibilities
Take ownership of features and drive them to completion through all phases of the entire 84.51° SDLC. This includes internal and external facing applications as well as process improvement activities.
- Participate in design, development and support of Oracle and Hadoop based solutions
- Perform unit and integration testing
- Collaborate with architects and with lead and senior engineers to ensure consistent development practices
- Collaborate with other engineers to solve and bring new perspectives to complex problems
- Drive improvements in people, practices, and procedures
- Embrace new technologies and an ever-changing environment
Additional Details
- Required skills: Azure, Snowflake, Python
- Background check: Yes
- Drug screen: Yes
- Candidate must be your W2 employee: Yes
- Exclusive to Apex: No
- Face-to-face interview required: No
- Candidate must be local: No
- Candidate must be authorized to work without sponsorship: No
- Interview times set: No
- Type of project: Development/Engineering
- Master job title: Big Data: DBA
- Branch code: Cincinnati