Thoughtworks | Hiring | Data Engineer | BigDataKB.com | 1/11/2022


Thoughtworks

Bangalore

IT Service

About Thoughtworks

Founded over 27 years ago, we've grown from a small team in Chicago to a leading global software consultancy of more than 10,000 employees. With our roots in custom systems and agile software delivery, we're at the forefront of defining the tech principles used by some of the world's most successful organizations. We have invested in organic growth, building on the strategy, design, data and engineering capabilities required to bring a truly integrated approach to solving our clients' toughest challenges. Our collaborative, cross-functional teams deliver real results, fast. Thoughtworks challenges curious minds to make a real impact. Together, we're creating a place where you are free to make your mark on the world through technology.


Job Description

Responsibilities:

  • Leverage AWS services to ingest data from source systems within their business domain
  • Leverage the Databricks platform to process, clean and transform data
  • Ensure and measure data quality according to business requirements and tracked KPIs
  • Monitor and maintain existing data products such as data ingestion pipelines and ETL jobs
  • Alert and act on data quality issues and system incidents within their business domain
  • Ensure that data is available, accurate and fit for purpose, following governance guidelines and compliance policies
  • Ensure that knowledge of the data mesh framework is shared
  • Write and publish clear documentation of approaches and functionalities
  • Take part in all agile ceremonies (planning, backlog refinement, retrospectives, stand-ups) and necessary team meetings
  • Make all artifacts available through a version-control tool such as Git
  • Be part of the review process both as a reviewer and as a submitter (PRs, architecture decisions, approaches)
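As an illustration of the data-quality responsibilities above, here is a minimal sketch of a rule-based completeness check of the kind used to track quality KPIs. The column names, thresholds, and the `check_quality` helper are hypothetical and not part of the role description:

```python
# Hypothetical sketch of a rule-based data-quality check for a batch of
# records. Column names and thresholds are illustrative only.

def completeness(records, column):
    """Fraction of records where `column` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(column) not in (None, ""))
    return filled / len(records)

def check_quality(records, rules):
    """Evaluate completeness rules: {column: threshold} ->
    {column: (score, passed)}."""
    results = {}
    for column, threshold in rules.items():
        score = completeness(records, column)
        results[column] = (score, score >= threshold)
    return results

if __name__ == "__main__":
    batch = [
        {"order_id": "A1", "amount": 10.0},
        {"order_id": "A2", "amount": None},
        {"order_id": "A3", "amount": 7.5},
    ]
    # KPI: order_id must always be present; amount at least 90% of the time.
    print(check_quality(batch, {"order_id": 1.0, "amount": 0.9}))
```

In practice such checks would run inside a Spark or Databricks job and feed monitoring dashboards, but the thresholding logic is the same.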

Skill Set:

  • 2 to 6 years of strong experience developing software in an Agile environment, following best practices
  • Experience with the AWS cloud platform
  • Ability to write production-grade Python and Scala code
  • Ability to leverage Spark through the Databricks platform (PySpark or Scala)
  • Ability to create and automate data ingestion pipelines for both batch and streaming data
  • Experience with monitoring and alerting tooling such as AWS CloudWatch and Datadog
  • Experience deploying software using orchestration tools such as Jenkins

Experience range: 2-6 years

Skills: Agile Methodologies / AWS / Python / PySpark / Scala

Apply Here


