Xoom Inc | Hiring | MTS1, Data Engineer | Chennai | BigDataKB.com | 2022-09-28


Job Location: Chennai

  • Works with business units and Product Dev teams to design, develop and deliver data solutions on one of the largest data platforms in the world.
  • Supports business units by providing data in a ready-to-use form to data analysts and data scientists for Business insights, predictive analytics, machine learning, etc.
  • Owns and is accountable for the design and development of a data solution feature or a data pipeline
  • Spends most of the time developing data pipelines/ETL code to solve various data requirements using traditional ETL tools, but also has the skills to build custom data pipelines.
  • Code is well-commented, easy to maintain, and can be reused across a sub-system or feature. Code may persist for the lifetime of a software version
  • Code is thoroughly tested with very few bugs, and is supported by unit tests. Able to lead feature or sub-system design reviews and code reviews and be recognized as the go-to developer for that component.
  • Leads in architecture discussions, proposes and discusses solutions to system and product changes.
  • Should be comfortable working in an agile environment and with cross-functional teams, and should have an appetite to learn and be flexible in picking up new technologies.

Skills and Experience

  • A Master's or Bachelor's degree in computer science or equivalent, with 10+ years of data engineering experience
  • Good understanding of Data Warehousing and Business Intelligence application design and development.
  • Good understanding of Data Modelling Concepts.
  • Deals with both structured and unstructured data sets, and therefore applies different data-architecture approaches when building data pipelines.
  • ETL development using various big-data technologies and open-source data ingestion and processing frameworks
  • Gains experience with various big-data technologies and solutions (Hive, Pig, Spark) to optimize processing of extremely large datasets.
  • Excellent debugging skills with a solid understanding of data and testing to capture data quality gaps/issues.
  • Good understanding of data processing, data structure optimization and design for scalability.
  • Can be relied on to deliver a data pipeline feature/component on time and to requirements, without data quality issues.
  • Understands and is able to reason about business requirements as they relate to their area of subject-matter expertise.
  • Able to help resolve data issues.
  • Familiarity with version control systems, particularly Git.
  • Strong analytical and problem solving skills.
  • Good understanding of database principles and SQL beyond just data access.
  • Proficient in scripting languages, e.g. Unix/Linux shell scripting

Intermediate-level knowledge of the following technologies, with expertise in a few of them:

  • Data transformation tools such as Informatica
  • Strong Database fundamentals (Oracle)
  • HBase
  • Hive/Pig
  • Scripting (Shell, Python, Java, Scala)
  • Hadoop ecosystem
  • Elasticsearch
  • Stream processing (Kafka)

Intermediate-level knowledge of the following business domains is a plus:

  • Payments and banking
  • Merchant servicing
  • E-commerce

Apply Here
