Koch Business Solutions India | Jobs | Data Engineer | BigDataKB.com | 31-03-22


Job Location: Bangalore

Description

The Data Engineer will be part of the global software development team, a mix of mid- to senior-level developers who support and develop existing applications for Koch Minerals and Trading, formerly known as Koch Supply and Trading (part of Koch Industries).

Koch Minerals & Trading companies offer global coverage and world-class market knowledge in diverse commodities, in both their physical and paper forms. These companies are backed by worldwide resources and Koch Resources, LLC's strong credit rating. Koch Minerals & Trading companies have a unique, analysis- and relationship-based foundation that has become their trademark in the global marketplace. From worldwide locations, the companies' traders, accounting professionals, market analysts, credit risk specialists, logistics specialists, and information technology professionals connect with international markets. To learn more, visit www.ksandt.com.

The Koch Technology Center (KTC) is being developed in India to extend Koch's IT operations and act as a hub for innovation in the IT function. As KTC rapidly scales up its operations in India, its employees will have opportunities to carve out career paths within the organization. This role will join on the ground floor and play a critical part in building out the KTC over the next several years. Working closely with regional and global colleagues will give employees significant global exposure. The Data Engineer will report to the Software Development Team Lead of the KTC.

A Day In The Life Could Include:

(job responsibilities)

  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build analytics tools that use the data pipeline to provide actionable insights
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
  • Implement and support data pipelines and services
  • Contribute to architectural discussions about new data system designs
  • Prioritize, organize, and coordinate simultaneous tasks and projects
  • Experiment with new technologies and solutions, identifying ways to use technology to create superior value for customers
  • Demonstrate high initiative and a passion for driving rapid technical advancement
  • Collaborate with a diverse IT team, including business analysts, project managers, architects, developers, and vendors, to create or optimize innovative technologies and solutions
  • Communicate complex solutions to stakeholders and other team members
  • Apply strong conceptual, analytical, and problem-solving abilities

What You Will Need To Bring With You:

  • (experience & education required) Experience in one or more common scripting languages (Python, Node.js preferred)
  • 3+ years' experience writing SQL queries against relational databases (SQL Server, PostgreSQL, etc.)
  • 2+ years' experience with AWS cloud services: S3, EC2, Lambda, Snowflake
  • 2+ years' experience with CI/CD tools and concepts (Azure DevOps, Bash, PowerShell, Terraform)
  • 2+ years' experience with Git or other version control technologies
  • 3+ years' experience building data pipelines or ETL processes
  • 3+ years' experience tuning queries for performance and scalability
  • 1–3 years' experience with visualization tools such as Tableau or Power BI
  • 3+ years' experience with AWS-native tools: S3, Redshift, Aurora, DynamoDB, Lambda, Kinesis, or similar
  • 2+ years of Apache Spark development experience (preferably PySpark)
  • Strong analytic skills related to working with unstructured datasets
  • Experience with data pipeline and workflow management tools
  • Experience with object-oriented/functional scripting languages (Python)
  • Strong project management and organizational skills

What Will Put You Ahead:

(experience & education preferred)

  • Understanding of metadata and Master Data Management concepts and practices
  • Experience working with enterprise data stores, data lakes, external data sources, and APIs
  • Experience working with remote teams spread across the globe
  • Experience with a broad set of analytics use cases, such as supply chain, transportation, or sales and operations
  • Bachelor's Degree in Computer Science, Engineering, or Mathematics preferred

Apply Here
