Job Location: The Great Bharat
Job Detail:
About the company:
TensorGo Technologies is an enterprise-grade, low-code PaaS company for computer vision products. The platform enables users to build even the most complex ML/DL applications more easily by integrating our APIs. We custom-build state-of-the-art neural networks to solve some of the world's most challenging problems, and we are shaping a smarter tomorrow through our deep learning and computer vision-powered products.
Our fundamental goal is to help companies scale up their businesses, improve their processes, bring down costs, and enhance their customer engagement as efficiently as possible. With powerful, enterprise-ready solutions years ahead of the game, we make the future happen at TensorGo.
Gartner Inc. has recognized TensorGo as a Cool Vendor in its Cool Vendors in AI for Computer Vision – 2022 report.
We also won the accolade for the Best Overall Pitch in the prestigious Oracle APAC Startup Idol 2022.
Visit us at: https://tensorgo.com for more information.
Job Title: Data Engineer
Experience: 5 to 7 years
Work Location: Hyderabad (option to work remotely)
Skillset: Python, PySpark, Kafka, Airflow, SQL, NoSQL, API Integration, Data Pipelines, Big Data, AWS / GCP / OCI / Azure
Calling out Python ninjas to showcase their expertise in a stimulating environment, geared towards building cutting-edge products and services. If you have a knack for data processing and scripting, and are excited about delivering scalable, high-quality data ingestion and API integration solutions, then we are looking for you!
You will get a chance to work on exciting projects at our state-of-the-art office, grow along with the
company and be fruitfully rewarded for your efforts!
Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions offered to product development.
● Bridging the gap between development, engineering, and data ops.
● Creating, maintaining, and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details.
● Strong multitasking skills and the ability to work in a fast-paced environment.
● 5+ years of experience with Python for developing scripts.
● Know your way around RESTful APIs (able to integrate; publishing not required).
● Familiarity with pulling and pushing files via SFTP and AWS S3.
● Experience with any cloud solution, including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data from relational databases.
● Familiarity with Linux (and Linux work environments).
● Excellent written and verbal communication skills.
● Extracting, transforming, and loading data into internal databases and Hadoop.
● Optimizing new and existing data pipelines for speed and reliability.
● Deploying product builds and product improvements.
● Documenting and managing multiple repositories of code.
● Experience with SQL and NoSQL databases (MySQL, Cassandra).
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena).
● Hands-on experience with Airflow.
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing, and indexing of data.
● Experience reading data from Kafka topics (both live-stream and offline).
● Experience with PySpark and DataFrames.
Responsibilities:
You will:
● Collaborate across an agile team to continuously design, iterate, and develop big data systems.
● Extract, transform, and load data into internal databases.
● Optimize new and existing data pipelines for speed and reliability.
● Deploy new products and product improvements.
● Document and manage multiple repositories of code.