LocaleAI Technologies | Backend / Data Engineer Intern | Bengaluru | Bharat | BigDataKB.com | 17 Oct 2022

Job Location: Bengaluru

Duration: 6 months. Preferred: 2022 graduates / fresh out of college, or 2021 graduates with relevant internship experience.


What would you spend most of your time doing?

As a backend/data engineer at an early-stage startup, you will be responsible for laying the foundation of all engineering systems. Your day might begin with designing a new microservice expected to handle 500 million pings on its first day in production, and end with fierce debates on coding guidelines or best practices for handling data consistency across distributed systems.

You will be responsible for building and operating our data pipelines, which process millions of pings in production. You will orchestrate workflows, manage clusters, metadata, and transformations. Apart from collecting and monitoring performance metrics, you will play an integral part in discussing and implementing security best practices.
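A recurring concern in pipelines that process millions of pings is idempotency: with at-least-once delivery (as from Kafka), the same ping can arrive more than once, and downstream aggregates stay consistent only if duplicates are dropped. Below is a minimal, illustrative sketch of deduplicated ingestion; the `Ping` fields and `PingIngestor` name are assumptions for illustration, not LocaleAI's actual code, and a real system would back the seen-set with a persistent store.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Ping:
    event_id: str   # unique id assigned by the producer (assumed field)
    device_id: str
    lat: float
    lon: float


class PingIngestor:
    """Processes each ping exactly once, even under redelivery."""

    def __init__(self) -> None:
        # In production this would be a persistent store (e.g. a DB table
        # with a unique constraint), not an in-memory set.
        self._seen: set[str] = set()
        self.processed: list[Ping] = []

    def ingest(self, ping: Ping) -> bool:
        """Return True if newly processed, False if a duplicate delivery."""
        if ping.event_id in self._seen:
            return False
        self._seen.add(ping.event_id)
        self.processed.append(ping)
        return True
```

With this shape, a retried delivery of the same `event_id` is a cheap no-op rather than a double-counted event.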

Being an enterprise-focused company, our systems don’t scale linearly or even exponentially. Every new customer brings the scale of millions of customers that they serve. We need to build robust, scale-ready and fault-tolerant services from day one. Our clients rely on it.
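Fault tolerance at this scale typically starts with disciplined retries between services. As a rough sketch (not the company's code), here is retry with exponential backoff and jitter; the parameter names and defaults are illustrative assumptions:

```python
import random
import time


def retry(fn, *, attempts=5, base_delay=0.1, max_delay=5.0, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff.

    Jitter spreads retries out so that many clients failing at once
    do not all hammer the recovering service at the same instant.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(delay * random.uniform(0.5, 1.0))
```

Injecting `sleep` as a parameter keeps the helper testable without real waiting.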


Best for someone:

1. Who is a polyglot, fluent in system design principles rather than in a particular language or framework. It will be your responsibility to evaluate all available options and pick the best one for the job.

2. Who can move fast without breaking things and insists on rigorous testing.

3. Who is excited to own the outcome of what they build while clearly communicating the steps to get there.

Tech stack

  • The frontend is built on VueJS + DeckGL, and we follow a component-driven architecture.
  • Our backend services are written in Python or Java. Our applications are extremely data-intensive and data consistency is our holy grail. Our infrastructure handles upwards of 50 million requests in a day.
  • We use Postgres and ClickHouse as databases. We use Kafka for our real-time streaming data pipelines and BigQuery for large-scale data processing.
  • We are fully hosted on GCP and use most of its services (GKE, CloudSQL, Composer, Dataproc, etc.). We use Docker containers for deployment and maintain CI/CD pipelines for quick iterations and testing.

We believe frameworks are only a means to an end. If you are only interested in using a particular framework, we are probably not the right fit. We are looking for someone with broad experience, though previous experience with these technologies is a plus!

If you are looking to learn how to build a company from scratch, if building systems at scale excites you, if you are mesmerized by what the world of GIS can offer, or if you are passionate about zero-to-one, we will see you on the other side! 🙂




Apply Here
