Job Location: Mumbai
As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate systems. You'll help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations, and troubleshoot and resolve issues in development, testing, and production environments.
Responsibilities:
- Operate and maintain systems supporting the provisioning of new clients, applications, and features.
- Day-to-day monitoring of the production service delivery environment to ensure all services and applications are operating optimally and SLAs are met.
- Handle and manage the end-to-end (E2E) ML lifecycle.
- Build E2E ML pipelines/accelerators for either batch or real-time predictions.
- Hands-on experience with Python 3.x, Pandas, NumPy, and SQL.
- Should have hands-on experience with the technologies below:
- Model Repository (either of): MLflow, Kubeflow Model Registry.
- Machine Learning Services (either of): Kubeflow, DataRobot, Hopsworks, or any relevant end-to-end ML PaaS/SaaS.
- Hands-on experience with REST APIs for real-time and near-real-time (streaming) serving.
Skills:
- At least 3 years' experience working with ML services and DevOps concepts and practices.
- Experience working in cross-functional Agile engineering teams.
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
- Experience with scripting and coding using Python.
- Excellent Written and Verbal Communication Skills.
- Model management and model performance monitoring (drift monitoring).
- Git for Source code management.
- Ability to collaborate effectively with highly technical resources in a fast-paced environment.
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions.
- Knowledge of machine learning frameworks (either of): TensorFlow, Caffe/Caffe2, PyTorch, Keras, MXNet, scikit-learn.
Good To Have (Optional):
- Software deployment and configuration management in both QA and Production environments.
- Strong Python coding skills and a good grasp of OOP concepts.
- Design, build, and optimize application containerization and orchestration with Docker and Kubernetes on AWS or Azure.
- Produce build and deployment automation scripts to integrate services.
- Experience with one of the cloud computing platforms: Google Cloud, Amazon Web Services (AWS), or Azure.
- Experience with big data technologies preferred: Hive, PySpark, Kafka.
- Experience with logging tools such as Splunk, Elasticsearch, Kibana, Fluentd, and Logstash.
- Experience with monitoring tools such as Munin, Prometheus, and Grafana.