Job Location: Bangalore/Bengaluru
- Responsible for the operation and further development of our multi-cloud container orchestration solution.
- Work together with software developers and solution architects to plan, design, test, implement and maintain professional integration solutions in a multi-cloud environment
- Design and develop data pipelines for data ingestion and transformation using Python (PySpark)/Spark SQL
What you'll bring
- 6-8 years of experience in the DevOps field.
- BE/BTech/ME/MCA degree in Computer Science or Engineering.
- 3-4 years of recent, in-depth experience implementing the Azure data platform. Experience managing projects through the entire project lifecycle.
- At least 3 years of relevant experience with Azure Data Factory, Azure Data Lake, Azure DevOps, Azure Databricks, and Azure SQL
- Proficient in Azure data integration and Azure data architecture
- Proven experience using the Microsoft Azure data stack (ADFv2, Azure SQL DB, Azure SQL Data Warehouse, Azure Data Lake, Azure Databricks, Analysis Services, Cosmos DB)
- Hands-on experience designing and developing data pipelines for data ingestion and transformation using Python (PySpark)/Spark SQL
- Experience with Azure data migration patterns
- Experience in Python and C# is mandatory
- Experience with Agile/Scrum methodology is essential
- Business fluent in written and spoken English
Good to know:
- Knowledge of public cloud platforms such as Google Cloud Platform (GCP), Amazon Web Services (AWS), or Microsoft Azure
- Experience working with version control (GitHub) and CI/CD pipelines using Azure DevOps