Job Location: Bengaluru, Karnataka, India
Greetings from Technodysis!
We have an opening for an Azure Data Engineer role with Databricks.
Exp: 6+ years
Relevant: 3+ years in Databricks & ADF
Mandatory Skills: Azure Data Factory, Databricks, SQL & Python/PySpark
Location: Bangalore – Hybrid (Day 1 work from office); no remote available
Notice Period: Immediate to 30 days only
Data Engineer Databricks – Job Description
• Candidate with 6+ years of IT experience and at least 2 years in a Data Engineer role
• Required: a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
• Experience with Azure: Azure Data Factory, Azure Data Lake Storage, Databricks
• Experience with relational SQL and NoSQL databases
• Experience with object-oriented/functional scripting languages: any one of PySpark, Python, SQL, Scala, Spark SQL, etc.
• Advanced working knowledge of SQL, including query authoring and working familiarity with a variety of relational databases
• Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
• Strong analytic skills related to working with unstructured datasets.
• Build processes supporting data transformation, data structures, metadata, dependency and workload management.
• A successful history of manipulating, processing and extracting value from large disconnected datasets.
• Experience with Agile/Scrum sprint-based development
• Experience supporting and working with cross-functional teams in a dynamic environment.
• Experience with stream-processing systems such as Apache Storm
• Experience with big data tools: Hadoop, Spark, Kafka, etc.
• Prior SSIS experience
• Experience in Stream Analytics, Azure Functions, Serverless Architecture, ARM Templates
• Experience with Postgres and Cassandra
• Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
• Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data stores.
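To give candidates a feel for the day-to-day transformation work described above, here is a minimal, hypothetical sketch in plain Python (invented data, not part of the job description) of the kind of group-and-aggregate logic one would typically express in PySpark or SQL on Databricks:

```python
# Hypothetical example: total order amount per customer.
# In PySpark this would roughly correspond to
#   df.groupBy("customer").agg(F.sum("amount"))
from collections import defaultdict

orders = [
    {"customer": "A", "amount": 120.0},
    {"customer": "B", "amount": 75.5},
    {"customer": "A", "amount": 30.0},
]

def total_per_customer(rows):
    """Group rows by customer and sum their amounts (illustrative only)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["customer"]] += row["amount"]
    return dict(totals)

print(total_per_customer(orders))  # -> {'A': 150.0, 'B': 75.5}
```

In a real pipeline this aggregation would run on a Spark DataFrame inside a Databricks notebook, orchestrated by an Azure Data Factory pipeline rather than as a standalone script.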