Job Location: Hyderabad/Secunderabad
Looking for a candidate with 5 years of experience in a Data Engineer role.
Experience with data pipeline and workflow management tools.
Experience with AWS cloud services: EC2, Lake Formation, Athena, Glue, Redshift, Redshift Spectrum, and S3 storage; expertise with Apache Spark/PySpark, MS SQL Server, data pipelines, SQL, and PL/SQL.
Overview:
Define and implement a data lake reference architecture on AWS.
Define data pipeline and ingestion workflows.
Design the infrastructure required for optimal extraction, transformation, and loading of data from a variety of data sources using SQL and AWS big data technologies (an illustrative PySpark sketch follows this list).
Build analytics tools that provide actionable insights into customer acquisition and other key business performance metrics.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Keep data separated and secure across multiple data centers and AWS regions.
Create data tools for the analytics and business analyst team members, assisting them in building and optimizing our product into an innovative industry leader.
Own and manage the data dictionary.
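As a rough illustration of the pipeline work described above, the following is a minimal PySpark sketch that reads raw CSV data from S3, applies a simple cleanup, and writes partitioned Parquet back to S3 for downstream querying (for example through Athena or Redshift Spectrum). The bucket names, paths, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal PySpark ETL sketch: S3 CSV -> cleaned, partitioned Parquet on S3.
# All bucket names, paths, and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")  # hypothetical application name
    .getOrCreate()
)

# Read raw order data from a landing bucket (placeholder path; on EMR the
# s3:// scheme is available via EMRFS, elsewhere s3a:// with hadoop-aws).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-landing-bucket/orders/")
)

# Basic cleanup and derivation of a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])                     # hypothetical key column
       .withColumn("order_date", F.to_date("order_ts"))  # hypothetical timestamp column
       .filter(F.col("order_date").isNotNull())
)

# Write partitioned Parquet to a curated bucket for Athena / Spectrum to query.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```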
Skills:
Advanced SQL knowledge and experience working with relational databases, including query authoring, as well as familiarity with a variety of databases (an illustrative Athena query sketch follows this list).
Experience building and optimizing big data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with structured and unstructured datasets.
Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable data stores.
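As a sketch of the SQL-on-S3 workflow implied by the skills above, the snippet below submits a hypothetical aggregation query to Athena through boto3 and prints the result rows. The region, database, table, and output location are illustrative placeholders.

```python
# Minimal sketch: run a SQL query against data in S3 via Athena using boto3.
# Region, database, table, and output location names are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # hypothetical region

QUERY = """
SELECT order_date, COUNT(*) AS orders
FROM orders
GROUP BY order_date
ORDER BY order_date
"""

# Start the query; Athena writes results to the given S3 output location.
started = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "example_curated_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = started["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print(f"Query ended in state: {state}")
```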