Job Location: Bengaluru
summary
bangalore, karnataka
a client of randstad india
permanent
-
reference number
JPC – 73930
job details
Accountabilities
- Data Pipeline – Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity
- Data Integration – Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data
- Data Quality Management – Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms
- Data Transformation – Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes
- Data Enablement – Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations

Qualifications & Specifications
- Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent
- Strong programming skills in Python/R/SAS
- Proven experience with large data sets and related technologies – SQL, NoSQL, Google/AWS Cloud, Hadoop, Hive, Spark
- Excellent understanding of computer science fundamentals, data structures, and algorithms
- Data pipeline software – Airflow, RJ Metrics, Segment, Amazon Data Pipeline, Apache Pig
- ETL software – Amazon Redshift, CA Erwin Data Modeler, Oracle Warehouse Builder, SAS Data Integration Server, Pentaho Kettle, Apatar
- Hands-on experience and knowledge of Data Lake technology
-
experience
4
skills
- Python
- data engineer
qualifications
- B.E/B.Tech
