Job Location: Webster, MA
Design, implement, and maintain a unified data repository capable of processing and storing large volumes of data from unrelated sources.
· Design and build ingestion processes for multiple source formats (files, databases, social networks, …) and data types (structured and unstructured), using the most appropriate technique (batch or streaming) in each case.
· Define an appropriate data model that organizes and relates the data, optimized for fast, scalable queries.
· Develop and maintain the data maps and their relationships.
· Design, develop, and maintain a secure API layer that allows external read and write access to the data repository.
· Implement scalable, flexible, and high-performance data pipelines to support analytics activities and the creation of consolidated data sources for business self-service dashboards and Advanced Analytics.
· In collaboration with Data Analytics & Reporting profiles, implement data enrichment and transformation processes that enable further data analysis and exploitation.
· In collaboration with Data Governance profiles, implement data quality rules and data governance practices (dictionary, metadata, traceability, …).
· Communicate results effectively and propose improvements and actions based on those results.
· Generate associated technical documentation.
· Generate required follow-up reports.
Advanced knowledge and experience with Python, Scala, HDFS, Hive, Spark.
Knowledge, Skills and Abilities:
Bachelor’s Degree with 8+ years of experience.
- Database architectures
- Hadoop-based technologies (MapReduce, Hive…)
- Data modeling tools, ETL tools (e.g. Informatica Power Center)
- Programming languages: Python, C/C++, Java, Perl…
- SQL and NoSQL technologies
- Artificial Intelligence, Machine Learning, and Deep Learning: understanding of algorithms sufficient to work with Data Scientists
- UNIX, Linux, Solaris and MS Windows
- Multidimensional data modeling
If you require an accommodation for a disability so that you may participate in the selection process, you are encouraged to contact the MAPFRE Insurance Talent Acquisition team at talentacquisition@mapfreusa.com.
We are proud to be an equal opportunity employer.