Job Location: India
About us
Exavalu is a US-based consulting firm specializing in Digital Transformation Advisory and Digital Solution Delivery for select industries such as Insurance, Healthcare, and Financial Services. We are headquartered in California, with multiple offices across the US and Canada and delivery centers in India. We were founded by industry executives and consulting principals whose deep industry experience allows us to bring advisory strength and solution expertise to clients. We are on a hyper-growth trajectory, growing at over 100% year over year.
Join our diverse and inclusive team where you will feel valued and motivated to contribute with your unique skills and experience.
Job Description
Responsibilities:
- Develop, implement, support, and operationalize AWS data lake infrastructure & services
- Create and maintain optimal data pipeline architecture.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, ETL (e.g., Informatica Cloud) and AWS ‘Data Lake’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into patient care, operational efficiency and other key business performance metrics.
- Develop a deep understanding of AWS’s vast data sources and know exactly how, when, and which data to use to solve business problems.
- Monitor and maintain data lake security and data lake services.
- Manage numerous requests concurrently and strategically, prioritizing when necessary
- Troubleshoot technical issues and provide solutions and fixes using various tools and information such as server logs and report debug logs.
- Handle general and administrative tasks.
Requirements
Basic Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, Mathematics, or a related discipline.
- 4+ years of experience in Information Technology within a complex, matrixed, and global business environment.
- Experience as a data engineer with AWS data lake technologies and services.
- Expert SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database platforms.
- Building and optimizing AWS ‘Data Lake’ data pipelines, architectures, and data sets.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Understanding of message queuing, stream processing, and highly scalable AWS ‘Data Lake’ data stores.
- Understanding of database and analytical technologies in the industry including MPP and NoSQL databases (e.g., Snowflake), Data Warehouse design, ETL, BI reporting and Dashboard development.
- Experience with Agile framework and DevOps.
- 3+ years of experience building ETL data pipelines using AWS Glue and PySpark.
- Proficient in developing Spark scripts for data ingestion, aggregation, and transformation.
- Experience with exception handling and performance optimization in Python/PySpark scripts.
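As an illustration of the exception-handling skill listed above, here is a minimal, generic Python sketch of quarantining bad records in a transform step rather than failing the whole job. It is not Exavalu's actual codebase: in a real AWS Glue job the extract and load steps would use GlueContext and Spark DataFrames, and all names here (`extract`, `transform`, `run_pipeline`, the `amount` field) are hypothetical.

```python
def extract(records):
    """Stand-in for reading source rows (e.g., from S3 via Spark)."""
    return list(records)

def transform(rows):
    """Cast the 'amount' field to float, quarantining rows that fail to parse."""
    clean, rejected = [], []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except (KeyError, ValueError, TypeError):
            # Quarantine malformed rows instead of aborting the pipeline.
            rejected.append(row)
    return clean, rejected

def run_pipeline(records):
    rows = extract(records)
    clean, rejected = transform(rows)
    # In Glue, a load step would write clean rows to the data lake and
    # rejected rows to an error prefix for later inspection.
    return clean, rejected

clean, rejected = run_pipeline([
    {"id": 1, "amount": "10.5"},
    {"id": 2, "amount": "oops"},
])
```

The same pattern scales to PySpark by applying the parse inside a UDF or by splitting the DataFrame with a validity filter, so one bad record never kills a multi-hour job.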
Preferred Qualifications:
- Professional certifications (e.g., AWS Certified Solutions Architect).
- AWS data lake administration.
- Excellent documentation and interpersonal relationship skills with ability to drive achievement of objectives.
- Strong interpersonal and leadership skills.
- Strong written and verbal communication skills including the ability to communicate at various levels within an organization and to explain complex or technical matters in a manner suitable for a non-technical audience.
- Knowledge of best practices related to Data Lake, Data lake governance, Data Security (e.g., HITRUST), and Data Integration & Interoperability.

