Job Location: India
Job Summary:
The Senior Data Engineer will work collaboratively with business and technical team members to build data pipelines and stage high-quality data for consumption by business analysts, data scientists, and visualization developers. You will work with product owners, architects, and developers to build scalable real-time/near-real-time data integration solutions that enable our Data & Analytics platform to provide relevant, timely, and accurate data to self-service BI tools, web platforms via APIs, and data science tools.
- Build, deploy, and maintain critical, scalable data pipelines that assemble large, complex sets of data and meet functional and non-functional business requirements.
- Work closely with SMEs, data modelers, architects, analysts, and other team members on requirements to build scalable real-time, near-real-time, and batch data solutions.
- Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and loading into a Data Lake/Cloud Data Warehouse/MPP platform (Snowflake, Redshift, or similar technologies).
- Interact with technical teams across Cepheid and ensure that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability.
- Perform development, QA, and DevOps roles as needed to take end-to-end responsibility for solutions.
- Keep up with current trends in big data and analytics, evaluate tools, and drive innovation.
- Mentor junior engineers and create necessary documentation and runbooks while continuing to deliver on goals.
Requirements:
- Bachelor’s degree in the areas of Computer Science, Engineering, Information Systems, Business, or equivalent field of study required
- 10+ years of experience working with data solutions.
- 5+ years of experience coding in Python, Scala, or a similar scripting language.
- 3+ years of experience developing data pipelines at scale on AWS (preferred), Azure, or Snowflake.
- 3+ years of experience designing and implementing data ingestion with real-time data streaming tools such as Kafka, Kinesis, or similar. SAP/Salesforce or other cloud integration experience is preferred.
- 2+ years of experience working with MPP databases such as Snowflake (preferred), Redshift, or similar.
- 2+ years of experience working with serverless ETL processes (Lambda, AWS Glue, Matillion, or similar).
- 2+ years of experience with big data technologies such as EMR, Hadoop, Spark, Cassandra, MongoDB, or other open-source big data tools.
- Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
- Experience designing, documenting, and defending designs for key components in large, distributed computing systems
- Excellent verbal and written communication skills, especially in technical communications
- Experience participating in an Agile software development team, e.g., SCRUM
- Shift Timings: 7:00 p.m. – 4:00 a.m. EST
- Client Name: CEPHEID
Interested and eligible candidates may share their resume at ptiwari@businessneedsglobal.com