Job Location: Gurgaon
Job Title: Machine Learning Data Engineer
Background
SNC-Lavalin is going through an exciting period of change. SNC-Lavalin, a Canadian-listed engineering and construction firm, acquired the UK engineering consultancy Atkins in July 2017, bringing with it a great opportunity to re-position the SNC-Lavalin brand in the eyes of our markets and ahead of our competitors. The combined group now has approximately 50,000 employees around the world, and the combined offerings of Atkins and SNC-Lavalin give us the opportunity to continue our growth. The future of engineering and construction lies in the greater use of data, combined with technologies that deliver our services more efficiently and bring different offerings to our clients. From apps, virtual reality and artificial intelligence to mobility as a service and digital asset management, we are combining our traditional engineering skills with technology to fundamentally change the way we do things.
Atkins is part of the SNC-Lavalin Group of companies; the delivery of its Digital and Data Services is via a function called IT Services, which is organised as a global operation with a significant presence in India. IT Services has developed an operating model to revolutionise the way IT services are provided to the company, moving from a "Business as Usual" focus to an organisation driven by Business Value. Key concepts include: transitioning to digital products, adopting agile delivery approaches across the organisation, leveraging cloud services, and applying focus to supporting company project bids and delivery. The Digital Transformation is delivered through strategic initiatives powered by a Connected Data Ecosystem.
Job Details
Purpose of the Role:
This role is for an ML Data Engineer to join our Analytics and Artificial Intelligence team. We create state-of-the-art machine learning models to help the business deliver high-value, smart and market-differentiating engineering products and services.
Our team focuses on innovative algorithms and models that make intelligent, automated decisions in real time to make engineering processes better, faster and more accurate. To achieve this, we collaborate with the engineering, sales, commercial and technology teams.
- You will be responsible for understanding client requirements and architecting robust data platforms on public cloud platforms.
- You will be responsible for deploying AI algorithms into the data platform to run predictive analytics at scale.
- You will be responsible for the development and deployment of new data platforms.
- You will be responsible for using cloud data services to develop Big Data platforms.
- You will be responsible for creating reusable components for rapid development of data platforms.
- You will be responsible for providing essential support to the application team responsible for the product's user journey.
Key Deliverables/Responsibilities
- Work closely with the Product team and stakeholders to develop the data platform to meet the requirements of the proposed solution.
- Work with the leadership to set the standards for software engineering practices within the machine learning engineering team, and support these across other disciplines.
- Play an active role in team meetings and workshops with clients.
- Consult and choose the right analytical libraries, programming languages, and frameworks for each task.
- Help the Product Engineering team produce high-quality code that allows us to put solutions into production
- Refactor code into reusable libraries, APIs, and tools.
- Help us to shape the next generation of our products.
Experience Required
Essential:
- 3-5 years of experience in Data Warehousing with Big Data or Cloud
- Postgraduate or graduate degree in computer science or a relevant subject
- Strong coding skills in Python, PySpark and/or Java
- Working knowledge of the pros, cons and usage of various ML/DL frameworks (such as Keras, TensorFlow, Python scikit-learn and R)
- Good knowledge of software engineering principles
- Knowledge of Big Data technologies such as Spark and Hadoop/MapReduce is desirable but not essential
- Deep knowledge of testing frameworks and libraries
- Experience working in Agile delivery
- Good knowledge of databases and query languages, e.g. SQL with PostgreSQL
Desirable:
- Contribution to industry/open source communities.
- Knowledge and practical experience of cloud-based platforms and their ML/DL offerings (such as Google Cloud Platform, AWS and Azure) would be advantageous
- An understanding of infrastructure (including hosting, container-based deployments and storage architectures) would be advantageous
Behavioural Competencies
We're bold…
- We use our curiosity, innovation and creativity to solve problems in cool ways
We're proactive…
- We seek out opportunities, and really listen to our customers' problems
- We work together in a dynamic and agile way to make change happen quickly
We're experimenters…
- We're not afraid to try different ways of doing things
- We're champions for new ideas and tools
We’re trusted…
- Building close, trusted partnerships really matters to us
- We share our knowledge and work in a collaborative way, joining together for the best results