Job Location: Bengaluru
Reversing the impact of climate change is one of the world’s biggest challenges, and businesses have a responsibility to lead the way. While individual consumer choices matter, over 80% of the emissions reductions needed for the world to reach Net-Zero require business-level action. Yet despite growing momentum and ambition from companies around the world to set Net-Zero goals, delivering on these ambitions remains a significant challenge: business leaders don’t yet know how they will get there, and the very first step, getting emissions measurement right, is hard.
Terrascope is a smart carbon management and accounting platform that empowers corporations to decarbonise their operations, portfolios and supply chains in a trusted, confident, and secure manner. We are on a journey to build digital tools and analytics, datasets and algorithms, and an ecosystem of technical expertise and partnerships needed for companies to optimise their climate strategy.
Terrascope is backed by one of the world’s largest food and agri companies, a global leader in climate action and sustainability. With this significant strategic advantage and secure funding, the venture is uniquely positioned to deliver profit with purpose: driving decarbonisation in supply chains while generating outsized financial returns.
In this role you will:
- Enhance data collection, processing and cleansing procedures to include information that is relevant for building analytic systems.
- Develop and support automated and interactive visualisation and BI tools.
- Normalise and develop new data sources. This includes developing and implementing automated cleansing routines and procedures to improve data quality.
- Develop standalone functions and routines to enhance data, via third-party tools or scripts.
- Extend the company’s data with third-party sources of information when needed.
- Interpret and analyse patterns, and present data findings to stakeholders to support decision-making.
- Collect and interpret data, including logical mapping, for use by data modellers, data scientists and other downstream users.
- Identify, analyse and escalate enterprise data quality issues, and facilitate the determination of issue impact, root cause and solution options.
- Maintain appropriate level of documentation for new and existing initiatives.
- Design and build data ingestion/integration solutions to receive data from prioritised external data sources.
- Develop data pipelines for the delivery of structured, semi-structured and unstructured data to the required standards, whether for internal applications, data marts/data warehouses or other systems.
What you’ll need:
- At least 6 years’ experience in a similar role at a tech or SaaS company.
- Experience with data frameworks such as Spark, CDAP and the CDAP APIs, batch pipelines, and scheduled Data Fusion pipelines on GCP.
- Data visualisation skills in tools such as Tableau.
- Database querying competency using SQL, PostgreSQL, Hive, Pig, BigQuery, etc.
- Experience in writing data pipelines in Java or Python leveraging Spark libraries.
- Keen interest in acquiring knowledge in dimensional modeling, data warehousing design and data management best practices.
- Keen interest in developing solid scripting skills in Spark, Python, SQL, Scala, Dataflow etc.
- Knowledge of Data Factory in Azure or Dataflow in GCP is an advantage.
Even better if you are:
- An entrepreneurial problem solver comfortable managing risk and ambiguity
- A self-starter with a growth mindset who works proactively and independently to drive results
We’re committed to creating an inclusive environment for our strong and diverse team. We value diversity and foster a community where everyone can be their authentic self.