Job Location: Chennai
When you join Verizon
Verizon is one of the world’s leading providers of technology and communications services, transforming the way we connect across the globe. We’re a diverse network of people driven by our shared ambition to shape a better future. Here, we have the ability to learn and grow at the speed of technology, and the space to create within every role. Together, we are moving the world forward – and you can too. Dream it. Build it. Do it here.
What you’ll be doing…
You will be a Big Data/Cloud Engineering consultant in the Artificial Intelligence and Data Organization (AI&D), contributing to data enablement through ingestion for Data Warehousing and Big Data initiatives. You will gather ingestion requirements and develop data pipelines using a varied tech stack for batch and real-time data needs across platforms such as GCP, on-premises Hadoop clusters, and Teradata. You will primarily work on high-profile Data Engineering initiatives that include fetching, loading, transforming, and storing data, and presenting it to business clients for reporting and to ML Engineering teams for insights development.
You will collaborate with Project Intake teams, business stakeholders, and the Design & Architecture, Visualization, ML Engineering, and Data Science teams to deliver the data layer, with raw and processed data ready for curation and BI reporting in various data formats. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the North Star architecture used across the company.
- Compiling and analysing data, preparing it for reports, providing information and/or data validation for business-specific data processing, and tracking processes and activities.
- Working with leads to understand requirements and implement them effectively in support of business processes, programs, and/or services.
- Building and deploying new data pipelines within Big Data ecosystems that handle large amounts of structured and unstructured data, including integrating data from multiple sources.
- Documenting new and existing pipelines, data sources, and data sets, and improving existing data pipelines by simplifying them and increasing their performance.
- Performing a variety of ETL/ELT development activities that require strong technical and functional skills, including cloud enablement on GCP, the Hadoop tool stack, UNIX shell scripting, Python, Spark processing, streaming tools such as Storm, Kafka, and NiFi, and CI/CD deployment, along with a solid command of DevOps concepts and tools.
- Implementing large-scale data platforms, ingestion automation frameworks, and DataOps to industry standards, delivering reusable data products and business-ready data using modern, open-source technologies in a hybrid cloud environment.
- Demonstrating a good understanding of data lakes, data mesh, data warehousing, visualization and reporting, data/analytics engineering, and data analytics concepts.
What we’re looking for…
You’ll need to have:
- Bachelor’s degree in Computer Science, Information Technology, or Computer Engineering, or four or more years of work experience.
- Four or more years of relevant work experience as a big data engineer.
- Four or more years of experience with Hadoop, Spark/Scala and similar frameworks, scripting languages such as Python, and messaging systems such as Kafka.
- Four or more years of experience with NoSQL and SQL data stores and RDBMS databases, including HBase and Cassandra.
- Two or more years of experience automating the end-to-end data lifecycle through CI/CD processes.
- One or more years of experience developing applications on GCP using Compute Engine, App Engine, Cloud SQL, Kubernetes Engine, and Cloud Storage.
- One or more years of experience with Big Data on GCP (BigQuery, Pub/Sub, Dataproc, Dataflow), including meeting performance standards with auto-scaling solutions for varying load requirements.
- One or more years of prior experience with container technology such as Docker, version control systems (GitHub), build management, and automating the end-to-end data lifecycle through CI/CD processes and tools (Screwdriver, Concourse, Jenkins).
Even better if you have one or more of the following:
- Experience in an Agile development environment applying DevOps/AIOps practices.
- Certifications or university programs in a related technology.
- Strong critical thinking, a can-do attitude, innovative thinking, and excellent verbal and written communication skills to liaise with various partners locally and internationally.
- Good conflict resolution and negotiation skills.