Job Location: Chennai
When you join Verizon
Verizon is one of the world’s leading providers of technology and communications services, transforming the way we connect across the globe. We’re a diverse network of people driven by our shared ambition to shape a better future. Here, we have the ability to learn and grow at the speed of technology, and the space to create within every role. Together, we are moving the world forward – and you can too. Dream it. Build it. Do it here.
What you’ll be doing…
As a Big Data/Cloud Engineering Developer, you will be responsible for ETL/ELT development using Hadoop ecosystem tools on Big Data/Cloud platforms, meeting business process and application requirements for collecting, storing, processing, and analysing huge data sets across the Verizon Consumer & Business areas of Data Warehouse & Big Data initiatives. The role requires an experienced Big Data engineer who can support and manage data processing operations using Hadoop ecosystem tools, Spark, XML, JSON, shell scripting, SQL, Python, CI/CD, DevOps, and cloud services (GCP).
You will be a Big Data/Cloud Engineering Developer in the Artificial Intelligence and Data Organization (AI&D), contributing to data enablement through ingestion for Data Warehousing and Big Data initiatives. You will work with leads to understand ingestion requirements and develop data pipelines using a varied tech stack for batch and real-time data needs across platforms including GCP, on-premises Hadoop clusters, and Teradata. You will primarily work on data engineering ingestion projects supporting workforce management for the best customer experience, billing use cases, churn reduction, voice of the customer, and customer upselling/cross-selling. Implementing these initiatives will involve fetching, loading, transforming, storing, and presenting data to business clients for reporting and to ML engineering teams for insights development.
You will follow the project intake process to deliver ETL solutions that populate the data layer with raw data and processed data, ready for curation and BI reporting, in various data formats.
- Compiling and analysing data, and preparing it for building ETL pipelines.
- Working with leads to understand requirements and implementing them effectively in support of business processes, programs, and/or services.
- Building and deploying new data pipelines within Big Data ecosystems handling large amounts of structured and unstructured data, including integrating data from multiple sources.
- Documenting new and existing pipelines, data sources, and data sets, and improving existing data pipelines by simplifying them and increasing their performance.
- Performing a variety of ETL/ELT development activities requiring solid technical and functional skills, including strong experience in cloud enablement on GCP, the Hadoop tool stack, UNIX shell scripting, Python, Spark processing, streaming tools such as Storm, Kafka, and NiFi, CI/CD deployment, and DevOps concepts and tools (a minimal pipeline sketch follows this list).
- Understanding ingestion automation frameworks and DataOps, and using them to deliver project objectives.
- Having a good understanding of data lakes, data mesh, data warehousing, visualization and reporting, data/analytics engineering, and data analytics concepts.
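To make the raw-to-processed flow described above concrete, here is a minimal PySpark batch sketch. The bucket paths, feed name, and column names are illustrative assumptions, not part of the posting; a real job would run inside the team's ingestion automation frameworks.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-processed-sketch").getOrCreate()

# Land the source JSON unchanged into the RAW layer
# (paths and the "billing" feed are hypothetical).
raw = spark.read.json("gs://example-bucket/landing/billing/2024-06-01/")
raw.write.mode("overwrite").parquet("gs://example-bucket/raw/billing/dt=2024-06-01/")

# Light standardization for the processed layer, ready for curation and BI:
# rename columns, cast types, and drop obvious duplicates.
processed = (
    raw.withColumnRenamed("acctId", "account_id")  # assumed source column name
       .withColumn("bill_amount", F.col("bill_amount").cast("decimal(12,2)"))
       .dropDuplicates(["account_id", "bill_date"])
)

processed.write.mode("overwrite").partitionBy("bill_date").parquet(
    "gs://example-bucket/processed/billing/"
)
```

The same two-layer pattern (land raw as-is, then standardize) is what lets downstream curation and BI consumers rebuild derived data without re-ingesting from the source.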
Where you’ll be working…
This hybrid role will have a defined work location that includes work from home and assigned office days as set by the manager.
What we’re looking for…
You’ll need to have:
- Bachelor’s degree in Computer Science, Information Technology, or Computer Engineering, or one or more years of work experience.
- Experience in Hadoop, Spark/Scala and similar frameworks, scripting languages like Python, and messaging systems such as Kafka (a streaming sketch follows this list).
- Experience with NoSQL and SQL data stores and RDBMS databases, including HBase and Cassandra.
- Experience in automating the end-to-end data lifecycle through CI/CD processes.
- Experience developing applications on GCP using Compute Engine, App Engine, Cloud SQL, Kubernetes Engine, and Cloud Storage.
- Experience with Big Data on GCP (BigQuery, Pub/Sub, Dataproc, Dataflow), including meeting performance standards with auto-scaling solutions to handle varying load requirements.
- Experience working with container technology such as Docker, version control systems (GitHub), build management, and automating the end-to-end data lifecycle through CI/CD processes and tools (Screwdriver, Concourse, Jenkins).
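As a hedged illustration of the streaming skills above, the sketch below uses Spark Structured Streaming to read JSON events from a Kafka topic and land them as Parquet on Cloud Storage. The broker address, topic, schema, and paths are assumptions for illustration only, and the job presumes the spark-sql-kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Assumes the spark-sql-kafka-0-10 connector is on the classpath.
spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Hypothetical event schema for this illustration.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Read the Kafka topic; the value column arrives as bytes, so cast and parse it.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # illustrative broker
         .option("subscribe", "customer-events")            # illustrative topic
         .load()
         .select(from_json(col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Land events as Parquet on Cloud Storage, with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
          .option("path", "gs://example-bucket/processed/customer_events/")
          .option("checkpointLocation", "gs://example-bucket/checkpoints/customer_events/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```

In practice the same pattern could feed BigQuery or downstream curation jobs, deployed through whichever CI/CD tooling the team uses.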
Even better if you have one or more of the following:
- Experience in Agile development environments, applying DevOps/AIOps practices.
- Strong critical thinking, innovative thinking, and excellent verbal and written communication skills to liaise with partners locally and internationally.
- Good conflict resolution and negotiation skills.