Job Location: India
Job Responsibilities:
- Actively participate in the design and implementation of our analytics infrastructure, in line with the vision of the Paytm First Games Data Platform
- Work closely with software engineers and ML engineers to build data infrastructure that serves the needs of multiple teams, systems, and products
- Automate manual processes, optimize data delivery, and build the infrastructure required for optimal extraction, transformation, and loading of data for a wide variety of use cases using SQL/Spark
- Build stream-processing pipelines and tools to support a wide variety of analytics and audit use cases
- Design and implement technical solutions to support the ecosystem of analytics infrastructure
- Continuously evaluate relevant technologies, and influence and drive architecture and design discussions
- Take responsibility for the ongoing implementation of Hadoop infrastructure, including monitoring, tuning, and troubleshooting
- Develop large-scale, multi-tenant software components on the platform using an Agile methodology to provide self-service capabilities
- Take ownership of the Platform/PaaS architecture and drive it to the next level of effectiveness to support current and long-term requirements
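Not part of the original posting, but as a rough illustration of the extraction-transformation-loading pattern the responsibilities above describe, here is a minimal sketch. It uses Python's standard-library sqlite3 purely as a stand-in for the SQL/Spark engines named in the role; the table names, columns, and sample data are all hypothetical.

```python
import sqlite3

# Stand-in warehouse: sqlite3 here only illustrates the ETL flow;
# a production pipeline at this scale would use Spark, Redshift, or Athena.
conn = sqlite3.connect(":memory:")

# Extract: raw event data lands in a source table (hypothetical schema).
conn.execute("CREATE TABLE raw_events (user_id TEXT, game TEXT, score INTEGER)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [("u1", "rummy", 10), ("u1", "rummy", 30), ("u2", "ludo", 5)],
)

# Transform + load: aggregate raw events into a per-user summary table,
# the kind of derived table analytics and BI consumers would query.
conn.execute(
    """
    CREATE TABLE user_scores AS
    SELECT user_id, SUM(score) AS total_score
    FROM raw_events
    GROUP BY user_id
    """
)

rows = dict(conn.execute("SELECT user_id, total_score FROM user_scores"))
print(sorted(rows.items()))  # [('u1', 40), ('u2', 5)]
```

In a real pipeline the same SELECT/GROUP BY transformation would typically be expressed as a Spark SQL job writing to partitioned storage rather than an in-memory table.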
Desired Skills:
- BE/B.Tech/BS/MS/PhD in Computer Science or a related field (ideal)
- 3-5 years of work experience building data warehouses and BI systems
- Strong Java skills
- Experience in Scala or Python (a plus)
- Experience with Apache Spark, Hadoop, Redshift, and Athena
- Strong understanding of database and storage fundamentals
- Experience with the AWS stack
- Ability to create data-flow designs and write complex SQL/Spark-based transformations
- Experience working on real time streaming data pipelines using Spark Streaming
- Experience applying appropriate software engineering patterns to build robust, scalable systems
- Experience with big data technologies (Hadoop, Spark, Hive, Presto, HBase, etc.), streaming platforms (Kafka), containers, and Kubernetes
- Excellent verbal and written communication, presentation, analytical, and problem-solving skills
Good to Have:
- Contributor or committer status in one of the big data technologies: Spark, Hive, Kafka, Kubernetes, Presto, YARN, Hadoop/HDFS
- Experience building REST services and APIs following best practices for service abstraction, microservices, and orchestration frameworks
- Experience with Agile methodology and CI/CD: tool integration, automation, and configuration management in Git