Job Location: Gurgaon/Gurugram
Roles and Responsibilities
– Research, evaluate, and recommend Big Data/NoSQL solutions (existing and emerging) and the best use cases for them.
– Should have implementation knowledge of Hadoop ecosystem Big Data solutions.
– Programming languages: Python, Scala.
– Preferred: knowledge of Hive, Spark, Kafka, and JSON.
– Working knowledge of various databases: DB2, Oracle, Teradata.
– Conduct system performance evaluations.
– Create and manage PySpark jobs for data transformation and aggregation, along with query tuning and performance optimization.
– Design data processing pipelines; experience implementing data ingestion pipelines for batch and streaming data.
– Assist development teams in designing, modeling, and validating Big Data/NoSQL solutions for their applications.
– Able to thrive in an onshore/offshore working model.
– Client-handling skills and the ability to map business requirements to the technology stack.
– Should have knowledge of any of the below NoSQL databases:
a) MongoDB
b) Cosmos DB
c) DynamoDB