Job Location: Bhagya Nagar
Job Detail:
Role Responsibilities:
- Work with business stakeholders to understand project requirements
- Design, develop, and implement data pipelines that fetch data from diverse sources and land it in tables and views in the semantic layer
- Own ETL processes and tools, as well as database loading and manipulation, to ensure data needs are met throughout project delivery and support
- Ensure designs are in compliance with specifications
- Prepare and produce releases of software components
- Liaise with the business and the development team to ensure the best solutions are delivered within time and budget
- Collaborate with other developers, business/data analysts and leads to design, develop and test project assignments
Education and Experience:
- Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Minimum 6 years in a Data Engineer role on a Big Data platform using the Hadoop framework
- Hands-on experience with Spark Core, Spark SQL, Scala programming, and streaming datasets
- Expertise in HDFS architecture and the Hadoop framework (MapReduce, Hive, Pig, Sqoop, Flume), plus data warehouse concepts
- CI/CD experience (Jenkins, GitHub) is a must
- Proficient in Hadoop MapReduce programming and Hadoop ecosystem
- Good understanding of RDBMS principles and shared-nothing MPP architectures
- Hands-on experience with performance tuning, query optimization, file and error handling, and restart mechanisms
- Hands-on experience with UNIX shell scripting and Python
- Knowledge of Teradata, SAS, QlikView reporting, etc.
- Able to understand complex transformation logic and translate it into Spark SQL queries
- Experience with the Cloudera distribution and Oozie workflows (or any scheduler)
- Familiar with data warehouse concepts and Change Data Capture (CDC and SCD types)
- Core Java skillset is an added advantage
- Experience with Airflow is an added advantage
- Good knowledge of and experience with any database (Teradata, Oracle, SQL Server) or a data lake is a plus
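The SCD requirement above can be illustrated with a minimal sketch of a Type 2 Slowly Changing Dimension update, written in plain Python with illustrative table and column names (customer_id, city, valid_from, valid_to, is_current); in a role like this the same logic would typically run as a Spark SQL or Hive merge:

```python
from datetime import date

def apply_scd2(dimension, changes, as_of):
    """Close out changed rows and append new versions (SCD Type 2)."""
    current = {r["customer_id"]: r for r in dimension if r["is_current"]}
    for change in changes:
        key = change["customer_id"]
        old = current.get(key)
        if old is not None and old["city"] == change["city"]:
            continue  # no attribute change: nothing to do
        if old is not None:
            # Expire the previous version instead of overwriting it
            old["valid_to"] = as_of
            old["is_current"] = False
        dimension.append({
            "customer_id": key,
            "city": change["city"],
            "valid_from": as_of,
            "valid_to": None,
            "is_current": True,
        })
    return dimension

dim = [{"customer_id": 1, "city": "Hyderabad",
        "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True}]
dim = apply_scd2(dim, [{"customer_id": 1, "city": "Pune"}], date(2023, 6, 1))
# History preserved: the Hyderabad row is expired, the Pune row is current
```

The key design point of SCD Type 2 is that changes append a new row rather than update in place, so the full history of each attribute value remains queryable by date range.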
Job Types: Full-time, Regular / Permanent
Salary: ₹1,800,000.00 – ₹2,400,000.00 per year
Benefits:
- Paid sick time
- Paid time off
- Provident Fund
Schedule:
- Monday to Friday
Ability to commute/relocate:
- Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required)
Experience:
- Scala: 4 years (Required)
- Hadoop: 4 years (Required)
- Spark: 5 years (Required)
Work Location: In person