ADCI HYD
Hyderābād
Internet
- 1+ years of experience as a Data Engineer or in a similar role
- Experience with data warehousing and building ETL pipelines
- Experience in SQL
- Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or a related field
- 5+ years of experience in data engineering / business intelligence space
- Strong understanding of ETL concepts and experience building pipelines over large-scale, complex datasets using traditional or MapReduce batch mechanisms
- Strong data modeling skills with solid knowledge of industry standards such as dimensional modeling, star schemas, etc.
- Highly proficient in writing performant SQL against large data volumes
- Experience designing and operating very large Data Warehouses
- Experience with scripting for automation (e.g., UNIX shell scripting, Python, Perl, Ruby)
- Excellent problem-solving skills and ability to prioritize and stay focused on big needle movers
- Curious, self-motivated self-starter with a can-do attitude; comfortable working in a fast-paced, dynamic environment
Job summary
Payroll Technology at Amazon is all about enabling our business to perform at scale as efficiently as possible with no defects. As Amazon’s workforce grows, both in size and geography, Amazon’s payroll operations become increasingly complex, and our customers are asked to do more with less. Process can only get them so far, and that’s where we come in with technology solutions to integrate and automate systems, detect defects before payment, and provide insights. As a data engineer in payroll, you will onboard payroll vendors across various geographies by building versatile, scalable design solutions. Strong written and verbal communication skills, and the ability to communicate with end users in non-technical terms, are vital to your long-term success.
The ideal candidate will have experience working with large datasets, distributed computing technologies, and service-oriented architecture. The candidate should relish working with large volumes of data and enjoy the challenge of highly complex technical contexts. They should be an expert in data modeling, ETL design, and business intelligence tools, and have hands-on knowledge of columnar databases. They should be a self-starter who is comfortable with ambiguity, able to think big, and enjoys working in a fast-paced team.
Responsibilities:
- Design, build and own all the components of a high-volume data warehouse end to end.
- Build efficient data models using industry best practices and metadata for ad hoc and pre-built reporting
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment)
- Interface with business customers to gather requirements and deliver complete data and reporting solutions, owning the design, development, and maintenance of ongoing metrics, reports, dashboards, etc. that drive key business decisions
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and nonfunctional scaling of software systems in your ownership area.
- Implement big data solutions for distributed computing.
- Willingness to learn and develop a strong skill set in AWS technologies
- Excellent knowledge of advanced SQL for working with large data sets
- Excellent dimensional modeling skills
- Proficiency in one of the programming languages such as Python, Java, Ruby, etc.
- Good to have: experience with AWS technologies including Redshift, RDS, S3, EMR, or similar solutions built around Hive/Spark, etc.
- Good to have: experience with reporting tools such as Tableau, OBIEE, or other BI packages