
Morgan Stanley is Hiring | Data Engineer | BigDataKB.com | 2022-04-05

Job Location: Bangalore/Bengaluru

The position is responsible for developing Extract, Transform and Load (ETL) components, providing user access to the data via reports, data extracts and analysis tools such as OLAP, and coding stored procedures. The role requires a strong understanding of database concepts, including data warehouses, operational data stores, data marts and data lakes.

The role also requires in-depth knowledge of ETL concepts and hands-on experience implementing data integrations across multiple database platforms using custom development, scripting languages such as Unix shell, data processing frameworks such as Spark, and ETL tools such as Informatica. The candidate must have strong SQL skills and experience developing data extracts and user reports.

To be successful, the individual will need to understand the banking technology landscape and be able to step into existing projects in a hands-on development role.


Design stable, scalable application databases/data warehouses.
Analyze user requirements and envision system features and functionality.
Participate in design discussions and contribute to the architecture process.
Identify potential improvements to the current design/processes.
Plan and coordinate the data/process migration across databases.
Participate in multiple project discussions as a senior member of the team.
Work as part of a banking Agile Squad / Fleet.
Participate in all aspects of SDLC (analysis, design, coding, testing and implementation).
Actively contribute and participate in design and architecture discussions, daily stand-ups, and Agile Sprint planning sessions.


Primary skills


5 – 7 years of relevant experience in building ETL applications using Informatica PowerCenter in an enterprise data warehouse environment.
Strong SQL and database programming skills including creating views, stored procedures, triggers, implementing referential integrity, as well as designing and coding for performance.
Knowledge and hands-on experience of MPP database systems (e.g. Teradata).
Programming with Unix/Linux (Shell and/or Perl).
Experience building ETL frameworks and utilities using Python.
Experience with DevOps processes and tools (e.g., Jenkins and TeamCity).
Practical understanding of Agile development methodologies and strong familiarity with Agile tools (e.g., JIRA).
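To illustrate the kind of ETL utility work the skills above describe, here is a minimal extract-transform-load sketch in Python. All table names, column names and business rules are hypothetical; a production pipeline at this scale would typically use Informatica PowerCenter or Spark against an MPP database such as Teradata, as the listing notes.

```python
import sqlite3

# Hypothetical example: move raw trade records from a staging table into a
# reporting table, normalizing currency codes and filtering bad rows.
# SQLite stands in here for a real warehouse database.

def extract(conn):
    """Extract: read raw rows from the (hypothetical) staging table."""
    return conn.execute(
        "SELECT trade_id, amount, currency FROM staging_trades"
    ).fetchall()

def transform(rows):
    """Transform: upper-case currency codes, drop non-positive amounts."""
    return [(tid, amt, cur.upper()) for tid, amt, cur in rows if amt > 0]

def load(conn, rows):
    """Load: insert the cleansed rows into the reporting table."""
    conn.executemany("INSERT INTO trades_report VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staging_trades (trade_id INTEGER, amount REAL, currency TEXT)")
    conn.execute("CREATE TABLE trades_report (trade_id INTEGER, amount REAL, currency TEXT)")
    conn.executemany(
        "INSERT INTO staging_trades VALUES (?, ?, ?)",
        [(1, 100.0, "usd"), (2, -5.0, "eur"), (3, 250.0, "inr")],
    )
    load(conn, transform(extract(conn)))
    print(conn.execute("SELECT * FROM trades_report").fetchall())
    # [(1, 100.0, 'USD'), (3, 250.0, 'INR')]
```

The same extract/transform/load separation carries over directly to Informatica mappings or Spark jobs; only the execution engine changes.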

Good to have skills

Experience in financial markets and wealth management.
Experience building applications in the Hadoop ecosystem (e.g., Hive, PySpark).
Experience in Cloud data platforms (e.g., Azure and AWS).


Apply Here


Sambhaji Rao | Data Science Career Coach

