
OpenText | Hiring | Sr. Data Engineer | Bengaluru | BigDataKB.com | 6 Oct 2022

Job Location: Bengaluru

OPENTEXT – THE INFORMATION COMPANY

As the Information Company, our mission at OpenText is to create software solutions and deliver services that redefine the future of digital. Be part of a winning team that leads the way in Enterprise Information Management.



The Opportunity:

We are seeking a Senior Data Analytics professional with a strong DevOps/SRE background to join our team. The person we hire into this role will apply best practices to the problem of delivering timely, accurate, and flexible analytics models that drive observability and executive dashboards. Data Analytics & Business Intelligence is a center of excellence for cloud operations, cultivating a data-driven culture by supporting internal teams with high-quality data and insights. We are looking for someone to help us drive excellence across the cloud organization by providing actionable reporting & analytics dashboards!

The successful Senior Data Analytics professional will be responsible for developing monitoring dashboards and supporting delivery team members, partnering with our infrastructure and operations teams to integrate OpenText’s monitoring tools into the Delivery Team’s release lifecycle, and ultimately delivering a monitoring/observability-as-code capability. Troubleshooting and monitoring issues requires adept knowledge of tagging, observability, development languages and frameworks, cloud infrastructure, and Kubernetes. This role will integrate operational data from various sources across the organization into a new enterprise observability platform. As a senior member of the team, you will be a key contributor leading the development of processes that provide observability of commercial cloud products. The first challenge will be to implement a data model across multiple public Hyperscalers to track costs, utilization, and deployment metrics that will be published to custom application health dashboards.
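
As a rough illustration of the kind of cross-Hyperscaler data model described above, the sketch below normalizes two hypothetical cost exports into one shared schema with Python/Pandas and aggregates the figures a health dashboard might plot. All field, column, and tag names here are assumptions for illustration, not OpenText’s actual schema.

```python
# Minimal sketch only: normalizes hypothetical cost exports from two cloud
# providers into one schema and aggregates dashboard-ready metrics.
# Column and field names are illustrative assumptions, not a real schema.
import pandas as pd

# Hypothetical per-provider exports with slightly different field names.
aws = pd.DataFrame([
    {"service": "EKS", "usage_hours": 730, "cost_usd": 412.50, "env": "prod"},
])
azure = pd.DataFrame([
    {"meterName": "AKS", "quantity": 702, "costInUsd": 389.10, "tagEnv": "prod"},
])

def normalize(df: pd.DataFrame, provider: str, mapping: dict) -> pd.DataFrame:
    """Rename provider-specific columns into a shared schema and tag the provider."""
    out = df.rename(columns=mapping)[["service", "usage_hours", "cost_usd", "env"]].copy()
    out["provider"] = provider
    return out

unified = pd.concat([
    normalize(aws, "aws", {}),  # already matches the shared schema
    normalize(azure, "azure", {
        "meterName": "service", "quantity": "usage_hours",
        "costInUsd": "cost_usd", "tagEnv": "env",
    }),
], ignore_index=True)

# Metrics a cost/utilization dashboard might plot: spend and usage per provider/env.
dashboard = (
    unified.groupby(["provider", "env"], as_index=False)
           .agg(total_cost_usd=("cost_usd", "sum"),
                total_usage_hours=("usage_hours", "sum"))
)
print(dashboard)
```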


You are great at:

  • Collaborating with CloudOps, Support, Engineering, and Product teams to develop OpenText’s operational/observability data strategy and the roadmap for executing it
  • Creating efficient ETLs and/or data consumers/producers to power internal observability dashboards (see the ETL sketch after this list)
  • Developing data best practices and processes for adhering to them
  • Maintaining a deep interest in learning about and keeping up with quickly evolving industry trends around the modern data stack and data best practices
  • Improving the reliability of our commercial products through improved dashboard observability and end-to-end processes based on metrics and KPIs
  • Modeling workflows that help understand usage and drive product improvements
  • Optimizing the timeliness of our data infrastructure to ensure data is available for decision makers
  • Following engineering best practices such as testing, version control, code review, observability, and CI/CD to build highly reliable and extensible data models and dashboards
  • Building tools and processes to scale the delivery of high-quality analytical artifacts and insights in a rapidly changing environment
  • Demonstrating a proven track record of partnering with internal customers, architects, engineers, and other technical partners to gather requirements, understand existing systems, and develop value-added products/services
  • Collecting, analyzing, and interpreting data from various sources to develop metrics, reports, and visualizations of trends and patterns
  • Performing statistical modeling and analysis of data
  • Identifying visualization and data modeling techniques that lead to proper translation and presentation of the desired information
  • Identifying areas to increase efficiency and automation of data analysis processes
  • Using data visualization programs, tools, and techniques to generate dashboards, reports, and presentations that aid in data storytelling and in understanding trends and patterns of business importance
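
To make the ETL bullet above concrete, here is a minimal extract-transform-load sketch in Python. The source file, status values, table name, and columns are hypothetical; a real pipeline would read from the team’s actual sources and load into its warehouse of choice.

```python
# Minimal ETL sketch: extract raw deployment events, transform them into
# daily per-service counts, and load them into a table a dashboard can query.
# File names, table names, and columns are illustrative assumptions.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Extract: read raw deployment events exported as CSV."""
    return pd.read_csv(path, parse_dates=["timestamp"])

def transform(events: pd.DataFrame) -> pd.DataFrame:
    """Transform: count successful deployments per service per day."""
    ok = events[events["status"] == "success"].copy()
    ok["day"] = ok["timestamp"].dt.date.astype(str)
    return (ok.groupby(["service", "day"], as_index=False)
              .size()
              .rename(columns={"size": "deploy_count"}))

def load(df: pd.DataFrame, db_path: str = "observability.db") -> None:
    """Load: write the aggregate into a table consumed by the dashboard layer."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("daily_deployments", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    load(transform(extract("deployment_events.csv")))
```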


What it takes:

  • 5+ years of relevant work experience, including experience building and contributing to data analytics initiatives (data ingestion, warehousing, reporting, etc.).
  • Working knowledge of Python, including experience with plotting libraries and with Pandas for data manipulation and processing
  • 2+ years of experience with Power BI, Tableau, or Plotly
  • 3+ years working with configuration and monitoring technologies such as Grafana
  • Experience working with cross-functional teams to plan, model and implement solutions
  • Excellent communication skills and ability to work with technical and non-technical partners from many teams, especially in exploring decisions and trade-offs
  • 3+ years of experience building solutions involving logging, metrics, and traces to provide observability of applications
  • Knowledge and understanding of cloud computing, PaaS design principles, microservices, and containers
  • Capable of writing highly performant SQL
  • Experience with RESTful API calls
  • Scripting and automation experience
  • First-hand experience operating and/or building solutions using job execution frameworks (e.g. Airflow, Dagster, Prefect) and MPP databases (e.g. Redshift, BigQuery, Snowflake); see the Airflow sketch after this list
  • Agile experience
  • Experience with data streaming technologies and use cases
  • Strong analytical skills with high attention to detail and accuracy
  • Ability to develop partnerships and collaborate with other business and functional areas
  • Experience working with large amounts of data in a fast-paced delivery environment
  • Experience building internal Python or R packages for analysts
  • Experience with analytics & BI tools like Power BI and Tableau
  • Familiarity with any of the following: Django, Git, Jira, Grafana, Prometheus, Zabbix, Graylog, Splunk, New Relic, Stackdriver, CloudWatch, Application Insights
  • Experience with any of the public Hyperscalers (Azure, AWS, GCP) or hybrid vendors (Cloud Foundry, Anthos, Tanzu)
  • Experience implementing metric visualization using various Observability tools and frameworks (e.g. Elastic Stack, Grafana, Prometheus, etc.) across a shared service platform
  • Experience with other observability products such as AppDynamics, New Relic, Prometheus, Grafana, Wavefront, etc.
  • Working knowledge of Python and cloud-based data warehouse platforms
  • Advanced proficiency in SQL databases and queries, as well as experience designing data models
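
As a point of reference for the job-execution-framework requirement above, a minimal Airflow DAG wiring two placeholder tasks might look like the sketch below. The DAG id, schedule, and callables are assumptions for illustration only, not an OpenText pipeline.

```python
# Minimal Airflow sketch: a daily job that extracts cloud metrics and then
# builds the aggregate tables a dashboard reads. Names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_costs():
    """Placeholder: pull cost/utilization exports from each cloud provider."""
    ...

def build_dashboard_tables():
    """Placeholder: aggregate metrics into the tables the dashboards read."""
    ...

with DAG(
    dag_id="observability_metrics_daily",
    start_date=datetime(2022, 10, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_costs", python_callable=extract_costs)
    aggregate = PythonOperator(task_id="build_dashboard_tables",
                               python_callable=build_dashboard_tables)
    extract >> aggregate
```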

At OpenText we understand and value diversity in our employees and are proud to be an Equal Opportunity Employer.

Subject to applicable laws and regulations, OpenText’s Global Vaccination Policy requires all employees to be fully vaccinated against COVID-19 in order to enter an OpenText office. Accommodations may be available.




Apply Here
