Job Location: Pleasanton, CA
Special Note: LinkedIn and other job boards’ estimated pay range does NOT represent the pay range of Workday. Please contact a Workday Talent Acquisition team member to discuss the role and compensation.
The Data Platform and Observability team is based in Pleasanton, CA, Boston, MA, and Dublin, Ireland. We enable real-time insights across Workday's platforms, infrastructure, and applications. Our focus is on the development of a large-scale distributed data platform to support critically important Workday applications.
The team provides software for the collection, ingestion, storage, and visualization of critical data assets. We handle hundreds of terabytes of data in the form of billions of messages produced daily by Workday applications and underlying services. If you enjoy writing efficient software or tuning and scaling large distributed systems, you will enjoy working with us.
Do you want to solve exciting challenges at substantial scale across private and public clouds for our 4,000+ global customers? Do you want to work with first-rate engineers to build the next generation of Observability and Data Platforms? If so, we should chat.
- Design, build, and manage Workday's multi-petabyte-scale Operational Data Lakehouse.
- Work on all aspects of the Data Lakehouse: building and managing large Hadoop clusters, performance and scaling of ingestion and query services, and security (authn/authz).
- Be responsible for all operational aspects of the Data Lakehouse: monitoring, logging, and alerting.
- Be responsible for HA/DR design and implementation.
- Evaluate and implement new open-source and cloud-native tools and technologies as needed.
- Champion the platform's capabilities and best practices, and advocate for the modern data stack.
- Participate in the on-call rotation supporting the data platform.
- Five or more years of software development experience (Java/Scala/Python/Go).
- Hands-on experience in the Apache Hadoop ecosystem: file systems, security (authn/authz), and distributed query engines.
- Hands-on experience with Spark (stream and batch processing implementations).
- Experience with SQL-on-Hadoop technologies such as Hive, Presto, and Spark SQL.
- Expertise in performance optimization of analytics workloads: persistence (HDFS/object storage), storage formats (Parquet/ORC), compression, partitioning, query-layer optimization, etc.
- Ability to deal with a high degree of ambiguity and to work autonomously.
- Ability to prioritize multiple tasks in a fast-paced environment.
- Strong written and verbal communication skills.
- Bachelor's in Computer Science, Electrical Engineering, or equivalent, with coursework that builds a core foundation in the theory of computation.