Staff Data Engineer

Company: CoreWeave
Location: London
Job Description:

Overview

CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines infrastructure performance with deep technical expertise to accelerate breakthroughs. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com. We are proud to be an accredited Living Wage Employer.

What You’ll Do

The Monolith AI Platform Engineering Team at CoreWeave is responsible for building and scaling the data and workflow backbone that powers the world’s most advanced engineering simulation and AI workflows. The ambition is to become the super‑intelligent AI test lab for the engineering industry, helping customers ship science faster. From high‑throughput data ingestion and feature pipelines to model training and real‑time inference, the platform delivers a performant and reliable data foundation trusted by the world’s largest engineering companies.

The Staff Data Engineer will own and evolve Monolith’s platform data services and ETL offerings — the data onboarding, preparation, and lineage capabilities that turn fragmented, real‑world engineering data into production‑ready training and inference pipelines. You’ll partner with Product, Engineering, and Customer‑facing teams to deeply understand client data challenges and translate them into scalable, self‑serve data platform features.

About The Role

We’re seeking a Staff Data Engineer who can own Monolith’s data platform surface end‑to‑end: from offline batch pipelines and large historical backfills to low‑latency, real‑time streaming data flows that power online inference and feedback loops. You’ll define and drive our data architecture, champion data quality and lineage, and decide how customer data moves through Monolith from raw ingestion to governed, observable, and reproducible training sets.

You’ll primarily work with internal teams (Product, Customer Success, Forward‑Deployed Engineers, Software Engineers, Data Scientists), and step in as a domain expert when clients need deeper guidance.

In This Role, You Will

  • Own Monolith’s Data Platform & ETL Surface
    • Lead the architecture and evolution of core data services for ingestion, transformation, validation, and lineage across training and inference workloads.
    • Design and maintain end‑to‑end data models and schemas that make complex engineering, simulation, and telemetry data discoverable, reusable, and performant.
    • Define standards, contracts, and APIs for how product teams and integrations interact with data services.
  • Design & Operate Batch + Streaming Pipelines
    • Build and operate batch pipelines for large‑scale historical imports, retraining data sets, and migrations from legacy environments.
    • Design and implement streaming pipelines (e.g., using Kafka or similar technologies) for event‑driven or real‑time ingestion and transformation that support online inference, monitoring, and feedback loops.
    • Select and integrate off‑the‑shelf ETL / ELT technologies and own their rollout and long‑term operation.
  • Champion Data Lineage, Governance & DataOps
    • Implement and maintain end‑to‑end data lineage from source systems to derived features and model artifacts, enabling reproducibility, compliance, and faster debugging.
    • Establish DataOps practices: CI/CD for pipelines, observability (metrics, logs, traces), and operational runbooks for data incidents.
    • Help define data quality and governance standards with Security, Compliance, and Customer Success, including privacy and regulatory needs.
  • Partner Across Monolith & CoreWeave
    • Collaborate with Monolith product and engineering teams to expose data services that unlock new user workflows and AI capabilities.
    • Work with CoreWeave infrastructure and AI platform teams to leverage storage, compute, and observability for reliable data flows.
    • Serve as a technical escalation point for forward‑deployed and customer‑facing engineers when questions go deeper than their playbooks, providing architecture diagrams and guidance on data flow, lineage, and governance constraints.

Who You Are

  • Experience & Level
    • Typically 8+ years of experience as a Data Engineer / Data Platform Engineer (or similar), including ownership of production data pipelines and architectural decisions.
    • Demonstrated Staff‑level impact: leading critical data domains and cross‑team initiatives.
  • Data Engineering & Architecture
    • Deep experience designing end‑to‑end data architectures that cover ingestion, storage, transformation, serving, and observability.
    • Strong, hands‑on experience with both batch pipelines (e.g., historical backfills) and streaming pipelines (e.g., Kafka or similar platforms with real‑time transformation).
    • Proficiency with SQL and at least one major analytical database or data warehouse (e.g., PostgreSQL), including schema design and performance tuning.
    • Proficiency with Spark, Ray, or similar distributed data processing frameworks.
    • Solid understanding of data modeling in multi‑tenant SaaS or platform contexts.
  • Tooling & Ecosystem
    • Hands‑on with data orchestration and ETL tooling (e.g., Airflow, dbt, Dagster, Temporal) and able to evaluate and recommend tools that fit our needs.
    • Experience integrating and operating off‑the‑shelf data infrastructure, including rollout and ongoing ownership.
    • Familiarity with cloud infrastructure and containerization (Docker, Kubernetes, and major cloud providers) for deploying data workloads.
  • Data Lineage, Quality & DataOps
    • Extensive experience implementing data lineage solutions for debugging, compliance, and auditability.
    • Strong background in data quality with validation, monitoring, and guardrails.
    • Proficiency with DevOps / DataOps: infra‑as‑code, CI/CD for pipelines, runbooks, and on‑call participation.
  • Programming, Systems & Communication
    • Strong Python programming for data services and platform integrations, with emphasis on maintainability and tests.
    • Experience in service‑oriented architectures with data contracts, SLAs, and failure modes.
    • Clear written and verbal communicator who can explain data architectures to internal stakeholders and occasionally join client conversations as a deep domain expert.

Preferred

  • Experience in ML/AI platforms or MLOps where data pipelines feed experimentation, training, and inference workflows.
  • Background with time‑series, simulation, or experimental data.
  • Familiarity with feature stores, experiment tracking systems, or model registries and their integration with upstream pipelines.
  • Experience designing data systems for regulated or safety‑critical domains, including privacy, residency, and retention considerations.

Additional Information

This role carries an obligation to protect client data, so applicants may be required to complete a basic criminal record check, conducted in compliance with GDPR. Employment offers are conditional upon satisfactory check results.

What We Offer

In addition to a competitive salary, we offer a variety of benefits to support your needs, including family medical and dental insurance, pension contributions, life assurance, critical illness cover, an employee assistance program, tuition reimbursement, and a culture of innovative disruption. Benefits may vary by location.

Our Workplace

CoreWeave supports a hybrid work environment with remote work possible for certain locations. New hires attend onboarding at a hub within their first month. Teams gather quarterly to collaborate.

Equal Opportunity

CoreWeave is an equal opportunity employer, committed to fostering an inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.

Export Control & Privacy

This position may require access to export‑controlled information, and applicants must be eligible for such access under U.S. export regulations. An updated privacy notice for UK and EU job applicants is available, describing how we process personal data for recruitment and your related rights under the GDPR and UK GDPR. We may share data with Greenhouse Software, Inc. and other providers as part of the recruitment process. You have the right to access, rectify, erase, restrict processing of, and port your personal data.

Posted: April 17th, 2026