London, UK
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
Anthropic runs some of the largest Kubernetes clusters in the industry. We have fleets of hundreds of thousands of nodes across multiple cloud providers and datacenters to train, research, and serve frontier AI models. The Kubernetes Platform team owns the Kubernetes control plane that makes those clusters work.
We are operating at a scale where the defaults stop working. We own the scheduler and extend it to place topology‑sensitive ML workloads across thousands of accelerators at once. We scale the control plane itself — apiserver, etcd, controllers — so it stays responsive as object counts and node counts grow by orders of magnitude. And we build the core cluster services every workload depends on, like service discovery, so they hold up under the same pressure.
We make sure the control plane is fast, correct, and always available. Your work will directly determine whether Anthropic can keep reliably and safely training frontier models as our compute footprint continues to grow.
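To make the gang-scheduling problem concrete: a distributed training job must get all of its replicas placed at once, or none of them, since a partially scheduled job just holds accelerators idle. Below is a minimal, illustrative sketch of that all-or-nothing feasibility check; the types and function names are hypothetical, and a real implementation (e.g. a kube-scheduler framework plugin or Kueue) would also handle topology and preemption.

```go
package main

import "fmt"

// Node is a simplified view of a node's free accelerator capacity.
type Node struct {
	Name     string
	FreeGPUs int
}

// canGangSchedule reports whether every replica of a gang can be placed
// simultaneously: either all pods land, or none should bind. This only
// checks aggregate capacity; topology awareness and preemption are omitted.
func canGangSchedule(nodes []Node, replicas, gpusPerReplica int) bool {
	placeable := 0
	for _, n := range nodes {
		placeable += n.FreeGPUs / gpusPerReplica
	}
	return placeable >= replicas
}

func main() {
	nodes := []Node{{"node-a", 8}, {"node-b", 4}}
	// A 3-replica gang needing 4 GPUs each fits (two on node-a, one on node-b)...
	fmt.Println(canGangSchedule(nodes, 3, 4)) // true
	// ...but a 4-replica gang does not, so none of its pods should be bound.
	fmt.Println(canGangSchedule(nodes, 4, 4)) // false
}
```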
Key responsibilities
- Own, operate, and extend the Kubernetes scheduler for Anthropic's accelerator fleets, including custom scheduling plugins and policies for gang scheduling, topology awareness, and preemption
- Scale the Kubernetes control plane (apiserver, etcd, controller‑manager) to support clusters far beyond typical limits, and find the next bottleneck before it finds us
- Design, build, and operate core cluster services such as service discovery that every workload in the fleet depends on
- Build and maintain custom controllers, operators, and CRDs
- Partner with research, training, and inference to understand workload shapes and turn their requirements into platform capabilities
- Collaborate with cloud providers on required features and escalations
- Participate in on‑call, lead incident response, and design processes (postmortems, runbooks, SLOs) that help the team avoid repeating failures
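The custom controllers and operators mentioned above all follow the same reconciliation pattern: observe current state, compare it to desired state, and act to converge. Here is a deliberately simplified sketch of that core loop; the names are illustrative, not a real client-go or controller-runtime API.

```go
package main

import "fmt"

// reconcile is the pattern at the heart of a Kubernetes controller:
// compare desired state to observed state and return the actions needed
// to converge. Real controllers read and write the API server via
// informers and clients; here state is just a replica count.
func reconcile(desiredReplicas, observedReplicas int) []string {
	var actions []string
	for i := observedReplicas; i < desiredReplicas; i++ {
		actions = append(actions, "create replica")
	}
	for i := desiredReplicas; i < observedReplicas; i++ {
		actions = append(actions, "delete replica")
	}
	return actions
}

func main() {
	fmt.Println(reconcile(3, 1)) // two creates to scale up
	fmt.Println(reconcile(1, 2)) // one delete to scale down
	fmt.Println(reconcile(2, 2)) // already converged: no actions
}
```

The key property is idempotence: running reconcile again after convergence produces no further actions, which is what lets controllers be retried safely after crashes or requeues.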
Minimum qualifications
- Significant software engineering experience building and operating production distributed systems
- Proficiency in at least one systems‑appropriate language (e.g., Go, Python, Rust, or C++)
- Deep, hands‑on Kubernetes experience well beyond that of a user: working in the scheduler, controllers, or apiserver, or operating large multi‑tenant clusters
- Demonstrated ability to debug complex issues across the stack, from API behavior down to node and network‑level root causes
- A track record of designing for reliability, correctness, and clear failure semantics in systems other engineers depend on
- Strong written and verbal communication; comfort building consensus with internal stakeholders
Preferred qualifications
- Experience with Kubernetes internals or contributions: kube‑scheduler / scheduling framework, apiserver, etcd, client‑go, controller‑runtime, or similar
- Experience building or operating cluster schedulers or batch systems (e.g., Kueue, Volcano, Slurm, or in‑house equivalents)
- Background scaling control planes or coordination systems (etcd, ZooKeeper, Consul, or large DNS/service‑mesh deployments)
- Familiarity with ML infrastructure: GPUs, TPUs, or Trainium; gang scheduling; topology‑aware placement; collective networking such as NCCL
- Experience with GCP and/or AWS, including GKE/EKS internals and Infrastructure as Code
- Low‑level systems experience such as Linux kernel tuning, cgroups, or eBPF
- 8+ years of relevant industry experience, including time leading large, ambiguous infrastructure projects
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We sponsor visas. If an offer is made, every reasonable effort will be made to obtain a visa, with the support of an immigration lawyer.