Company: Cosine
Location: London
Posted: April 13th, 2026
Job title: ML Systems Engineer - Model Training and Infrastructure (SWE-focused LLMs)
Location: London; full in-office working as default
Start date: ASAP
Compensation: £80,000 - £110,000 base salary plus £80,000 - £110,000 in share options.
At Cosine, we’re building autonomous AI engineers that plan, write, and ship code inside real development workflows.
Cosine is designed for on-premise and virtual private cloud (VPC) deployments, including fully air-gapped environments. We build our agent tooling entirely in-house and post-train open-source models to deliver reliable, enterprise-grade coding performance in security-critical settings.
In 2024, Cosine achieved a 72% score on OpenAI’s SWE-Lancer benchmark, placing us among the strongest real-world software-engineering AI systems evaluated.
YC-backed and well-funded, Cosine was founded by experienced operators focused on building dependable, production-grade AI.
This role is based in our Hoxton office, five days a week, because close collaboration, fast feedback, and shared context matter for the problems we’re solving.
We’re looking for an ML Systems Engineer to help train our Lumen models – our open‑source–based software engineering LLMs.
This is a unique and truly interdisciplinary role: developing and deploying our reinforcement learning (RL) training environments, building synthetic data pipelines at massive scale, and running fine‑tuning jobs to train the next generation of SWE models that will power both our self‑serve and enterprise products.
We want the models we train to be the best SWEs in the world. That doesn’t just mean training them to get the right answer; it means training them to write readable, maintainable code that fits the architectural patterns already present in the codebase. We believe we’re now in the anti‑slop era of coding agents, where data, RL environments, and opinionated reward functions will shape the future standards of SWE models. If this sounds exciting, this could be the role for you.
In this role you will:
Develop and manage synthetic data generation pipelines to curate datasets that will underpin future RL fine‑tunes.
Design, build and deploy containerized services using Docker and platforms like Kubernetes to enable our RL infrastructure.
Build and iterate on large‑scale RL loops where models write code, run tests or tools, and get rewarded (or penalized) accordingly.
Work hands‑on across the stack: custom PyTorch dataloaders, RL objectives, and evaluation on real‑world repos and tasks.
Collaborate closely with infra, product, and research to decide what to train next, how to train it, and how to measure whether it’s actually better for engineers.
Participate in end‑to‑end training of models:
Supervised fine‑tuning on curated code and conversation datasets.
RL on top of those models to align them with software‑engineering objectives.
Architect synthetic data generation pipelines for RL and deploy them using containerization technologies.
Ideate on novel and opinionated reward functions for the training of SWE agents.
Improve evaluation for SWE models:
Maintain and extend an evaluation suite for code models (unit tests, benchmark suites, repo‑level tasks).
Analyze failure modes and feed them back into data and training plans.
What we’re looking for:
A strong software engineering or computer science background
Knowledge of PyTorch, TensorFlow, or JAX
Data engineering instincts
Clear communication and ownership
Nice to have
You don’t need all of these, but the more you have, the more you’ll hit the ground running:
If this sounds like a fit, this is a role where you can meaningfully push the frontier of open-source–based software engineering models.
We value diverse backgrounds, perspectives, and ways of thinking, and we’re committed to creating an inclusive and respectful workplace.
We encourage applications from anyone who meets the role requirements, even if you don’t meet every single qualification. If you need reasonable adjustments at any stage of the hiring process, we’re happy to discuss them.
We’re an in‑office team, five days a week, by design. We believe the work we’re doing benefits from being together, collaborating closely, and building shared context.
What you can expect:
Competitive salary, benchmarked to the market
Equity / share options, so you share in the upside you help create
30 days’ holiday + bank holidays
Genuine 9–5 working hours — we don’t expect late nights or weekend work
Work hard in the office, collaborate closely, and switch off properly
Dog‑friendly office — bring your dog to work
Daily lunch provided
Monthly team breakfasts
Monthly socials
Pension
High-quality equipment to do your best work
We care about focus, sustainability, and doing great work — not performative overwork. We value people who show up, contribute thoughtfully, collaborate well with their colleagues, and then go home.
This role won’t suit everyone. But if you want structure, clarity, strong collaboration, and a team that takes both the work and work‑life balance seriously, it’s a great place to be.
To comply with UK GDPR and our internal data‑protection and equal‑opportunity obligations, we only accept candidate applications and agency submissions via our Applicant Tracking System (ATS). This ensures appropriate privacy notices, lawful processing, auditability, and consistent retention controls.
Any CVs or candidate details received outside the ATS (including via email, Slack, or direct message) will be treated as unsolicited, will not be considered as part of the recruitment process, and will not give rise to any fee or payment obligation.