Overview
Job title: ML Systems Engineer – Model Training and Infrastructure (SWE-focused LLMs)
Location: London; full-time in-office working by default
Start date: ASAP
Compensation: £80,000–£110,000 base salary plus £80,000–£110,000 in share options.
The role
We’re looking for an ML Systems Engineer to help train our Lumen models – our open‑source–based software engineering LLMs.
This is a unique, interdisciplinary role: you’ll develop and deploy our reinforcement learning (RL) training environments, work on synthetic data pipelines at massive scale, and run fine-tuning jobs to train the next generation of SWE models used in both our self-serve and enterprise products.
We want to ensure the models we train write readable, maintainable code that fits with the architectural patterns in the codebase. We believe we’re in the anti-slop era of coding agents, where data, RL environments and opinionated reward functions will shape future SWE model standards.
What You’ll Do
In this role you will:
- Develop and manage synthetic data generation pipelines to curate datasets that will underpin future RL fine-tunes.
- Design, build and deploy containerized services using Docker and platforms like Kubernetes to enable our RL infrastructure.
- Build and iterate on large-scale RL loops where models write code, run tests or tools, and get rewarded (or penalized) accordingly.
- Work hands-on across the stack: custom PyTorch dataloaders, RL objectives, and evaluation on real-world repos and tasks.
You’ll collaborate closely with infra, product, and research to decide what to train next, how to train it, and how to measure whether it’s actually better for engineers.
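To give a flavour of the RL plumbing involved, the core of the write–run–score loop described above can be sketched in a few lines of Python: run a model’s candidate code against its tests in a subprocess and turn the outcome into a scalar reward. This is an illustrative sketch only, not our actual infrastructure; the `reward` function and its signature are hypothetical.

```python
import subprocess
import sys
import tempfile

def reward(candidate_code: str, test_code: str, timeout: float = 10.0) -> float:
    """Illustrative reward: +1.0 if the candidate passes its tests, -1.0 otherwise."""
    # Write the candidate and its tests to a throwaway script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        # Execute in a subprocess; a non-zero exit code means a test failed.
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else -1.0
    except subprocess.TimeoutExpired:
        # Hanging code is penalized the same as a failing test.
        return -1.0
```

In practice the sandbox is a container rather than a bare subprocess, and the binary reward is shaped further (coverage, lint results, LLM judges), but the write, run, score loop is the shape of the work.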
What We’re Looking For (essential)
- Strong software engineering or computer science background: Typically 3-5 years of experience. You can read, debug, and write non-trivial production code (Python and Go). Experience with Docker and container orchestration (e.g., Kubernetes). Experience with at least one major cloud platform (GCP, AWS, or Azure). You care about code quality, correctness, and maintainability as much as model metrics.
- Knowledge of PyTorch/TensorFlow/JAX: Comfortable implementing custom training loops, losses, and dataloaders.
- Data engineering instincts: Comfortable with large-scale datasets, object storage, dataset sharding, and filtering. Know that data quality and sampling strategies matter as much as architecture.
- Clear communication and ownership: Can translate vague modelling goals into concrete experiments and document decisions with tradeoffs.
Nice to have
- Experience with synthetic data generation pipelines
- Experience with data tooling such as SQL, Apache Iceberg, and DuckDB
- Experience training LLMs in distributed environments
- Safety, robustness, and reward shaping: experience with LLM-as-a-judge, reward hacking detection, or robustness evaluation
- Open-source contributions or research: contributions to open-source LLM tooling, RL libraries, etc.
Why join Cosine
- Direct impact: Your work directly shapes the next generation of Lumen Enterprise SWE models that engineers use every day.
- Real scale: You’ll work with large, modern open-source models, long context lengths, and multi-node training runs.
- Full-stack ML engineering: From custom PyTorch code and distributed systems to data curation, RL infrastructure design and MLOps.
Cosine is an equal opportunity employer
We value diverse backgrounds, perspectives, and ways of thinking, and we’re committed to creating an inclusive and respectful workplace. We encourage applications from anyone who meets the role requirements, even if you don’t meet every single qualification. If you need reasonable adjustments at any stage of the hiring process, we’re happy to discuss them.
Compensation, Benefits & Ways Of Working
We’re an in-office team, five days a week, by design. We believe the work we’re doing benefits from being together, collaborating closely, and building shared context.
What You Can Expect
- Competitive salary, benchmarked to the market
- Equity / share options, so you share in the upside you help create
- 30 days’ holiday + bank holidays
- Genuine 9–5 working hours — we don’t expect late nights or weekend work
- Work hard in the office, collaborate closely, and switch off properly
- Dog-friendly office — bring your dog to work
- Daily lunch provided
- Monthly team breakfasts
- Monthly socials
- Pension
- High-quality equipment to do your best work
We care about focus, sustainability, and doing great work — not performative overwork. We value people who show up, contribute thoughtfully, collaborate well with their colleagues, and then go home.
This role won’t suit everyone. But if you want structure, clarity, strong collaboration, and a team that takes both the work and work-life balance seriously, it’s a great place to be.
Agency & Data Protection Notice
To comply with UK GDPR and our internal data-protection and equal-opportunity obligations, we only accept candidate applications and agency submissions via our Applicant Tracking System (ATS). This ensures appropriate privacy notices, lawful processing, auditability, and consistent retention controls. Any CVs or candidate details received outside the ATS (including via email, Slack, or direct message) will be treated as unsolicited, will not be considered as part of the recruitment process, and will not give rise to any fee or payment obligation.
