Research Engineer/Research Scientist – Model Transparency

Company: AI Security Institute

Location: London

Posted: April 27th, 2026

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

Deadline for applying: Sunday 24th May 2026, end of day

Team Description

The ability to effectively evaluate and monitor AI systems will grow in importance as models become more capable, autonomous, and integrated into society. If models can detect and game evaluations, obscure their reasoning, or behave differently under observation, the safety claims that governments and developers rely on become unreliable. Understanding and addressing these risks is essential to ensuring that oversight of advanced AI systems keeps pace with their capabilities.

The Model Transparency team is a research team within AISI focused on ensuring that evaluations, assessments, and monitoring of frontier AI systems remain reliable as models become less transparent. We research how and why oversight is declining – through phenomena such as evaluation awareness, unfaithful chain-of-thought reasoning, and changes in model architectures – and develop methods (both white-box and black-box) to detect, measure, and mitigate potential issues. We share our findings with frontier AI companies (including Anthropic, OpenAI, DeepMind), UK government officials, allied governments, and the public to inform their deployment, research, and policy decisions. We also work directly with safety teams at frontier labs, contributing to safety case reviews and helping improve their alignment evaluation methodology.

We’re looking for Research Scientists and Research Engineers for the Model Transparency team with expertise in technical AI safety – such as interpretability, capability or alignment evaluations, or model transparency – or with broader experience in frontier LLM research and development. An ideal candidate would have a strong track record of high-quality research in technical AI safety or adjacent fields.

We're interested in candidates along the spectrum between Research Engineers and Research Scientists. The application form will ask you to indicate which role you lean towards.

The team is led by Joseph Bloom, advised by Geoffrey Irving. You'll work with talented, mission-driven technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external research teams including those at frontier AI labs, METR, and FAR.

We are open to hires across a range of experience levels.

Representative Projects You Might Work On

The work could also involve:

What we’re looking for

If you’re unsure whether you meet the criteria below, we’d encourage you to apply anyway – we’d rather you err on the side of applying than not.

Requirements for both roles:

Research Scientists – our requirements are:

We don’t expect RS candidates to meet all of the following, but they are useful signals:

Research Engineers – our requirements are:

We don’t expect RE candidates to meet all of the following, but they are useful signals:

What We Offer

Impact you could not have elsewhere.

Resources & Access

Life & Family

*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

The full range of salaries is available below.

Apply Now