Alignment Red Team Research Engineer/Scientist

Company: AI Security Institute
Location: Greater London
Job Description:

A leading AI safety organization in London is seeking Research Engineers/Scientists for its Alignment Red Team. Responsibilities include researching misalignment risks in frontier AI models and running evaluations to inform AI safety policies. Candidates should have strong software engineering and machine learning experience, particularly in Python, and ideally a background in AI research projects. The role offers unique insight into, and direct influence on, global AI deployment strategies, with a competitive salary and a range of benefits.

Posted: March 28th, 2026