Freelance Agent Evaluation Engineer

Company: Mindrift

Location: Manchester

Posted: May 1st, 2026

Please submit your CV in English and indicate your level of English proficiency.

Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems. Participation is project-based, not permanent employment.

What This Opportunity Involves

We're building a dataset to evaluate AI coding agents: how well a model handles real-world developer tasks. You'll create challenging tasks and evaluation criteria within realistic simulated environments.

What This Is NOT

A significant part of the work is done in collaboration with AI: it's very hard to create tasks that challenge frontier models without using frontier models.

What We Look For

This opportunity is a good fit for experienced developers, software engineers, and test automation specialists open to part-time, non-permanent projects. Ideally, contributors will have:

You don't need to be an expert in every item, but you should be comfortable reading and reasoning about code across the stack.

Why This Is Hard

How It Works

Apply - Pass qualification(s) - Join a project - Complete tasks - Get paid

Effort estimate

Tasks for this project are estimated to take around 20 hours to complete, depending on complexity. This is an estimate, not a schedule requirement; you choose when and how to work. Tasks must be submitted by the deadline and meet the listed acceptance criteria to be accepted.

Compensation

On this project, contributors can earn an hourly equivalent of up to $50, depending on their level and pace of contribution.

Compensation varies across projects depending on scope, complexity, and required expertise. Please note that other projects on the platform may offer different earning levels based on their requirements.

Apply Now