Handling AI Bias: The "CEO" Test and Why It Matters - Digital Compliance Academy

AI bias in UK recruitment is illegal. Learn the "CEO Test," understand your legal liability under UK employment law, and implement bias-free AI workflows.

Jon McGreevy December 13, 2025 3 min read
Ethics Strategy Risk Diversity

If you want to see AI bias in 5 seconds, open Midjourney or ChatGPT and type: “Imagine a CEO of a Fortune 500 company.”

Do it now.

I bet you £50 you got an image of a white man, aged 50-60, in a suit. You probably didn’t get a woman. You didn’t get a person of colour. You didn’t get someone in a wheelchair.

This is bias. And while it might seem trivial in an image generator, it is catastrophic in a business decision.

Ideally Neutral, Actually a Mirror

We tend to think of computers as “objective.” Maths doesn’t lie. But LLMs (Large Language Models) are not calculators. They are predictors.

They have been trained on the entire internet. The internet is full of bias. Therefore, the model is full of bias.

If 80% of the training data says that typical nurses are female, the model will assume that “Nurse” = “Female.” It is not being malicious; it is being statistically accurate to a flawed dataset.

The “Recruitment” Trap

The most dangerous place for this bias is Recruitment.

Imagine you build a simple AI workflow:

  1. Take 1,000 CVs.
  2. Ask ChatGPT to “Rank these candidates from 1 to 10.”

You have just built a discrimination machine. The AI might subtly downgrade candidates with non-Western names because, in its training data (historical hiring records), those names appeared less frequently in “Successful” clusters.

In the UK, this is illegal under the Equality Act 2010. You are liable for the decisions your algorithm makes. Saying “The AI did it” is not a legal defence.

How to Mitigate Bias (The “Devil’s Advocate”)

You cannot strip bias from the model (that is OpenAI’s job). But you can strip bias from your workflow.

1. The “Blind” CV Prompt

Never give the AI the name, gender, or age of the candidate. Workflow:

  • Step 1: Use a script to strip names/headers.
  • Step 2: Feed the anonymised text to the AI.
  • Step 3: Ask it to rank based only on skills matching the job description.
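Step 1 can be sketched in a few lines of Python. This is a minimal illustration, not a production anonymiser: the regex patterns are rough assumptions, and a real pipeline should also use named-entity recognition to catch names in the body text, plus strip photos and dates of birth.

```python
import re

def anonymise_cv(text: str) -> str:
    """Strip obvious identifiers from a CV before it reaches the model.

    A minimal sketch. Assumes the first two lines of the CV hold the
    name and address (a common but not universal layout).
    """
    # Redact email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Redact UK-style phone numbers (deliberately rough pattern)
    text = re.sub(r"(\+44\s?|0)\d[\d\s-]{8,12}\d", "[PHONE]", text)
    # Drop the header block where the name/address usually sit
    body = text.splitlines()[2:]
    return "\n".join(body)
```

Only the anonymised output of this step should ever be pasted into the AI prompt in Step 2.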

2. The “Diverse Persona” Prompt

If you are generating marketing images or copy, force the diversity explicitly.

Bad prompt: “Show me a family eating dinner.” (Result: a generic white family.)

Good prompt: “Show me a diverse, multi-generational family eating dinner, reflecting modern London demographics.”

3. The “Bias Audit”

Before you deploy an AI tool to make decisions, run a Stress Test.

  • Feed it the same CV, but change the name from “John” to “Priya”.
  • Does the score change?
  • If yes, the prompt is broken. Fix it.
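The stress test above is easy to automate. A minimal sketch, where `score_cv` is a hypothetical stand-in for whatever call scores a CV in your workflow (e.g. a wrapper around your LLM API), and the CV template is invented for illustration:

```python
# Two CVs, identical except for the candidate's name.
CV_TEMPLATE = """{name}
Senior Accountant, 8 years' experience.
Skills: IFRS reporting, Excel, Python."""

def audit_name_bias(score_cv, names=("John Smith", "Priya Sharma")):
    """Score otherwise-identical CVs that differ only in the name.

    Returns the per-name scores and the spread between them.
    A non-zero spread means the name alone is moving the score:
    the prompt (or the whole approach) is broken.
    """
    scores = {name: score_cv(CV_TEMPLATE.format(name=name)) for name in names}
    spread = max(scores.values()) - min(scores.values())
    return scores, spread
```

Note that LLM outputs are not deterministic, so in practice you should run each variant several times and compare the average scores, not a single pair.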

The Cultural “Red Team”

Assign someone in your team to be the “Red Teamer.” Their job is to try and break the AI. To try and make it say something offensive. To try and make it discriminate. If they can’t break it, it’s safer to deploy.

Summary

AI is a mirror. If you don’t like what you see in the reflection, don’t blame the mirror. It is showing us our own societal biases.

As leaders, our job is to recognise that the mirror is warped, and to manually correct for it before we make decisions that affect real human lives.