Episode 169

The Legal Side of AI in Hiring with Paul Britton

The EU AI Act classifies hiring AI as high-risk and enforcement begins in 2026. Paul Britton breaks down the five core principles every TA leader must understand to avoid fines of up to €35M or 7% of global turnover.

Episode Key Takeaways

Three hundred potential risks reduced to zero. That's the gap between misunderstanding the EU AI Act and actually implementing it. The Act applies to any organization recruiting from the EU, making compliance non-negotiable regardless of where your company is headquartered.
Five core principles anchor the entire framework: transparency, explainability, traceability, human interaction, and candidate consent. These aren't independent checkboxes; they work together. Transparency means candidates know AI is used; explainability means you can explain the decision in plain language; traceability means you can audit the entire process on demand; human interaction means a person reviews AI rejections; consent means documented opt-in, ideally captured at multiple points.
Both vendor and employer share responsibility. A common trap: your software vendor rolls out an AI feature with an update notification, but doesn’t explain how it satisfies the five principles. Proactively reach out to vendors and demand clarity on transparency, explainability, and human interaction before deployment.
Penalties scale with severity. A first offence might draw only a warning, but repeat offences or major violations can trigger fines of up to €35 million or 7% of global turnover. GDPR skeptics learned the hard way: regulators do enforce, often with massive fines years after a law takes effect.
Reasonable accommodations and AI compliance align naturally. When a candidate advances and needs accessibility support or reasonable adjustments, human oversight becomes essential. AI shouldn’t interpret or execute special requirements—that’s a human function that builds trust and ensures legal compliance.

Frequently Asked Questions

When does the EU AI Act enforcement begin for hiring?
The boring bits came into force August 1, 2024, but enforcement for hiring-specific AI won’t begin until 2026. That gives organizations time to audit and remediate, but waiting until 2026 to act is risky given the scale of potential fines.

What are the five core principles of the EU AI Act for hiring?
Transparency (candidate knows AI is used), explainability (you can explain decisions in plain language), traceability (audit trail available on demand), human interaction (human reviews AI rejections), and candidate consent (documented opt-in). All five must work together, not in isolation.

Does the EU AI Act apply to companies headquartered outside the EU?
Yes. If you recruit or consider applicants from the European Union, you must comply regardless of where your company operates. This makes it effectively global legislation for any organization with EU candidate reach.

Can employers require candidates not to use AI during the hiring process?
Yes, but document it carefully. Ask candidates to certify they’re not using AI. If detected, flag it, give them a chance to disable it, and warn them that detection results in removal from the process. Starting a relationship with dishonesty is a red flag.

What should I demand from my AI vendor before deployment?
Demand clarity on how their AI satisfies all five principles: transparency mechanisms, explainability of decisions, audit trail access, human oversight workflows, and consent capture. Don’t accept vague assurances—get specifics in writing before deployment.