Episode 190
AI in TA with Chris Hoyt, Gerry Crispin & Cathy Henesey
Three leading voices in talent acquisition—Chris Hoyt, Gerry Crispin, and Cathy Henesey—dig into real enterprise adoption of AI hiring tools. Learn where AI actually delivers ROI, why most pilots fail, and how to navigate compliance without falling behind competitors.
Episode Key Takeaways
Waiting on AI adoption means falling behind on efficiency gains, candidate experience, and competitive hiring speed. But the gap between hourly and professional hiring is stark: UPS, McDonald’s, and H&M have eliminated hundreds of recruiters through automation; accountants, nurses, and salespeople still require human judgment and oversight.
Interview intelligence tools gain traction when they solve a specific, measurable business problem—not when they’re merely cool. Cathy’s HiredScore deployment worked because it mined 2 million legacy candidates for nursing roles, delivering 2,000+ hires (67% RNs) and a justifiable ROI. Tools that only save recruiter time, without translating to headcount reduction or revenue impact, don’t fly with finance.
Workflow integration is the make-or-break adoption factor. Even high-promise AI tools get rejected if they require users to toggle between platforms or demand change management that isn’t baked in. The research showed adoption hinges entirely on seamless embedding into existing ATS and recruiter workflows.
The black box problem is real, but so is human inconsistency. Hiring managers receive rigorous training on methodology and competencies, then interview with zero accountability or oversight—an unaudited black box of their own. Chris notes that legal and compliance partnerships are no longer optional—they’re essential to defending fairness and building a defensible, auditable process.
Reframe the business case away from TA metrics. Cathy’s advice: go to executives with hard numbers—reduced overtime costs, lower staffing agency spend, improved retention and quality. A million-dollar tool investment can’t be justified by time savings alone; it must move the needle on the P&L metrics the business cares about.
Frequently Asked Questions
Where are enterprises actually using AI in hiring today?
Interview assistance tools—note-taking, transcription, structured prompts, and coaching for consistency—are gaining traction. Candidate sourcing and pipeline mining from legacy databases also show strong ROI. Full end-to-end automation for professional roles doesn’t exist yet. Adoption varies wildly: some orgs encourage building internal agentic solutions; others ban ChatGPT for job descriptions.
How do you measure ROI on AI hiring tools?
Tie it to business outcomes, not TA efficiency. Reduced overtime, lower staffing agency costs, improved retention, faster time-to-fill on critical roles, and reduced attrition all translate to hard P&L impact. Cathy’s example: unfilled nursing positions cost money because they require overtime or contractor backfill. Measure that cost, then show how AI reduces it.
What's the legal risk of recording and transcribing interviews with AI?
No consensus yet. Some legal counsel warns that recorded notes become discoverable in litigation; others argue transparency and documented process actually strengthen your defense. State laws (especially California) are evolving. The safest path: partner with legal upfront to define retention, deletion, and audit policies—and document that you’re actively managing bias and fairness.
Should we reduce recruiters when we implement AI?
Not immediately. Cathy runs 35,000 hires annually with 90-day time-to-fill on nursing roles; she’s nowhere near reducing headcount. The better play: eliminate transactional work, keep the same recruiters, and shift them to consultative, pipeline-building, top-of-license work. That multiplies business value without org disruption or adoption risk.
Why do AI hiring pilots fail?
Poor workflow integration, unclear ROI tied to business metrics, inadequate change management, and lack of legal/compliance partnership. Also: misaligned expectations. If the tool doesn’t solve a real problem (not just a nice-to-have), or if adoption requires users to jump between platforms, it gets rejected—regardless of capability.