Episode 199

Designing AI for Humans: Equity, Empathy & Tech with Tara & Jason

Vendors ship solutions without understanding client problems. Tara and Jason expose how poor design—not technology itself—introduces bias, and why assisted intelligence beats artificial ignorance in hiring.

Episode Key Takeaways

Vendors arrive with solutions before understanding problems. The disconnect between what tech companies build and what hiring teams actually need stems from skipping discovery entirely—no conversation about maturity, readiness, or current pain points. Without that groundwork, there’s no way to measure whether the tool solved anything.
Knockout questions and salary screens aren’t inherently biased; implementation is. One healthcare system reworded its salary rejection email to invite negotiation and saw an overwhelming candidate response. Automating that follow-up via text recaptured entire candidate segments, proving thoughtful design beats raw automation.
Marginalized groups interpret AI gatekeeping as permanent exclusion, even when it isn’t happening. Candidates believe AI is blocking them wholesale, creating anxiety and distrust. Without clear communication about what AI actually does—and transparent definitions across the organization—underrepresented talent opts out before applying.
Diverse hiring teams catch blind spots vendors and homogeneous teams miss. Bringing people from different socioeconomic, geographic, and educational backgrounds into design sessions surfaces problems like the 20-minute security badge line candidates weren’t warned about. Facilitation methods like anonymous murals and Six Hats Thinking ensure quieter voices contribute.
Assisted intelligence—not artificial intelligence—is the near-term win. Coaching recruiters on phone screen technique, helping candidates confirm skills instead of AI guessing, or suggesting better interview questions all reduce legal risk and complexity while improving experience. Most vendors today conflate artificial ignorance with AI, creating dangerous products.

Frequently Asked Questions

How do I audit my hiring process for bias before buying new tech?
Start by mapping where candidates fall off and why. Analyze your data: who’s progressing, who’s dropping out, at which stage? Identify the actual problem—ghosting, low conversion, poor fit—before assuming technology fixes it. Bring in people from different backgrounds to walk through the process as candidates would experience it, not as employees.
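
One way to make that mapping concrete is a simple funnel analysis. Here is a minimal sketch in pandas, assuming an ATS export with candidate_id and stage columns (the file name, column names, and stage labels are all hypothetical):

```python
import pandas as pd

# Hypothetical ATS export: one row per candidate per stage reached.
df = pd.read_csv("ats_export.csv")

# Order stages as your pipeline actually runs them.
stages = ["applied", "screened", "interviewed", "offered", "hired"]

# Count unique candidates reaching each stage.
funnel = (
    df[df["stage"].isin(stages)]
    .groupby("stage")["candidate_id"]
    .nunique()
    .reindex(stages)
)

# Stage-to-stage conversion: where are candidates falling off?
conversion = funnel / funnel.shift(1)
print(pd.DataFrame({"reached": funnel, "conversion": conversion.round(2)}))
```

Segmenting the same funnel by source or demographic group, where you lawfully hold that data, shows whether drop-off is uniform or concentrated.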

What should I ask AI vendors before buying their tools?
Ask how they trained their LLM, what problem it solves, and how they validated it. Request examples of how the tool was tested across different candidate demographics. Understand whether it’s true artificial intelligence (rare, highly trained, specific) or assisted intelligence (coaching, confirmation, suggestion). Ask for evidence of bias testing and disparate impact analysis.
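
One concrete check you can ask a vendor to show, or run yourself on pilot data, is the four-fifths rule commonly used in US disparate impact analysis: each group’s selection rate should be at least 80% of the highest group’s rate. A minimal sketch with purely illustrative numbers:

```python
# Four-fifths (80%) rule: flag groups whose selection rate falls below
# 80% of the highest-selected group's rate. All numbers are illustrative.
selected = {"group_a": 48, "group_b": 12, "group_c": 30}
applied = {"group_a": 120, "group_b": 60, "group_c": 75}

rates = {g: selected[g] / applied[g] for g in applied}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

Failing the ratio isn’t legal proof of bias on its own, but it is the kind of evidence a credible vendor should already have on hand.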

How can hiring tech unintentionally exclude older candidates?
Many tools are mobile-first or assume familiarity with modern platforms. Candidates over 50 may not recognize scam recruiter emails or navigate app-based workflows confidently. If your core candidate pool is 55–65 and your tool requires mobile adoption, you’re excluding them by design. Test usability across age groups before rollout.

How do I design AI screening that works for neurodivergent and anxious candidates?
Ensure the tool doesn’t ask knockout questions (age, background check eligibility, credentials) that should be pre-screened. Allow candidates to pause, restart, and take time without pressure. Test with neurodivergent and anxious candidates to confirm the experience reduces, not increases, friction. Pair AI with human follow-up, not replacement.

What’s the difference between assisted intelligence and artificial intelligence?
Assisted intelligence augments human judgment: it coaches recruiters, surfaces candidate skills for confirmation, or suggests better questions. Artificial intelligence makes autonomous decisions (ranking, screening, rejecting). Assisted intelligence is lower-risk, faster to implement, and improves experience. Most vendors conflate the two; prioritize assisted tools until true AI is properly validated.