Episode 147
Breaking Bias: Skills-First Hiring for Inclusion with Kathryn Marie
Most skills-based hiring platforms reinforce existing biases by averaging job requirements across generic data. Kathryn Marie’s approach flips the model: open-ended questions reveal innate abilities, then match candidates to role-specific needs—unlocking talent that self-selects out of traditional pipelines.
Episode Key Takeaways
AI in recruiting regresses to the mean. When systems train on massive job board data to predict role requirements, they produce generic matches that miss the richness of individual companies, teams, and cultures—and they amplify the biases embedded in that historical data.
Innate abilities matter more than work history for many roles. A ten-year employment gap doesn’t erase someone’s capacity to think analytically, communicate effectively, or solve problems—yet traditional hiring filters them out before they ever get a chance to demonstrate those strengths.
Kathryn Marie’s framework uses open-ended, unlabeled questions to bypass self-bias. Candidates answer prompts like ‘How do you know you’ve understood what someone said?’ without seeing skill labels, which prevents them from screening themselves out before they even read the question.
Task-to-skill translation is the hiring side’s heavy lift. Rather than generic job specs, companies must identify their five to ten core tasks, then define the innate abilities required for each—a process that shifts thinking from ‘I need this job title’ to ‘I need this type of person for this specific work.’
Partnering with nonprofits creates a talent bridge. By connecting organizations with nonprofits that support underrepresented groups—women returners, people with disabilities, neurodivergent candidates—the model opens access to pools of capable people who would never find or apply for the role through conventional channels.
Frequently Asked Questions
What's the difference between skills-based hiring and testing technical skills?
Skills-based hiring, as framed here, focuses on innate personal attributes—how someone thinks, communicates, and operates—rather than software proficiency or certifications. Those learned skills can be trained on the job; innate abilities are stable traits present from birth that predict how someone will perform and fit within a team.
Why are open-ended questions better than multiple-choice assessments?
Open-ended answers reveal how candidates actually think and approach problems, not just which box they tick. Every person answers differently, providing rich insight into their reasoning. Multiple-choice imposes artificial constraints and misses the nuance that makes matching meaningful; open-ended formats also reduce self-selection bias, since candidates can't pre-judge which answer a skill label "wants."
How do you match candidate profiles to job requirements?
Companies define their five to ten core tasks and the innate abilities each requires. AI then scores candidate free-text responses against those task-ability pairs, producing reliable matches from unstructured data. The process requires initial company interviews but becomes repeatable after the first few roles.
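The episode doesn't specify how the scoring works under the hood, but the shape of the process can be sketched. The version below uses simple bag-of-words cosine similarity as a stand-in for whatever model is actually used, and the task names and ability descriptions are hypothetical examples, not Kathryn Marie's real data:

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Tokenize text into a lowercase bag-of-words frequency count."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    denom = math.sqrt(sum(v * v for v in a.values())) * \
            math.sqrt(sum(v * v for v in b.values()))
    return num / denom if denom else 0.0

# Hypothetical task-to-ability pairs a company might define
# (five to ten of these per role, per the episode).
task_abilities = {
    "triage support tickets": "active listening clear written communication staying calm under pressure",
    "analyse churn data": "analytical thinking pattern recognition comfort with ambiguity",
}

def score_candidate(response: str, task_abilities: dict) -> dict:
    """Score one free-text answer against each task's ability description."""
    r = bow(response)
    return {task: round(cosine(r, bow(desc)), 3)
            for task, desc in task_abilities.items()}

# An example open-ended answer to a prompt like
# "How do you know you've understood what someone said?"
response = ("I restate what the customer said in my own words and check they "
            "agree, so my written communication stays clear under pressure.")
scores = score_candidate(response, task_abilities)
```

In a real system the bag-of-words step would presumably be replaced by a language model that can match meaning rather than exact wording, but the structure is the same: each answer becomes a score per task-ability pair, and the role's core tasks define what "fit" means.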
Can this model work for senior or highly specialized roles?
Yes, with a hybrid approach. For roles requiring credentials—surgeon, lawyer, architect—screen candidates by qualifications first, then use attribute assessment to ensure they fit the team and company culture. The methodology complements, rather than replaces, credentialing requirements.
How does this help organizations improve diversity?
By removing work history as a barrier and partnering with nonprofits serving underrepresented groups, the model surfaces capable candidates who self-select out of traditional pipelines. Women returners, people with disabilities, and neurodivergent candidates can demonstrate innate strengths without employment gaps or confidence barriers blocking them.