Episode 186

Paul Hamilton on Human Intelligence in the Age of AI Hiring

AI can process data faster than ever, but it can’t replace human judgment in hiring decisions. Paul Hamilton from KPMG Canada explains why a human-first mindset matters more as organizations invest in recruitment technology.

Episode Key Takeaways

Emotional intelligence—not raw intellect—drives organizational success. The most sought-after skill among employers isn’t analytical thinking alone, but resilience, flexibility, agility, and the ability to manage emotions effectively as an individual contributor or leader. This human capability remains irreplaceable by AI.
Three critical failures emerge when organizations automate hiring without human oversight: loss of judgment and critical thinking in gray-area decisions, erosion of ethical guardrails around bias and societal impact, and regulatory exposure when AI operates without proper constraints. The final hiring decision must always include humans in the loop.
The loan-approval analogy breaks down in hiring because circumstances matter. A candidate from a marginalized community or low-income background may score lower on automated systems, yet dismissing them without conversation perpetuates inequality. Automation should screen applications and assessments, but human judgment must evaluate context and potential.
Six human skills define the future of recruiting: critical thinking and problem-solving, discernment between good and great candidates, emotional intelligence, creative thinking in employer branding, ethical judgment in DEI, and adaptability. These cannot be outsourced to AI; they must be deepened across recruiting teams.
AI’s real value lies in eliminating low-value transactional work—resume screening, initial outreach, workload assessment—so recruiters can shift to strategic advisory roles. The barbell model applies: either automate volume hiring completely or go ultra-high-touch for scarce talent. The middle ground is uncompetitive.

Frequently Asked Questions

Should AI make hiring decisions or just support them?
AI should analyze data and summarize findings, but humans must make final decisions. Gray areas in talent assessment require judgment, ethical reasoning, and understanding of organizational culture that algorithms cannot provide. Regulatory bodies are already constraining unilateral AI decision-making for good reason.
What skills will recruiters need as AI takes over transactional work?
Critical thinking, emotional intelligence, creative problem-solving, ethical judgment, and adaptability become non-negotiable. Recruiters evolve into talent advisors who consult strategically with business leaders, analyze market trends, and differentiate the employer brand—not transactional screeners. Sales and marketing acumen will matter more than ever.
How should talent acquisition leaders approach AI transformation?
Start with trial and error; this terrain is new for everyone. Collaborate across finance, strategy, and change management—TA cannot do this alone. Build a community of practice with peers at other organizations. Focus on the journey, not just the destination. Expect mistakes, learn from them, and have fun during the transformation.
Which parts of the hiring process should be automated?
Automate application screening, initial assessments, candidate ranking, and workload distribution. Use AI to extract insights from unstructured data, identify market trends, and craft targeted messaging. Preserve high-touch engagement for scarce talent and strategic business consultation. The goal is freeing capacity for advisory work, not replacing relationship-building.
Why is algorithmic bias riskier than human bias in hiring?
If 30–40% of global hiring flows through a single algorithm, one flaw affects millions uniformly—far worse than distributed human error across hiring committees. Human bias is flawed but localized; algorithmic bias is systemic and scalable. This concentration risk demands regulatory guardrails and vendor diversity.