Episode 191

Luciano Pollastri on Building a Human-First Approach to AI at Work

Fear of replacement, hallucinations, and compliance risk are blocking AI adoption in talent teams. Luciano Pollastri shares how to move from competing with AI to complementing it—and why your experience is what makes the partnership work.

Episode Key Takeaways

The ‘killer prompt’ myth is preventing real AI literacy. Prompts designed for one person’s context rarely transfer; instead, build your own by layering four inputs: context (who you are), perspective (your history and stakeholders), intention (what you expect), and perception (the emotional dimension). This framework turns AI from a black box into a tool you control.
Luciano spent his first 12 hours with ChatGPT trying to break it—then realized it could generate job descriptions with real quality. That fear of replacement is universal, but it dissolves once you understand AI is a prediction model guessing the next word, not a threat. Education on how it actually works is the shortcut to adoption.
Delegation without expertise is a trap. Never outsource tasks you don’t know how to do yourself, or you’ll become dependent on AI and unable to catch its hallucinations. Early-career talent especially needs space to build foundational skills; automating their learning opportunities leaves them replaceable and the organization vulnerable.
Interview transcription freed him to focus on what AI cannot do: reading tone, catching hesitation, noticing when a candidate is reverse-engineering answers. The equation is simple—you plus AI beats AI alone, but only if you’re a positive value, not competing for the same space.
The future of work isn’t about productivity replacing humanity. It’s about each person defining their own operating model with AI so they can reach new skills, exchange with different teams, and innovate. Think of yourself as a manager of both people and artificial resources.

Frequently Asked Questions

What are the main fears blocking AI adoption in talent teams?
Fear of replacement (AI can do my job), compliance risk (exposing confidential data), fear of commitment (not knowing how to use it), and fear of hallucination (not trusting the output). Each person typically hits one or more of these before they move past them through hands-on use and basic education.
How do you build an effective AI prompt?
Layer four inputs: context (your role and background), perspective (your history and stakeholders), intention (what output you need), and perception (emotional or situational nuance). Give AI as much of your own information as possible so it overwrites its generic training data and adapts to your specific context.
What should you never delegate to AI?
Never delegate anything you don’t already know how to do yourself. If you outsource job description writing, competency framework design, or interview structure before you’ve built expertise, you won’t be able to catch errors, you’ll become dependent, and your team loses the chance to develop those skills.
How can AI improve the interview process?
Use AI to transcribe and summarize so you can focus on reading tone, body language, and hesitation—the human signals that reveal whether a candidate is authentic or reverse-engineering answers. This frees you to ask deeper follow-up questions and make better decisions.
How do you move from competing with AI to complementing it?
Stop competing with AI for the same tasks. Instead, define what AI handles (transcription, summarization, pattern-matching) and what you handle (judgment, perception, connection). The goal is you plus AI being stronger than either alone—which only works if you’re adding genuine value, not just delegating.
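The four-input prompt framework above can be sketched as a small prompt builder. This is a minimal illustration, not anything from the episode: the function name, section labels, and example text are all hypothetical, and the labels are not a required syntax for any particular AI tool.

```python
def layer_prompt(context: str, perspective: str, intention: str, perception: str) -> str:
    """Assemble a prompt from the four layers: context, perspective,
    intention, and perception. Each layer is a free-text block; the
    labels below are illustrative only."""
    sections = [
        ("Context (who I am)", context),
        ("Perspective (my history and stakeholders)", perspective),
        ("Intention (the output I expect)", intention),
        ("Perception (emotional or situational nuance)", perception),
    ]
    # Join the labeled sections into one prompt, in framework order.
    return "\n\n".join(f"{label}:\n{text}" for label, text in sections)


# Hypothetical talent-team example of layering the four inputs.
prompt = layer_prompt(
    context="I am a talent acquisition lead at a 200-person software company.",
    perspective="Senior backend roles have been open for six months; hiring "
                "managers are skeptical of AI-written job posts.",
    intention="Draft a job description for a senior backend engineer that I "
              "will edit myself, not publish as-is.",
    perception="The tone should be warm and credible, not salesy; our last "
               "posting was criticized as generic.",
)
print(prompt)
```

The point of the structure is the one made in the episode: the more of your own information you layer in, the more the model's generic defaults are overridden by your specific situation.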