Beyond Candidate Cheating: The Rising Threat of Sophisticated Fraud in Hiring
In July 2024, KnowBe4 – a cybersecurity company that trains organizations to spot social engineering attacks – accidentally hired a North Korean hacker as a remote IT worker. The fraud came to light only after they shipped him a company laptop and watched him immediately begin loading malware onto it.
Let that sink in for a moment. A company whose entire business is teaching people to recognize sophisticated attacks got infiltrated through their own hiring process.
If you think this is an isolated incident, think again! We’re entering an era where the distinction between “candidate cheating” and actual fraud has become critically important – and most TA teams aren’t prepared for what’s coming.
The Spectrum of Deception: Not All AI-Assisted Applications Are Fraud
Let’s be clear about what we’re dealing with, because conflating these threats leads to the wrong responses:
Traditional Embellishment has always existed. Candidates exaggerating achievements, inflating job titles, stretching dates of employment. This isn’t new, and while it’s problematic, it’s not what we’re talking about here.
AI-Assisted Candidates are becoming the norm. People using ChatGPT to polish their resumes, prepare for interviews, even get real-time coaching during technical interviews or video calls. This is the “cheating” that’s been dominating industry conversations, and yes, it’s a challenge for evaluating true capability. But it’s not fraud.
Organized Fraud Rings represent a different level entirely. Teams of people working together to pass technical assessments, multiple individuals rotating through different interview stages, sophisticated operations designed to get unqualified people hired. Their goal is to get a paycheck, access to systems, or both.
State Actors and Corporate Espionage sit at the extreme end. North Korean operatives securing IT jobs to fund the regime and steal intellectual property. Competitors planting spies to access confidential information, client data, and strategic plans. The Rippling vs. Deel lawsuit revealed allegations of exactly this – an employee allegedly hired by Rippling who was secretly working for their direct competitor, Deel, with access to sensitive business information.
The techniques overlap. The motivations are worlds apart.
Why This Moment is Uniquely Dangerous
Three factors have converged to create a perfect storm for hiring fraud:
1. Virtual-First Hiring Removed Physical Presence Cues
When you’re sitting across a table from someone, you pick up on things. Micro-expressions. Confidence levels. Whether they’re reading from notes. The physical reality of another human being creates friction that makes fraud harder.
Virtual interviews remove that friction. You’re looking at a screen. You can’t see what’s happening off-camera. You don’t know who else is in the room. The person on video might not even be the person who submitted the application or completed the technical assessment.
2. AI Makes Sophisticated Fraud Scalable
Pre-AI, running an elaborate hiring fraud operation required significant human resources and coordination. Now, AI can generate compelling resumes, pass initial screenings, provide real-time interview coaching, and even complete technical assessments. What used to require a team can now be orchestrated by a handful of people with the right tools.
In the KnowBe4 case, the fraudster used a stolen identity with an enhanced photo and a VPN to make it appear he was in the US. He passed background checks. He made it through multiple interview rounds. He received and accepted an offer. All while being a North Korean operative.
3. The Economics Have Changed
Access to corporate systems, client data, intellectual property, and financial information is extraordinarily valuable. For state actors, it funds operations. For organized crime, it enables fraud, ransomware, and data theft. For corporate spies, it provides competitive advantage worth millions.
The return on investment for successfully infiltrating a company through hiring is massive. Which means sophisticated attackers are willing to invest significant resources in beating your hiring process.
The Inadequate Response: Why “One In-Person Interview” Doesn’t Fix This
I’m seeing a pattern emerge: companies experiencing AI-assisted candidate issues mandate at least one in-person interview, believing this solves the problem.
It doesn’t. Here’s why:
Fraudsters will show up. If the payoff is high enough (access to systems, client data, months of salary), they’ll absolutely fly in for an in-person interview. The North Korean IT worker ring has been documented sending people to the US specifically for this purpose.
It only tests one moment. Even if the person who shows up in-person is legitimate, you have no guarantee they’re the same person who completed the technical assessment, participated in earlier video interviews, or will actually show up for work.
It creates a false sense of security. Your team thinks they’ve solved the problem and stops looking for other indicators of fraud. Meanwhile, sophisticated attackers have already adapted.
It doesn’t address the core vulnerability: your interview process isn’t designed to detect fraud in the first place.
What Your Security Team Knows (That You Might Not)
I’ve had conversations with Data Protection Officers and security leaders who are increasingly alarmed at hiring practices that essentially roll out the welcome mat for fraudsters:
“We’re fighting to implement multi-factor authentication, device biometrics and zero-trust architecture, and then HR hires someone after three Zoom calls and gives them access to everything.”
From a security perspective, hiring is the weakest link in the access control chain. Every fraudster who gets hired is:
- Bypassing all your external security measures
- Getting privileged internal access
- Potentially accessing client systems, financial data, and confidential information
- Creating insider threat risks that are nearly impossible to detect
Your DPO isn’t being difficult when they push back on virtual-only hiring or want stronger verification processes. They’re seeing the threat landscape you might not be aware of.
The hiring process is a critical security gate. Every person you hire is a potential access point. And right now, many organizations are treating that gate like a friendly suggestion instead of a serious checkpoint.
What Actually Works: A Layered Defense Strategy
There’s no single silver bullet for detecting hiring fraud. But there are multiple layers that, combined, make fraud significantly harder and more likely to be caught:
1. Background Checks Are Essential, But Not Sufficient
Comprehensive background verification should be table stakes:
- Identity verification (document validation, biometric checks)
- Employment history confirmation
- Education verification
- Criminal record checks where legally permitted
- Reference checks (but be aware these can be faked too)
The KnowBe4 case showed that even background checks can be fooled with stolen identities and fabricated documents. They’re necessary but not sufficient.
2. Device Biometrics and Identity Verification
Technology can help verify that the same person is showing up across touchpoints:
- Device fingerprinting to ensure the same device is used throughout the process
- Behavioral biometrics (typing patterns, mouse movements) to detect when different people are using the same account
- Live identity verification during video interviews
- Proctoring tools for technical assessments (with appropriate candidate consent and transparency)
These tools aren’t foolproof, but they create friction that makes fraud operations more difficult and expensive to run.
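To make the behavioral-biometrics idea concrete, here is a minimal sketch of comparing typing rhythm across two sessions of the same candidate account. Everything here is an assumption for illustration: the three features, the similarity threshold, and the sample keystroke intervals are invented, and real products use far richer models and signals.

```python
import math
from statistics import mean, pstdev

def typing_features(intervals):
    """Summarize inter-keystroke intervals (in seconds) as a tiny feature
    vector: mean, population std dev, and fraction of long pauses (> 0.5 s)."""
    return [
        mean(intervals),
        pstdev(intervals),
        sum(1 for x in intervals if x > 0.5) / len(intervals),
    ]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_typist(session_a, session_b, threshold=0.99):
    """Flag when two sessions' typing rhythms diverge enough to suggest a
    different person is behind the same account. Threshold is illustrative."""
    sim = cosine_similarity(typing_features(session_a), typing_features(session_b))
    return sim >= threshold, sim

# Hypothetical inter-keystroke intervals: a fast, steady typist on the
# take-home assessment vs. a slower, pause-heavy typist in the live round.
assessment = [0.12, 0.15, 0.11, 0.14, 0.60, 0.13, 0.12]
live_round = [0.35, 0.40, 0.38, 0.90, 0.42, 0.85, 0.37]
matched, similarity = same_typist(assessment, live_round)
```

A mismatch here isn’t proof of fraud, only a signal to verify further; that’s exactly the “friction” role these tools play in a layered defense.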
3. Effective Interview Techniques That Root Out Both Cheaters and Fraudsters
This is where most hiring processes completely fail. Your interviews need to be designed not just to evaluate capability, but to detect inconsistency, coached responses, and fraud.
Here are two specific techniques that work:
Technique 1: The Deep-Dive Follow-Up
Most fraudsters and AI-coached candidates can handle surface-level questions. They’ve prepared for common behavioral questions. They can recite project examples. But they struggle when you go deep.
How it works:
- Ask a standard behavioral question: “Tell me about a time you solved a complex technical problem.”
- Let them give their prepared answer
- Then drill down with specific, granular follow-ups:
  - “What was your specific role versus your teammates’ roles in that solution?”
  - “Walk me through the decision-making process when you chose that approach over alternatives.”
  - “What would you have done differently knowing what you know now?”
  - “Can you draw/diagram the architecture you implemented?” (for technical roles)
What to watch for:
- Vague or evasive answers when pressed for specifics
- Inconsistencies in the story as you dig deeper
- Inability to explain technical details they should know intimately
- Pauses that suggest they’re consulting something (or someone) off-camera
- Defaulting to generalizations instead of specific examples
A real candidate who actually did the work can talk for hours about the details. A fraudster or heavily coached candidate will struggle once you move past their prepared script.
Technique 2: The Real-Time Problem-Solving Test
Don’t just ask about past experience. Create a situation where they have to demonstrate thinking in real-time, under observation.
How it works:
- Present a realistic problem relevant to the role (not a gotcha, but something they’d actually encounter)
- Ask them to talk through how they’d approach it
- Make it interactive—ask questions, introduce complications, challenge assumptions
- Watch how they think, not just what they conclude
For technical roles: “Here’s a scenario we actually faced last month. Walk me through how you’d troubleshoot this.”
For non-technical roles: “A client just sent this email [show real example with identifying info removed]. How would you respond and why?”
What to watch for:
- Can they think on their feet, or are they clearly trying to look something up?
- Do they ask clarifying questions (good sign) or jump to generic solutions (red flag)?
- Is their thinking process logical and consistent with their claimed experience?
- How do they handle uncertainty—do they admit when they don’t know something, or fake expertise?
Real expertise shows up in how people think through problems, not just what they know. Fraudsters and heavily coached candidates can memorize answers but struggle to demonstrate genuine problem-solving in the moment.
4. Consistent Interview Structure and Documentation
Here’s where interview intelligence and structured processes become critical:
Follow a clear interview plan. Don’t get swept up in “this candidate is saying all the right things” hype. Gut feel is exactly what fraudsters exploit. Stick to your evaluation criteria.
Document everything. Record interviews (with consent), take detailed notes, score against specific criteria. This creates an audit trail that’s invaluable if you later discover fraud.
Look for consistency across touchpoints. Does the person in interview three sound like the same person from interview one? Do their technical assessment results align with their interview performance? Are there gaps or contradictions in their story?
Train interviewers to spot red flags.
- Overly perfect, rehearsed-sounding answers
- Inability to provide specific details when pressed
- Inconsistencies in their experience narrative
- Technical knowledge that doesn’t match claimed experience level
- Behavioral signs they’re being coached (pauses, looking off-camera, reading)
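One way to operationalize “look for consistency across touchpoints” is to compare scores stage by stage and flag sharp swings, such as a flawless take-home followed by a weak live round. The sketch below is a toy illustration: the stage names, 1–5 scale, gap threshold, and sample scores are all assumptions, not a standard.

```python
# Illustrative consistency check across hiring touchpoints.
STAGES = ["technical_assessment", "interview_1", "interview_2", "interview_3"]

def consistency_flags(scores, max_gap=2):
    """Return human-readable flags wherever a candidate's score (1-5 scale)
    swings by max_gap or more between consecutive stages."""
    flags = []
    for earlier, later in zip(STAGES, STAGES[1:]):
        gap = scores[earlier] - scores[later]
        if abs(gap) >= max_gap:
            direction = "dropped" if gap > 0 else "jumped"
            flags.append(f"{earlier} -> {later}: score {direction} by {abs(gap)}")
    return flags

# Hypothetical candidate: perfect async work, then a collapse once the
# deep-dive follow-ups started in the live rounds.
candidate = {
    "technical_assessment": 5,
    "interview_1": 5,
    "interview_2": 2,
    "interview_3": 2,
}
flags = consistency_flags(candidate)
```

A flag like this doesn’t mean fraud; it means the audit trail you’ve documented should be reviewed before an offer goes out.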
5. Trust Your Instincts, But Verify
If something feels off, it probably is. But “feeling off” isn’t enough; you need to investigate:
- Conduct additional reference checks
- Request portfolio work or code samples that you can verify
- Do a second round of interviews with different interviewers
- Conduct a probationary period with limited system access
- Monitor early behavior closely (like KnowBe4 did, which caught their fraudster)
The tension is real. You want to move quickly to secure great candidates, but you also need to be thorough enough to catch fraud. The best approach is to build verification into your standard process so it doesn’t slow down legitimate candidates while making fraud significantly harder.
The Reality: Not All AI Is Fraud, But All Fraud Is Now AI-Enhanced
We need to stop conflating AI-assisted candidates with actual fraud. They’re different problems requiring different solutions:
AI-assisted candidates are a quality and fairness issue. Your interview techniques need to evaluate actual capability, not just polished presentations. This is about better interviewing skills and moving beyond gut feel.
Fraud operations are a security threat. They require verification processes, identity management, background checks, and sophisticated detection techniques. This is about protecting your organization, your clients, and your data.
Effective interviewing sits at the center of both problems. Strong interview techniques that probe deeply, require real-time problem-solving, and document everything will help you identify both the candidate who’s over-relying on ChatGPT and the fraudster who’s trying to infiltrate your organization.
Your Role as a Gatekeeper Has Never Been More Critical
Here’s what I need you to understand: as a recruiter or hiring manager, you are a critical security control. You’re not just filling positions; you’re deciding who gets access to your company’s systems, your clients’ data, your intellectual property, and your colleagues’ information.
Every fraudster who gets hired represents:
- Potential data breaches affecting thousands of people
- Financial losses from fraud or theft
- Regulatory violations and legal liability
- Reputational damage that takes years to repair
- Security incidents that could cripple operations
The KnowBe4 incident could have been catastrophic if they hadn’t caught it quickly. Other organizations might not be as lucky (or as vigilant).
Your security team isn’t being paranoid. Your DPO isn’t being difficult. They’re seeing the threat landscape clearly and they need you to take hiring security as seriously as you take hiring quality.
What To Do Now
If you’re responsible for hiring in your organization, here’s your action plan:
This week:
- Review your current interview process: Are you equipped to detect fraud, or just evaluate capability?
- Talk to your security team: What are their concerns about hiring practices?
- Audit your verification processes: Background checks, identity verification, technical assessment proctoring
This month:
- Train your interviewers on fraud detection techniques, not just candidate evaluation
- Implement structured interview plans that include deep-dive follow-ups and real-time problem-solving
- Establish clear escalation processes for when something feels off
- Review your documentation and audit trail capabilities
This quarter:
- Evaluate technology solutions: device biometrics, interview intelligence, identity verification
- Build relationships with your security and legal teams and make them partners in hiring, not gatekeepers
- Create a response plan: What happens if you discover you’ve hired a fraudster?
The stakes are too high to treat hiring as purely a talent acquisition function. It’s a security function. It’s a risk management function. And it requires the same level of diligence, verification, and skepticism that you’d apply to any other critical access control.
We’re not going back to a world without AI in hiring. But we can build hiring processes that are sophisticated enough to separate legitimate candidates from fraud operations, and that starts with recognizing the threat, understanding what actually works, and taking your role as a gatekeeper seriously.
Because the next person you hire might be exactly who they say they are. Or they might be a North Korean operative with a laptop, waiting to install malware on your systems.
The only way to know the difference? Ask better questions. Dig deeper. Verify everything. And never let “they said all the right things” override your process. Your company’s security depends on it.