Extended definition
The technical interview is the part of the loop where the candidate demonstrates that they can actually do the work. Other interviews assess motivation, behaviour, and judgment; the technical interview assesses craft.
Formats vary by function: live coding for engineers, take-home assignments for some roles, system design discussions for senior engineers, modelling exercises for data scientists, design critiques for product designers. The format that works best is whatever produces the strongest evidence of hands-on capability for that specific role.
Technical interviews are also among the most contested parts of the hiring process — candidates dislike high-pressure live formats, companies dislike unmonitored take-homes, and the industry has spent the last decade iterating on formats that balance signal and candidate experience.
How a technical interview works
A working technical interview rests on four design choices:
- Format match to the role — Live coding works for engineers in many contexts but poorly approximates senior engineering work, which is more about judgment and design. Senior roles often replace coding with system design discussions or technical case studies. Choose the format that maps to what the role actually does.
- Realistic problems, not abstract puzzles — Algorithmic puzzles popularised by FAANG-era hiring select for puzzle skill more than job performance. Modern technical interviewing favours problems closer to actual work — building a small feature, debugging real code, designing a realistic system component.
- Clear scoring criteria — Technical interviews need rubrics as much as behavioural ones. What separates a 4 from a 3 on this design problem? What evidence supports each level? Without rubrics, technical scoring varies wildly between interviewers.
- Calibrated interviewers — Engineers often interview as if they’re hiring themselves — same background, same techniques, same preferences. Technical interview calibration matters at least as much as behavioural calibration; without it, the hiring bar varies between teams within the same company.
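The rubric point above can be made concrete with a small sketch. This is a hypothetical example on an assumed 1–4 scale — the level descriptions and the evidence requirement are illustrative, not a standard format:

```python
# Hypothetical rubric for a system design question, on an assumed 1-4 scale.
# Level descriptions are illustrative examples, not a standard.
DESIGN_RUBRIC = {
    4: "Surfaces trade-offs unprompted; design covers failure modes and scale limits",
    3: "Workable design; discusses trade-offs when prompted",
    2: "Happy-path design only; misses key constraints",
    1: "No coherent design, even with hints",
}

def record_score(interviewer: str, level: int, evidence: str) -> dict:
    """Attach required written evidence to a score, so debriefs compare
    observations rather than impressions."""
    if level not in DESIGN_RUBRIC:
        raise ValueError(f"Score must be one of {sorted(DESIGN_RUBRIC)}")
    if not evidence.strip():
        raise ValueError("A score without supporting evidence is not comparable")
    return {"interviewer": interviewer, "level": level, "evidence": evidence}
```

The design choice the sketch encodes is that a score is invalid without evidence: forcing interviewers to write down what they observed at each level is what makes a 3 from one interviewer comparable to a 3 from another.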
Take-home assignments raise specific issues: how long they should take (four hours is a common cap), how to account for AI assistance, and how to compare candidates who spend wildly different amounts of time. Despite these issues, most modern teams still use take-homes selectively, pairing them with a discussion interview where the candidate explains their work.
Why technical interviews matter
Technical interviews are where the hardest hiring decisions get evidence. Behavioural interviews surface judgment and self-presentation; technical interviews surface what the candidate can actually do.
For technical roles, weak technical interviewing means hiring on credentials and presentation rather than craft, which produces a high false-positive rate. For VPs of engineering, product, or data, technical interview quality is one of the largest controllable inputs to team performance — a calibrated, well-designed technical loop produces hires who build well; an uncalibrated one produces hires who present well.
Common mistakes and misconceptions about technical interviews
- Using algorithmic puzzles as the primary signal — Whiteboard algorithms test puzzle skill more than engineering judgment. Modern technical interviewing favours realistic problems — building, debugging, designing — over abstract puzzles.
- Letting interviewers freelance the format — If three engineers all interview “technical depth” in three different ways, scoring isn’t comparable. Standardised technical interview formats per role are part of structured interviewing.
- Ignoring the candidate experience — High-pressure technical interviews with hostile interviewers select for candidates who tolerate hostile environments — which isn’t usually what the role needs. Tough problems and respectful conduct aren’t in tension.
- Treating take-homes as unmonitored — Take-homes without follow-up discussion can't be properly evaluated: the interviewer doesn't know what the candidate actually did versus what they got help with. Pair every take-home with a debrief interview where the candidate walks through the work.
- Skipping rubric development for technical interviews — Behavioural interviews get rubrics; technical interviews often don’t. The result is wide variation in scoring even with strong individual interviewers.
Frequently asked questions
What is a technical interview?
A technical interview assesses a candidate's hands-on capability in the technical domain of the role — coding for engineers, modelling for data scientists, design problems for designers — through demonstration rather than discussion. Other interviews assess motivation, behaviour, and judgment; the technical interview assesses craft.
What's a good technical interview format?
The format that best mimics the actual work of the role. Live coding works for many engineering positions; system design discussions suit senior engineers; take-home assignments work when paired with a follow-up walkthrough; design critiques work for product designers. Algorithmic puzzles correlate poorly with job performance and have fallen out of favour at thoughtful companies.
How long should a technical interview take?
45-90 minutes for live formats, 2-4 hours for take-home assignments paired with a 30-45 minute debrief. Longer technical interviews fatigue candidates without producing more signal; shorter ones don't allow enough depth on real problems. Take-homes longer than 4 hours create candidate-experience problems and skew toward candidates with available time.
Should technical interviewers be calibrated?
Yes — at least as much as behavioural interviewers. Different engineers interview differently and rate differently for the same evidence. Calibration through joint scoring exercises and shared rubrics makes technical scoring comparable across the panel. Without calibration, technical scores vary more by interviewer than by candidate.
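One simple calibration check — a sketch assuming scores are recorded per interviewer on a shared numeric scale — is to compare each interviewer's average score against the panel-wide average; persistent drift in either direction flags an interviewer scoring off-calibration:

```python
from collections import defaultdict

def interviewer_drift(scores: list[tuple[str, int]]) -> dict[str, float]:
    """Given (interviewer, score) pairs, return each interviewer's
    average score minus the panel-wide average. Large positive or
    negative values suggest that interviewer is scoring off-calibration."""
    by_interviewer = defaultdict(list)
    for interviewer, score in scores:
        by_interviewer[interviewer].append(score)
    panel_mean = sum(s for _, s in scores) / len(scores)
    return {
        name: sum(vals) / len(vals) - panel_mean
        for name, vals in by_interviewer.items()
    }
```

This only surfaces a symptom; fixing it still requires the joint scoring exercises and shared rubrics described above.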
How do you handle AI assistance on technical interviews?
Increasingly, modern technical interviews assume candidates may use AI tools — and design questions accordingly. The interview tests how the candidate uses AI, debugs AI output, makes architectural decisions, and explains their reasoning. Banning AI without monitoring is unrealistic; testing AI-assisted work is more aligned with how the role will actually be performed.