Extended definition
Quality of hire is the metric every TA function should care about most and the one most TA functions struggle to measure rigorously. Cost per hire and time to fill are easy to calculate; quality of hire requires connecting recruiting data to performance management data, which most companies do imperfectly.
The principle is straightforward: a low-cost, fast hire who underperforms or leaves quickly is worse than a higher-cost, slower hire who excels and stays. Quality of hire is the metric that prevents TA from optimising the wrong things.
It’s also the metric that most directly justifies investments in interviewing quality, calibration, and structured assessment.
How to calculate quality of hire
There’s no single industry-standard formula. Most companies build a composite from several inputs:
- Performance rating in first review cycle — Usually 6-12 months in. The new hire’s performance score from their manager, often weighted at 40-50% of the composite.
- Retention at defined milestones — 90-day, 1-year, and sometimes 2-year retention rates. New hires who exit early signal hire-quality problems at either the hiring-decision or the onboarding stage.
- Hiring manager satisfaction — Survey of the hiring manager 60-90 days post-hire. Captures fit, ramp, and trajectory before formal performance data exists.
- Time to productivity — How long until the new hire is performing at the expected level. A shorter ramp signals a stronger hire and stronger onboarding.
A common composite formula:
Quality of hire = (Performance score × 0.4) + (Retention multiplier × 0.3) + (Hiring manager satisfaction × 0.2) + (Ramp speed × 0.1)
The exact weights vary by company and by what’s measurable. Companies without formal performance ratings can substitute hiring manager assessment scores or proxy metrics. The discipline matters more than the precision — a roughly-right quality of hire metric tracked consistently over time produces actionable insight; an exact formula tracked once does not.
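The composite above can be sketched in a few lines. This is an illustrative implementation, not a standard: the weights mirror the example formula, the inputs are assumed to be pre-normalised to a 0-1 scale, and the field names are hypothetical.

```python
# Illustrative composite quality-of-hire score. Assumes each input
# has already been normalised to 0-1; the 0.4/0.3/0.2/0.1 weights
# follow the example formula and should be tuned per company.

WEIGHTS = {
    "performance": 0.4,       # first-cycle performance rating
    "retention": 0.3,         # retention multiplier at milestones
    "hm_satisfaction": 0.2,   # hiring manager survey, 60-90 days in
    "ramp_speed": 0.1,        # inverse of time to productivity
}

def quality_of_hire(scores: dict) -> float:
    """Weighted composite of normalised (0-1) hire-quality inputs."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing inputs: {sorted(missing)}")
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: strong performance, retained at 1 year, satisfied
# manager, average ramp.
score = quality_of_hire({
    "performance": 0.85,
    "retention": 1.0,
    "hm_satisfaction": 0.9,
    "ramp_speed": 0.5,
})
print(round(score, 2))  # 0.87
```

The point of keeping the score this simple is consistency: the same weights applied the same way each cycle make the trend line meaningful even if the absolute number is rough.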
Quality of hire should be tracked by source, by recruiter, by hiring manager, and by interview loop composition. The variation across these dimensions reveals where hiring quality is being created and where it’s being lost.
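Segmentation is a simple group-and-average once hire records carry both the score and the dimension. A minimal sketch, assuming hypothetical records joined from the ATS and HRIS; in practice the same function would be run for source, recruiter, hiring manager, and loop composition.

```python
# Sketch: average quality-of-hire score per value of a dimension
# (source, recruiter, hiring manager, ...). Records and field
# names are hypothetical stand-ins for an ATS/HRIS join.
from collections import defaultdict

hires = [
    {"source": "referral", "qoh": 0.87},
    {"source": "referral", "qoh": 0.78},
    {"source": "agency",   "qoh": 0.55},
    {"source": "agency",   "qoh": 0.61},
    {"source": "inbound",  "qoh": 0.70},
]

def qoh_by(dimension: str, records: list) -> dict:
    """Average quality-of-hire score for each value of `dimension`."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r[dimension]].append(r["qoh"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

print(qoh_by("source", hires))
# In this toy data, referrals average ~0.83 vs agency ~0.58 --
# the kind of gap that shows where quality is created or lost.
```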
Why quality of hire matters
Quality of hire is the metric that connects recruiting to business outcome. Without it, TA optimises proxies — speed, cost, volume — that may or may not produce good hires.
With it, every other recruiting investment can be justified or defunded based on actual outcome. For VPs of TA, quality of hire is the metric to fight for at the executive level — it’s the only number that frames TA as a value-creation function rather than a cost centre.
CHROs increasingly use quality of hire as the primary measure of TA effectiveness in board reporting, displacing cost-per-hire as the headline number.
Common mistakes and misconceptions about quality of hire
- Treating it as unmeasurable — Imperfect measurement is better than no measurement. A composite using available proxies (manager satisfaction, retention, early performance signals) produces actionable insight even without a formal performance rating system.
- Using a single input as the whole metric — Retention alone doesn’t equal quality — a strong hire who leaves for a great opportunity isn’t a hire-quality failure. Performance alone misses retention. The composite handles each input’s weakness.
- Not segmenting the data — Aggregate quality of hire across the whole company hides where it’s high (which sources, which hiring managers, which interview compositions) and where it’s low. Segmentation is what makes the metric actionable.
- Reporting too late — Quality of hire data only becomes available 6-12 months after hires join, which means the insight is always backward-looking. Pair it with leading indicators — calibration drift, scorecard quality, candidate NPS — that show up faster.
- Using quality of hire to punish recruiters individually — Hire quality depends on recruiter, hiring manager, interview panel, onboarding, and manager development. Pinning it on one role distorts the metric. Use it to surface system-level patterns, not to single out individuals.
Frequently asked questions
What is quality of hire?
Quality of hire measures how well new hires perform once they're in the role — typically a composite of performance ratings, retention, hiring manager satisfaction, and ramp time. It's the metric that connects recruiting to actual business outcome, and it's harder to calculate than cost per hire or time to fill because it requires connecting recruiting data to performance management data.
How do you measure quality of hire?
Most companies use a composite combining performance rating in the first review cycle, retention at defined milestones (90-day, 1-year), hiring manager satisfaction surveyed 60-90 days post-hire, and time to productivity. The weights vary; consistency over time matters more than the exact formula. Aggregate the inputs into a single score for trend analysis.
What's a good quality of hire benchmark?
There isn't a universal benchmark because the metric is constructed differently across companies. The useful comparison is internal — quality of hire by source, by recruiter, by hiring manager, by interview composition. Variation reveals where hiring quality is being created or lost. Aggregate scores trend slowly and benchmark poorly across organisations.
Why is quality of hire so hard to measure?
Because it requires connecting recruiting data to performance and retention data, which sit in different systems and follow different review cadences. Performance data lags by 6-12 months; retention by 12-24. The measurement effort is real, but most companies can produce a meaningful composite using data they already have if they decide to.
Who is responsible for quality of hire?
Shared across recruiter, hiring manager, interview panel, and the people manager post-hire. Pinning quality of hire on the recruiter alone distorts the metric — onboarding, manager quality, and team context all matter. Use the metric to surface system-level patterns rather than to evaluate individual contributors.