For years, time-to-fill has functioned as the de facto report card for recruiting teams. It's clean, it's simple, it fits in a dashboard. If you're filling roles in 28 days instead of 45, the assumption is that something is working. If it creeps past 60, someone has explaining to do.
But here's the uncomfortable truth that every honest HR leader already knows: a fast bad hire is worse than a slow good one. Time-to-fill tells you how quickly you processed candidates. It tells you nothing about whether you hired the right person.
This distinction matters more now than it ever has. The average cost of a bad hire is estimated at somewhere between 30% and 150% of that employee's annual salary, depending on seniority and role complexity.
Turnover within the first year — which disproportionately signals a screening failure — has been climbing since the pandemic reshuffled worker expectations and labor markets alike. And yet the metrics most organizations use to evaluate their recruiting function are still largely borrowed from manufacturing process efficiency: speed, volume, cost per unit.
It's time to measure differently. Not because speed doesn't matter — it does — but because speed without quality is a trap, and most organizations have built elaborate systems to optimize for the trap.
Why Time-to-Fill Became the Dominant Metric (And Why That Made Sense Once)
Time-to-fill wasn't always a bad metric. When recruiting was primarily a coordination function — post the job, collect applications, schedule interviews, extend offers — process efficiency was a legitimate performance indicator.
If a recruiter was taking 90 days to fill roles that competitors filled in 30, something was probably broken operationally.
The metric also has executive appeal. It's easy to explain, easy to benchmark against industry data, and easy to set targets around. HR leaders operating in resource-constrained environments needed metrics that would hold up in a board conversation, and time-to-fill delivered.
But something shifted. As talent markets tightened and skill sets became more specialized, the bottleneck in recruiting stopped being process and started being judgment.
The question was no longer "how fast can we move people through the funnel?" It was "are we moving the right people through the funnel?" Those are fundamentally different problems, and time-to-fill is only equipped to answer the first one.
Today, most recruiting functions are optimizing process efficiency while the real problem — screening accuracy — quietly compounds.
The Metrics We're Missing: A Framework for What Actually Matters
If you want to build a recruiting function that creates real business value, you need a metrics framework organized around outcomes, not activities. Here's how to think about that framework across four dimensions.
1. Quality of Hire
Quality of hire is the most important metric in recruiting and the one most organizations either don't track or measure so imprecisely that the data is useless.
The reason it's difficult is that it requires connecting recruiting data to performance data — and those systems rarely talk to each other. But difficulty isn't an excuse to stop trying. Even a rough quality-of-hire signal is more strategically useful than a precise time-to-fill number.
What to measure:
90-day performance ratings for new hires, benchmarked against cohort averages. Are the people your recruiting team is sending through performing at or above expectations in their first 90 days, or are managers quietly managing them out?
First-year retention rates by source and by recruiter. If candidates sourced through one channel are leaving at 2x the rate of another, that's a screening signal, not a market signal. (A minimal sketch of this cut follows this list.)
Hiring manager satisfaction scores, collected at 30, 60, and 90 days. Not "how happy are you with the recruiting process?" but "how would you rate this hire's performance relative to expectations set during recruiting?" The 30-day pulse catches early misalignment before it becomes a termination. The 60-day check reveals whether initial concerns are resolving or compounding. The 90-day review is your first statistically meaningful quality signal — and your earliest window to connect recruiting decisions to outcomes before the data gets too old to act on.
Promotion velocity for recent hires. Are the people you're bringing in progressing at the rate you'd expect from high-quality talent? Stalled or lagging progression within the first two years is often a sign of skills misalignment that screening failed to catch.
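Even a rough version of the retention-by-source cut is easy to produce. Here's a minimal Python sketch, assuming you can export hires with a sourcing channel and a first-year retention flag; the column names and values are illustrative, not a standard schema:

```python
import pandas as pd

# Hypothetical hires with sourcing channel and first-year retention outcome.
hires = pd.DataFrame({
    "source":       ["referral", "job_board", "referral", "job_board", "agency"],
    "retained_1yr": [True, False, True, False, True],
})

# First-year retention by source: a 2x gap between channels is a
# screening signal worth investigating, not a market signal.
print(hires.groupby("source")["retained_1yr"].mean())
```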
The goal is to create a closed loop between what your recruiting function promises and what the business actually receives. Without that loop, you're flying blind.
The financial case for getting this right. According to SHRM's 2024 research, the average cost of a bad hire runs between 50% and 200% of that employee's annual salary — a range that accounts for recruiting costs, onboarding investment, lost productivity, team disruption, and eventual replacement costs. The same research pegs first-year failure rates at around 46% across industries. That's nearly one in two hires underperforming or churning within 12 months.
Run the math for a mid-market company making 25 hires per year — the lower end of what many growing companies manage. At a 25% first-year failure rate (well below the 46% industry average), that's roughly six failed hires annually. Assuming an average fully-loaded salary of $75,000 and a per-failure cost of 50% to 200% of salary, you're losing approximately $234,000 to $938,000 per year to bad hires. Reducing that failure rate by just 10 percentage points saves roughly $94,000 to $375,000 per year.
Double your hiring volume to 50 roles annually, and those numbers double too: roughly $469,000 to $1.88 million in annual losses, and $188,000 to $750,000 in annual savings from a 10-point improvement in retention. Hire hundreds of people a year — the math scales accordingly.
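To make the arithmetic auditable, here's the same model as a short Python sketch. The failure rate, salary, and SHRM cost multipliers are the assumptions from the example above, not universal constants:

```python
# Illustrative bad-hire cost model. All inputs are assumptions from the
# example above (SHRM's 50%-200% cost range, a hypothetical 25% failure rate).

def bad_hire_cost(hires_per_year, failure_rate, avg_salary,
                  cost_mult_low=0.5, cost_mult_high=2.0):
    """Return (low, high) estimated annual cost of first-year failures."""
    failed = hires_per_year * failure_rate
    return (failed * avg_salary * cost_mult_low,
            failed * avg_salary * cost_mult_high)

low, high = bad_hire_cost(hires_per_year=25, failure_rate=0.25, avg_salary=75_000)
print(f"Annual loss: ${low:,.0f} to ${high:,.0f}")   # $234,375 to $937,500

# Savings from a 10-percentage-point improvement in the failure rate:
low2, high2 = bad_hire_cost(25, 0.15, 75_000)
print(f"Savings: ${low - low2:,.0f} to ${high - high2:,.0f}")  # $93,750 to $375,000
```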
For most mid-market talent acquisition functions, the high end of that savings range rivals or exceeds the total annual cost of running the entire recruiting team. Quality of hire isn't a nice-to-have metric. It's the business case for your department's existence.
Source: SHRM, Talent Acquisition Benchmarking Report, 2024.
2. Screening Accuracy
This is the metric that almost no organization measures, and it may be the most diagnostic one available.
Screening accuracy asks a deceptively simple question: at the resume review and initial screening stage, how well does your process predict downstream success?
To measure it, you need to track two populations over time: candidates who passed your initial screen and were ultimately hired, and candidates who were rejected at the screen stage. You can't always track the second group, but you can look at proxy signals within the first.
What to measure:
Interview-to-offer ratio. If your team is interviewing 20 candidates to make one offer, your initial screening is likely poorly calibrated — either letting through too many unqualified candidates (high false positives) or being so restrictive that you're filtering out qualified ones (high false negatives). Industry benchmarks vary by role type, but ratios above 8:1 at the full-cycle interview stage typically signal a screening problem.
Screen-to-interview conversion quality. Track whether candidates who advance from phone screen to hiring manager interview are arriving well-prepared and well-matched. If hiring managers are consistently frustrated with the quality of who they're meeting, your screen isn't doing its job.
Offer decline rates. This one is nuanced because offer declines can reflect compensation or process issues, not screening issues. But patterns in decline reasons can reveal misalignment between how roles are presented in screening and what candidates actually experience as they move forward.
Keyword-to-performance correlation. This is the data most damning for traditional resume-based screening: how well do the credentials your ATS flagged actually predict 90-day performance? For most organizations running keyword-based screening, the correlation is weaker than they'd expect.
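Two of these checks, the interview-to-offer ratio and the keyword-to-performance correlation, reduce to a few lines once screening and performance data live in the same table. A minimal Python sketch, with illustrative column names and invented numbers:

```python
import pandas as pd

# Hypothetical per-hire records; column names are illustrative, not a standard schema.
hires = pd.DataFrame({
    "interviewed_for_role": [18, 9, 25, 12],      # candidates interviewed per offer made
    "keyword_score":        [0.9, 0.4, 0.8, 0.5], # ATS keyword-match score at screen
    "perf_90day":           [2, 4, 3, 5],         # 90-day performance rating (1-5)
})

# Interview-to-offer ratio: sustained ratios above ~8:1 suggest a calibration problem.
print("Interview-to-offer ratio:", hires["interviewed_for_role"].mean())

# Keyword-to-performance correlation: how well did the screen predict outcomes?
# In this toy data the correlation is actually negative: the screen anti-predicts.
corr = hires["keyword_score"].corr(hires["perf_90day"])
print("Keyword-to-performance correlation:", round(corr, 2))
```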
3. Candidate Experience and Pipeline Health
These are leading indicators — they tell you about the quality of your future hiring outcomes before those outcomes materialize.
A deteriorating candidate experience doesn't just hurt your employer brand. It systematically filters out a specific type of candidate: the high-performing passive candidate who has options.
People who need a job badly enough will tolerate an 11-step application process or a four-week silence after their first interview. People who are actually in demand won't. If your process is slow, opaque, or impersonal, you're quietly self-selecting for desperation and filtering out agency — which is roughly the opposite of what most organizations say they want.
What to measure:
Application completion rates. What percentage of candidates who start an application finish it? Significant drop-off at any step is a friction signal. If you're losing candidates at the work history section because they have to re-enter everything already in their resume, that's a fixable UX problem costing you qualified applicants.
Time-to-first-response. Not time-to-offer, not time-to-fill — time-to-first-meaningful-response after application. Candidates consistently report that silence is more damaging to employer brand than rejection. A fast, honest "not a fit" builds more goodwill than two weeks of nothing.
Candidate NPS, by stage. Measure candidate experience at multiple checkpoints: after application, after screen, after final interview, after offer (accepted or declined). Segment by outcome — what do rejected candidates think of your process? Their perception matters for your talent pipeline more than you might expect.
Diversity of pipeline at each stage. Track the demographic composition of your candidate pool at application, screen, interview, and offer stages. If the pool is diverse at the top but narrows significantly by the time you're making offers, your screening criteria are likely introducing bias — not your sourcing.
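The narrowing is easy to quantify once each candidate record carries the furthest stage reached. A minimal pandas sketch, with invented stage labels and a placeholder "group" field standing in for a real demographic category:

```python
import pandas as pd

# Hypothetical candidate records; "stage_reached" is the furthest stage
# each candidate got to, "group" stands in for a real demographic field.
candidates = pd.DataFrame({
    "stage_reached": ["application", "screen", "interview", "offer",
                      "application", "screen", "application", "interview"],
    "group":         ["A", "A", "B", "A", "B", "B", "A", "A"],
})

stages = ["application", "screen", "interview", "offer"]
order = {s: i for i, s in enumerate(stages)}
candidates["stage_idx"] = candidates["stage_reached"].map(order)

# Pool size and composition at each stage: watch where the mix shifts.
for i, stage in enumerate(stages):
    pool = candidates[candidates["stage_idx"] >= i]
    mix = pool["group"].value_counts(normalize=True).round(2).to_dict()
    print(f"{stage:<12} n={len(pool)}  composition={mix}")
```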
4. Downstream Business Impact
This is the category that elevates recruiting from a cost center conversation to a strategic one.
Most recruiting functions struggle to connect their work to business outcomes, and that's partly a systems problem and partly a measurement mindset problem. But the organizations that have done this work consistently find that the conversation with leadership changes. Recruiting stops being "how fast are you filling roles" and becomes "how are our hiring decisions affecting revenue, retention, and team performance."
What to measure:
Revenue per employee for recent hire cohorts. In sales roles especially, the performance delta between a well-screened hire and a misaligned one is measurable in quota attainment and ramp time. Track this.
Manager time-to-productivity estimates. Every new hire creates a productivity tax on their manager and team during onboarding. A rough estimate of the time managers spend supporting new hires through their first 90 days — and how that varies based on hire quality — helps quantify the cost of screening failures in terms leadership can internalize.
Regrettable turnover rate within 24 months. Distinguish between turnover you'd have chosen (managed out or performance separated) and turnover you'd rather have avoided (high performer exits). Regrettable turnover is the most direct signal of cumulative screening failure. If your regrettable turnover is concentrated in the 6–18 month window, you're solving a screening problem, not a retention problem (see the sketch after this list).
Internal mobility rate of externally hired candidates. Are the people you're bringing in from outside eligible for and interested in internal advancement? Low internal mobility among external hires can indicate that your screening criteria are optimized for filling today's role rather than hiring for organizational fit and growth potential.
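To make the windowed cut concrete, here's a minimal sketch; the exit records and cohort size below are invented for illustration:

```python
import pandas as pd

# Hypothetical exit records; "regrettable" flags exits the business wanted to avoid.
exits = pd.DataFrame({
    "tenure_months": [4, 9, 14, 20, 30, 7],
    "regrettable":   [False, True, True, True, False, True],
})
hires_in_cohort = 40  # total external hires in the same period (assumed)

within_24 = exits[(exits["tenure_months"] <= 24) & exits["regrettable"]]
rate = len(within_24) / hires_in_cohort
print(f"Regrettable turnover within 24 months: {rate:.1%}")

# Concentration in the 6-18 month window points at screening, not retention.
window = within_24[within_24["tenure_months"].between(6, 18)]
print(f"Share of regrettable exits in the 6-18 month window: "
      f"{len(window) / max(len(within_24), 1):.0%}")
```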
The Structural Problem: Why Most Companies Don't Measure This Way
Acknowledging that these metrics matter is the easy part. Actually measuring them requires confronting some structural realities most HR teams would rather not.
The systems don't connect. Your ATS holds screening data. Your HRIS holds performance and retention data. Your survey tools hold hiring manager satisfaction data. Connecting these systems requires either IT collaboration, data infrastructure investment, or both — and in most organizations, those resources are allocated elsewhere.
The incentives are misaligned. Recruiting teams are typically evaluated on speed and volume. Quality of hire is someone else's metric — usually a manager's, sometimes an HR business partner's, rarely the recruiting team's. When the person responsible for measuring something isn't accountable for it, measurement tends to lag.
The timeline is wrong. Time-to-fill is visible in real time. Quality of hire doesn't materialize for 90 days, 6 months, or a year. In organizations with quarterly planning cycles and monthly dashboards, metrics that take a year to validate tend to get deprioritized in favor of metrics that give you data this week.
Resume-based screening creates false confidence. When your screening criteria are "right credentials, right job titles, right keywords," the screening feels objective — and objective-feeling processes are hard to challenge. But those criteria are proxies for skills, not measures of skills. They're also proxies developed by looking at people who already got hired, which encodes historical hiring bias directly into your filter. The confidence is real. The accuracy is not.
What Better Measurement Requires
Fixing your hiring metrics isn't just a matter of adding new columns to a dashboard. It requires a few foundational commitments.
Start with the outcome, work backward. Define what a successful hire looks like at 90 days, 1 year, and 3 years in the specific role you're filling. Not "meets expectations" as a generic standard — specific, role-relevant success criteria. Then ask whether your screening process is actually selecting for the skills and attributes that drive those outcomes.
Build a simple closed-loop system, even manually at first. You don't need a sophisticated analytics platform to start connecting recruiting decisions to outcomes. A quarterly review where recruiting leads sit down with people managers to discuss how recent hires are performing — and trace that back to screening decisions — creates accountability and generates insight without requiring a data warehouse. Start there. Build from there.
Measure what you screen for. If your screening process is largely based on resume keywords, measure what those keywords actually predict. Audit a random sample of hires from the past two years: which screening signals predicted high performance, and which didn't? Most organizations that do this for the first time are surprised by how poorly their existing criteria predict the outcomes they care about.
Separate screening criteria from hiring manager preferences. Some of the most persistent sources of screening inaccuracy aren't systemic biases or bad criteria — they're individual hiring manager preferences that have hardened into requirements over time. "Must have worked at a company of similar size" or "needs a degree from a recognizable school" often reflect personal experience, not evidence. Regularly pressure-testing which criteria are predictive versus preferential is a core quality-of-hire discipline.
Consider what resume screening leaves on the table. Traditional screening is designed to surface candidates who look like previous successful hires. That's a conservative, backward-looking approach that systematically undervalues transferable skills, non-linear career paths, and candidates whose competencies were developed in contexts different from your company's own. The candidate who spent six years in nonprofit operations before pivoting to enterprise software may be exactly what you need — and they'll be filtered out by a keyword screen before a human ever sees their application. Competency-based and skills-based screening approaches are better equipped to surface candidates like this, because they measure what someone can do rather than what their career looks like from the outside.
The Honest Disqualification Principle
One more thing that most hiring metrics frameworks ignore entirely: the cost of misleading candidates who aren't a fit.
When your screening criteria are opaque, keyword-dependent, or disconnected from the actual skills a role requires, two things happen. First, you let through candidates who look right on paper but aren't actually suited to the role — creating the quality-of-hire problems we've been discussing. Second, and less often acknowledged, you give false hope to candidates who could have invested their time elsewhere.
A candidate who goes through three rounds of interviews for a role they were never genuinely considered for didn't just waste time. They passed on other applications. They prepared extensively. They emotionally invested. Treating candidates honestly — including communicating disqualification clearly and respectfully when it happens — is not just an employer branding consideration. It's an ethical one.
Metrics that only count your wins (offers made, positions filled) never capture this cost. Building in candidate feedback mechanisms, and taking seriously what rejected candidates report about transparency and communication, is part of measuring what actually matters.
A Different Scorecard
If you were to build a recruiting metrics framework from scratch, optimized for business outcomes rather than process efficiency, it might look something like this:
Leading indicators (visible during the recruiting process): application completion rates, time-to-first-response, pipeline diversity at each stage, interview-to-offer ratio.
Lagging indicators (visible post-hire): 90-day performance ratings, first-year retention by source, hiring manager satisfaction at 30, 60, and 90 days, promotion velocity, regrettable turnover within 24 months.
Strategic indicators (visible at the business level): revenue per employee for recent cohorts, manager time-to-productivity, internal mobility rate of external hires.
Time-to-fill lives somewhere in the leading indicators category, alongside cost-per-hire and other process efficiency metrics. It's not irrelevant — a process so slow that you're consistently losing candidates to faster-moving competitors is a genuine problem. But it should be one data point among many, weighted by how well it correlates with outcomes you actually care about.
What This Requires of Leadership
Changing how you measure hiring is ultimately a leadership conversation, not just an HR conversation.
It requires executives who are willing to hold recruiting accountable for outcomes, not just outputs — and who are willing to provide the systems access and cross-functional collaboration that outcome measurement requires. It requires managers who give honest performance feedback on new hires and connect that feedback to screening processes, rather than quietly absorbing the cost of a bad hire and never saying anything. And it requires HR leaders willing to challenge their own metrics, even when those metrics look good.
The organizations that will hire best in the next decade won't be the ones with the most sophisticated ATS or the fastest processes. They'll be the ones who figured out how to measure whether the people they're hiring are actually the right people — and built their recruiting function around that question.
Time-to-fill is not that question. It's time to ask a better one.
Your 30/60/90-Day Plan to Start Measuring What Matters
You don't need a multi-year transformation to begin. You need a sequenced plan that builds credibility, creates quick wins, and lays the infrastructure for sustained improvement. Here's what that looks like in practice.
Days 1–30: Establish Your Baseline
You can't improve what you haven't measured. The first 30 days are about diagnosis, not change.
Audit your current metrics. Inventory what your recruiting function actually tracks today. Time-to-fill, cost-per-hire, offer acceptance rate — write it down. These are your starting benchmarks, and you'll need them to show progress later.
Pull 12 months of hire data. Go back to every external hire made in the last year. Map each one to their current status: still employed, voluntarily left, involuntarily separated, promoted. Even this basic analysis will surface patterns you haven't been seeing.
Launch a 30-day hiring manager pulse. Select every hire made in the past 30–60 days and send hiring managers a two-question survey: (1) How is this hire performing relative to expectations? (2) If you could change one thing about how this person was screened, what would it be? The qualitative responses here are gold.
Calculate your first-year failure rate. Using your hire data, count how many hires left within their first year; for a clean rate, extend the pull to hires made 12–24 months ago so every hire in the denominator has had a full year to succeed or fail (the sketch after this list shows the computation). Benchmark against the 46% industry average. This number will anchor every quality-of-hire conversation you have going forward.
Map your screening criteria to role outcomes. For your top five most frequently filled roles, document the current screening criteria. Then ask hiring managers: which of these criteria actually predicted success? Which ones didn't? You'll almost always find surprises.
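Here's roughly what the failure-rate computation looks like in Python, assuming you can export start and exit dates from your HRIS; the records below are placeholders:

```python
import pandas as pd

# Hypothetical hire records: start dates and exit dates (NaT = still employed).
# Only include hires old enough to have a full year of history, or the rate
# is understated.
hires = pd.DataFrame({
    "start": pd.to_datetime(["2023-02-01", "2023-05-15", "2023-08-01", "2023-11-10"]),
    "exit":  pd.to_datetime(["2023-09-01", None, "2024-03-01", None]),
})

tenure_days = (hires["exit"] - hires["start"]).dt.days
failed_first_year = (tenure_days <= 365).sum()   # still-employed rows compare as False
rate = failed_first_year / len(hires)
print(f"First-year failure rate: {rate:.0%}")    # benchmark: ~46% (SHRM, 2024)
```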
Days 31–60: Build the Feedback Loop
With a baseline established, the second month is about creating the systems and habits that generate ongoing signal.
Formalize the 30/60/90 hiring manager check-in. Standardize a lightweight three-question survey sent automatically at 30, 60, and 90 days post-hire: performance relative to expectations, skills gaps observed, and one open-ended prompt. Keep it under 3 minutes. Automate it if you can; do it manually if you can't. Consistency matters more than sophistication at this stage.
Create a quality-of-hire scorecard. Combine your 90-day performance rating, hiring manager satisfaction score, and first-year retention status into a single quality index per hire. It doesn't need to be complex. A simple 1–5 composite score tracked in a spreadsheet is enough to start identifying patterns (a minimal sketch follows this list).
Segment your pipeline diversity data. Run your last 6 months of applications through each screening stage — application, screen, interview, offer — and map demographic composition at each step. Where does the pool narrow? If you don't yet have this data, start collecting it now.
Connect with people managers directly. Schedule 30-minute conversations with five to seven managers who've hired in the past year. Don't lead with data. Lead with curiosity: what do your best recent hires have in common? What did you wish you'd known about your worst ones before extending an offer? These conversations will generate insight your surveys won't capture.
Run your first ROI estimate. Using your first-year failure rate, average fully-loaded salary, and the SHRM cost-of-bad-hire framework (50%–200% of annual salary), calculate the annual cost of your current screening outcomes. Share it with your HR leadership and at least one business stakeholder. The number will create urgency in places where urgency currently doesn't exist.
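For the scorecard in particular, the composite really can be a few lines. A minimal sketch, assuming each signal is already normalized to a 1–5 scale; the equal weighting is a policy choice, not a standard:

```python
import pandas as pd

# Hypothetical per-hire signals, each already normalized to a 1-5 scale.
scorecard = pd.DataFrame({
    "hire":         ["a", "b", "c"],
    "perf_90day":   [4, 2, 5],   # 90-day performance rating
    "manager_sat":  [5, 2, 4],   # hiring manager satisfaction
    "retained_1yr": [5, 1, 5],   # 5 = still employed, 1 = left in year one
})

# Simple unweighted composite; adjust weights as your data justifies it.
signals = ["perf_90day", "manager_sat", "retained_1yr"]
scorecard["quality_index"] = scorecard[signals].mean(axis=1).round(1)
print(scorecard[["hire", "quality_index"]])
```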
Days 61–90: Act on What You've Learned
The third month is where measurement becomes management. You now have enough data to make one or two meaningful changes — and to begin building the case for more.
Identify your highest-leverage screening change. Based on your hiring manager conversations and your quality-of-hire scorecard, what is the one screening criterion or process step that most frequently produces misalignment? Is it a required credential that doesn't predict performance? An interview format that assesses the wrong things? A gap in how you evaluate transferable skills? Choose one thing to change and document the hypothesis: "If we stop requiring X and start assessing for Y, we expect to see Z improvement in 90-day performance ratings."
Pilot competency-based screening on one role type. Select your most frequently filled role and redesign the screening criteria around competencies and demonstrated skills rather than credentials and keywords. Run the new approach for 60–90 days and compare quality-of-hire scores for that cohort against your baseline; a minimal comparison sketch follows this list.
Present your findings to leadership. Pull together your baseline metrics, your ROI estimate, your pipeline diversity analysis, and your planned intervention into a short business case. The goal isn't to show a perfect system. It's to demonstrate that your recruiting function is connecting its decisions to business outcomes — and that you have a plan to improve them.
Set your 12-month targets. Based on what you've learned, set specific, measurable targets for the metrics that matter most to your organization. Not "improve quality of hire" — "increase 90-day hiring manager satisfaction scores from X to Y" or "reduce first-year regrettable turnover from X% to Y%." Specificity creates accountability, and accountability creates results.
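When the pilot wraps, the cohort comparison can be as simple as the sketch below; the scores are invented, and with cohorts this small any lift is directional rather than conclusive:

```python
import pandas as pd

# Hypothetical quality-index scores (1-5 composite, per the scorecard above)
# for hires screened the old way versus the competency-based pilot.
baseline = pd.Series([2.8, 3.1, 2.5, 3.4, 2.9])
pilot    = pd.Series([3.6, 3.9, 3.2, 4.1])

print(f"Baseline mean quality index: {baseline.mean():.2f}")
print(f"Pilot mean quality index:    {pilot.mean():.2f}")
print(f"Lift: {pilot.mean() - baseline.mean():+.2f}")
```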
The organizations that measure well don't do it because they have better data infrastructure than everyone else. They do it because someone decided to start — imperfectly, incrementally, with whatever data they had available. Thirty days from now, you can have more insight into your hiring outcomes than most recruiting teams accumulate in a year. The question is whether you decide to start.
CLARA is an AI-powered skill-alignment and talent-accuracy hiring platform that helps mid-market companies screen for competency, not credentials — filtering talent in rather than out. Learn more at getclara.io.