Mar 5, 2026 · Skills Assessment
How to Give Feedback to Candidates After a Skill Assessment
Silence damages your employer brand. Learn the best practices for providing post-assessment feedback to improve the candidate experience.
Dr. Russell T. Warne, Chief Scientist

Most organizations invest considerable thought into selecting and administering skill assessments. However, far fewer consider what happens after the assessment is complete: specifically, whether and how to communicate the results to candidates. This oversight is a significant missed opportunity. How an organization handles post-assessment feedback directly affects the candidate experience, the employer brand, and in some cases, the legal defensibility of the hiring process. Managing this correctly requires understanding what candidates are owed and what responsible, effective feedback actually entails.
Whether to Provide Feedback at All
The first hurdle many organizations face is deciding whether to share assessment results at all. The answer depends heavily on the context of the hiring pipeline. In general, candidates who have invested time completing an assessment have a reasonable expectation of some form of communication. Silence is not a neutral response; candidates experience it as dismissive, which severely damages their perception of the employer.
At the early screening stages, when the applicant pool is large, providing individualized score reports to every candidate is operationally infeasible. At this stage, automated but respectful updates acknowledging the assessment and outlining the next steps are sufficient. However, at later stages, when a smaller group of finalists has completed comprehensive, multi-dimensional evaluations, providing substantive, individualized feedback becomes both feasible and appropriate. The practical rule is simple: the depth of the feedback should scale with the time and effort the candidate has invested.
There is also a legal dimension to consider. Under the Americans with Disabilities Act, candidates who requested accommodations during testing may have grounds to request specific information about the administration. More broadly, organizations must be prepared to demonstrate to the Equal Employment Opportunity Commission that their assessments were applied consistently and used in a non-discriminatory manner. Documented, uniform feedback practices are a cornerstone of that defense.
Understanding What Candidates Actually Want
When candidates ask for feedback, they are usually asking one of three specific questions. First, they want to know the dispositional outcome: did the test help or hurt their chances? Candidates who are rejected after a screening assessment deserve a timely, respectful notification. Leaving them in the dark after they have invested time in the process is a failure of basic professional courtesy.
Second, advancing candidates often seek interpretive feedback: what does the score actually mean? Sharing a cognitive profile with appropriate context helps set expectations on both sides.
Third, rejected candidates frequently ask for developmental feedback: how can they improve their score next time? This is a natural question, but organizations must answer it honestly. Cognitive ability scores reflect highly stable characteristics that do not shift substantially through short-term cramming. Promising a candidate that studying harder will dramatically raise their cognitive score is deceptive. Instead, organizations should offer practical guidance on eliminating avoidable errors, such as encouraging candidates to familiarize themselves with the test format beforehand to reduce anxiety.
The Principles of Framing Score Feedback
When sharing actual scores, framing matters as much as the raw data. Scores must never be presented in isolation; a number without a reference point is meaningless. Candidates need to understand what population their score is being compared against and where they fall relative to the average. A score of 112 is only useful if the candidate knows whether it represents the 50th or the 80th percentile of the norm group.
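To make the arithmetic concrete: when a norm group's mean and standard deviation are known and scores are approximately normally distributed, a standard score converts directly to a percentile rank. The sketch below assumes IQ-style norms (mean 100, SD 15); these parameters are illustrative, not tied to any particular instrument.

```python
from statistics import NormalDist

# Assumed norm-group parameters for illustration only: many cognitive
# assessments report standard scores with mean 100 and SD 15.
NORM_MEAN = 100.0
NORM_SD = 15.0

def percentile_rank(score: float, mean: float = NORM_MEAN, sd: float = NORM_SD) -> float:
    """Percent of the norm group scoring at or below `score`,
    assuming normally distributed norms."""
    return NormalDist(mu=mean, sigma=sd).cdf(score) * 100

# A score of 112 is 0.8 SD above the mean, which lands near the
# 79th percentile under these assumed norms.
print(round(percentile_rank(112)))  # → 79
```

The same score could sit at a very different percentile under a different norm group, which is exactly why the reference population must be disclosed alongside the number.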
Furthermore, scores must be presented with an explicit acknowledgment of their margin of error. Every psychological assessment contains a degree of statistical noise. Presenting a score as a flawless, exact measurement drastically overstates the instrument's precision. Responsible feedback acknowledges that the score is an estimate—the best available estimate, but an estimate nonetheless.
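One standard way to quantify that noise is the standard error of measurement, SEM = SD × sqrt(1 − reliability), which yields a confidence band around the observed score. The values below (SD 15, reliability 0.90, a 95% band) are illustrative assumptions, not properties of any specific test.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def confidence_band(score: float, sd: float = 15.0,
                    reliability: float = 0.90,
                    z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence band around an observed score
    (z = 1.96 for a two-sided 95% interval)."""
    margin = z * sem(sd, reliability)
    return (score - margin, score + margin)

# With SD 15 and reliability 0.90, SEM is about 4.7, so an observed
# score of 112 carries a 95% band of roughly 103 to 121.
low, high = confidence_band(112)
```

Presenting the band rather than the point estimate is what "the score is an estimate" looks like in practice.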
Finally, feedback must be directly connected to the role's requirements and calibrated to the audience. Telling a candidate their verbal reasoning score is at the 65th percentile is confusing jargon. Translating that into plain language—explaining that they outperformed roughly 65 out of 100 people in the comparison group, and relating that specifically to the job's heavy demands for written communication—transforms a sterile metric into meaningful, actionable context.
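The translation step described above can be sketched as a simple formatter; the wording, index name, and job-relevance phrasing here are purely illustrative, not a standard template.

```python
def plain_language(index_name: str, percentile: int, job_link: str) -> str:
    """Render a percentile as a plain-language, job-related sentence.
    All wording is illustrative."""
    return (
        f"On {index_name}, you performed as well as or better than roughly "
        f"{percentile} out of 100 people in the comparison group. "
        f"This matters for this role because {job_link}."
    )

message = plain_language(
    "verbal reasoning", 65,
    "the position involves heavy written communication",
)
print(message)
```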
Delivering Unfavorable Feedback
Giving feedback to candidates who were rejected because of their assessment results requires particular care. The conversation risks being either uselessly vague or demoralizingly blunt. The guiding principle is honesty without unnecessary elaboration.
If a candidate asks why they were rejected, they deserve the truth: their assessment score did not meet the necessary threshold for the role's specific demands. What they absolutely do not need is a lecture conflating their test score with their overall intelligence, personal worth, or future career prospects. An assessment score is merely evidence of performance on a specific instrument under specific conditions; it is never a verdict on a human being.
Crucially, organizations must not fabricate alternative excuses. If a low cognitive score was the primary reason for rejection, claiming that the role was filled internally or that they lacked specific experience is dishonest and counterproductive. Candidates who receive honest, respectful feedback—even when it is negative—consistently report better impressions of the employer than those who are fed evasive lies.
Using Feedback for Advancing Candidates
For finalists, assessment feedback serves an entirely different purpose: it is a tool for productive dialogue. Sharing a cognitive profile with a finalist and discussing how it aligns with the role's demands builds mutual understanding. This conversation also serves as a useful secondary evaluation. How a candidate responds to their own score report, whether they engage thoughtfully, demonstrate self-awareness about their weaknesses, and show intellectual honesty, provides insight into their character that the raw score alone cannot capture. This should be a two-way dialogue, treating the candidate as an active participant in a mutual selection process rather than a passive subject.
The Mandate of Consistency
Whatever feedback protocol an organization adopts, it must be applied with absolute consistency across all candidates for the same role. Providing detailed score breakdowns to some demographics while giving vague non-answers to others creates severe legal exposure and violates the fundamental conditions of equitable hiring. Organizations must document their feedback practices—including timing, format, and depth of information—and strictly enforce them across all hiring managers.
Professionally developed assessments simplify this process considerably by generating standardized, interpretable score reports. The Reasoning and Intelligence Online Test (RIOT) is engineered specifically for this standard. Developed by Dr. Russell Warne and built to meet the rigorous guidelines of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education, RIOT produces comprehensive reports that include the contextual information candidates and hiring teams actually need. By providing detailed index scores across Verbal Reasoning, Fluid Reasoning, Spatial Ability, Working Memory, Processing Speed, and Reaction Time alongside an overall IQ score, RIOT makes delivering consistent, professional, and informative feedback a straightforward operational matter.