
What Is an Assessment Centre?

An assessment centre is a selection event — typically lasting half a day to two days — where candidates complete multiple exercises observed by trained assessors. Activities include group discussions, case study presentations, role plays, and psychometric tests, each designed to evaluate specific competencies. Assessment centres are primarily used for graduate schemes and management-level roles where multiple candidates are assessed simultaneously.

Candidate Assessment & Selection · assessment · assessment-centre · selection · group-exercise · Updated March 2026

TL;DR

An assessment centre is a structured evaluation process where multiple candidates complete a series of exercises observed by trained assessors over a half-day or full day. It combines simulations, group tasks, and interviews to predict job performance more accurately than any single method. Employers use it when the stakes of a wrong hire are high.

What an Assessment Centre Actually Tests

Assessment centres exist because interviews lie. Candidates rehearse answers, interviewers develop unconscious biases, and neither party gets a realistic read on how the person performs under pressure. An assessment centre fixes this by putting candidates into situations that mirror real job demands.

A typical programme includes an in-tray exercise (prioritising a fictional manager's inbox), a group discussion observed for participation and influence, a role-play with a trained actor playing a difficult client, and a competency-based interview. Some organisations add psychometric tests, written reports, or a presentation. Each exercise targets specific competencies: a management trainee scheme might assess commercial awareness, communication, and resilience. A technical role might add a case study with financial modelling.

The scoring is systematic. Assessors record behaviours against observable indicators rather than forming a general impression. A candidate who speaks confidently but fails to build on others' ideas in the group exercise will score differently from one who listens carefully and moves the discussion forward. This behavioural evidence makes the final rating defensible and auditable.

Why It Matters for Recruitment

Assessment centres are among the stronger selection methods for predicting job performance. Research published in the Journal of Applied Psychology places the validity coefficient of assessment centres at approximately 0.37, comparable to the 0.38 reported for structured interviews used alone and well ahead of unstructured interviews. When combined, the two approach the upper limit of predictive accuracy available to practitioners.

For volume hiring, the business case is straightforward. If a contact centre hires 200 customer service agents per year and a bad hire costs approximately 30% of first-year salary, reducing mis-hire rates by even 15 percentage points saves significant budget. Assessment centres typically increase selection accuracy enough to justify their cost once a programme runs at scale.
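The arithmetic behind that business case can be sketched in a few lines. Only the 200 hires per year, the 30% cost factor, and the 15-point reduction come from the text above; the salary figure is an illustrative assumption.

```python
# Back-of-envelope mis-hire savings model for the volume-hiring case above.
# The $32,000 first-year salary is an assumed figure, not from the article.

def annual_mishire_savings(hires_per_year, first_year_salary,
                           cost_factor, mishire_rate_drop):
    """Savings from cutting the mis-hire rate by `mishire_rate_drop`
    (a fraction, e.g. 0.15 for 15 percentage points)."""
    cost_per_mishire = first_year_salary * cost_factor
    avoided_mishires = hires_per_year * mishire_rate_drop
    return avoided_mishires * cost_per_mishire

savings = annual_mishire_savings(
    hires_per_year=200,          # contact centre volume from the example
    first_year_salary=32_000,    # assumed average agent salary
    cost_factor=0.30,            # bad hire costs ~30% of first-year salary
    mishire_rate_drop=0.15,      # 15-percentage-point improvement
)
print(f"${savings:,.0f}")  # 30 avoided mis-hires at $9,600 each: $288,000
```

Under these assumptions the programme pays for itself well before the second cohort, which is why the cost case only works "at scale".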

The secondary benefit is [candidate experience](/glossary/candidate-experience). Applicants who attend a well-run assessment centre, even those who are unsuccessful, consistently report a more positive impression of the employer than those who received only a panel interview. They saw the culture, met future colleagues, and felt genuinely assessed rather than filtered. That impression affects whether they accept offers, refer friends, and reapply in future.

For recruiters and staffing agencies, assessment centres represent a premium deliverable. Agencies that design and facilitate assessment centres on behalf of clients operate at a higher margin than those providing CV screening alone. It positions the agency as a strategic partner rather than a transactional supplier.

In Practice

A regional logistics company asked a staffing agency to fill 12 operations supervisor roles across three depots. The agency proposed a one-day assessment centre rather than a standard telephone screen and panel interview process.

The programme ran two cohorts of 18 candidates each, observed by six assessors trained over a half-day the previous week. Exercises included: a 20-minute in-tray simulation involving a driver absence crisis, a group problem-solving task around route optimisation, and a 30-minute structured interview. Each candidate was observed by at least two different assessors across the day.

Of the 36 candidates assessed, 14 met the benchmark across all three exercises. The client made offers to all 14, and 12 accepted, filling the 12 roles. Twelve months later, the client reported that 11 of the 12 supervisors were rated "meeting or exceeding expectations" in their annual review. The previous cohort, hired via CV and a single interview, had a 12-month retention rate of 58%. The assessment centre cost the client 22% more per hire upfront and saved approximately $340,000 in turnover costs over the following year.

Key Facts

| Concept | Definition | Practical Implication |
| --- | --- | --- |
| Assessor-to-candidate ratio | Typically 1 assessor per 2-3 candidates | Below this ratio, observation quality drops and behavioural evidence becomes unreliable |
| Assessor wash-up | The final calibration session where assessors compare ratings before producing an overall score | Prevents individual bias from skewing the outcome; takes 15-30 minutes per candidate |
| Validity coefficient | A statistical measure of how well a selection tool predicts job performance (0 = none, 1 = perfect) | Assessment centres score approximately 0.37; unstructured interviews score around 0.14 |
| Exercise fidelity | How closely a simulation mirrors the actual job | Higher fidelity produces better prediction but increases design cost; aim for a 70-80% match |
| [Competency framework](/glossary/competency-framework) | The set of defined behaviours the assessment is designed to measure | Must be defined before exercises are designed; typical programmes assess 4-6 competencies |
| Pass mark | The minimum score required across exercises to be recommended for hire | Setting this too low defeats the purpose; calibrate against the performance data of existing high performers |
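The pass-mark calibration in the last row can be sketched with the standard library. The scores and the lower-quartile rule below are illustrative assumptions, not a prescribed method.

```python
# Sketch: calibrating a pass mark against existing high performers.
# Scores (1-5 scale) and the lower-quartile anchor are assumptions.
import statistics

# Exercise-average scores of current staff rated as high performers.
high_performer_scores = [3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.4]

# Anchor the benchmark at the lower quartile of the high-performer range
# rather than the mean, so strong candidates are not screened out while
# the bar still reflects proven on-the-job performance.
pass_mark = statistics.quantiles(high_performer_scores, n=4)[0]
print(pass_mark)  # 3.7
```

The same data makes the "too low" failure mode visible: a pass mark below the weakest current high performer would admit candidates the organisation already knows underperform.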

Key Statistics

  • Assessment centres have predictive validity of 0.36-0.45 for job performance.

    US Office of Personnel Management, 2023

Frequently Asked Questions

What exercises are typically included in an assessment centre?
A well-designed assessment centre maps exercises to specific competencies, with each exercise providing strong evidence on 2-3 defined competencies. Common exercises include: an in-tray or inbox simulation testing planning, priority-setting, and written communication; a leaderless group discussion testing collaboration, influencing, and communication without a formally designated leader; a role-play simulation with a trained role-player testing interpersonal skills in a challenging stakeholder situation; and a presentation exercise testing analytical speed and persuasive communication. Each assessor independently scores their assigned candidates using behavioural anchors before any group discussion.
How reliable are assessment centres compared to interviews for predicting job performance?
The US Office of Personnel Management cites assessment centre predictive validity at 0.36-0.45 for job performance, somewhat below the 0.55-0.70 it cites for a structured interview but significantly better than an unstructured interview. The value is not in replacing structured interviews but in adding convergent evidence: multiple trained assessors observing the same candidate across multiple exercises produce scores that can be triangulated to reduce the influence of any single assessor's bias. Organisations that redesigned final selection stages as assessment centres report material improvements in 12-month performance ratings for new hires.
What is the main source of error in running an assessment centre?
The most common source of validity loss is assessors sharing impressions before completing individual scoring — so-called 'wash-up' discussions. When assessors talk through candidates before scoring, the most confident or senior assessor's view anchors everyone else's ratings, collapsing multiple independent observations into a single perspective. The critical procedural control is independent scoring first: every assessor completes and submits their scores for every candidate before any group discussion begins. Without this discipline, the assessment centre's multi-observer advantage is largely negated.