What Is AI Screening?
AI screening in recruitment is automated technology that evaluates job applications at scale by analysing CVs, assessments, or video responses to identify qualified candidates. Machine learning models score applicants against predefined criteria, replacing manual CV review for high-volume roles. AI screening tools integrate with ATS platforms to reduce time-to-shortlist.
TL;DR
AI screening uses algorithms to automatically evaluate job applicants — ranking CVs, scoring assessments, analysing video interviews, or filtering out candidates before a human reviews anything. It can reduce time-to-shortlist by 60-80% at scale. It can also automate your existing biases into a system that operates at 500 applications per hour.
What AI Screening Actually Does
Most AI screening tools do one of four things: parse and rank CVs, score structured assessments, analyse recorded video interviews, or flag applications that match defined criteria. CV parsing extracts structured data from unstructured documents and compares it against a predefined profile. Assessment scoring evaluates answers to situational judgement questions, personality inventories, or cognitive tests. Video analysis uses natural language processing or, controversially, facial expression analysis to score recorded responses.
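The CV-matching step can be sketched as a simple profile comparison. This is an illustrative toy, not any vendor's actual model: the field names (`skills`, `years_experience`), the profile structure, and the fixed weights are all assumptions made for the example.

```python
# Minimal sketch of CV-to-profile matching, assuming a parser has already
# extracted skills and years of experience into a dict. Field names and
# weights are illustrative, not taken from any specific ATS.

def score_cv(parsed_cv: dict, profile: dict) -> float:
    """Score a parsed CV against a predefined role profile (0.0 to 1.0)."""
    required = set(profile["required_skills"])
    found = required & set(parsed_cv.get("skills", []))
    skill_score = len(found) / len(required) if required else 1.0

    # Cap experience credit at the profile's target so years beyond the
    # requirement do not dominate the score.
    target_years = profile["min_years_experience"]
    years = min(parsed_cv.get("years_experience", 0), target_years)
    experience_score = years / target_years if target_years else 1.0

    # Fixed weighted blend; production tools typically learn these weights.
    return 0.7 * skill_score + 0.3 * experience_score

profile = {"required_skills": ["python", "sql", "excel"], "min_years_experience": 2}
cv = {"skills": ["python", "sql"], "years_experience": 3}
print(round(score_cv(cv, profile), 3))  # two of three skills, full experience credit
```

Note what even this toy makes visible: a candidate whose skills are real but phrased differently from the profile's keywords scores zero on those skills, which is exactly the non-standard-CV failure mode described below.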
The quality varies enormously. CV parsers from enterprise ATS providers are generally reliable at extracting work history and credentials from standard formats. They are unreliable for non-standard CVs, career changers, people who list skills in unusual ways, and anyone using a creative CV format. And a system trained on successful hires from the last five years will systematically penalise candidates who look different from that cohort, including along dimensions that are legally protected.
Video analysis tools that claim to infer personality or cultural fit from facial micro-expressions have been widely criticised by researchers and rejected by several regulators. The scientific basis for inferring personality from facial movement is thin. HireVue removed its facial analysis component in 2021 following scrutiny, though it retains NLP-based analysis of speech content.
The more defensible applications are structured assessment scoring — where the AI is scoring defined content against a known rubric — and CV keyword screening used as a first filter with human review at the next stage.
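Rubric-based assessment scoring, the more defensible pattern described above, can be sketched as a lookup against a defined answer key. The question IDs, answer choices, and point values here are illustrative assumptions, not any real assessment's rubric.

```python
# Sketch of rubric-based scoring for a structured assessment, assuming each
# question maps answer choices to rubric-defined points. IDs and point
# values are illustrative.

RUBRIC = {
    "sjt_q1": {"a": 0, "b": 2, "c": 1},  # situational judgement item
    "sjt_q2": {"a": 1, "b": 0, "c": 2},
}

def score_assessment(responses: dict) -> float:
    """Return the candidate's share of the maximum rubric score."""
    earned = sum(RUBRIC[q].get(ans, 0) for q, ans in responses.items() if q in RUBRIC)
    maximum = sum(max(points.values()) for points in RUBRIC.values())
    return earned / maximum

print(score_assessment({"sjt_q1": "b", "sjt_q2": "a"}))  # 3 of 4 points
```

The defensibility comes from the rubric being explicit and inspectable: every score can be traced to a defined answer key that can itself be validated for role relevance, unlike a learned model's opaque weights.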
Why Bias Is the Central Concern
An AI screening tool is only as good as the data it was trained on and the criteria it was trained to optimise. Amazon famously scrapped an experimental machine learning recruiting tool (reported in 2018) after it systematically downgraded CVs from women, for example penalising CVs containing the word "women's" — because it was trained on historical hire data from a historically male-dominated tech workforce. The algorithm learned that men were hired more often, so it learned to prefer male signals.
This is not an edge case. It is the structural problem with any supervised learning approach to screening: you are optimising for past decisions, and past decisions reflect past biases. The solution requires deliberate debiasing of training data, regular auditing of output distributions across protected characteristics, and human oversight at decision points.
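The "regular auditing of output distributions" step can be sketched concretely. A minimal audit computes each group's selection rate and its impact ratio against the best-performing group — the metric NYC Local Law 144 bias audits report. The group labels and decision data here are hypothetical, and in practice group labels must be lawfully collected and handled.

```python
# Sketch of a bias audit over screening outputs: selection rate per group,
# then impact ratio against the highest-rate group. Data is illustrative.
from collections import Counter

def audit(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Map each group label to its impact ratio (group rate / highest rate)."""
    totals, passes = Counter(), Counter()
    for group, passed in decisions:
        totals[group] += 1
        passes[group] += passed  # bool counts as 0 or 1
    rates = {g: passes[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 25 + [("B", False)] * 75)
print(audit(decisions))  # group B passes at 62.5% of group A's rate
```

A widely used red flag, borrowed from the US EEOC's four-fifths rule, is an impact ratio below 0.8; an audit like this belongs at every screening stage, not just the final shortlist.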
In the US, New York City Local Law 144 (effective July 2023) requires employers using AI hiring tools to conduct annual bias audits and disclose results to candidates. The EU AI Act classifies employment AI as high-risk, requiring conformity assessments and human oversight. The UK has not yet enacted equivalent requirements, but the ICO has issued guidance on automated decision-making under GDPR that applies directly to AI screening.
In Practice
A financial services firm receives 12,000 applications for 80 graduate roles. Manual review is not possible at that volume. They deploy an AI tool to score CVs against a profile built from the last three years of successful hires. The tool screens to 900 candidates for human review. A bias audit of the output shows that candidates from Russell Group universities pass at a rate 2.4x higher than candidates from other universities — not because they perform better in the subsequent assessment, but because the historical hire data overrepresented that group. The firm adjusts the model to remove university prestige as a ranking signal. Pass rates equalise. Offer acceptance rates and first-year performance metrics are unchanged.
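The arithmetic behind the audit finding above is worth making explicit. The applicant counts below are hypothetical; only the 2.4x ratio comes from the scenario.

```python
# Illustrative reconstruction of the audit arithmetic in the scenario above.
# Counts are hypothetical; only the 2.4x disparity is from the text.

russell_pass, russell_total = 480, 4000  # hypothetical: 12% pass rate
other_pass, other_total = 400, 8000      # hypothetical: 5% pass rate

russell_rate = russell_pass / russell_total
other_rate = other_pass / other_total
print(round(russell_rate / other_rate, 2))  # the disparity the audit surfaced
```

The key point of the scenario is that the disparity disappeared without hurting hire quality once the prestige signal was removed, which is the strongest evidence that the signal was proxying for historical bias rather than job performance.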
Key Facts
| Concept | Definition | Practical Implication |
|---|---|---|
| CV Parsing | Automated extraction and ranking of structured data from CVs | Unreliable for non-standard formats; requires human review at shortlist stage |
| Assessment Scoring | AI evaluation of structured test responses | More defensible than video analysis; rubric should be validated for role relevance |
| Video Interview AI | NLP or facial analysis of recorded responses | Facial analysis lacks scientific validity; NLP on speech content is more defensible |
| Training Data Bias | AI learns to replicate patterns in historical hire decisions | Past biased hiring creates future biased screening |
| NYC Local Law 144 | Requires annual bias audits for AI hiring tools used in NYC | Sets precedent for AI hiring regulation in US jurisdictions |
| EU AI Act | Classifies employment AI as high-risk | Conformity assessments, transparency, and human oversight required |
| Bias Audit | Systematic review of AI output distributions across protected groups | Mandatory in some jurisdictions; best practice everywhere |
Key Statistics
In 2024, AI-powered hiring tools processed over 30 million applications globally. (Industry research, 2024)
88% of CVs submitted for high-volume roles come from candidates who do not meet the basic qualifications. (Ideal, now acquired by Ceridian, 2023)