Protecting your recruitment process from AI abuse
Job seekers are increasingly using generative AI tools such as ChatGPT during recruitment to increase their chances of securing employment. How can you protect the integrity of your recruitment process when candidates may be using AI tools?
The risks of AI use by candidates
AI can mask a candidate’s true skills, knowledge and abilities. This is because AI can be used, e.g. to generate polished CVs which exaggerate (or even fabricate) experience or skills, produce cleverly tailored cover letters, write content for job application forms, compose online written assessments and anticipate and prepare answers to interview questions. It can also be very difficult to work out what is real versus what has been generated by AI, particularly if a candidate has made subtle edits to the AI output. This can, in turn, lead you to make a poor hiring decision. A candidate might even be offered a job that they are not sufficiently qualified for.
SEPTEMBER 2024 SURVEY BY CAPTERRA
41% of UK jobseekers have used AI to exaggerate or lie about their skills on a CV or application form.
75% more likely to receive a job offer if AI was used in the application.
Protecting your recruitment process
Steps you can take to protect the integrity of your recruitment process include:
• Expressly prohibiting the use of generative AI in job applications by including appropriate wording in your recruitment pack. From the outset, this sets the ethical standards you expect candidates to abide by.
• Including a mandatory question on your application form or otherwise during the screening process that specifically asks the candidate to declare whether they have used AI tools to assist with their application and, if so, how.
• Placing more emphasis on skills-based tests/assessments (such as interpersonal, critical thinking, decision-making and problem-solving skills) and conducting these in a live test environment, e.g. using timed tests, full-screen tracking, in-person assessments, practical exercises, etc. A candidate can’t rely on AI to stand in for skills and ability they don’t have if tests are held under controlled conditions.
• Strengthening your interview process. Conduct face-to-face interviews and ask probing competency-based questions that are specific to the candidate’s purported experience and skills as declared on their CV.
• Using screening tools that are designed to detect AI-generated content in job applications. If you do use these, be transparent about it to candidates.
Tip. Ensure hiring managers are aware of how AI can be misused by candidates, and that they know what “red flags” to watch out for, e.g. polished language on a CV which is then inconsistent with how the candidate performs at interview, or generic words or phrases that crop up repeatedly across different candidates’ applications.
Some candidates may request the use of AI tools as a reasonable adjustment for a disability during the recruitment process, e.g. those with cognitive or learning disabilities. In that case, consider whether this would alleviate the disadvantage arising from their disability and whether what they have requested is “reasonable” in the circumstances.
Steps you could take include prohibiting the use of generative AI in job applications, including a mandatory question that asks whether AI tools have been used, placing more emphasis on skills-based assessments conducted in live test environments, asking more probing competency-based interview questions and using AI detection tools.