
AI and Candidate Assessments

What Recruiters Need to Know

As we reflect on the past 12 months, one major topic that has dominated conversations is Artificial Intelligence (AI). This year, generative AI has become an integral part of our daily activities, with 75% of workers reporting its use in their workplaces¹. The value of AI is increasingly recognised, with 70% of business leaders noting a significant boost in productivity through its integration².

While many organisations are establishing their stance, AI’s role in candidate assessment remains a topic of debate.

Here are five prompts for thought when considering using AI in this space:

1. Establish and communicate your position on using AI to candidates

The benefits of AI in the workplace are becoming increasingly evident. Many Talent Acquisition teams, in particular, are experiencing reduced recruiting times thanks to AI-driven tools that streamline their processes. Skills in using AI effectively and prompt engineering are becoming highly sought after. Therefore, it is beneficial to communicate your stance on AI to candidates early in the recruitment process.

Tips

Explain how AI supports employees and where it is encouraged: Clearly outline how AI is integrated into your workplace and the specific areas where it enhances productivity and efficiency.

Clarify desired AI-related skills: Be transparent about the AI-related skills you value, such as prompt engineering, and explain how these skills are assessed during the recruitment process.

Guide candidates on AI usage: Provide clear guidelines on how candidates can use AI during the recruitment process and the reasons behind these guidelines.

Outline AI misuse: Specify how AI should not be used by candidates and explain the rationale, to ensure fairness and integrity in the assessment process.

By establishing and communicating your position on AI, you can set clear expectations, fostering a transparent and fair recruitment environment. How do you plan to integrate these tips into your recruitment strategy?

2. Cultural attitudes towards AI and cheating

Recognising cultural differences in attitudes towards AI adoption, and towards cheating, is crucial for global organisations. Regional variations can significantly affect how AI is integrated into processes.

East Asia, North America and Western Europe tend to be more supportive of AI, whereas Eastern Europe, parts of Africa and Latin America may be more cautious in adoption³.

It’s important to note that within these regions, there can be significant variations influenced by macroeconomic factors and specific industry needs. Even within an organisation, different departments might experience varying levels of AI adoption based on their relevance and exposure to the technology.

When it comes to attitudes towards cheating, cultural perspectives can vary widely. In some cultures, behaviours deemed as cheating in one context might be seen as resourceful and a means of getting ahead in another. For instance, competitive environments may view actions like bending the rules as beneficial. Conversely, other cultures regard such behaviours as unethical and take a firm stance on monitoring candidates to ensure fairness. In these cultures, strict oversight is essential, while in others, it might be perceived as a sign of mistrust.

By understanding and respecting these cultural attitudes, global organisations can tailor their AI strategies to better align with regional perspectives and enhance their overall effectiveness.

3. Impact on assessments

AI’s influence on candidate assessments is a complex issue, with both potential benefits and challenges. Here are some key points to consider:

Asynchronous video interviewing

While asynchronous video interviews can save time, there is a risk that candidates might use AI to generate recommended responses. Interviewers might look for behaviours indicating AI use, but this could inadvertently create DE&I risks, as these behaviours might be confused with neurodiverse traits.

Aptitude assessments

Our testing shows that AI struggles with completing aptitude assessments effectively within time limits. AI often achieves low scores, giving users false confidence. For clients concerned about AI cheating, we recommend using non-verbal, interactive formats like Swift Global Aptitude.

Situational judgment tests (SJTs)

Our SJTs are designed so that the response and scoring mechanisms are not transparent to candidates or AI, making it difficult to predict the ‘correct’ answers. Formats that use more interactivity, such as hotspots, are likely to be harder for AI to complete, whereas standard SJT formats that present the full scenario and response options at once may be more susceptible to AI support.

Saville Wave / personality assessments

The interactive rate-rank format of Wave assessments makes it challenging for AI to assist effectively. We produce response style scores (e.g., ratings acquiescence and consistency of rankings) to identify unusual patterns that can be further explored. Additionally, candidates are typically assessed later on their behavioural responses, where they need to provide evidence of their strengths. It is important to remember that there is no single ‘template’ for success; different combinations of strengths and weaknesses can create effective profiles for various roles.

These considerations highlight the need for careful design and monitoring of assessment processes to ensure fairness and integrity. 

4. The increasing importance of interviewer skill

Even live interviews are not immune to the impact of artificial intelligence. Tools have been developed that process interview questions in real time, generating immediate responses for the candidate. This use of AI poses a clear challenge to the integrity of recruitment processes, as it could unfairly advantage candidates who rely on it, enabling them to present a distorted view of their skills and abilities. This makes it critical that interviewers are able to recognise when candidates are using AI tools to assist their responses. Signs such as delayed responses, unnatural delivery and difficulty elaborating on details can suggest AI assistance.

Alongside detection, interviewers should also consider ways to deter and disrupt AI tools being used in live interviews. Structuring interviews with open-ended, skills-based questions that require candidates to reference specific experiences can expose reliance on AI, as these responses are easier to verify against a candidate’s job history. Additionally, incorporating probing follow-up questions, or asking for greater detail on specific points, can disrupt AI-generated answers, making it more difficult for the technology to maintain coherence and consistency.

Organisations should recognise the importance of training interviewers to identify and mitigate the use of emerging AI tools; investing in this training can have a significant impact. By equipping interviewers with the skills to detect AI-assisted responses, companies can safeguard the authenticity of the hiring process and ensure a true assessment of candidates’ abilities and experiences. This investment strengthens the overall quality and fairness of the talent acquisition process, allowing for the most accurate and reliable hiring decisions.

5. Not one size fits all

AI’s role in recruitment will not look the same across all organisations or even within all roles in a single company. Each organisation must carefully define its stance on the use of AI tools by candidates and tailor its approach to align with its unique values, requirements, and hiring objectives.

Some companies may choose to embrace AI, encouraging candidates to leverage it as a tool during the application process. For instance, in technical roles where AI usage is a likely component of the job, allowing it to be used in the hiring process can be a logical approach, aligning with the skills required for the role.

Other companies may choose to prohibit its use entirely. For instance, in creative or communication-focused roles, where originality and natural expression are highly valued, AI-assisted applications may not be considered desirable.

Regardless of the stance taken, organisations must ensure that their policies are applied fairly and transparently. For instance, if AI tools are allowed, all candidates should be informed and provided with equal access. Conversely, if AI is prohibited, clear guidelines should be communicated to prevent misunderstanding, and its use must be effectively monitored and fairly penalised.

Find Out More


For more information about our approach to AI and to discuss your assessment requirements with one of our experts, get in touch.
