AI-Driven Cheating and Impersonation Prompt Return to In-Person Job Interviews

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Companies like Google, Cisco, and McKinsey are reinstating in-person interviews after a surge in candidates using AI tools to cheat during virtual assessments and scammers employing AI to impersonate applicants, leading to fraud and data theft. This shift aims to counteract the direct harms caused by AI misuse in hiring.[AI generated]

Why's our monitor labelling this an incident or hazard?

AI systems are explicitly involved as tools used by candidates to cheat and by scammers to impersonate others, leading to harms including fraud and potential data or financial loss. These harms fall under violations of rights and harm to individuals. Since the AI system's use has directly or indirectly led to these harms, this qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Fairness; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability

Industries
Business processes and support services; Digital security; IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property; Human or fundamental rights; Reputational

Severity
AI incident

Business function:
Human resource management; ICT management and information security

AI system task:
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Companies embrace in-person interviews to dodge the chatbots

2025-08-12
Axios
Why's our monitor labelling this an incident or hazard?
The article highlights the use and misuse of AI in hiring but does not report any realized harm such as injury, rights violations, or disruption caused by AI systems. It mainly focuses on the response of companies to AI-enabled challenges in recruitment, which is a governance or societal response to AI's impact. Therefore, it fits the definition of Complementary Information, as it provides supporting context and updates on AI's role in the labor market without describing a specific AI Incident or AI Hazard.

AI Is Forcing the Return of the In-Person Job Interview

2025-08-12
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
AI systems are explicitly involved as tools used by candidates to cheat and by scammers to impersonate others, leading to harms including fraud and potential data or financial loss. These harms fall under violations of rights and harm to individuals. Since the AI system's use has directly or indirectly led to these harms, this qualifies as an AI Incident.

AI Is Forcing the Return of the In-Person Job Interview

2025-08-12
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by candidates to cheat in interviews and by scammers to impersonate others, leading to fraud and deception. These are direct harms to individuals and communities (harm to people and harm to communities). The AI involvement lies in the use of AI tools to generate answers or fake identities, which directly leads to these harms. The article also discusses responses to these harms, but its primary focus is the realized harm caused by AI misuse. Hence, this is an AI Incident rather than a hazard or complementary information.

AI Won't Help You Cheat Your Way Through Job Interviews Forever

2025-08-12
Crooks and Liars
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used by candidates to generate answers during interviews, which is a misuse of AI in a real-world context. However, the article does not report any direct or indirect harm such as health injury, rights violations, or other significant harms caused by the AI use. The companies' response to revert to in-person interviews is a mitigation strategy but does not indicate an AI Incident or Hazard. Therefore, this is best classified as Complementary Information, providing context on societal and governance responses to AI misuse in hiring.

AI Drives Return to In-Person Interviews to Fight Deepfakes and Cheating

2025-08-12
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to cheat in interviews and to impersonate candidates via deepfake videos and AI voices, leading to fraud and dishonesty. These are direct harms caused by the use and misuse of AI systems in hiring contexts. The companies' response to revert to in-person interviews is a reaction to these realized harms. Hence, the event involves AI system use leading directly to harm, fitting the definition of an AI Incident.

Companies Bring Back In-Person Interviews to Curb AI-Driven Hiring Fraud

2025-08-13
La Voce di New York
Why's our monitor labelling this an incident or hazard?
The article reports on realized harms caused by AI systems used for impersonation and cheating in hiring processes, which qualifies as an AI Incident due to violations of trust and potential economic harm. However, the main focus is on the companies' responses to these harms, such as bringing back in-person interviews and biometric checks, which are measures to mitigate and prevent further incidents. Since the article centers on these responses rather than detailing a new specific AI Incident or AI Hazard event, it fits best as Complementary Information, providing context and updates on societal and governance responses to AI-related harms.

Tech Report: Companies return to in-person interviews as AI cheating rises, Musk targets Apple, Threads growth surges

2025-08-13
41NBC News | WMGT-DT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools being used by candidates to cheat during interviews and AI-enabled scammers impersonating job seekers to steal data or money. These are direct harms caused by the use and misuse of AI systems, fitting the definition of an AI Incident as the AI system's use has directly led to harm (fraud, deception, and potential financial or data loss). The other parts of the article about Musk, Kodak, and Meta do not describe AI-related harms or hazards. Hence, the main event qualifies as an AI Incident due to realized harm from AI misuse in recruitment.

How AI is forcing this big change in the way Google, Cisco, McKinsey and other companies hire techies

2025-08-14
ETCIO.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by candidates to cheat in online coding tests, which is a misuse of AI systems. This misuse directly impacts the hiring process, harming companies by compromising the fairness and reliability of candidate evaluation. Although no physical harm or legal violation is detailed, the damage to the recruitment process and the potential for data or financial theft constitute significant harm to organizations and their operations. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI misuse in hiring.

Google CEO Sundar Pichai emphasises in-person interviews: How AI is failing to pick the right talent

2025-08-16
The Times of India
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems used in recruitment, it does not describe any realized harm or incident resulting from AI use. Instead, it focuses on the limitations of AI and a company's response to improve hiring processes. Therefore, it is best classified as Complementary Information, providing context and response to AI's role in recruitment without reporting an AI Incident or Hazard.

Google brings back in-person interviews to skirt AI cheating

2025-08-18
ETCFO.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI tools by job candidates during virtual interviews, leading to challenges in accurately assessing technical skills. However, the AI system's role here is indirect and relates to misuse by users rather than a malfunction or development issue. No direct harm such as injury, rights violations, or property damage has occurred. The company's response to reintroduce in-person interviews is a governance measure to mitigate potential harm from AI misuse. Therefore, this event is best classified as Complementary Information, as it provides context on societal and organizational responses to AI misuse rather than describing a direct AI Incident or Hazard.

Google brings back in-person interviews to skirt AI cheating

2025-08-18
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions candidates abusing AI tools during virtual interviews, which directly undermines the hiring process and the ability of companies to assess technical skills accurately. This misuse has caused harm to companies' recruitment integrity and poses national security risks due to AI-enabled fake profiles. The AI system's role in facilitating cheating and scams is pivotal to the harm described. Hence, this qualifies as an AI Incident due to realized harm linked to AI misuse in recruitment.

Why does Google plan on returning to in-person interviews for new recruits?

2025-08-16
mint
Why's our monitor labelling this an incident or hazard?
An AI system (AI tools used by candidates to generate coding solutions) is explicitly involved and is being misused during virtual interviews, leading to a significant issue in the hiring process. This misuse constitutes a harm related to the reliability and fairness of recruitment, which can be considered a violation of labor rights or fair employment practices. Since the harm (cheating facilitated by AI) is occurring and affecting Google's hiring operations, this qualifies as an AI Incident. The event focuses on the consequences of AI misuse in hiring rather than just a general update or policy change, so it is not merely Complementary Information.

Sundar Pichai reveals new interview process at Google in AI era, says, 'We are making sure...'

2025-08-17
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions candidates using AI tools to cheat in virtual interviews, which is a misuse of AI systems. This misuse directly impacts the integrity of hiring processes, which can be considered a violation of labor rights or fair employment practices. However, the article does not describe a specific incident of harm occurring to individuals or groups, but rather a widespread challenge prompting changes in interview protocols. Since the harm is ongoing and related to misuse of AI leading to unfair hiring practices, it qualifies as an AI Incident due to violation of labor rights and integrity in employment processes.

Major Companies Including Google and McKinsey Are Bringing Back In-Person Job Interviews to Combat AI Cheating

2025-08-18
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The article centers on the use of AI by candidates to cheat in interviews, which is a misuse of AI systems. However, the harm described primarily concerns integrity and fairness in hiring, not a legally recognized or fundamental-rights harm such as injury, a rights violation, or disruption of critical infrastructure. The companies' response of bringing back in-person interviews is a governance or operational adaptation to AI misuse. Therefore, this event is best classified as Complementary Information, as it provides context on societal and organizational responses to AI misuse rather than reporting a new AI Incident or AI Hazard.

Google changes coding interviews to stop AI cheating, and it's not the only one

2025-08-20
India Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems being used by candidates to cheat in coding interviews, which is a misuse of AI in the hiring process. While this misuse could indirectly lead to harm by allowing unqualified candidates to be hired, the article does not report any realized harm such as employment of unqualified staff causing damage or rights violations. The companies' response to reintroduce in-person interviews is a mitigation measure. Therefore, this event is best classified as Complementary Information because it provides context on societal and governance responses to AI misuse in hiring, rather than describing a direct AI Incident or a plausible future AI Hazard.

To prevent AI 'jockeys', Google and McKinsey return to face-to-face interviews

2025-08-20
idnfinancials.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems being used by candidates to generate interview responses, which is a form of AI use that could lead to harm in the hiring process (e.g., unfair advantage, misrepresentation). However, the article does not report any actual harm or incident resulting from this misuse, only the potential for misuse and the companies' preventive measures. Therefore, this is best classified as Complementary Information, as it provides context on societal and organizational responses to AI misuse risks in recruitment, without describing a realized AI Incident or a plausible AI Hazard causing harm.

Google brings back in-person job interviews as CEO Sundar Pichai cracks down on AI cheating

2025-08-26
India Today
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI tools by candidates to cheat in virtual interviews, which is a misuse of AI, but it does not report any realized harm such as unfair hiring decisions or legal violations. The companies' shift back to in-person interviews is a governance response to this challenge. Since no direct or indirect harm from AI systems is described, and the event is about policy changes addressing AI misuse, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Google and FAANG tackle AI cheating in job interviews, but how do candidates actually cheat? Explained

2025-08-26
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by candidates to cheat in job interviews, which directly undermines the fairness and integrity of hiring processes. This constitutes a violation of labor rights and fair employment practices, a recognized form of harm under the framework. The harm is realized, not just potential, as companies have acknowledged the problem and are changing their hiring methods in response. The AI systems' development and use have directly led to this harm, fulfilling the criteria for an AI Incident.

AI can't ace your interview anymore: Google to bring back in-person hiring

2025-08-26
Business Standard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by candidates to cheat during virtual interviews, which indirectly harms the integrity of the hiring process and could violate labor rights or fairness principles. Although this is a significant issue, the article mainly describes the problem and companies' responses rather than a concrete AI Incident causing realized harm. Therefore, it fits best as Complementary Information, providing context and updates on societal and governance responses to AI's impact on hiring practices.

Google Restores In-Person Job Interviews Amid Rising AI Cheating Concerns

2025-08-26
The Hans India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the misuse of AI-powered tools by candidates during virtual interviews, which has led to concerns about the integrity of the hiring process. This misuse constitutes a harm related to violations of fair labor and employment practices, as it undermines the fairness and authenticity of candidate evaluations. The AI system's use (AI-assisted cheating) has directly led to this harm, prompting Google and other companies to change their hiring practices. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to harm in the form of unfair hiring practices and potential labor rights violations.

Google Brings Back In-Person Interviews to Combat AI Cheating

2025-08-26
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
An AI system is involved indirectly as candidates are using AI-powered tools to cheat in virtual assessments, which undermines the hiring process. The harm here is a violation of fair labor and hiring rights, as the misuse of AI tools leads to unfair advantages and compromises the integrity of employment decisions. Although the AI misuse is by candidates rather than Google itself, the event describes realized harm caused by AI use in the hiring context. Therefore, this qualifies as an AI Incident due to the direct impact on fair hiring practices and labor rights.