UPSC Plans AI-Based CCTV Surveillance to Prevent Exam Cheating


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Union Public Service Commission of India has floated tenders for AI-powered exam monitoring, including Aadhaar-based fingerprint or facial recognition and live AI CCTV surveillance. Aimed at preventing cheating and impersonation in NEET, NET, and civil service exams, the move responds to alleged irregularities and seeks to bolster exam integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article details the intended use of AI systems (facial recognition, AI-based CCTV surveillance) in exam monitoring to prevent cheating and impersonation. There is no indication that any harm or violation has occurred due to the AI system's development or use. The event is about the planned implementation of AI technology to reduce exam fraud, which could plausibly lead to benefits or potential risks, but no realized harm is described. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to incidents related to privacy, surveillance, or rights violations, but no incident has yet occurred.[AI generated]
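The distinction the monitor draws above (realized harm yields an AI Incident, plausible future harm an AI Hazard, and neither yields a Complementary Information label) can be sketched as a simple decision rule. The `Event` schema and `classify` function below are a hypothetical illustration of that triage logic, not the monitor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Minimal event record (hypothetical schema, not the monitor's)."""
    involves_ai: bool            # an AI system is developed or in use
    harm_realized: bool          # a concrete harm has already occurred
    plausible_future_harm: bool  # use could plausibly lead to harm

def classify(event: Event) -> str:
    """Triage rule paraphrased from the rationale above."""
    if not event.involves_ai:
        return "Unrelated"
    if event.harm_realized:
        return "AI Incident"
    if event.plausible_future_harm:
        return "AI Hazard"
    return "Complementary Information"

# The UPSC tender: AI in use, no realized harm, plausible privacy/rights risks.
print(classify(Event(involves_ai=True, harm_realized=False,
                     plausible_future_harm=True)))  # prints "AI Hazard"
```

As the per-article rationales below show, the unstable step is the middle test: different outlets covering the same tender were labelled either "AI Hazard" or "Complementary Information" depending on whether plausible future harm was judged present.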
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Fairness
Accountability
Robustness & digital security
Democracy & human autonomy
Human wellbeing

Industries
Education and training
Government, security, and defence
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Psychological
Reputational
Economic/Property

Severity
AI hazard

Business function:
Monitoring and quality control
Compliance and justice

AI system task:
Recognition/object detection
Event/anomaly detection


Articles about this incident or hazard


NEET-NET Row: UPSC To Use AI-based CCTV Surveillance To Prevent Cheating

2024-06-24
NDTV
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems (facial recognition, AI-based CCTV surveillance) in exam monitoring to prevent cheating and impersonation. There is no indication that any harm or violation has occurred due to the AI system's development or use. The event is about the planned implementation of AI technology to reduce exam fraud, which could plausibly lead to benefits or potential risks, but no realized harm is described. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to incidents related to privacy, surveillance, or rights violations, but no incident has yet occurred.

Amid NEET, NET exam mess, UPSC moots AI-based CCTV surveillance to prevent cheating

2024-06-24
Economic Times
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems (facial recognition and AI-based CCTV surveillance) to monitor exam candidates and prevent cheating. However, the event is about the planned deployment and tender for these AI systems, not about an actual incident of harm or malfunction. There is no indication that the AI systems have caused harm or that cheating has been prevented or failed due to AI malfunction. The focus is on the prospective use of AI to mitigate exam irregularities, which is a governance and preventive measure. Therefore, this is Complementary Information about AI deployment and governance in exam integrity, not an AI Incident or AI Hazard.

UPSC to Introduce Facial Recognition, AI Surveillance to Safeguard Exam Integrity - Times of India

2024-06-25
The Times of India
Why's our monitor labelling this an incident or hazard?
The article focuses on the planned use of AI systems for exam security, highlighting their capabilities and intended functions. There is no indication that these AI systems have malfunctioned or caused harm, nor that any AI-related incident has occurred. The event involves the use of AI systems with the potential to prevent exam malpractice, which could plausibly lead to harm reduction but does not itself constitute harm. Therefore, this event is best classified as an AI Hazard, as it describes the plausible future use of AI systems that could impact exam integrity and security.

UPSC to go hi-tech with AI, facial recognition for cheat-free exams

2024-06-24
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (facial recognition, AI-based CCTV surveillance) in the exam process, indicating AI system involvement. However, it only describes the planned deployment and intended benefits to prevent cheating and impersonation, with no indication that the AI systems have malfunctioned or caused harm. There is no report of injury, rights violations, or other harms resulting from AI use. The event is about the adoption of AI technology to enhance exam security, which is a governance and operational update rather than an incident or hazard. Hence, it fits best as Complementary Information, providing context on AI adoption and governance in a high-stakes public examination setting.

Amid NEET-NET row, UPSC moots for AI-based CCTV surveillance to prevent cheating

2024-06-24
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems (facial recognition, biometric authentication, AI-based video surveillance) to monitor exams and prevent cheating. However, it does not describe any realized harm or incidents caused by these AI systems. The focus is on the planned deployment and the potential to prevent malpractices, which implies a plausible future risk mitigation rather than an existing harm. Therefore, this qualifies as Complementary Information, providing context on governance and technological responses to exam cheating issues, rather than an AI Incident or AI Hazard.

Amid NEET, NET row, UPSC moots AI-based CCTV surveillance to curb cheating

2024-06-24
Business Standard
Why's our monitor labelling this an incident or hazard?
The article details the intended deployment of AI systems (facial recognition and AI-based CCTV surveillance) to prevent cheating and impersonation in exams. However, it does not report any actual harm or incident resulting from these AI systems yet. The event is about the proposal and tendering process for these AI solutions, indicating a potential future use. While the use of such AI systems could plausibly lead to privacy concerns or rights violations, the article does not describe any realized harm or incident. Therefore, this is best classified as an AI Hazard, reflecting the plausible future risk of harm from AI surveillance in exams.

Amid NEET, UGC-NET exam mess, UPSC moots AI-based CCTV surveillance to prevent cheating

2024-06-24
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based CCTV surveillance and biometric authentication systems, which qualify as AI systems under the definitions provided. The AI system's involvement is in its intended use to monitor exams and prevent cheating, which is a use case rather than a malfunction or harm event. There is no indication that any harm has occurred due to these AI systems; rather, the article discusses the planned implementation to reduce malpractice. Therefore, this event does not qualify as an AI Incident or AI Hazard but rather as Complementary Information about governance and societal responses to exam integrity issues involving AI technology.

Live AI-Based CCTVs To Aadhaar-Based Fingerprint Check: UPSC Plans Upgrade Amid NEET, UGC Exam Mess | Exclusive - News18

2024-06-24
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based CCTV surveillance and biometric authentication systems to monitor exam candidates and prevent cheating and impersonation. This clearly involves AI systems in use. However, the article does not describe any actual harm caused by these AI systems, nor does it describe any malfunction or misuse leading to harm. Instead, it details planned measures to prevent cheating and fraud, which are existing concerns in examination integrity. Since no AI-related harm has occurred or is described as imminent, and the focus is on the planned deployment and governance response, the event fits the definition of Complementary Information. It informs about societal and governance responses to AI use in examination monitoring, enhancing understanding of AI's role in this context without reporting an incident or hazard.

Amid NEET, UGC NET Exam Mess, UPSC Moots AI-Based CCTV Surveillance to Prevent Cheating

2024-06-24
India.com
Why's our monitor labelling this an incident or hazard?
The article details the UPSC's plan to use AI systems for exam surveillance and biometric verification to prevent cheating. While AI systems are clearly involved, there is no indication that these systems have caused harm or malfunctioned. The AI deployment is intended to reduce harm (cheating), not cause it. There is no mention of any AI-related incident or hazard occurring or plausibly arising from the AI system's use. The focus is on the planned use of AI technology as a preventive measure, making this a case of Complementary Information about AI adoption and governance in exam integrity.

UPSC set to use AI-based surveillance, facial recognition against exam cheating

2024-06-25
India Today
Why's our monitor labelling this an incident or hazard?
The article details the planned use of AI systems (facial recognition, AI-based CCTV surveillance) in the examination process to prevent cheating and impersonation. However, the event is about the adoption and tendering process for these AI technologies, with no harm or incident reported yet. The use of AI here is intended to prevent harm (exam fraud), and no direct or indirect harm has occurred. The planned deployment could, however, plausibly lead to harm if misused (e.g., privacy violations, wrongful accusations). Hence, it qualifies as an AI Hazard due to the plausible risks associated with AI surveillance and biometric data use in exams, rather than an AI Incident or Complementary Information.

Amid NEET, NET Row, UPSC To Use AI-Based CCTV And Facial Recognition To Safeguard Exam Integrity

2024-06-24
english
Why's our monitor labelling this an incident or hazard?
The article details the planned use of AI systems (facial recognition and AI-based CCTV surveillance) in the context of exam security to prevent malpractices such as cheating and impersonation. However, the event is about the intended deployment and procurement process, with no actual harm or incident reported yet. Therefore, this qualifies as an AI Hazard: the AI systems' deployment could plausibly affect exam integrity, privacy, or candidates' rights, but no incident has yet occurred.

UPSC introduced AI-based CCTV surveillance to prevent cheating amid NEET, NET exam controversies | details

2024-06-25
India TV News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition and live CCTV surveillance systems to monitor exams and prevent cheating. This involves AI systems in development and planned deployment. However, there is no report of any actual harm, malfunction, or incident caused by these AI systems so far. The AI system's role is in preventing cheating, which is a positive use case, but the deployment of such surveillance systems could plausibly lead to privacy violations or other harms in the future. Since no harm has yet occurred, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Amid NEET, NET exam mess, UPSC moots AI-based CCTV surveillance to prevent cheating

2024-06-24
The Tribune
Why's our monitor labelling this an incident or hazard?
The article details UPSC's intention to use AI-based surveillance and biometric authentication to prevent cheating in exams, which is a preventive measure. There is no report of harm caused by the AI system, nor any malfunction or misuse leading to harm. The AI system is being introduced to reduce exam irregularities, which are existing problems but not caused by AI. Hence, this is a potential future application of AI to reduce harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are explicitly involved.

UPSC moots AI-based CCTV surveillance amid NEET, NET exam mess

2024-06-24
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems (facial recognition and AI-based CCTV surveillance) in the examination process to prevent cheating and impersonation. However, it does not report any actual harm or incident resulting from these AI systems yet. The event is about the proposal and tendering process for these AI solutions, indicating potential future use but no realized harm or malfunction. Therefore, it constitutes a plausible future risk scenario related to AI deployment in sensitive contexts, but no direct or indirect harm has occurred as per the article. Hence, it is best classified as an AI Hazard, reflecting the credible potential for AI-related issues in exam surveillance and privacy concerns if misused or malfunctioning in the future.

UPSC plans AI-based surveillance to prevent cheating

2024-06-24
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based CCTV surveillance and biometric authentication systems, which qualify as AI systems. However, the event describes a planned deployment and tender invitation, with no indication that any harm or incident has yet occurred. There is no report of injury, rights violation, or other harm resulting from the AI system's use. The event concerns the intended use of AI systems, which could plausibly lead to privacy concerns or rights issues in the future, but no such harm is reported or implied as having occurred. Therefore, this is best classified as an AI Hazard, reflecting the plausible future risk of harm from AI-based surveillance in exams.

UPSC to use Face ID, AI to stop cheating

2024-06-24
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The article details the intended deployment of AI systems (facial recognition and AI-based CCTV surveillance) to prevent cheating and impersonation. However, it does not report any actual harm or incident resulting from these AI systems yet. The event is about the planned use of AI to mitigate exam fraud, which is a preventive measure and does not describe realized harm or a direct incident. Therefore, it is best classified as an AI Hazard, since the use of AI in surveillance could plausibly lead to incidents related to privacy violations or other harms, but no such harm is reported at this stage.

UPSC Proposes AI-Powered CCTV Surveillance To Curb Cheating, Officers Say 'Need To Implement AI In All Competitive Exams'

2024-06-25
Free Press Journal
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned (AI-based CCTV surveillance and facial recognition) being used in the context of exam monitoring. The AI system's use is intended to prevent cheating and impersonation, which are forms of unfair advantage and harm to the integrity of the examination process, affecting the rights of honest candidates. Since the AI system is being deployed to prevent harm (cheating) and no actual harm caused by the AI system is reported, this event describes a plausible use of AI to mitigate harm rather than causing harm. Therefore, it does not qualify as an AI Incident. However, the deployment of AI surveillance systems in exams could plausibly lead to concerns such as privacy violations or misuse, but these are not explicitly stated or realized in the article. The article mainly reports on the planned or ongoing implementation of AI systems to address cheating, which is a governance and societal response to an existing problem. Hence, this is best classified as Complementary Information, as it provides context on AI adoption and governance measures in the examination ecosystem without reporting an incident or hazard.

Amid NEET, NET exam mess, UPSC moots AI-based CCTV surveillance to prevent cheating

2024-06-24
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems (facial recognition, AI-based CCTV surveillance) to monitor exam candidates and detect cheating or irregularities. However, the event is about the planned deployment of these AI systems to prevent cheating, not about any realized harm or incident caused by AI malfunction or misuse. There is no indication that the AI system has caused injury, rights violations, or other harms yet. The focus is on the prospective use of AI to improve exam integrity, which could plausibly prevent harm but does not itself constitute an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context on AI adoption and governance in examination processes without reporting an AI Incident or Hazard.

UPSC Proposes AI Based CCTV Surveillance to Curb Cheating Amid NEET, NET Controversies

2024-06-25
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of AI systems (facial recognition and AI-based CCTV surveillance) in examination settings, which can be reasonably inferred as AI systems. The article focuses on the development and intended use of these AI systems to prevent cheating, which is a plausible future harm scenario if cheating occurs or if the system malfunctions or is misused. However, no actual harm or incident related to the AI systems has been reported yet. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to incidents related to exam integrity or privacy concerns, but no direct or indirect harm has yet occurred.

UPSC Embraces AI Tech to Combat Examination Malpractices

2024-06-24
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems for exam monitoring and fraud prevention, which involves AI system development and use. However, since the AI systems are not yet in operation and no harm or incident has occurred or been averted, this constitutes a plausible future risk scenario rather than an actual incident. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to incidents related to privacy violations, wrongful accusations, or other harms if misused or malfunctioning, but no such harm has yet materialized.

Amid NEET, NET exam mess, UPSC moots AI-based CCTV surveillance to prevent cheating

2024-06-24
metrovaartha.com
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems (facial recognition and AI-based CCTV surveillance) for exam monitoring to prevent cheating and impersonation. However, it does not report any actual harm or incident resulting from these AI systems yet. The event is about the planned deployment of AI technology to mitigate exam fraud risks, which could plausibly lead to prevention of harms but does not itself describe an AI Incident or an AI Hazard. It is primarily an update on the adoption of AI technology in examination processes, thus fitting the category of Complementary Information.

Amid NEET, NET exam mess, UPSC moots AI-based CCTV surveillance to prevent cheating

2024-06-24
NewsDrum
Why's our monitor labelling this an incident or hazard?
The article details the intended use of AI systems for surveillance and biometric verification to prevent cheating in exams. However, it does not report any actual harm or incident resulting from these AI systems yet; the focus is on the planned deployment of AI technology to mitigate exam fraud risks. Since no harm has occurred, but the AI systems' deployment could plausibly lead to harms such as privacy violations or the wrongful flagging of candidates, this constitutes an AI Hazard rather than an Incident. There is no indication of realized harm or malfunction of the AI system at this stage, nor is the article primarily about governance or societal response to a past AI incident. Therefore, the classification is AI Hazard.