NY State Audit Warns of AI Oversight Risks in Agencies

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

New York State Comptroller Thomas DiNapoli's audit found that agencies such as the DMV, corrections, and the Office for the Aging lack centralized guidance and robust oversight of their AI use. The audit highlights potential risks, such as data misuse and inadvertent noncompliance, that could lead to future AI incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The report identifies a lack of AI guardrails and oversight in state agencies, which could plausibly lead to misuse or unintended consequences involving AI systems, such as mishandling confidential information or irresponsible AI use. However, no direct or indirect harm has yet occurred according to the description, so this constitutes a potential risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Accountability; Privacy & data governance; Robustness & digital security; Transparency & explainability; Respect of human rights; Safety; Fairness; Democracy & human autonomy

Industries
Government, security, and defence; Digital security; IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Human or fundamental rights; Reputational; Public interest; Economic/Property

Severity
AI hazard


Articles about this incident or hazard

NY Comptroller warns the state lacks Artificial Intelligence guardrails

2025-04-05
NCPR

DiNapoli's audit: State agencies need more guidance on AI use to avoid risks

2025-04-03
Brooklyn Eagle
Why's our monitor labelling this an incident or hazard?
The article describes a situation where AI systems are in use within state agencies but focuses on the inadequacy of guidance and oversight, which could plausibly lead to misuse or unintended consequences. Since no actual harm or incident has been reported yet, this situation fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm in the future if not properly managed.

NYS Comptroller calls for improved guidance on state agencies' use of AI following audit

2025-04-03
https://www.wbng.com
Why's our monitor labelling this an incident or hazard?
The article discusses the current state of AI governance within New York State agencies, emphasizing the lack of sufficient oversight and policies to ensure AI systems are reliable and accurate. While it identifies potential risks and the possibility of misuse, it does not report any realized harm or incidents caused by AI systems. The situation therefore represents a plausible risk of harm from AI use without actual harm having occurred, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DiNapoli audit finds state agencies largely flying solo with AI

2025-04-03
baynews9.com
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI systems by state agencies and the associated risks due to inadequate guidance and oversight. However, it does not report any realized harm or incident caused by these AI systems. Instead, it identifies potential risks and governance gaps that could plausibly lead to harm if unaddressed. Therefore, this event fits the definition of an AI Hazard, as it highlights circumstances where AI use could plausibly lead to harm due to insufficient controls and policies.

New York State Needs Better AI Governance, Report Says

2025-04-04
Government Technology
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in use by New York State agencies (e.g., inmate phone call monitoring, facial recognition, fraud detection) and highlights governance gaps that create risks of irresponsible use, bias, and unintended consequences. No actual harm or incidents are reported, but the risks are credible given the described lack of oversight, training, and policy clarity. The involvement of AI systems is clear, and the focus is on potential future harms rather than realized ones. The event therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.