Deloitte Refunds Australia After AI-Generated Report Errors


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Deloitte issued a partial refund to the Australian government after its AI-assisted report on the welfare compliance system was found to contain fabricated citations and other errors, attributed to hallucinations from OpenAI's GPT-4o. The incident raised concerns about AI reliability in official documents, though Deloitte maintained that the report's main findings were unchanged.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (generative AI language model) used in the creation of a report that contained fabricated and false information (hallucinations). This misinformation in an official government report can be considered a violation of legal obligations and harms the integrity of governmental decision-making, which fits the definition of harm to rights and obligations under applicable law. The AI system's use directly led to these harms, making this an AI Incident. The partial refund and report revision are responses but do not negate the incident classification.[AI generated]
AI principles
Robustness & digital security, Safety, Transparency & explainability, Accountability

Industries
Government, security, and defence

Affected stakeholders
Government, Business

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Content generation

Articles about this incident or hazard


Deloitte to partially refund Australian government for report with apparent AI-generated errors

2025-10-07
Yahoo! Finance

Deloitte to pay money back to Albanese government after using AI in $440,000 report

2025-10-06
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI large language model in producing the report, which led to hallucinated (fabricated) references and errors. These errors were discovered and led to a partial refund, indicating a failure or misuse of the AI system in the report's development. However, there is no indication that these errors caused injury, legal rights violations, or other significant harms as defined in the framework. The harm is primarily related to the quality and reliability of the report, which is a reputational and contractual issue rather than a direct AI Incident. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard but rather constitutes Complementary Information about the risks and challenges of using generative AI in official reports and the governance responses (refund, public disclosure).

Deloitte is giving the Australian government a partial refund after it used AI to deliver a report with errors

2025-10-06
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (a generative large language model) in the development of a government report. The AI's outputs directly led to the inclusion of fabricated references and quotes, constituting misinformation and a breach of expected standards for official documents. This misinformation can be classified as harm to communities and a violation of obligations under applicable law (accuracy and transparency in public administration). Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm (erroneous report content).

Deloitte issues refund for error-ridden government report that used AI

2025-10-06
Financial Times News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a generative AI large language model) used in producing a government report. The errors in the report (incorrect citations and references) were identified and corrected, and Deloitte issued a refund. There is no indication that these errors caused injury, legal rights violations, or significant harm to communities or infrastructure. The substantive findings and recommendations remain unchanged, and the issue was resolved with the client. Therefore, while the event illustrates risks of AI use (hallucinations in generative AI), it does not meet the threshold for an AI Incident or AI Hazard. Instead, it provides an update on AI-related risks and mitigation in a professional context, fitting the definition of Complementary Information.

Deloitte is giving the Australian government a partial refund after it used AI to deliver a report with errors

2025-10-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The use of a generative AI system in producing the report directly caused the inclusion of false information, which is a form of harm to the integrity and reliability of official documentation. This meets the criteria for an AI Incident as the AI system's outputs led to misinformation, which can be considered harm to communities and trust in public institutions. The partial refund and correction indicate recognition of the harm caused. Although no physical harm occurred, the misinformation and breach of trust are significant harms under the framework's scope.

Deloitte to Refund Government After $440,000 Report Contained Multiple AI-Generated Errors

2025-10-07
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The report was partially generated by an AI system and contained multiple errors, which directly led to financial consequences (refund) and criticism from a government official. The AI system's involvement in producing erroneous content that affected a government report constitutes harm to the integrity and reliability of public sector operations, which falls under harm to communities or breach of obligations under applicable law. The incident is a direct consequence of the AI system's malfunction or misuse in generating inaccurate information, meeting the criteria for an AI Incident.

Deloitte to repay Albanese government after using AI in $440,000 report: 'Human intelligence problem'

2025-10-06
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI large language model in producing the report. The AI system's outputs included hallucinated and fabricated references, which directly caused inaccuracies and errors in the report. This led to reputational harm and undermined trust in a government document, which is a form of harm to communities and a breach of obligations related to transparency and accuracy. Deloitte's repayment and public criticism further confirm the materialized harm. Hence, this is an AI Incident due to the direct link between AI use and realized harm.

Government use of consultancy firms under fire

2025-10-06
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
An AI system was used to prepare the report, and its outputs contained errors and a fabricated quote, indicating a malfunction or misuse of the AI system. This led to misinformation in an official government document, which can be considered harm to the integrity of public information and potentially harm to communities relying on accurate data. However, the harm is indirect and non-physical, and the department has addressed the issue with corrections and a refund. Given the direct involvement of AI in producing erroneous content that caused reputational and informational harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Deloitte To Refund Australian Government After Admitting To Use AI In $440k Report Littered With Errors

2025-10-07
Mashable India
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was explicitly used in the development of the report, and its outputs contained significant errors (hallucinated references), which directly led to harm: the government received a flawed report and had to seek a refund. This constitutes an AI Incident because the AI system's use directly caused harm (financial and reputational) to a public institution and undermined trust in the report's reliability. The event is not merely a potential risk or a complementary update but a realized harm caused by AI use.

Deloitte admits to using AI in $440k report, to repay Australian govt after multiple errors spotted

2025-10-07
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system in producing a report that contained multiple factual errors and fabricated references, which were publicly identified and led to a partial refund. The AI system's outputs directly contributed to the harm of disseminating false information in an official government report, which can be seen as a violation of obligations under applicable law and a harm to the integrity of public information. The harm is realized, not just potential, and the AI's role is pivotal in causing the incident. Thus, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Deloitte will refund Australian government for AI hallucination-filled report

2025-10-06
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event describes a case where the development and use of an AI system (a generative AI large language model) directly led to the publication of a flawed government report containing hallucinated quotes and references. This misinformation constitutes harm to communities and a breach of obligations under applicable law, as it undermines the integrity of government processes and public trust. Deloitte's failure to disclose the use of AI and the resulting fabricated content meets the criteria for an AI Incident. The refund and corrections are responses to the incident but do not negate the harm caused.

The cautionary lessons of Deloitte's AI sloppiness

2025-10-06
Australian Financial Review
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate parts of the report, and its outputs included fabricated information, which is a malfunction or misuse of the AI system. This led to harm in the form of misinformation and financial repercussions, as well as potential harm to public trust and integrity of government processes. Although no physical harm or direct legal rights violations are mentioned, the incident clearly involves harm caused by the AI system's malfunctioning outputs. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI system's use.

'Human intelligence problem': Labor senator slams Deloitte's AI bungle

2025-10-06
Australian Financial Review
Why's our monitor labelling this an incident or hazard?
The article describes a situation where AI was used in producing a report that contained errors, leading to a refund demand. While this shows a malfunction or misuse of AI in development or use, there is no evidence of harm to people, infrastructure, rights, property, or communities. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI's role in a public sector report and the response to its shortcomings.

Deloitte to refund government, admits AI errors in $440k report

2025-10-05
Australian Financial Review
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in generating a government report that contained fabricated information and errors. These errors constitute a breach of obligations under applicable law, particularly regarding intellectual property and accuracy in official documents. The harm is realized as the government received a flawed report, leading to financial and reputational consequences. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Consultants Forced to Pay Money Back After Getting Caught Using AI for Expensive "Report"

2025-10-06
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (generative AI, GPT-4o) in producing a report with fabricated data and citations, which directly led to financial harm (repayment of funds) and reputational damage. The AI's hallucinations caused the harm by producing false information that was relied upon in an official government report. This meets the criteria for an AI Incident because the AI system's use directly led to harm (financial and reputational) and a breach of obligations under applicable law and professional standards. The event is not merely a potential risk or a complementary update but a realized harm caused by AI use.

"Deloitte Issues Refund for Error-Ridden Australian Government Report That Used AI"

2025-10-07
Reason
Why's our monitor labelling this an incident or hazard?
The article describes errors in a government report that may have involved AI in its production, but there is no clear evidence that the AI system caused harm or that the errors led to any of the defined harms (injury, rights violations, disruption, etc.). The refund is a remediation step but does not indicate an AI Incident. Since the AI's role is uncertain and no harm is reported, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on the use and limitations of AI in report generation and the response to errors.

Deloitte's AI use created a blunder Down Under

2025-10-07
Morning Brew
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Deloitte used an AI language model to generate a report that contained false citations and mistakes, which were identified by academics. This misuse or malfunction of the AI system directly led to harm in the form of financial loss and misinformation. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident. Hence, it meets the criteria for an AI Incident.

Deloitte agrees to refund Australian government after AI hallucinations found in report

2025-10-06
Neowin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate content that contained fabricated information, a direct consequence of AI hallucinations. Misinformation of this kind in a government report can mislead decision-makers and the public, and the refund and admission of wrongdoing confirm that harm materialized due to AI use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

AI Mistake Proves Costly for Deloitte: Big Four Firm To Give Partial Refund to Australian Government After Errors Found in AI-Generated USD 2,90,000 Report

2025-10-07
LatestLY
Why's our monitor labelling this an incident or hazard?
The report was produced with the help of a generative AI system, which directly led to the inclusion of false and inaccurate information. This constitutes harm because it affects the quality and trustworthiness of an official government report, impacting the community relying on this information. The AI system's malfunction or misuse in generating erroneous content fulfills the criteria for an AI Incident as the harm has occurred and is directly linked to the AI system's use.

Deloitte refunds Australian government over AI in report

2025-10-06
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (a generative AI large language model) used in the development and production of a government report. The AI's hallucinated outputs (fabricated citations and quotes) led to misinformation, which is a form of harm to the integrity of information and trust in public institutions. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and loss of trust) in a significant public context. Although the harm is not physical or legal rights violation, it is a significant, clearly articulated harm where the AI system's role is pivotal. Therefore, the event is classified as an AI Incident.

Deloitte refunds Australian government after errors found in AI-generated report

2025-10-07
NewsBytes
Why's our monitor labelling this an incident or hazard?
An AI system was used in generating parts of the report, and its outputs contained inaccuracies that were published and later corrected. The errors represent a failure in the AI system's use, leading to misinformation that could harm the integrity of government operations and public trust. Although no physical harm or direct legal violation is reported, the dissemination of false information in an official context constitutes harm to communities and public institutions. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated errors in an important government report.

Deloitte to refund government over AI errors

2025-10-06
Information Age
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system that produced fabricated references and quotes in an official government report, which is a direct consequence of AI hallucinations. This led to the dissemination of false information, which can be considered a violation of obligations under applicable law and a harm to the integrity of public administration. Deloitte's acknowledgment and refund indicate recognition of the harm caused. Although the substantive content was not affected, the presence of fabricated information in an official document is a clear harm linked to AI use. Hence, this is classified as an AI Incident.

Deloitte refunds Australian government after AI 'made up citations' in report

2025-10-06
CityAM
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was used in the development of the report, and its malfunction (fabricated citations) led to errors in the report. However, the errors did not cause any direct or indirect harm as defined by the framework (no injury, rights violation, or disruption). The issue was resolved with a refund and correction of the report. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI use and its challenges in professional services, along with the response taken, without involving realized or plausible harm.

Australian government due refund after incorrect AI references in Deloitte report - Cryptopolitan

2025-10-06
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI in the preparation of the report and the resulting inaccuracies (hallucinations) that led to errors. These errors caused financial harm (refund) and reputational harm to the government and Deloitte. Since the harm has already occurred and the issue is resolved with a refund and corrected report, this qualifies as an AI Incident due to the realized harm caused by AI system use (hallucinations leading to incorrect report content). The event does not describe potential future harm or broader ecosystem responses, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI involvement and harm are clearly described.

Deloitte Refunds AU$440K Report Over GPT-4o AI Hallucinations

2025-10-06
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system whose outputs contained fabricated and false information, leading to a flawed official report for a government department. This constitutes harm to communities and public trust (harm category d) because the report was intended to assess compliance in a welfare payment system affecting millions. The AI system's hallucinations directly caused the inaccuracies, which is a malfunction or misuse of the AI system in its use phase. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI system use.

Deloitte to refund Australia after AI report mistakes on welfare system

2025-10-06
Cybernews
Why's our monitor labelling this an incident or hazard?
The AI system was used in the development of the report and produced false information (hallucinated references), which is a malfunction of the AI system. This led to a flawed report that could have harmed welfare recipients by misinforming decisions about compliance and fines. Although the main findings were reportedly unchanged after corrections, the initial harm of misinformation and potential impact on welfare recipients' rights and trust is present. Therefore, this qualifies as an AI Incident due to indirect harm to people (welfare recipients) and violation of trust in a system affecting fundamental rights.

Deloitte will partially refund the Australian government for a report with AI-generated errors

2025-10-07
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (a generative language model) used in the creation of a report. The AI's hallucinations (fabricated citations and references) directly caused the dissemination of false information to a government department, which is a breach of trust and legal obligations. This misinformation could have led to harm in legal and policy contexts, fulfilling the criteria for an AI Incident. The reimbursement by Deloitte acknowledges the harm caused. Therefore, this is classified as an AI Incident due to realized harm caused by AI-generated errors in an official government report.

Deloitte to refund Australia government for $440,000 erroneous AI report - The Times of India

2025-10-07
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a generative AI system in the report's creation, and errors (hallucinations) likely caused by AI were identified. However, these errors did not result in injury, rights violations, or significant harm but led to a refund and criticism. The event is an update on a previously reported issue involving AI use and its consequences, focusing on the response and correction rather than new harm or potential harm. Hence, it aligns with Complementary Information rather than an Incident or Hazard.

Deloitte to partially refund Australian government for report with...

2025-10-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (generative AI language model) whose use in report writing directly led to the inclusion of fabricated and false information. This misinformation constitutes a breach of obligations under applicable law, as the report was an official audit relied upon by the government. The harm is realized, not just potential, as the government was misled by AI-generated errors. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm to legal compliance and trust in public sector processes.

Firm to reimburse Australian government after report with AI-generated errors

2025-10-08
O Globo
Why's our monitor labelling this an incident or hazard?
An AI system (a large language model) was used in producing the report, and its outputs contained fabricated and incorrect information, which is a malfunction or misuse of the AI system leading to harm. The harm here is indirect but real: dissemination of false information in an official government report, which can be considered harm to communities or a violation of trust. Since the AI system's malfunction directly led to these errors and the government had to seek reimbursement and issue corrections, this qualifies as an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI outputs.

Deloitte forced to reimburse the Australian government for an error-riddled report produced with AI

2025-10-07
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The report was created with the use of AI and contained significant inaccuracies that misled the government, causing financial harm and undermining trust. The AI system's involvement in producing false information and fabricated content directly led to these harms. Therefore, this qualifies as an AI Incident due to the realized harm (financial loss and misinformation) caused by the AI system's outputs.

Deloitte admits using AI to produce report, to pay part of $440,000 fee to Albanese government

2025-10-07
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI model GPT-4o) used in producing a government report. The AI's malfunction (hallucinations) caused fabricated references and inaccuracies, which directly led to harm by undermining the credibility of a government report and misleading stakeholders. This fits the definition of an AI Incident as the AI system's malfunction directly led to harm to communities (public trust and information integrity). The event is not merely a potential risk (hazard) or a complementary update, but a realized incident with tangible consequences, including financial repayment and political criticism.

Deloitte To Repay Australian Government After AI Errors Found In Official Report

2025-10-07
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system in producing a report with fabricated content (hallucinations), which was published and later corrected. The AI system's involvement in generating false citations and quotes directly led to misinformation harm, which affects the integrity of public information and government transparency, constituting harm to communities and a breach of obligations related to truthful reporting. The refund and revisions confirm the harm was realized and significant. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Deloitte's AI Fallout Explained: The $440,000 Report That Backfired

2025-10-08
NDTV
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI model GPT-4o) was explicitly used in producing the report. The AI's hallucinations directly led to the inclusion of fabricated and false information in an official government report, which constitutes harm to the integrity of public information and potentially violates obligations related to accuracy and transparency in government reporting. Although no physical harm occurred, the misinformation and false references can be considered harm to communities and a breach of obligations under applicable law (e.g., public trust, transparency). Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction and use in a high-stakes context.

Consulting firm Deloitte must reimburse the Australian government after submitting an AI-produced document riddled with false information

2025-10-07
Orange Actualités
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system in the development and production of a report that contained false information, directly leading to harm in the form of misinformation and financial loss. The AI system's hallucinations caused the inclusion of fabricated references and decisions, which is a clear case of AI malfunction leading to harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is directly linked to the AI system's outputs.

Deloitte forced to refund Aussie government after admitting it used AI to produce error-strewn report

2025-10-07
TechRadar
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI in producing a flawed report with fabricated information, which required correction and refund. The AI system's outputs directly caused the dissemination of false information, a form of harm to communities and a breach of professional and possibly legal obligations. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use without sufficient safeguards.

Scandal at Deloitte: after using AI, it delivered a report with false information to the Australian government

2025-10-07
BioBioChile
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (a generative language model) in the creation of a government report. The AI's outputs included fabricated citations and false information, which directly led to harm by misleading a government client and necessitating corrections and financial reimbursement. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and breach of trust) and disruption in official government advisory processes. The incident is not merely a potential risk but a realized harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system's role is central to the event.

Accounting Giant Deloitte Embarrassed by Report for Australian Government Filled with AI-Generated Garbage

2025-10-07
Breitbart
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system that produced fabricated and false information (hallucinated citations and quotes) in a government report, which was published and paid for by taxpayers. This misinformation undermines the reliability of the report and breaches intellectual property rights by misattributing research. The harm is realized, not just potential, as the report was publicly released and required correction and partial refund. The related legal case further illustrates direct harm caused by AI-generated false information in a critical professional context. The AI system's malfunction (hallucination) and use without proper verification directly led to these harms, meeting the criteria for an AI Incident.

Deloitte to partially refund Australia for report with apparent AI-generated errors

2025-10-07
ABC News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system in writing the report, which produced fabricated and incorrect information (hallucinations). The harm includes misinformation to a government department, misquotation of a federal court judge, and false academic references, which can be considered a violation of legal obligations and harm to institutional trust. Deloitte's partial refund and report revision confirm the materialization of harm. Hence, the event meets the criteria for an AI Incident as the AI system's use directly led to significant harm.

Deloitte To Refund Part Of $440,000 Fee After AI Errors In Australian Government Report

2025-10-08
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Azure OpenAI GPT-4o) used in producing content that contained fabricated and inaccurate information, which was published in an official government report. The errors were directly caused by the AI's hallucinations and led to reputational and professional harm, necessitating corrections and partial refund. This meets the criteria for an AI Incident because the AI system's use directly led to harm (misinformation in a government document), affecting trust and compliance frameworks. The event is not merely a potential risk or a complementary update but a realized harm caused by AI use.

Deloitte caught using AI

2025-10-07
News.com.au
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Deloitte used a generative AI large language model (Azure OpenAI GPT-4o) in producing a report that contained fabricated references and errors. These fabricated references constitute a violation of intellectual property rights and misinformation harm. The harm has materialized, as the report was published with false citations, leading to reputational damage and financial consequences (refund to the government). Therefore, the event meets the criteria for an AI Incident because the AI system's use directly led to harm (violation of intellectual property rights and misinformation).

Deloitte refunds the Australian government after errors in a report written with AI

2025-10-07
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system (ChatGPT) in the development of a government report. The AI-generated content included fabricated and incorrect information, which constitutes a malfunction or misuse of the AI system's outputs. This led to a financial consequence (refund of contract payment) and reputational harm, as well as political criticism, indicating harm to the integrity of public decision-making processes. Although no physical harm occurred, the incident involves violations of trust and potential harm to public governance and accountability, which falls under harm to communities and breach of obligations under applicable law. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's erroneous outputs in a critical public report.

Deloitte will have to reimburse the Australian government for errors apparently generated by AI

2025-10-07
Pplware
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system (Azure OpenAI) in drafting the report, which contained fabricated and incorrect references. These errors were significant enough to require a financial reimbursement to the government and were publicly criticized as distorting legal facts. The AI system's outputs directly led to misinformation and potential legal and reputational harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the erroneous report was published and relied upon by the government before correction.

"AI sludge" in report: Deloitte refunds Australia

2025-10-07
newsORF.at
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (GPT-4) used in generating report content. The AI-generated errors led to false citations and misinformation in an official government report assessing an automated penalty system impacting social welfare recipients. This misinformation can indirectly harm affected individuals and communities by influencing policy and administrative decisions. The harm is realized as the report was published and used before correction, and the financial and reputational consequences for Deloitte and the government are evident. Hence, this qualifies as an AI Incident due to indirect harm to communities and violation of trust in public information.

"These are things a university student would get in trouble for": Deloitte delivered a report made with AI to Australia

2025-10-08
Xataka
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI in producing a report that contained fabricated and false references, including a fake judicial citation and non-existent studies. This misinformation directly harmed the government by misleading them and causing financial loss, as Deloitte must return part of the payment. The AI system's malfunction (hallucinations) and lack of human oversight led to this harm. The harm includes violation of intellectual property and professional obligations and damage to trust in official documents. Hence, it meets the criteria for an AI Incident.

Deloitte refunds Australia for error-ridden study drafted with AI - Future Tech

2025-10-08
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event describes a case where the development and use of an AI system (GPT-4o) directly led to the production of a flawed official report with false and fabricated content. This caused harm in terms of misinformation and financial loss (the cost of the report and subsequent reimbursement). Therefore, it qualifies as an AI Incident because the AI system's malfunction directly led to harm (financial and reputational).

Deloitte admits AI hallucinated quotes in government report, offers partial refund

2025-10-07
TechSpot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (a generative AI large language model) whose outputs included fabricated citations and quotes, which were incorporated into a government report. This misinformation was not detected before publication, indicating a malfunction or misuse of the AI system. The harm includes the dissemination of false information in an official context, undermining trust and potentially affecting government policy decisions, which aligns with violations of obligations under applicable law and harm to communities. Deloitte's partial refund and corrections acknowledge the issue but do not negate the realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Deloitte to partially refund Australian government for report with apparent AI-generated errors

2025-10-07
CNA
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (generative AI language model) in the development of a report that contained fabricated content (hallucinations). Although the fabricated information was published and later corrected, the incident caused misinformation and potential reputational harm to the government and stakeholders relying on the report. The harm is indirect, stemming from the AI-generated errors leading to misinformation and breach of trust. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs in an official government document.

Deloitte's AI fiasco: Why chatbots hallucinate and who else got caught

2025-10-08
Business Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system in producing a government report, which led to fabricated and false information (hallucinations). This misinformation was published and caused reputational and financial harm, as well as undermining oversight and credibility. The AI system's malfunction (hallucination) directly caused the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI use.

AI garbage: consultancy gives money back to Australia

2025-10-07
heise online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (automated decision-making system for social welfare compliance) whose malfunction and misuse have directly led to harm to people (unjust penalties to poor Australians). The use of a generative AI model by Deloitte to produce the audit report with fabricated references is also AI involvement, but the primary harm stems from the automated system's failures. The harm includes violations of rights and harm to communities, meeting the definition of an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI system use and malfunction.

Deloitte to refund Australian government after AI hallucinations found in report

2025-10-07
Fast Company
Why's our monitor labelling this an incident or hazard?
The report Deloitte produced contained AI hallucinations, meaning the AI system generated false or fabricated content. This directly led to the dissemination of incorrect information to a government department, which constitutes a harm related to misinformation and breach of trust. The refund and correction indicate acknowledgment of the harm caused. Therefore, this qualifies as an AI Incident because the AI system's malfunction (hallucinations) directly led to harm through misinformation in an official government report.

Company trusts too much and shoddy AI-made report results in a US$440 million loss

2025-10-08
Canaltech
Why's our monitor labelling this an incident or hazard?
The use of a generative AI system in producing a flawed report with fabricated references directly led to financial loss and undermined the integrity of a government document. The harm is realized and directly linked to the AI system's malfunction (hallucinations and inaccurate outputs). The event fits the definition of an AI Incident because the AI system's use caused harm (financial and reputational) and compromised the quality of a critical public report. Although the government maintains the recommendations are valid, the presence of false information and the financial consequences confirm the incident's material harm.

Company over-relies on AI in error-filled report that will cost US$440

2025-10-08
Canaltech
Why's our monitor labelling this an incident or hazard?
The use of a generative AI system in producing a government report that contained false and inaccurate information constitutes an AI Incident because the AI's malfunction (hallucinations) directly caused harm by compromising the accuracy and reliability of an official document. This impacts the trustworthiness of public policy decisions and can harm communities dependent on these policies. The event involves the AI system's use and malfunction leading to realized harm, not just potential harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Audit firm uses AI, gets embarrassed and returns project money - Tecnoblog

2025-10-07
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI GPT-4o) in producing a government report that contained fabricated and false information, which is a direct consequence of AI hallucinations (malfunction). This led to harm by undermining the credibility of the audit, misleading government decision-making, and causing financial consequences (partial reimbursement). The harm is realized and directly linked to the AI system's malfunction. Therefore, this qualifies as an AI Incident under the framework, as it caused harm to communities (public trust and governance) and breaches obligations related to accuracy and reliability in public service.

Deloitte reimburses the Australian government after errors made by AI in report

2025-10-08
TecMundo
Why's our monitor labelling this an incident or hazard?
The use of AI in generating the report led to factual inaccuracies (hallucinations) that compromised the report's credibility and reliability. This constitutes harm related to misinformation and undermines trust in public administration, which falls under harm to communities or violation of obligations under applicable law. The AI system's malfunction directly caused these harms, qualifying this event as an AI Incident rather than a hazard or complementary information.

AI invented experts: Deloitte pays money back to government

2025-10-07
Blick.ch
Why's our monitor labelling this an incident or hazard?
An AI system (GPT-4o) was explicitly used in generating the report, and its outputs included fabricated references and court rulings, which directly led to harm: the government received a flawed report, resulting in financial repayment and damage to trust. The AI's hallucinations were central to the incident, fulfilling the criteria for an AI Incident as the AI system's use directly led to a violation of obligations and harm to the community (public trust and governance). The event is not merely a potential risk or complementary information but a realized harm caused by AI outputs.

Deloitte scandal: firm must compensate the Australian government for a report full of AI-generated errors

2025-10-08
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The event describes a case where an AI system (a large language model) was used in report generation, resulting in factual inaccuracies and fabricated references. This caused harm to the government in terms of financial cost and misinformation, which fits the definition of an AI Incident due to harm to property and communities (trust). The AI's role is pivotal as the errors stemmed from its hallucinations. The event is not merely a product announcement or update, nor is it a potential future harm scenario, so it is not a hazard or complementary information. Therefore, it qualifies as an AI Incident.

'Reasonable' for Deloitte to keep payments for flawed report

2025-10-09
Australian Financial Review
Why's our monitor labelling this an incident or hazard?
The article describes errors related to AI in a consulting report and a partial repayment, but does not describe any harm caused by the AI system itself or its outputs. There is no evidence of injury, rights violations, or other harms as defined for an AI Incident. Nor does the event describe a plausible future harm scenario that would qualify as an AI Hazard. The main focus is on the financial and contractual outcome related to the flawed report, which is complementary information about AI-related issues but not an incident or hazard.

Deloitte repays almost $98,000 of its $440,000 fee for AI-error report

2025-10-08
Australian Financial Review
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system in producing a government report, which led to errors significant enough to require reissuing the report and partial fee repayment. This indicates that the AI system's use indirectly caused harm in the form of misinformation or flawed official documentation, which can be considered a violation of obligations under applicable law or harm to the integrity of public information. The repayment and training are responses to this incident. Therefore, this qualifies as an AI Incident due to the realized harm from AI-related errors in an official context.

Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations | Fortune

2025-10-07
Fortune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system that produced fabricated and incorrect content in a government report, which was published and later found to contain errors. The AI's hallucinations directly led to misinformation in an official document, which can be considered harm to communities (public trust and accurate information) and a breach of obligations under applicable law (intellectual property and truthful reporting). The harm has materialized as the report was published and used by the government, and a refund was issued due to the errors. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm.

Deloitte to pay partial refund to Australian govt for report using AI

2025-10-07
Malay Mail
Why's our monitor labelling this an incident or hazard?
The event describes the use of a generative AI system in producing a report with errors, which led to a partial refund and public controversy. Although the AI system was involved in the report's creation, the errors are not definitively linked to AI malfunction or misuse, and no direct or indirect harm (such as health injury, rights violations, or disruption) is reported. The main issue is reputational and contractual, not a realized AI Incident. Therefore, this event is best classified as Complementary Information, as it provides context on AI use and its implications without describing a new AI Incident or Hazard.

Deloitte: Embarrassing AI errors in report for the labour ministry

2025-10-07
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system that produced false and fabricated information in an official government report, which was not properly checked before use. This misuse of AI directly caused reputational and professional harm, and could have led to poor policy decisions based on incorrect data. The AI system's malfunction (fabrication of references) and the failure to verify outputs constitute an AI Incident under the definitions, as it caused harm to the integrity of information and trust in public administration. The harm is indirect but clearly linked to the AI system's outputs and Deloitte's reliance on them without adequate oversight.

Deloitte's AI governance failure exposes critical gap in enterprise quality controls

2025-10-08
Computerworld
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (GPT-4o) whose outputs included fabricated information that was not detected before report delivery, leading to a breach of quality controls and a financial refund. While this represents a significant governance failure and harm to trust and quality assurance, it does not meet the threshold for physical injury, rights violations, or other direct harms defined under AI Incident. It also does not describe a plausible future harm scenario but rather a realized governance failure. Therefore, it is best classified as Complementary Information highlighting systemic governance challenges and responses in AI adoption.

Deloitte must repay money for AI-contaminated government study

2025-10-07
WinFuture.de
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI language model) was used in the development of the report, and its outputs directly led to the inclusion of fabricated and false information in an official government document. This misinformation can cause harm by misleading policymakers and the public about the automated social welfare system, potentially affecting rights and trust. The harm is realized (not just potential), as the flawed report was published and required correction and partial refund. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated misinformation in a government context, violating transparency and potentially impacting human rights and governance.

Deloitte's misstep in Australia: report with AI errors causes a stir

2025-10-07
Neue Zürcher Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system in producing a report with fabricated and false information, which was published and used by a government ministry. This led to misinformation and potential harm to the integrity of governmental decision-making and public trust. Although physical harm did not occur, the harm to communities and violation of legal obligations related to truthful reporting and transparency are present. The AI system's hallucinations directly contributed to the incident. Therefore, this qualifies as an AI Incident.

AI costs Deloitte dearly: firm must refund Australia's payment after using it in an error-filled report

2025-10-06
Expansión
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a large language model AI system was used in generating parts of the report, which contained fabricated references and citations (hallucinations). These errors were significant enough to cause the government to demand a refund, indicating harm to the integrity and reliability of an official government document. The AI system's malfunction (hallucination) directly led to the inclusion of false information, which is a breach of obligations related to accuracy and transparency in public administration. Although the substantive recommendations were not changed, the presence of false data in an official report is a clear harm to the community's trust and the government's operational integrity. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Deloitte to refund the Australian government for using AI in a report

2025-10-06
Expansión
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system (GPT-4o) in producing the report, which led to hallucinated (false) references and citations. These errors were material enough to require corrections and a partial refund, indicating harm to the integrity and trustworthiness of an official government document. This harm affects the community's trust and the proper functioning of government oversight, fitting the definition of harm to communities or a breach of obligations under applicable law. The AI system's involvement is direct in generating the erroneous content, and the harm has already occurred, making this an AI Incident rather than a hazard or complementary information. The event is not unrelated, as the AI system's malfunction (hallucinations) caused the issue.

In Australia, AI hallucinations put Deloitte in an awkward position

2025-10-08
LesEchos.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI in generating a government report that contained fabricated information, which is a direct consequence of AI hallucinations. This misinformation in an official report can cause harm to communities by misleading policy decisions and undermining trust in public institutions. Deloitte's partial refund and correction of the report confirm the harm has materialized. Hence, the event meets the criteria for an AI Incident as the AI system's use directly led to harm (financial and reputational) and misinformation affecting public trust and governance.

Australia Paid Deloitte $290K for a Report. There's an AI Issue

2025-10-07
Newser
Why's our monitor labelling this an incident or hazard?
The report explicitly involved the use of a generative AI system (Azure OpenAI) in its writing, which produced fabricated quotes and references—hallucinations typical of generative AI. These errors were published in an official government report, misleading readers and stakeholders, which constitutes harm to communities through misinformation and undermines trust in public institutions. The AI system's malfunction (hallucination) directly led to this harm. Although the harm is informational and reputational rather than physical, it fits within the framework's scope of harm to communities and violations of obligations under applicable law protecting rights to accurate information. The event is not merely a product announcement or a general AI-related news item but a concrete case of AI misuse causing harm. Hence, it is classified as an AI Incident.

Deloitte returns part of contract payment in Australia after errors in AI-made report

2025-10-06
Diario La República
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used in the development of the report, and its outputs contained fabricated information (hallucinations) that led to errors in an official government document. This caused reputational and financial consequences (partial contract refund) and raised concerns about the quality and reliability of AI-assisted consultancy. The errors represent a form of harm related to misinformation and breach of trust in a public context, which can be considered harm to communities and violation of obligations under applicable law (accuracy and integrity in official reporting). Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI system's use in the report's creation.

Deloitte to partially refund Australian government for report with apparent AI-generated errors - WTOP News

2025-10-07
WTOP
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI language model) used in the report's creation. The AI's hallucination (fabrication of false information) directly led to the dissemination of inaccurate and misleading content in an official government report. This constitutes a violation of legal and intellectual property norms and harms the government's ability to rely on accurate information for decision-making. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs in a critical context.

Deloitte's AI hallucination-filled report raises challenge for consultants

2025-10-07
Brisbane Times
Why's our monitor labelling this an incident or hazard?
An AI system was used in the creation of a report that contained hallucinated (fabricated) content, which is a malfunction or misuse of the AI system. This led to the delivery of an error-tainted report to a government client, causing harm through misinformation and financial loss (partial refund). Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (misinformation and financial harm).

Deloitte to refund the Australian government for an AU$440,000 report riddled with AI-generated hallucinations, the consulting firm having admitted it used GPT-4o

2025-10-07
Developpez.com
Why's our monitor labelling this an incident or hazard?
The report was generated using GPT-4, an AI system, which produced fabricated and incorrect information (hallucinations). These errors were directly linked to the AI's outputs and led to misinformation in a government report, which is a clear harm to the community and a breach of trust and legal obligations. The harm is realized, not just potential, as the report was published and relied upon before correction. Deloitte's reimbursement and the ministry's response confirm the incident's material impact. Hence, this event meets the criteria for an AI Incident.

When AI gets it wrong: Deloitte forced to return funds for a report with false sources

2025-10-07
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used in the development of a government report. The AI-generated false citations and fabricated references constitute misinformation and a breach of intellectual property and academic integrity rights, which are harms under the framework. The harm has already occurred as the report was published and disseminated with false information, leading to reputational damage and undermining public trust. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs in a public and official context.

Deloitte to partially refund Australian government for report with apparent AI-generated errors

2025-10-07
Spectrum News Bay News 9
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI language model) used in the report's creation. The AI-generated hallucinations directly led to the inclusion of fabricated and misleading information in an official government report, which constitutes a violation of legal and ethical standards. This misinformation could have harmed the government's ability to manage welfare system compliance effectively, thus indirectly causing harm. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated errors affecting legal compliance and public trust.

Deloitte to partially reimburse the Australian government for a report with AI-generated errors

2025-10-08
Expansão
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI language model) used in producing a report. The AI's malfunction (hallucinations causing fabricated citations and references) directly led to harm by disseminating false information in an official government report, undermining trust and potentially affecting policy decisions. This fits the definition of an AI Incident because the AI system's use and malfunction directly caused harm (misinformation and breach of obligations). The partial reimbursement and corrections are responses to the incident but do not negate the occurrence of harm. Therefore, this event is classified as an AI Incident.

Deloitte sent a report littered with ChatGPT hallucinations to the Australian Government.

2025-10-07
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) that generated hallucinated, false citations in a report delivered to a government client. This misinformation constitutes a violation of intellectual property rights and potentially harms the integrity of governmental decision-making processes. The AI system's malfunction (hallucination) directly led to the harm of disseminating false information and reputational damage, meeting the criteria for an AI Incident.

Deloitte doubles down on AI despite costly mistakes

2025-10-07
Rolling Out
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system (Azure OpenAI GPT-4o) in producing a government report that contained fabricated citations and factual errors attributable to AI hallucinations. These errors represent a violation of professional and possibly legal standards, harming the integrity of the report and misleading the government client. The harm is direct and material, as evidenced by the refund and the need for a corrected report. The AI system's malfunction (hallucination) during use led to this harm, fulfilling the criteria for an AI Incident. The expansion of the AI partnership does not negate the incident but provides context. Hence, the classification is AI Incident.

Scandal in Australia: Deloitte's AI invents sources for an official report

2025-10-07
Génération-NT
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Deloitte used a generative AI system to produce parts of a government report, which contained fabricated citations and judicial references that do not exist. This misinformation directly harms the integrity of the audit and the government's decision-making process, constituting harm to communities and public trust. The AI system's malfunction (hallucination of sources) is the pivotal cause of this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to significant harm.

Law Professor Catches Deloitte Using Made-Up AI Hallucinations In Government Report - Above the Law

2025-10-08
Above the Law
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Azure OpenAI) in producing a government report that contained fabricated and incorrect information, including false legal citations and misquotations. These errors constitute a violation of obligations under applicable law and professional standards, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The AI system's hallucinations directly led to the dissemination of false information in an official government document, which the government relied upon, thus causing harm. Deloitte's acknowledgment and partial refund do not negate the harm caused. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Deloitte compensates Australia for a document written with AI

2025-10-07
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to produce a government report. The AI-generated document contained fabricated citations and false statements, which were not detected by the government before publication. This misinformation constitutes harm to the government and potentially to the public relying on the report, fitting the definition of harm to communities or violation of obligations under applicable law. The harm has materialized, and the AI system's malfunction or misuse is a direct cause. Therefore, this qualifies as an AI Incident.

Deloitte must refund Australia because its report was full of AI hallucinations

2025-10-07
DER STANDARD
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (likely a generative AI) that produced hallucinated content (false information and non-existent sources) in an official report. This misuse of AI directly led to harm in the form of misinformation and financial consequences (repayment of funds). Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through inaccurate outputs affecting a government client.

Deloitte Gets Busted For Report With AI Hallucinations, Forks Over $291K To Aussie Gov

2025-10-07
2oceansvibe News | South African and international news
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a large language model) in producing a government report. The AI's hallucinations caused factual inaccuracies and false references, undermining the report's credibility and leading to financial penalties and reputational harm. This constitutes a violation of obligations under applicable law (contractual and possibly regulatory standards for government reporting) and harm to communities relying on accurate welfare system information. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs and the resulting consequences.

Generative AI in consulting reports, so many risks: the Deloitte case

2025-10-07
Agenda Digitale
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system (GPT-4) in producing a government report, which directly led to the dissemination of false information and fabricated references. This caused harm to the integrity of public administration and trust in official documents, which can be considered harm to communities and a breach of obligations under applicable law related to transparency and accuracy in public reporting. The AI system's hallucinations were a direct factor in the harm, and the lack of proper governance and oversight exacerbated the issue. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Deloitte stumbles over artificial intelligence, will have to reimburse the Australian government

2025-10-08
Affari Italiani
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in producing a consultancy report that contained fabricated data and serious errors. This led to financial harm (waste of government funds) and reputational damage, which qualifies as harm to property and possibly harm to communities due to misinformation. The AI system's malfunction or misuse directly contributed to these harms, making this an AI Incident rather than a hazard or complementary information.

AI hallucinates, Deloitte must pay a penalty

2025-10-07
finews.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system that produced fabricated content ('hallucinations') in an official report, leading to misinformation and loss of trust. This constitutes harm to communities (misinformation affecting public or institutional decision-making) and a breach of obligations related to accuracy and transparency. The harm has materialized as the report was published and required correction, and Deloitte faced reputational damage and financial consequences. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Go home, AI, you're on 'shrooms

2025-10-07
Business Insurance
Why's our monitor labelling this an incident or hazard?
The report's errors stem from AI-generated hallucinations, meaning the AI system's outputs were inaccurate and misleading. The dissemination of fabricated information in a government report can be seen as harm to communities through misinformation and a breach of obligations under applicable law regarding accuracy and transparency. Since the harm has already occurred and the AI system's malfunction (hallucinations) directly caused it, this qualifies as an AI Incident rather than a hazard or complementary information.

AI Hallucination Forces Deloitte to Pay Fine

2025-10-08
finews.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI) in producing a flawed government report with fabricated information (hallucinations). This directly led to harm by undermining the reliability of an official report, which affects public trust and the integrity of governmental processes. The financial penalty and reputational damage are consequences of this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through misinformation and breach of trust.

Deloitte will partially reimburse the Australian government for a report with AI-generated errors

2025-10-07
Santa Maria Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system that generated erroneous and fabricated content in an official report. The errors led to financial consequences (partial reimbursement) and represent a breach of trust and misinformation, which qualifies as harm to the government and public interest. Therefore, this is an AI Incident because the AI system's malfunction directly led to harm (misinformation and financial loss).

Deloitte to partially refund Australian government for report with apparent AI-generated errors

2025-10-07
Winnipeg Sun
Why's our monitor labelling this an incident or hazard?
While the report contained errors possibly linked to AI-generated content (hallucinations), there is no indication that these errors caused injury, rights violations, or other harms as defined for an AI Incident. The issue is about inaccuracies in documentation rather than harm caused by AI system malfunction or misuse. The refund and resolution are responses to quality concerns, not to an incident of harm. Therefore, this event is best classified as Complementary Information, providing context on AI-related errors and responses without constituting a new AI Incident or Hazard.

Deloitte Bets Big on AI Despite Fake Citations in Report

2025-10-07
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system (large language model) in producing a government report. The AI's involvement directly led to fabricated academic references and invented quotes, which are errors causing misinformation and undermining trust in official documents. This constitutes harm to communities and a breach of obligations under applicable law concerning truthful reporting. The financial reimbursement by Deloitte further indicates material consequences. Although no physical injury or direct legal rights violations are mentioned, the incident fits the definition of an AI Incident due to the realized harm caused by the AI system's outputs in a critical public document.

Deloitte fined for using AI as it 'makes up citations'

2025-10-07
birminghampost
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was explicitly used in the creation of the report, and its malfunction (fabrication of citations) directly led to inaccuracies in an official government document. This constitutes a violation of trust and potentially a breach of obligations under applicable law related to accuracy and transparency in public reporting. The harm is realized as the government had to seek a refund and the report's credibility was compromised, affecting the management of a welfare system. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated misinformation in an official context.

Deloitte forced to reimburse Australia: it wrote a report (badly) with AI

2025-10-07
Key4biz
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system in producing a report with fabricated references and errors, which misled a government department and required remediation (refund and report revision). This constitutes a violation of trust and potentially a breach of obligations under applicable law related to public accountability and accuracy in official reporting (due to invented citations). The harm is realized and directly linked to the AI system's outputs, qualifying this as an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case of AI misuse causing harm.

Deloitte issues refund for error-ridden Australian government report that used AI

2025-10-07
Luxembourg Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used in the development of a government report. The errors in the report (incorrect citations and references) were corrected, and Deloitte refunded the payment, indicating acknowledgment of the problem. However, the errors did not lead to any direct or indirect harm as defined (such as injury, rights violations, or disruption of critical infrastructure). The substantive content and recommendations remained unchanged, and the issue was resolved without further impact. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. Instead, it is best classified as Complementary Information because it provides an update on the use of AI in consultancy work, highlights risks like hallucinations in generative AI, and shows a governance response (refund and correction) to an AI-related issue.

Deloitte to partially refund Australian government for report with apparent AI-generated errors

2025-10-07
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system that generated erroneous content in an official government report, leading to misinformation and a financial refund. The AI's malfunction (fabrication of quotes and references) directly contributed to the harm of disseminating false information and undermining trust. This fits the definition of an AI Incident because the AI system's malfunction led to harm related to misinformation and breach of trust in a governmental context.

Scandal in Australia: Deloitte uses AI for a report full of errors

2025-10-08
Economie Matin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (GPT-4o) used in generating parts of a strategic government report. The AI's malfunction (hallucination of false citations and references) directly led to the publication of erroneous and fabricated information, a clear harm to the integrity of the report and to public trust. This harm falls under harm to communities through misinformation and a breach of obligations of accuracy in official reporting. The incident is materialized, not just potential, and has led to political and financial repercussions. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Deloitte must pay for AI errors

2025-10-07
inside-it.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the development of an official report, where the AI's hallucinated outputs (fabricated citations and quotes) led to misinformation, reputational harm to real persons through misattributed quotes, and an undermining of the report's credibility. This constitutes a violation of trust and harm to communities through the dissemination of false information in a government context. The harm is realized, not just potential, and directly linked to the AI system's malfunction. Therefore, this qualifies as an AI Incident.

When Artificial Intelligence becomes... greedy stupidity: the sensational Deloitte case in Australia

2025-10-06
ScenariEconomici.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a large language model (GPT-4) as the AI system responsible for generating the erroneous report. The AI's hallucinations caused the inclusion of false information and invented citations, which directly harmed the quality and reliability of the report delivered to a government client. This constitutes harm to the client and potentially to the public relying on the report's findings, fitting the definition of an AI Incident due to realized harm caused by the AI system's outputs. The event is not merely a potential risk or a complementary update but a concrete case of AI misuse or malfunction leading to harm.

Deloitte reimburses Australian government after AI errors

2025-10-06
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
An AI system (a generative AI model hosted on Azure) was used in the creation of the report, and its outputs contained inaccuracies that led to a flawed official document. This constitutes harm in terms of misinformation and undermining trust in government processes, which can be considered harm to communities and a violation of obligations under applicable law regarding accuracy and integrity of official information. The AI's involvement directly led to the errors, and the partial refund indicates recognition of this harm. Therefore, this qualifies as an AI Incident.

Deloitte reimburses Australian government after errors in AI-assisted report - Renascença

2025-10-06
Rádio Renascença
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system (GPT-4o) in producing part of the report. The AI system's outputs contained multiple errors, including fabricated references, which directly led to misinformation in an official government report. This misinformation constitutes harm to the community's trust and potentially violates standards of accuracy and transparency expected in public administration. Deloitte's reimbursement and the report correction confirm the harm was realized and linked to the AI system's use. Hence, this is an AI Incident as the AI system's use directly led to harm through erroneous outputs in an important public document.

Deloitte to Issue Partial Refund to Government Following $440,000 AI-Utilized Report - Internewscast Journal

2025-10-07
internewscast.com
Why's our monitor labelling this an incident or hazard?
An AI system was used in the development of the report, and its outputs contained errors that led to financial consequences (a partial refund) and reputational harm. Although the errors were limited to references and footnotes and did not affect the substantive findings, the AI's involvement directly led to harm (financial loss and reputational harm to the researchers who were falsely cited). Therefore, this qualifies as an AI Incident due to harm caused by the AI system's use in generating the report.

Deloitte Australia's AI scandal and the 'canary' in the consulting coal mine

2025-10-08
businessdesk.co.nz
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI in producing a flawed consulting report that caused financial harm and public backlash. The AI system's outputs were erroneous, leading to a tangible negative impact, including Deloitte repaying part of the payment. This meets the definition of an AI Incident because the AI system's use directly led to harm (financial and reputational), and the harm is realized, not just potential.

Deloitte admits AI use in $440,000 report after errors spark outrage

2025-10-07
News9live
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a generative AI large language model was used in preparing the report, and that the AI-generated content included fabricated references and errors. These errors led to a partial refund and public criticism, indicating harm has materialized. The harm is indirect but significant, involving misinformation in an official government report, undermining trust and accountability. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and breach of obligations). The event is not merely a potential risk or a complementary update but a realized incident involving AI misuse or malfunction in a critical context.

Consultants Forced to Pay Money Back After Getting Caught Using AI for Expensive "Report"

2025-10-07
Skeptic Society Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (generative AI, GPT-4o) in producing a report with fabricated data (hallucinations). The AI's malfunction (hallucinated citations) directly led to harm: financial loss to the government (repayment of $291,000) and undermining the integrity of an official review, which can be considered a violation of obligations under applicable law and professional standards. The harm is realized and directly linked to the AI system's use. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Deloitte's $440,000 AI Blunder: When Artificial Intelligence Turns Corporate Genius into Expensive Fiction -- and Why Kenya Should Be Terrified

2025-10-07
Soko Directory
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Deloitte used AI to produce a report with fabricated content, which was delivered to the Australian government and required a refund after the errors were discovered. The AI system's malfunction or misuse directly led to the dissemination of false information, which can be considered harm to communities and institutions by undermining trust and potentially influencing policy decisions based on falsehoods. The harm is realized, not just potential, and the AI system's role is pivotal in causing this harm. Hence, this is an AI Incident.

Deloitte partially reimburses Australian government for flawed AI report

2025-10-07
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system in producing a report that contained fabricated and false information. This misinformation in an official government report breaches obligations under applicable law to provide truthful and accurate information. The harm is realized, as the government was misled and the public potentially misinformed, which fits the definition of an AI Incident. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Deloitte refunds partial amount to Australian government for flawed AI report

2025-10-07
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system in the creation of a government report. The AI-generated errors, including fabricated citations and false legal quotes, directly led to harm by undermining the report's reliability and trustworthiness, which is a violation of legal and intellectual property standards and harms the community's trust in public institutions. Deloitte's partial refund acknowledges the harm caused. Hence, the event meets the criteria for an AI Incident due to realized harm stemming from AI system use.

Deloitte will reimburse the Australian government for errors in a report made with AI

2025-10-08
Forbes Italia
Why's our monitor labelling this an incident or hazard?
The article describes a situation where AI was used in producing a report with errors, but it does not confirm that the AI system malfunctioned or directly caused harm. The harm is indirect and related to the quality of the report and contractual obligations, but no explicit legal or human rights violations or physical harm are reported. The event mainly updates on the consequences and responses related to AI use in consultancy work, fitting the definition of Complementary Information rather than an Incident or Hazard.

Deloitte To Repay Australian Government After AI-Generated Errors Found In Official Report

2025-10-07
NDTV Profit
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used in the report drafting process, and its outputs directly led to factual errors and fabricated references in an official government document. These inaccuracies constitute a violation of professional and ethical standards, potentially breaching obligations related to intellectual property and accuracy in official reporting, which can be considered harm to the integrity of information and public trust (harm to communities). The harm has materialized as the report was published with false information, requiring correction and refund. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated errors in an official context.

Deloitte to Refund Australian Government for AI Errors -- Accounting Weekly

2025-10-08
Accounting Weekly
Why's our monitor labelling this an incident or hazard?
The use of generative AI (an AI system) in drafting the report caused the inclusion of false and fabricated information, which is a direct harm to the integrity and reliability of the report. This constitutes a violation of trust and potentially a breach of professional and legal standards, falling under harm to communities and possibly violations of obligations under applicable law. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs in a critical public document.

Deloitte draws criticism over AI errors in government report

2025-10-08
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating content for a government report. The AI's hallucinations caused the inclusion of false information, which is a form of misinformation harm affecting trust and potentially violating obligations related to accuracy and transparency in official documents. Although the errors were corrected and the core recommendations were unaffected, the incident involved realized harm through dissemination of false information. Therefore, this qualifies as an AI Incident due to the direct role of the AI system's outputs causing harm (misinformation) in an official government context.

Deloitte to part repay government over dodgy AI report, for which it got $440,000 | Region Canberra

2025-10-08
Region Canberra
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI) used in producing a government report. The AI's hallucinations caused the report to contain false information, which is a direct malfunction of the AI system leading to harm—specifically, misinformation in a government document that affects public trust and administrative integrity. The partial repayment and public criticism underscore the harm caused. Although no physical injury occurred, the harm to the integrity of government operations and potential violation of obligations for truthful reporting meet the criteria for an AI Incident under violations of obligations intended to protect fundamental rights and public trust. Hence, this is not merely a hazard or complementary information but a realized AI Incident.

Deloitte AI Report Hallucinations Australian Welfare - News Directory 3

2025-10-08
News Directory 3
Why's our monitor labelling this an incident or hazard?
The use of generative AI in producing the report led to the creation of fabricated and false information, which constitutes harm in the form of misinformation and a breach of standards of factual accuracy. Since the fabricated content was present and confirmed, this is a realized harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated false content, which damaged the report's integrity and potentially misled stakeholders relying on it.

Deloitte to refund government after using AI in $440,000 report

2025-10-08
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a generative large language model) in the creation of a government report. The AI generated fabricated references and quotes, which were not factual and directly caused harm by disseminating false information. Although Deloitte claims the findings were unaffected, the presence of fabricated content in an official report is a clear harm to community trust and a breach of obligations regarding accuracy in official reporting. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

Deloitte's AI-Generated Errors In Australia Echo Troubled History Worldwide

2025-10-08
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The use of a generative AI system in drafting the report directly led to the production of false information and fabricated references, which caused harm by misleading the government and undermining trust. Although human experts reviewed the report, the AI-generated errors were significant enough to require a refund, indicating the AI system's malfunction or misuse contributed to the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial and reputational) and violations of professional standards.

A report written by AI, billed at $440,000, plunges Deloitte into embarrassment in Australia - Tunisie numerique

2025-10-09
Tunisie Numerique
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (GPT-4) used in the development and delivery of a government report. The AI-generated content contained fabricated information, which is a malfunction or misuse of the AI system's outputs. This led to reputational harm and erosion of trust in government processes, which can be considered harm to communities and governance. The harm is realized, not just potential, as the report was published and caused political and public controversy. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm (misinformation and loss of trust). It is not merely complementary information because the main focus is on the harm caused by the AI-generated errors, nor is it unrelated or only a hazard since harm has occurred.

Deloitte has AI write a report and it is completely made up

2025-10-09
Punto Informatico
Why's our monitor labelling this an incident or hazard?
An AI system (GPT-4o) was explicitly used to generate the report content. The AI's hallucinations (fabricated sources and false information) directly led to the harm: a government report containing false data and citations, undermining trust and violating professional standards. This constitutes a violation of obligations under applicable law and professional ethics, thus fitting the definition of an AI Incident. The harm is realized, not just potential, as the government demanded reimbursement and the report's credibility was compromised.

Deloitte in turmoil after delivering a report riddled with AI-generated errors - Siècle Digital

2025-10-09
Siècle Digital
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the development and use of the report, and its malfunction (hallucinations producing false information) directly led to harm: reputational harm to Deloitte, financial harm due to contract reimbursement, and potential harm to the integrity of government decision-making processes. This fits the definition of an AI Incident because the AI system's use directly caused significant harm through misinformation and breach of trust. The event is not merely a potential hazard or complementary information but a realized incident involving AI-generated harm.

Deloitte now has to reimburse the Australian government. It made an error-filled report with AI

2025-10-10
Money.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative language model GPT-4o) used in report generation, and the AI's outputs contained errors leading to a formal reimbursement and report correction. However, the harm is limited to errors in citations and references, with no indication of injury, rights violations, or other significant harms as defined. The incident is a notable example of AI misuse or malfunction in a professional setting but does not meet the threshold for an AI Incident or AI Hazard. It primarily informs about the risks and governance challenges of AI in consultancy and auditing, fitting the definition of Complementary Information.

Expensive Deloitte report full of nonsense due to AI, Australian government gets money back

2025-10-09
AD
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used in the report's writing, which malfunctioned or was misused through overreliance, producing fabricated and false information. This led to harm in the form of misinformation and a breach of trust, which fits under harm to communities (public trust) and breaches of obligations regarding accuracy in official reporting. Since the harm has already occurred and the AI system's role is pivotal, this qualifies as an AI Incident rather than a hazard or complementary information.

Expensive Deloitte report full of nonsense due to AI, Australian government gets money back

2025-10-09
De Gelderlander
Why's our monitor labelling this an incident or hazard?
The report's errors and fabricated sources were caused by overreliance on AI-generated content, which directly led to misinformation and reputational harm to the government client and Deloitte. The AI system's malfunction or misuse in content generation caused a breach of trust and dissemination of false information, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident.

Expensive Deloitte report full of nonsense due to AI; Australian government gets its money back

2025-10-09
Provinciale Zeeuwse Courant
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in generating a report that contained fabricated and false information, which was discovered and led to the government demanding a refund. The AI's misuse directly caused harm in terms of misinformation, financial loss, and reputational damage. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial and reputational) and violation of trust, which can be considered harm to property and communities. Therefore, the event is classified as an AI Incident.

Deloitte corrects errors after AI use in report for the Australian government

2025-10-08
Accountant.nl
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI in producing the report and that this use led to errors in citations and references, which are factual inaccuracies. These errors constitute harm to the integrity and reliability of the report, which can be considered harm to communities or harm to property (informational property). Since the AI system's involvement directly contributed to these errors, this qualifies as an AI Incident. The harm is realized (not just potential), as evidenced by the need for a corrected report and financial reimbursement. Therefore, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Deloitte Australia refunds money after AI errors in government report

2025-10-07
Accountancy Vanmorgen
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system (Azure OpenAI language tool) in producing a government report that contained fabricated and incorrect information, which is a direct harm to the integrity of legal and scientific information and thus a violation of intellectual property and legal standards. The harm has materialized as the report was published and used by the government, leading to misinformation and reputational damage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

AI-invented court rulings and scientific references: Deloitte partially refunds €250,000 report for the Australian government

2025-10-09
De Morgan - French News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate parts of a legal audit report, which contained fabricated and incorrect legal references and court rulings. This misinformation directly impacts the Australian government's understanding and application of law, thus constituting a violation of legal obligations and harm to governance. The AI's hallucinations caused the harm by producing false information relied upon by a government ministry. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs in a critical legal context.

Multinational charged the government $440,000 for a report drafted with ChatGPT

2025-10-09
Aftodioikisi.gr
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used in generating content for a government report. The AI's outputs included fabricated information and false references, which are recognized as 'hallucinations' in AI language models. This led to a flawed report being delivered to the government, causing financial loss and political controversy. Although no physical harm occurred, the harm to the integrity of public information and trust, as well as financial harm to the government, fits within the scope of AI Incident definitions, particularly under harm to communities and breach of obligations related to transparency and accuracy in public administration. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm.

"ChatGPT" scandal in Australia: Deloitte returns part of its fee to the state for a study full of inaccuracies

2025-10-09
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (GPT-4o) used in generating content for a government report. The AI's hallucination caused the inclusion of false information, which led to financial repercussions and public criticism. The harm here is indirect but clear: the AI's malfunction led to the dissemination of false information in an official context, undermining trust and breaching obligations related to truthful reporting. This fits the definition of an AI Incident as the AI system's malfunction directly led to harm (violation of obligations and reputational harm).

Deloitte admits using artificial intelligence in a $440,000 report containing multiple errors

2025-10-09
Insomnia.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system in producing a flawed official report with multiple errors, including fabricated references, which were directly linked to the AI's typical hallucination behavior. The errors led to financial consequences and public criticism, indicating realized harm. The AI system's involvement in the development and use of the report directly contributed to these harms. Although no physical injury or legal violation is reported, the misinformation in a government report impacts public trust and the integrity of social welfare enforcement, which constitutes harm to communities and public interest. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Refund from Deloitte after the use of artificial intelligence in a $440,000 report

2025-10-07
Lawspot
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system (GPT-4) in producing the report. The AI's outputs included fabricated or incorrect references, which directly led to harm by disseminating false or misleading information in a government report. This constitutes a violation of obligations under applicable law and harms the community's trust. The partial refund and public acknowledgment of errors confirm the harm's materialization. Hence, this is an AI Incident as the AI system's use directly caused harm.

Deloitte scandal in Australia: money refunded after a report containing errors written by artificial intelligence

2025-10-09
Dialogos
Why's our monitor labelling this an incident or hazard?
The event describes a concrete case where an AI system was used in the development of a report that contained false information due to AI hallucinations. This led to reputational harm and political backlash, and Deloitte had to return part of the contract payment. The AI system's malfunction directly contributed to the harm. Although the harm is non-physical, it affects public trust and the integrity of government-related evaluations, which fits within the scope of harm to communities or breach of obligations under applicable law. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Deloitte charged the Australian government $440,000 for a report drafted with ChatGPT

2025-10-09
Lamia Report
Why's our monitor labelling this an incident or hazard?
The Deloitte report was produced with the assistance of an AI system that generated false references and errors, leading to a flawed official government document. This constitutes an AI Incident because the AI system's use directly contributed to the dissemination of inaccurate information, causing reputational and financial harm and undermining public trust. The harm is realized, not merely potential, as the report was published and caused political reactions and a refund. Therefore, this event meets the criteria for an AI Incident due to indirect harm caused by AI-generated misinformation in an official context.

Deloitte charged the Australian government $440,000 for a report it compiled using ChatGPT

2025-10-09
protothemanews.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used in producing the report, and its malfunction (hallucinations generating false citations) directly led to harm in the form of misinformation and erosion of trust in a government report. This constitutes harm to communities and public trust, fitting the definition of an AI Incident. Although no physical harm occurred, the incident involves violations of informational integrity and public accountability, which are significant harms under the framework. Therefore, this event qualifies as an AI Incident.

AI-generated errors force Deloitte to repay part of Australian government contract

2025-10-09
Gulf Business
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI language system in creating a report with fabricated content, which led to financial restitution by Deloitte. The AI system's malfunction (hallucination producing false information) directly caused harm by disseminating inaccurate legal and academic references, breaching legal and intellectual property standards. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's outputs.

Deloitte AI slammed at committee, could face supplier sin bin

2025-10-09
The Mandarin
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in generating a government report that contained fabricated and hallucinated content, which is a direct misuse of AI outputs leading to harm. The harm includes misinformation affecting government policy and potential violation of intellectual property rights through fabricated citations. The AI system's malfunction or misuse directly led to these harms, fulfilling the criteria for an AI Incident. The event also triggered governance responses such as contract rule changes and refund demands, but the primary focus is on the realized harm caused by the AI-generated false content.

$440,000 for AI Slop: Deloitte Partially Refunds Government After AI Fabricated Details in Report

2025-10-09
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system that produced fabricated and false information in a government report, leading to significant factual inaccuracies and breaches of integrity. The AI system's malfunction (hallucination) directly caused harm by misleading a government department and potentially affecting public policy decisions. The fabricated citations also implicate intellectual property rights violations. The harm is realized, not just potential, as evidenced by the partial refund and public outcry. Hence, this is an AI Incident rather than a hazard or complementary information.

Deloitte AI Report Scandal: Government Demands Refund After Botched Welfare Analysis

2025-10-09
Bangla news
Why's our monitor labelling this an incident or hazard?
The report was produced using AI (Azure OpenAI), which generated fabricated and incorrect information that misled the Australian government in policy decisions related to welfare penalties. This misinformation represents a violation of trust and potentially a breach of obligations in public policy contexts, causing harm to the community and governance processes. The harm is realized, not just potential, as the government relied on the flawed AI-generated analysis. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Deloitte: a report with fabricated data reignites the debate over AI

2025-10-10
SAPO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (GPT-4) that generated false information in a government report, which led to Deloitte having to return funds and sparked political and public backlash. The harm includes misinformation, financial loss, and reputational damage, which fall under harm to communities and violations of obligations under applicable law. The AI system's malfunction or misuse directly caused these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Undisclosed and poorly controlled: Deloitte rebuked over its use of generative AI

2025-10-10
Silicon
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system (GPT-4o) in producing a government report. The AI's hallucinations (invented sources and judicial precedents) directly caused misinformation in an official context, which is a form of harm to communities and governance (harm category d). The failure to disclose AI use and the resulting false information also constitute violations of obligations under applicable law and transparency norms (harm category c). The harm has materialized, as evidenced by parliamentary inquiry, financial reimbursement, and political fallout. Hence, this is an AI Incident rather than a hazard or complementary information.

Deloitte: a report with fabricated data reignites the debate over AI

2025-10-10
24 Notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system (GPT-4) was used to generate a report with false information that was delivered to the Australian government, causing financial and reputational harm. This is a direct consequence of the AI system's use and output, fulfilling the criteria for an AI Incident. The harm includes violation of trust and potential breach of obligations in public sector reporting, which aligns with harm category (c) regarding violations of obligations under applicable law. The event is not merely a potential risk or a complementary update but a realized incident with concrete consequences.

When a consultancy uses an AI consultant

2025-10-13
Economic Times
Why's our monitor labelling this an incident or hazard?
An AI system (GPT-4o chatbot) was explicitly used to prepare a government report, and the report was found to be riddled with errors caused by the AI's hallucinations. Deloitte's partial refund indicates acknowledgment of harm caused by the AI's malfunction or misuse. The errors in a government report can lead to harm in policy-making and public trust, fulfilling the criteria for harm to communities or property. The event involves the use and malfunction of an AI system leading directly to harm, thus it is an AI Incident rather than a hazard or complementary information.

AI is changing who gets hired: find out which skills will help keep you employed

2025-10-29
Terra
Why's our monitor labelling this an incident or hazard?
The article does not describe any AI system malfunction, misuse, or harm to individuals, communities, or infrastructure. It is an analytical and informative piece about AI's influence on employment and skills development, which fits the definition of Complementary Information as it provides context and understanding of AI's broader societal impact without reporting a specific AI Incident or AI Hazard.

Responsibility and human oversight: using artificial intelligence demands more human intelligence

2025-10-29
Expansão
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of a generative AI system (an AI system) that produced false and unverifiable information in an official government report. This led to a direct harm: financial loss to the government and reputational damage to Deloitte, as well as potential harm to public trust and legal/ethical violations. The AI system's role is pivotal as the errors stemmed from AI hallucinations and lack of human supervision. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm.

Reinventing the executive's role in the quantum-AI era

2025-10-29
Brasil, país digital - #BrasilPaisDigital
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI Incident where harm has occurred or an AI Hazard where plausible harm is imminent. Instead, it provides a high-level overview of AI-related security challenges, the need for responsible leadership, and the evolving cybersecurity environment including quantum computing threats. It fits the definition of Complementary Information as it enhances understanding of AI ecosystem risks and governance without reporting a new primary harm or imminent hazard.

Will AI empower women in the workplace or leave them behind?

2025-10-30
euronews
Why's our monitor labelling this an incident or hazard?
The article is a detailed analysis and discussion of AI's potential and observed impacts on women's employment, based on research and statistical data. It does not report any realized harm, incident, or imminent risk caused by AI systems. There is no mention of AI system malfunction, misuse, or direct harm to individuals or communities. The content is primarily informative and contextual, fitting the definition of Complementary Information as it enhances understanding of AI's societal implications without describing a specific AI Incident or AI Hazard.

Should you trust ChatGPT's investment advice?

2025-10-30
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and other AI tools) used in investment decision-making. It discusses the use and potential misuse of AI in providing financial advice, which could plausibly lead to financial harm (losses) for retail investors. However, no actual incident of harm is reported; the harms are potential and regulatory concerns are highlighted. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (financial harm), but no direct or indirect harm has yet occurred according to the article.

AI security: the new battleground of cybersecurity

2025-10-31
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Yellow.ai) being manipulated to generate malicious code that allowed attackers to access sensitive operator data, which constitutes a direct harm to data privacy and security. This manipulation of the AI system led to a security breach affecting organizations, fitting the definition of an AI Incident due to violation of privacy and potential harm to property or communities. The discussion of broader risks and the need for security frameworks supports the incident classification rather than just a hazard or complementary information.

5 tips for using AI at work and avoiding errors and hallucinations

2025-10-31
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating false or fabricated legal citations ('hallucinations') that have been submitted in court cases, causing judicial harm and sanctions. This is a direct harm to legal processes and potentially to individuals' rights and justice outcomes, fitting the definition of an AI Incident. The article also discusses privacy risks from AI tools, which can lead to violations of confidentiality, another form of harm. The presence of AI systems is clear, their use is the cause of the harm, and the harms have materialized. The article also offers advice to mitigate these harms, but the primary focus is on the realized harms from AI use, not just potential risks or responses, so it is not merely Complementary Information or an AI Hazard.

AI accelerates the transformation of Brazilian legal practice

2025-10-31
Terra
Why's our monitor labelling this an incident or hazard?
The article primarily offers a broad overview of AI adoption in the legal field in Brazil, emphasizing benefits and ongoing regulatory and ethical debates without reporting any realized harm or specific event involving AI malfunction or misuse. There is no mention of actual incidents causing injury, rights violations, or other harms. The focus is on the evolving landscape, potential risks, and governance responses, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

"Shoddy" AI work causes losses and shows the danger of uncontrolled use

2025-10-31
InfoMoney
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI-generated content and its impact on work processes. The harms described include economic losses due to rework and social harm in terms of employee trust and frustration, which are significant harms to communities and organizations. However, these harms are indirect and systemic rather than arising from a specific AI system failure or misuse event. The article focuses on the broader phenomenon and its implications, including expert opinions and research findings, rather than reporting a concrete incident or a plausible future hazard. Thus, it fits the definition of Complementary Information, providing supporting data and context about AI's societal impact and governance challenges.

AI is becoming smarter and more selfish, study finds

2025-10-31
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The article focuses on research findings about potential negative social behaviors of AI systems and the risks of humans emotionally relying on AI that may act selfishly. There is no mention of any realized harm, incident, or malfunction involving AI systems. The concerns are about plausible future harms if AI behavior is not managed properly, making this a discussion of potential risks rather than an actual incident or hazard event. Therefore, it fits best as Complementary Information, providing context and insight into AI development and its societal implications without reporting a specific AI Incident or AI Hazard.

Nvidia and Hyundai form partnership for AI joint venture

2025-10-31
Exame
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it concerns AI development and applications in autonomous driving and robotics. However, there is no indication of any harm, malfunction, or risk of harm occurring or plausibly arising from this collaboration at this stage. The article is about strategic partnership and future AI development, which is general AI-related news without specific incidents or hazards. Therefore, it is best classified as Complementary Information, providing context on AI ecosystem developments without reporting an AI Incident or AI Hazard.

OECD: Brazil lacks the data centres to meet artificial intelligence demand

2025-10-31
ConvergenciaDigital
Why's our monitor labelling this an incident or hazard?
The article focuses on a report assessing AI infrastructure availability and its strategic implications for Brazil and other countries. It does not describe any realized harm or incident involving AI systems, nor does it present a specific credible risk of harm from AI systems. The content is primarily informational and contextual, relating to AI ecosystem development and policy considerations, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

AI's "digital poison": when a little data is enough to compromise big decisions

2025-10-31
HSM Management
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically large language models, and discusses a security vulnerability ('data poisoning') that could plausibly lead to significant harms such as incorrect decisions and financial or reputational damage. However, it does not describe an actual incident where harm has occurred due to such an attack. Instead, it presents research findings and expert warnings about potential risks and the need for governance and security measures. Therefore, the event is best classified as an AI Hazard, as it concerns a credible risk of harm from AI system vulnerabilities that could lead to incidents if exploited.

AI errors lead Deloitte to reimburse Australia in an example of the technology's impact

2025-11-02
Sapo - Portugal Online!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative systems (e.g., Azure OpenAI, ChatGPT, Meta's AI chatbot, Google's AI tool) producing false or fabricated content that caused harm such as defamation, misinformation, and financial loss. The harms have materialized, as evidenced by lawsuits, reimbursements, and apologies. The AI systems' use directly led to these harms, fulfilling the criteria for AI Incidents. The article does not merely discuss potential risks or responses but reports on actual harms caused by AI outputs.

AI errors lead Deloitte to reimburse Australia. What is at stake?

2025-11-02
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative systems producing false information that caused harm, such as the Deloitte report with AI-generated errors leading to reimbursement, the false interview with Michael Schumacher resulting in a successful lawsuit, and defamation cases involving AI chatbots spreading false claims. These are direct harms caused by the use or malfunction of AI systems, fulfilling the criteria for AI Incidents. The harms are realized, not just potential, and involve violations of rights and reputational damage, which are covered under the AI Incident definition.

AI errors lead Deloitte to reimburse Australia in an example of the technology's impact

2025-11-02
Correio da Manha
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Azure OpenAI, ChatGPT, Meta's AI chatbot, Google's AI tool) generating false or fabricated content that caused reputational damage, legal disputes, and financial penalties. These harms fall under violations of rights (defamation, misinformation) and harm to communities. The AI systems' outputs directly led to these harms, fulfilling the criteria for AI Incidents. The article reports on actual realized harms, not just potential risks, so the classification is AI Incident.

Report with errors and a fake interview: artificial intelligence forces Deloitte to reimburse Australia

2025-11-02
JN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative systems (e.g., Azure OpenAI, ChatGPT, Meta's AI chatbot) producing false or fabricated content that led to legal and financial harm, including defamation lawsuits and reimbursements. The harms are realized and directly linked to the AI systems' outputs, fulfilling the criteria for AI Incidents. The harms include violations of rights (defamation, false information damaging reputations) and harm to communities (misinformation). The article reports on actual incidents, not just potential risks or responses, so the classification is AI Incident.

AI errors lead Deloitte to reimburse Australia

2025-11-02
Revista SÁBADO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative systems (e.g., Azure OpenAI, ChatGPT, AI chatbots) producing false or fabricated content that caused harm, including legal actions and financial reimbursements. The harms include violations of rights (defamation), harm to individuals and organizations, and financial loss. Since these harms have materialized and are directly linked to the use or malfunction of AI systems, the events qualify as AI Incidents under the OECD framework.

AI errors lead Deloitte to reimburse Australia in an example of the technology's impact

2025-11-03
SAPO
Why's our monitor labelling this an incident or hazard?
The events involve AI generative systems producing false or fabricated content that caused reputational harm, legal disputes, and financial losses. The Deloitte case involves an AI-generated report with false citations leading to reimbursement, indicating harm to property and trust. The fabricated interview and defamation lawsuits show violations of personal rights and reputational harm caused by AI-generated misinformation. These harms are direct consequences of AI system outputs, meeting the criteria for AI Incidents as defined by the framework.