NYC MyCity Chatbot Gives Dangerous, Illegal Advice to Businesses


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

New York City's official MyCity AI chatbot, launched to provide legal and regulatory guidance to businesses, has been found to give dangerously inaccurate and misleading information. The chatbot's errors include advising users to break laws on housing, labor, and business regulations, potentially causing legal violations and harm to individuals and communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system ('MyCity' chatbot) is explicitly mentioned and is reported to hallucinate, producing incorrect and misleading information about legal and regulatory matters. This misinformation can cause harm to users who rely on it for important decisions, such as eviction rights or discrimination laws. The harm is realized as users receive wrongful information that could lead to legal or personal consequences. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly leads to harm through misinformation and potential violation of rights or legal obligations.[AI generated]
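The rationales in this entry all apply the same three-way rule: an event involving an AI system is labelled an AI Incident when harm is realized, an AI Hazard when harm is plausible but not yet realized, and complementary information otherwise. The sketch below is an illustrative reconstruction of that decision rule in Python, not the monitor's actual implementation; the function name and boolean inputs are assumptions made for clarity.

```python
def classify_event(ai_system_involved: bool,
                   harm_realized: bool,
                   harm_plausible: bool) -> str:
    """Return a monitor-style label for a reported event.

    Mirrors the decision logic described in the rationales:
    realized harm -> AI Incident; credible but unrealized
    harm -> AI Hazard; otherwise complementary information.
    """
    if not ai_system_involved:
        return "Not in scope"
    if harm_realized:
        return "AI Incident"
    if harm_plausible:
        return "AI Hazard"
    return "Complementary information"

# The MyCity case: users have already received, and may have acted on,
# unlawful advice, so harm is treated as realized.
print(classify_event(True, True, True))    # AI Incident
# A report of credible but unrealized risk (e.g. the Lifehacker article).
print(classify_event(True, False, True))   # AI Hazard
```

Note how the same underlying event can be labelled differently across articles: sources documenting users already acting on the bad advice are classed as Incidents, while those framing the risk as prospective are classed as Hazards.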
AI principles
Accountability, Robustness & digital security, Safety, Respect of human rights, Human wellbeing, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Business, Workers, General public

Harm types
Economic/Property, Reputational, Public interest, Human or fundamental rights

Severity
AI incident

Business function:
Citizen/customer service, Compliance and justice

AI system task:
Interaction support/chatbots

Articles about this incident or hazard


New York 'MyCity' Chatbot Hallucinating: Incorrect, Misleading Data Shared

2024-03-30
Tech Times
Why's our monitor labelling this an incident or hazard?
The AI system ('MyCity' chatbot) is explicitly mentioned and is reported to hallucinate, producing incorrect and misleading information about legal and regulatory matters. This misinformation can cause harm to users who rely on it for important decisions, such as eviction rights or discrimination laws. The harm is realized as users receive wrongful information that could lead to legal or personal consequences. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly leads to harm through misinformation and potential violation of rights or legal obligations.

How New York City's AI chatbot may be giving dangerous advice to city businesses - Times of India

2024-04-01
The Times of India
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly mentioned as powered by Microsoft's Azure AI services. Its use has directly led to harm by providing false and illegal advice to business owners, which can cause violations of labor rights, housing rights, and consumer protections. This constitutes harm to people and communities, fulfilling the criteria for an AI Incident. The article documents realized harm rather than just potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's outputs, not on responses or governance measures. Therefore, the event is classified as an AI Incident.

New York City's AI-Powered Chatbot Gives Businesses Disastrous and Potentially Illegal Advice

2024-03-29
Breitbart
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the chatbot powered by Microsoft's Azure AI) whose use has directly led to harm by providing false legal advice that could cause users to break laws, constituting violations of legal rights and potentially harming individuals and communities. The harm is realized, not just potential, as the chatbot is actively giving incorrect information that users might rely on. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of legal obligations and potential harm to people.

NYC's Business Chatbot Is Telling Users To Break The Law

2024-03-31
PC Magazine
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly mentioned as providing incorrect legal information, which can directly lead to harm by causing businesses to act unlawfully or infringe on workers' rights. The harm is realized as the misinformation is actively being given to users, and the inconsistent responses exacerbate the risk. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and potential legal and rights violations.

A New York business chatbot is sending out some particularly bad information

2024-04-01
TechRadar
Why's our monitor labelling this an incident or hazard?
The AI chatbot is explicitly mentioned and is clearly an AI system designed to provide legal and business information. The chatbot's inaccurate advice could plausibly lead to violations of legal rights or obligations, which constitutes a potential violation of human rights or legal obligations (harm category c). Since no actual harm or incident is reported, but the risk is credible and recognized, this qualifies as an AI Hazard. The article also includes disclaimers and efforts to improve the system, but these do not negate the plausible risk of harm from the AI's flawed outputs.

NYC's business chatbot is reportedly doling out 'dangerously inaccurate' information

2024-03-30
engadget
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly mentioned as powered by Microsoft's Azure AI. Its use has directly led to misinformation that could cause harm to users by misguiding them about legal and policy matters, which constitutes harm to communities and potential violations of rights. Although the chatbot is a pilot and the city acknowledges it is a work in progress, the inaccuracies have already manifested and pose real risks. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use and malfunction (inaccurate outputs).

NYC's government chatbot is lying about city laws and regulations

2024-03-29
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the MyCity chatbot powered by a large language model) whose use is directly causing harm by disseminating false legal information that can mislead users about their rights and obligations under city law. This misinformation can lead to real-world harms such as wrongful eviction or failure to comply with labor regulations, which constitute violations of rights and harm to individuals and communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation about legal and regulatory matters.

New York City's AI chatbot is telling people to break laws and do crimes

2024-03-29
Quartz
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly mentioned as providing advice. Its outputs are factually incorrect and encourage illegal behavior, which constitutes a violation of legal rights and can harm individuals and communities relying on this information. The harm is realized as users may act on this false advice, leading to legal and social consequences. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

NY's AI chatbot for small businesses suggests they break laws, steal wages

2024-04-01
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the MyCity chatbot) that provides legal advice. The chatbot's erroneous outputs have the potential to cause harm by encouraging illegal actions such as wage theft and discrimination, which are violations of labor and fundamental rights. Since these harms are occurring or are very likely to occur due to reliance on the chatbot's advice, this qualifies as an AI Incident under the framework. The harm is not merely potential but is already manifest in the misleading advice given, which can lead to legal violations and harm to workers and communities.

New York City's AI chatbot advises businesses to steal tips from workers

2024-03-29
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Microsoft-powered chatbot) whose use has directly led to the dissemination of false information that encourages illegal behavior by employers, specifically violating labor laws protecting workers' tips. This constitutes a violation of human and labor rights due to the AI system's misleading outputs. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm in the form of potential legal violations and harm to workers' rights.

Official NYC Chatbot Encouraging Small Businesses to Break the Law

2024-03-30
Futurism
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system designed to provide legal advice. Its malfunction or erroneous outputs are causing users to receive and potentially act on illegal advice, which constitutes a violation of legal rights and could harm tenants, employees, and small business communities. The harm is realized as the chatbot is actively encouraging illegal actions, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal obligations. The event is not merely a potential risk but an ongoing issue with direct consequences, thus qualifying as an AI Incident rather than a hazard or complementary information.

NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law

2024-03-29
THE CITY
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly mentioned as providing authoritative but incorrect legal and regulatory information. Its use has directly led to misinformation that could cause businesses and landlords to break laws, which constitutes a violation of legal rights and could harm individuals and communities. The harm is realized or ongoing, as the bot has been in use for months and errors have been documented and acted upon. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and harm through misinformation leading to potential or actual legal violations and harm to people and communities.

NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law

2024-03-31
naked capitalism
Why's our monitor labelling this an incident or hazard?
The AI system (the NYC chatbot) is explicitly mentioned and is central to the event. It is used to provide legal and regulatory information to businesses, but it provides false and misleading information that could cause users to break the law, violating labor, housing, and consumer protection rights. The harm is realized or ongoing, as the misinformation is actively being disseminated and has already misled some users. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and legal obligations (point c), and harm to communities through misinformation. The event is not merely a potential risk (hazard) or a complementary update; it documents actual harm caused by the AI system's outputs.

NYC Government Chatbot Under Fire for Providing Inaccurate Information on City Laws and Regulations

2024-04-01
bbntimes.com
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system based on large language models that generate responses to user queries. Its deployment by the NYC government to provide official information means users rely on it for critical decisions. The chatbot has disseminated incorrect information about legal obligations and policies, such as the acceptance of Section 8 vouchers and worker pay regulations. This misinformation can directly harm users by causing them to violate laws or miss out on benefits, fulfilling the criteria for harm to persons or communities. The AI system's malfunction (inaccurate outputs) is the direct cause of this harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.


2024-04-01
BruneiDirect
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (powered by Microsoft's Azure AI) used to provide policy information. Its inaccurate responses have directly led to misinformation about legal and workers' rights issues, which constitutes harm to people relying on this information. Although the city labels the chatbot as a pilot and includes disclaimers, the realized harm from misinformation qualifies this as an AI Incident under the framework, specifically under harm to people (a) and violations of rights (c).

You Shouldn't Trust a Government-run Chatbot to Give You Good Advice

2024-03-29
Lifehacker
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot powered by Microsoft's Azure AI) is explicitly involved and is malfunctioning by hallucinating false information. While no direct harm is reported as having occurred, the misleading advice could plausibly lead to harm such as legal violations or financial damage to users relying on the chatbot. This fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to an AI Incident involving harm to people or communities. The article also discusses Microsoft's new safety system as a potential mitigation but does not indicate that harm has been averted or that the system is currently effective, so it is not complementary information. Hence, the classification is AI Hazard.

NYC's AI chatbot was caught telling businesses to break the law...

2024-04-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (a large language model-based chatbot) used by the city government to provide guidance. Its outputs have directly led to misinformation that could cause businesses to break laws, which is a violation of legal rights and could cause harm to people and communities. The harm is realized or ongoing, as the chatbot continues to provide false and harmful advice. The involvement of the AI system in causing this harm is clear and direct, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

NYC's AI Chatbot Was Caught Telling Businesses to Break the Law. The City Isn't Taking It Down

2024-04-03
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (a large language model-based system) used by the city to provide guidance. Its use has directly led to misinformation that could cause legal violations and harm to individuals and businesses, constituting harm to rights and communities. The harm is realized because the chatbot is actively dispensing false and harmful advice, not merely posing a future risk. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and the potential for injury, legal violations, and harm to community trust and safety.

NYC's AI chatbot was caught telling businesses to break the law. The city isn't taking it down

2024-04-04
The Hindu
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (a large language model-based chatbot) used by the city government. It is malfunctioning by providing false and harmful advice that misstates laws and policies, which can lead users to break the law or engage in unsafe behavior. This constitutes a violation of legal rights and could cause harm to individuals and communities. The harm is realized as users are receiving and potentially acting on incorrect legal advice. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly led to harm (legal violations and misinformation causing potential harm).

NYC's AI chatbot criticised for advising businesses to break the law

2024-04-04
Euronews English
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (a large language model-based chatbot) used by the public sector. It is malfunctioning by providing false and harmful legal advice, which can lead to violations of labor rights and other legal protections, thus causing harm to individuals and businesses. The harm is realized as users receive and may act on incorrect guidance. The city's decision to keep the chatbot active despite known issues exacerbates the risk. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction and use.

NYC Leaving AI Chatbot In Place After It Advised Small Businesses To Break The Law, Said It's OK To Serve Cheese 'If It Has Rat Bites'

2024-04-04
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the MyCity Chatbot) that is actively used by small business owners to navigate legal regulations. The chatbot has given advice that encourages breaking laws and unsafe practices, such as serving cheese with rat bites, which can harm public health and violate legal standards. The AI's inaccurate outputs have directly led to misinformation and potential legal and health harms, fulfilling the criteria for an AI Incident. The presence of disclaimers does not negate the harm caused by the AI's misleading advice. Hence, this is not merely a hazard or complementary information but a realized incident involving harm linked to the AI system's use.

NYC Faces Backlash Over AI Chatbot's Misleading Guidance for Small Businesses

2024-04-04
Tech Times
Why's our monitor labelling this an incident or hazard?
The AI chatbot is explicitly mentioned and is generating algorithmic responses to business queries. The chatbot's inaccurate advice has directly led to misinformation that risks legal violations and harm to individuals' rights (e.g., wrongful termination advice, misinformation about sexual harassment and pregnancy rights) and public health (e.g., permitting serving rodent-bitten cheese). These constitute violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misleading guidance. The article does not merely discuss potential harm or future risks but documents actual misleading outputs causing harm, which is central to the report.

NYC's AI chatbot was caught telling businesses to break the law. The city isn't taking it down

2024-04-03
Financial Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NYC chatbot) whose malfunction (providing false legal advice) has directly led to potential violations of labor rights (e.g., firing workers for protected reasons) and breaches of city regulations (waste disposal and composting rules). These constitute violations of human rights and harm to communities as defined in the framework. The harm is realized as the chatbot continues to provide incorrect guidance, which can mislead users into illegal actions. Therefore, this qualifies as an AI Incident.

NYC's AI chatbot was caught telling businesses to break the law. The city isn't taking it down - World News

2024-04-04
Castanet
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system generating algorithmic text responses. Its use has directly led to harm by providing false and harmful advice that misstates laws and encourages illegal actions, which can injure individuals or businesses legally and financially. The harm is realized as the chatbot continues to dispense incorrect guidance, and experts express concern about the risks. The city's decision to keep the chatbot online despite known issues further implicates the AI system's role in ongoing harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

US chatbot caught telling businesses to break the law

2024-04-04
Perth Now
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system generating algorithmic text responses. Its use has directly caused harm by providing false legal advice that could lead to violations of labor rights and local laws, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal obligations. The harm is realized as users are receiving and potentially acting on incorrect information. The city's decision to keep the faulty system operational without adequate oversight exacerbates the risk. Therefore, this event is best classified as an AI Incident rather than a hazard or complementary information.

US chatbot caught telling businesses to break the law

2024-04-04
The West Australian
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system providing algorithmically generated responses. Its use has directly led to harm by giving false legal advice that encourages illegal actions, such as firing workers unlawfully and improper waste disposal, which are violations of law and rights. This meets the criteria for an AI Incident because the AI system's malfunction has caused realized harm through misinformation and potential legal violations affecting businesses and workers. The continued operation of the chatbot despite known issues exacerbates the harm.

NY chatbot kept on despite its advice contravening laws - Taipei Times

2024-04-04
Taipei Times
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system generating algorithmic text responses. Its use has directly led to the dissemination of false and harmful advice that contradicts laws and policies, which can cause harm to users who rely on it. This meets the criteria for an AI Incident because the AI system's malfunction and use have directly caused harm through misinformation and legal risk. The continued operation despite known issues exacerbates the harm. Therefore, this event is classified as an AI Incident.

NYC's AI chatbot caught telling businesses to break the law

2024-04-05
Jamaica Gleaner
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (a large language model-based chatbot) whose use has directly led to harm by dispensing false and harmful legal advice, which could cause businesses to violate laws and regulations. This constitutes a violation of legal rights and could harm individuals and communities relying on the information. The harm is realized as the chatbot is actively providing misleading guidance, not just a potential risk. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and the harm caused or likely caused to users and the community.

AI bot advised breaking the law - Sözcü Gazetesi

2024-04-04
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly mentioned as providing policy-contradictory and legally questionable answers. Its deployment and use have directly led to dissemination of misleading information that could harm individuals (e.g., advising employers on wrongful termination, misleading about health and safety standards). This constitutes a violation of rights and potential harm to people, fitting the definition of an AI Incident. The presence of disclaimers does not negate the harm caused by the AI's outputs, especially since the bot remains publicly accessible and continues to provide such responses.

New York City's AI chatbot advised businesses to break the law

2024-04-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI chatbot) whose use has directly led to harmful outcomes by providing misleading and potentially illegal advice to users. This constitutes violations of labor rights and public health risks, which fall under harms to persons and communities. The AI system's malfunction or flawed outputs have caused these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information. The presence of expert criticism and ongoing availability of the chatbot further supports the classification as an incident.

New York City's AI chatbot advised businesses to break the law

2024-04-04
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI chatbot) whose use has directly led to harmful outcomes by advising users to engage in illegal or unethical practices, thus violating labor rights and potentially endangering public health. The AI system's malfunction or misuse in providing such advice meets the criteria for an AI Incident, as it has caused or could cause harm to people and communities. The continued availability of the chatbot despite these issues exacerbates the risk. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

AI's advice to businesses: 'Break the law'

2024-04-04
Dünya
Why's our monitor labelling this an incident or hazard?
The AI system (chatbot) is explicitly mentioned and is involved in the use phase, providing advice to users. The advice includes recommendations that contravene laws and public health standards, which can directly harm people and violate their rights. The chatbot's malfunction or flawed outputs have led to misinformation and potential legal and health harms. The presence of disclaimers does not negate the harm caused by the AI's outputs. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm and rights violations.

The city's AI bot encouraged businesses to commit 'fraud' - Diken

2024-04-04
Diken
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as a chatbot developed and deployed by the city. Its use has directly led to harmful misinformation encouraging illegal or unethical behavior by businesses, which can cause harm to employees (labor rights violations) and the environment (improper waste disposal). The chatbot's advice contradicts city policies and legal frameworks, indicating malfunction or flawed design. The harm is realized or ongoing as the chatbot remains accessible and continues to provide such advice. Hence, this fits the definition of an AI Incident involving violations of rights and harm to communities.

New York City's AI chatbot tells business owners to break the law

2024-04-04
WPIX
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the chatbot) whose use is directly leading to harm by advising users to break the law, specifically labor laws protecting employees. This constitutes a violation of human and labor rights due to the AI system's outputs. The harm is realized as business owners may act on this incorrect advice, causing injury to employees' rights and well-being. Therefore, this qualifies as an AI Incident.

UPDATE 1-New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-05
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot using Microsoft's Azure AI) is explicitly mentioned and is providing advice that is factually incorrect and could lead to legal violations if followed. This constitutes harm to people (business owners) through misinformation that could cause legal and financial injury. The harm is realized in the form of confusion and risk of legal consequences, meeting the criteria for an AI Incident. The article describes actual use and harm, not just potential risk, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the chatbot's incorrect advice causing harm, not on responses or governance measures. Therefore, this event is classified as an AI Incident.

New York City defends AI chatbot that advised entrepreneurs to break laws - ET CISO

2024-04-05
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot) is explicitly mentioned and is used to provide legal and regulatory information to business owners. It has given wrong answers that, if followed, would result in breaking laws, which constitutes a violation of legal rights and could cause harm to the affected individuals. Although the harm is indirect and arises from reliance on incorrect AI outputs, it is a clear case of an AI Incident because the AI's malfunction has directly contributed to potential legal harm. The article reports realized harm in terms of confusion and risk of legal consequences, not just a hypothetical risk, so this is an AI Incident rather than a hazard or complementary information.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-04
Aol
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot) is explicitly mentioned and is a generative AI system providing legal and regulatory advice. Its use has led to incorrect advice that, if followed, would cause business owners to break laws, which is a direct harm to individuals and potentially to the community. The harm is realized as business owners have already been confused and warned about possible legal consequences. Therefore, this is an AI Incident due to the direct link between the AI system's outputs and the potential for legal harm to users.

NYC AI Chatbot Will Remain Accessible to the Public Despite Advising Businesses to Break the Law: Mayor Adams

2024-04-05
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system providing algorithmically generated legal advice. Its malfunction—giving advice that encourages breaking the law—directly leads to harm by misleading users, which can cause legal violations and liabilities. This fits the definition of an AI Incident because the AI system's use has directly led to harm (legal and potentially financial harm to users). The article describes realized harm and ongoing risk, not just potential future harm, so it is not merely a hazard. The focus is on the AI system's malfunction and its consequences, not on complementary information or unrelated news.

New York City Defends AI Chatbot That Advised Entrepreneurs to Break Laws

2024-04-04
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The AI system (MyCity chatbot) is explicitly mentioned and is used to provide legal and regulatory advice to business owners. The chatbot has given wrong answers that, if followed, would lead to breaking laws, which is a direct harm to the users' legal rights and could cause financial or legal injury. The harm is realized in the form of misinformation and potential legal violations. The city acknowledges the errors but continues to operate the chatbot, which continues to provide inaccurate information. Therefore, this is an AI Incident due to the direct harm caused by the AI system's outputs.

Faulty AI that told people to break the law defended by New York mayor

2024-04-05
TechRadar
Why's our monitor labelling this an incident or hazard?
The MyCity AI chatbot is an AI system using large language models. It has been reported to give incorrect legal advice encouraging illegal actions such as discrimination and withholding workers' tips, which are violations of law and rights. This misinformation has already been disseminated and could cause harm to business owners and workers, fulfilling the criteria for an AI Incident due to violations of legal obligations and potential harm to people. The mayor's defense and acknowledgment of the problem do not negate the realized harm. Therefore, this event qualifies as an AI Incident.

New York City Mayor backs AI chatbot that shared wrong advice to businesses

2024-04-05
Republic World
Why's our monitor labelling this an incident or hazard?
An AI system (the MyCity chatbot powered by Microsoft's Azure AI) is explicitly involved and is being used to provide legal and regulatory advice to business owners. The chatbot has provided incorrect information that could lead to legal infractions, which constitutes harm to users (potentially injury to their legal standing and business operations). This harm is occurring as business owners are confused and could follow wrong advice, which is a direct consequence of the AI system's outputs. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (confusion and risk of legal violations). The article does not merely discuss potential future harm or general AI news, but actual realized harm from the chatbot's incorrect advice.

New York's AI Chatbot Keeps Getting Facts Wrong, 6 Months and $600,000 After Launch

2024-04-05
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot) is explicitly mentioned and is used to provide regulatory information. Its inaccurate outputs have already misled users on important legal matters, which could cause harm to business owners (e.g., legal trouble, financial loss) if they rely on the chatbot's incorrect answers. This meets the criteria for an AI Incident because the AI's use has directly led to harm through misinformation and potential violations of rights or obligations under applicable law. The harm is realized or ongoing, not merely potential, as the chatbot is widely available and has been used for six months with documented inaccuracies.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-05
CTV News
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot) is explicitly mentioned and is in use providing legal advice to business owners. Its malfunction or inaccurate outputs have directly led to misinformation that could cause harm to users if they follow the wrong advice, such as breaking laws or facing legal consequences. The harm is realized in the form of misleading information and potential legal violations, which fits the definition of an AI Incident. The city acknowledges the problem and is working to fix it, but the harm is ongoing. Therefore, this event is classified as an AI Incident.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-04
ThePrint
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (generative AI based on Microsoft's Azure AI service) that is actively used by entrepreneurs for legal guidance. It has provided incorrect and misleading advice that could lead to violations of labor laws and city regulations, which constitutes harm to individuals and businesses (harm to rights and potential legal harm). The harm is indirect but real, as users relying on the chatbot's advice may face legal consequences. The article documents ongoing issues with the chatbot's outputs causing confusion and potential harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

New York City government chatbot advises businesses to break laws

2024-04-05
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system used by businesses to navigate laws and regulations. It has provided blatantly illegal advice, such as serving cheese partially eaten by rodents and unlawfully taking workers' tips. These outputs directly lead to violations of health, labor, and consumer protection laws, which constitute harm to people and communities. The AI system's malfunction or misuse is central to the incident, and the harm is realized, not just potential. Hence, this is classified as an AI Incident.

AI chat blunder as businesses are told to break the law, city won't take it down

2024-04-04
The US Sun
Why's our monitor labelling this an incident or hazard?
The AI chatbot is explicitly described as an AI system (powered by Microsoft's Azure AI) providing generated text responses. Its use has directly resulted in the dissemination of false legal advice that violates laws, which is a breach of legal obligations and could harm users who follow this advice. The harm is realized and ongoing, as the chatbot remains active and continues to provide misleading information. Therefore, this event meets the criteria for an AI Incident due to violations of legal rights and potential harm to users caused by the AI system's outputs.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-05
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The AI system (MyCity chatbot) is explicitly mentioned and is in active use. Its erroneous advice has created a direct risk of harm by encouraging illegal behavior (e.g., taking workers' tips unlawfully). This constitutes an AI Incident because the AI's use has directly led to a violation of legal obligations and potential harm to individuals (business owners and employees).

NYC Mayor Defends AI System That Tells Business Owners to Commit Wage Theft

2024-04-05
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot powered by Microsoft) is explicitly mentioned and is involved in providing outputs that have directly led to harm by encouraging illegal actions (wage theft, discrimination). This constitutes violations of labor and civil rights, which fall under the category of harm to human rights and breach of legal obligations. The chatbot's malfunction (inaccurate and illegal advice) is the direct cause of this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-05
Daily Maverick
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system using generative AI technology. Its malfunction, providing incorrect legal advice, has already caused confusion and could lead to legal consequences for business owners who rely on it. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm (legal and financial risks) to people. The city is working to mitigate these harms, but the harm is ongoing and realized, not just potential. Therefore, this event is classified as an AI Incident.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system providing legal and regulatory advice to business owners. It has given incorrect information that, if followed, would cause users to break laws, which is a direct harm to individuals and a violation of legal rights. The harm is realized in the sense that business owners have received and may rely on this faulty advice, risking legal consequences. The AI system's malfunction is the root cause of this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people (business owners) through misleading legal advice.

AI News: NYC Mayor Backs AI Chatbot Despite Legal Misadvice

2024-04-05
Coingape
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system designed to provide real-time advice to business owners. It has malfunctioned by giving incorrect legal information, which could cause users to violate laws unknowingly, constituting harm to their legal rights and exposing them to legal risks. The article documents actual instances of misleading advice and the resulting fear among users, indicating realized harm rather than just potential risk. The city's warning to users and the mayor's acknowledgment of the problem confirm the AI system's role in causing this harm. Hence, this event meets the criteria for an AI Incident.

New York City Justifies AI Chatbot's Advice for Business Owners to Break Laws

2024-04-05
EconoTimes
Why's our monitor labelling this an incident or hazard?
The MyCity AI chatbot is explicitly mentioned and is an AI system providing advice to business owners. The chatbot has given incorrect legal advice that, if followed, would cause users to break laws, which is a violation of legal obligations and potentially human rights related to labor laws. Although no direct harm is reported as having occurred yet, the misinformation creates a credible risk of harm. The city's decision to keep the chatbot active despite known errors and criticisms about lack of oversight further supports the classification as an AI Hazard. There is no indication that harm has already materialized, so it is not an AI Incident. The article focuses on the problematic use and potential consequences of the AI system rather than on responses or updates, so it is not Complementary Information. Hence, the event is best classified as an AI Hazard.

AI is telling NY business owners to commit crimes, and the mayor is defending it

2024-04-05
Android Headlines
Why's our monitor labelling this an incident or hazard?
An AI system (MyCity chatbot) is explicitly involved and has been used in a public service context. The chatbot has directly provided incorrect legal advice that could lead to violations of laws and rights, which constitutes harm to individuals and communities. The misinformation could cause real-world harm if followed, fulfilling the criteria for an AI Incident. The mayor's defense and the addition of disclaimers do not negate the fact that harm has occurred or is occurring due to the AI system's outputs.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-04
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The AI system (the MyCity chatbot) is explicitly mentioned and is a generative AI system providing legal and regulatory advice. It has given wrong answers that, if followed, would entail breaking the law, which constitutes a direct link to potential harm to people (business owners) through legal violations. The harm is realized in the sense that business owners have been confused and warned about possible serious legal consequences. The city acknowledges the errors and the chatbot is still in use, but the harm from misinformation is already occurring. This fits the definition of an AI Incident because the AI system's malfunction has directly led to harm (or at least significant risk of harm) to people. The event is not merely a hazard or complementary information, as the harm is ongoing and linked to the AI system's outputs.

NYC Defends AI Chatbot Amid Criticism and Legal Missteps.

2024-04-07
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
An AI system (the chatbot powered by Microsoft Azure AI) is explicitly involved. Its use has directly led to harm in the form of misinformation and legal misguidance to small business owners, which can cause violations of laws and ethical standards, thus constituting harm to individuals and communities. The chatbot's malfunction or poor performance in providing accurate legal advice is central to the incident. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm through misleading and illegal advice, impacting users' rights and legal compliance.

NYC AI Chatbot Debacle Illustrates the Challenges of AI Deployment

2024-04-05
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI chatbot is explicitly mentioned and is clearly an AI system providing advice to users. The chatbot's hallucinations have directly led to users receiving illegal and harmful advice, which constitutes harm to individuals and communities and breaches legal obligations. The event involves the use of the AI system and its malfunction (hallucination). The harm is realized, not just potential, as users have been given wrong and illegal guidance. The presence of disclaimers does not negate the fact that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

NYC defends AI chatbot that advised entrepreneurs to break laws

2024-04-05
chinadailyhk
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system (a generative AI chatbot relying on Microsoft's Azure AI service) that is being used to provide legal and regulatory information to business owners. It has given wrong advice that, if followed, would entail breaking the law (e.g., advising employers they can take a cut of workers' tips, incorrect minimum wage information, and cashless store policies violating city law). This misinformation can cause harm to people (business owners) by exposing them to legal risks and potential penalties, which fits the definition of harm to persons or groups. The AI system's malfunction or inaccurate outputs have directly led to this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

After giving wrong answers, NYC chatbot labeled as 'beta' project

2024-04-03
StateScoop
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system (generative AI) used publicly to provide information. It has been documented to provide false and misleading answers on legal and worker rights topics, which can harm users by causing them to act on incorrect information, thus harming their rights and potentially their well-being. The harm is realized and ongoing, not just potential. The city's decision to keep the chatbot online despite these issues, labeling it as 'beta', does not negate the harm caused. Hence, this is an AI Incident due to direct harm caused by the AI system's outputs.

New York City defends AI chatbot that advised entrepreneurs to break laws

2024-04-06
telecomlive.com
Why's our monitor labelling this an incident or hazard?
The MyCity chatbot is an AI system deployed by New York City to provide information to entrepreneurs. It has given wrong advice that, if followed, would cause users to break laws. This is a direct link between the AI system's outputs and potential legal violations, which is a harm under the definition of AI Incident (violation of applicable law). The harm is occurring, as the chatbot is actively giving such advice. Therefore, this event qualifies as an AI Incident.

NYC Defends AI Chatbot Amid Criticism and Legal Missteps.

2024-04-07
CryptoRank
Why's our monitor labelling this an incident or hazard?
An AI system (the chatbot powered by Microsoft Azure AI) is explicitly involved. Its use has directly led to harm by providing illegal and misleading advice to small business owners, which can cause legal and ethical violations, thus harming individuals and communities. The harm is realized, not just potential, as the chatbot's advice has already misled users. The event involves malfunction and misuse of the AI system in a public sector context, leading to violations of legal and ethical standards. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.