AI Legal Service DoNotPay Sued for Unauthorized Practice and Substandard Legal Documents


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

DoNotPay, an AI-powered legal chatbot, faces a class-action lawsuit alleging it practiced law without a license and provided substandard legal documents, causing harm to users. Plaintiffs claim the AI system misrepresented itself as a lawyer, resulting in ineffective legal assistance and potential violations of users' legal rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

DoNotPay is an AI system that generates legal documents and advice. The lawsuit claims it practices law without a license and provides substandard legal documents, which harms users who rely on it for legal representation or advice. This harm to users' legal rights and interests is a violation of applicable law protecting fundamental rights. The AI system's use is central to the harm, as it is the tool providing the unauthorized legal services. Hence, this event qualifies as an AI Incident due to indirect harm caused by the AI system's unauthorized and potentially harmful use.[AI generated]
AI principles
Accountability; Transparency & explainability; Safety; Respect of human rights; Human wellbeing; Robustness & digital security

Industries
Consumer services; Government, security, and defence

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Economic/Property

Severity
AI incident

Business function
Compliance and justice; Citizen/customer service

AI system task
Interaction support/chatbots; Content generation

In other databases

Articles about this incident or hazard


'Robot lawyer' DoNotPay is being sued by a law firm because it 'does not have a law degree'

2023-03-12
Business Insider
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system that generates legal documents and advice. The lawsuit claims it practices law without a license and provides substandard legal documents, which harms users who rely on it for legal representation or advice. This harm to users' legal rights and interests is a violation of applicable law protecting fundamental rights. The AI system's use is central to the harm, as it is the tool providing the unauthorized legal services. Hence, this event qualifies as an AI Incident due to indirect harm caused by the AI system's unauthorized and potentially harmful use.

Lawsuit pits class action firm against 'robot lawyer' DoNotPay

2023-03-09
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal services, and the lawsuit alleges that its use has caused harm to users by delivering substandard legal documents and advice, constituting unauthorized practice of law and unfair competition. This fits the definition of an AI Incident because the AI system's use has directly led to harm involving violations of legal rights and consumer protection laws. The event is not merely a general news or policy update but involves realized harm and legal claims against the AI system's operation.

'Robot lawyer' DoNotPay is being sued by a law firm because it 'does not have a law degree'

2023-03-12
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system that generates legal documents and advice, thus fitting the definition of an AI system. The lawsuit claims that the AI system is practicing law without a license, leading to substandard legal outcomes for users, which constitutes a violation of legal obligations and potentially harms users relying on its outputs. This harm is directly linked to the use of the AI system, fulfilling the criteria for an AI Incident. The event is not merely a legal dispute or general news but involves realized harm from the AI system's use, justifying classification as an AI Incident.

"Robot Lawyer" Faces Lawsuit For Practicing Law Without A License In US

2023-03-12
NDTV
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI-powered legal service platform that provides legal assistance such as drafting demand letters and court filings. The lawsuit claims that DoNotPay lacks a law license and has provided substandard legal services, which is a violation of legal and consumer rights. This constitutes a breach of obligations under applicable law intended to protect fundamental and consumer rights, fitting the definition of an AI Incident. The harm is realized as consumers have received poor legal services and the company is accused of unauthorized practice of law.

Class-action suit seeks redress from 'robot lawyer' practicing law without license

2023-03-09
CBS News
Why's our monitor labelling this an incident or hazard?
The AI system DoNotPay is explicitly mentioned as providing legal services through AI and automated processes. The lawsuit claims that the service provided substandard and poorly done legal documents, which harmed users who relied on them. This constitutes harm to persons and a violation of legal rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's use is central to the incident. Therefore, this event is classified as an AI Incident.

The Rise of the Robot Lawyer? DoNotPay's Legal AI Faces Several Challenges

2023-03-10
Lexology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DoNotPay's AI legal assistant) whose use was intended to directly influence legal proceedings by guiding a litigant in court. Although the AI did not actually argue a case due to legal pushback, the article details the direct involvement of the AI system in a scenario that could have led to harm such as unauthorized practice of law, ethical violations, and potential legal malpractice. Since the AI system's use was planned but postponed before actual harm occurred, and the article focuses on the challenges and risks rather than a realized harm, this constitutes an AI Hazard rather than an AI Incident. The article also provides broader context on legal and ethical issues, but the primary focus is on the plausible future harm from the AI system's intended use in court.

AI Lawyer DoNotPay Getting Sued for Offering Legal Services Without License

2023-03-12
Tech Times
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal services, which is explicitly stated. The lawsuit claims that users received substandard legal documents and advice, which constitutes harm to their legal rights and interests. The AI system's use in this context has directly led to this harm. The event is not merely a complaint or potential risk but an actual legal claim of harm caused by the AI system's outputs. Hence, it qualifies as an AI Incident under the framework, specifically under violations of legal rights and harm to individuals relying on AI-generated legal services.

You Know What Doesn't Have Diploma Privilege? AI

2023-03-10
Above the Law
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal services such as drafting legal documents. The lawsuit alleges that the AI's outputs were substandard and caused harm to users by failing to deliver effective legal assistance, which is a direct harm to individuals relying on the system. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (legal and practical harm to users) and potential violations of legal rights (unauthorized practice of law).

DoNotPay doesn't live up to its billing as a 'robot lawyer,' offers 'substandard' legal docs, suit claims

2023-03-10
ABA Journal - Law News Now
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI-powered legal chatbot that generates legal documents and advice. The lawsuit alleges that its use has directly harmed customers by providing substandard and inaccurate legal documents, which caused practical harm (e.g., undelivered demand letters, unusable documents). The harm is linked to the AI system's use and its outputs, fulfilling the criteria for an AI Incident due to violation of legal rights and harm to individuals. The event is not merely a potential risk or a complementary update but a concrete incident of harm caused by the AI system's outputs.

'Robot lawyer' that never got court day sued for practicing without license

2023-03-13
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DoNotPay's chatbot) used to provide legal services, which is an AI system by definition as it generates legal advice and documents. The lawsuit alleges that the AI system's use caused harm to users by providing poor legal assistance and continuing to charge fees improperly, constituting direct harm to persons and violations of legal rights. The AI system's development and use are central to the incident, and the harm has materialized, making this an AI Incident rather than a hazard or complementary information.

Law Firm Sues 'World's First Robot Lawyer' For Not Having A Degree

2023-03-13
IndiaTimes
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal services without proper licensing or supervision by qualified lawyers, which has directly led to harm for users who received substandard legal documents. The lawsuit alleges that the AI system's outputs were not competent legal advice, causing harm to users' legal rights and interests. This fits the definition of an AI Incident as the AI system's use has directly led to violations of legal obligations and harm to individuals relying on it.

World's first robot LAWYER is being sued by a law firm

2023-03-14
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system DoNotPay is explicitly mentioned as providing legal advice and generating legal documents. The lawsuit alleges that the system operates without proper legal qualifications and supervision, leading to actual harm to users (e.g., increased fines, incorrect legal arguments). This constitutes harm to individuals (a form of harm to people) and a violation of legal practice norms, which can be considered a breach of obligations under applicable law protecting professional and consumer rights. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. Although the founder disputes the claims, the reported consequences and legal action indicate realized harm rather than just potential risk.

World's First Robot Lawyer Sued by a Law Firm for Lack of Law Degree

2023-03-14
INQUIRER.net USA
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system designed to provide legal information and assistance, which fits the definition of an AI system. The lawsuit alleges that the AI system's use has directly led to harm, specifically substandard legal results and unauthorized practice of law, which is a breach of legal and professional rights. Therefore, this event qualifies as an AI Incident due to violations of applicable law and harm to users relying on the system for legal services.

DoNotPay, the 'Robot Lawyer' Is Being Sued

2023-03-13
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DoNotPay as an AI-powered 'robot lawyer' using AI (including OpenAI's ChatGPT) to generate legal documents and assist customers. The lawsuit alleges that the AI system's outputs were substandard and misleading, causing financial harm to users and violating legal standards. This harm is direct and realized, not merely potential. The AI system's role is pivotal as the product's AI-generated legal assistance is central to the claims of harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

'Robot lawyer' company faces class-action lawsuit over allegedly 'substandard,' AI-generated documents: 'An otherwise-blank piece of paper with his name printed on it'

2023-03-15
TheBlaze
Why's our monitor labelling this an incident or hazard?
DoNotPay uses AI to generate legal documents, which is explicitly mentioned. The lawsuit claims these documents are substandard and have caused harm to users by providing ineffective or blank legal documents, which can be seen as a violation of users' rights and potentially harmful outcomes. The AI system's use is central to the harm alleged, fulfilling the criteria for an AI Incident. Although the company denies the allegations, the event describes realized harm linked to the AI system's outputs, not just potential harm or general AI-related news. Hence, it is classified as an AI Incident.

Robot lawyer DoNotPay accused of practicing law without licence

2023-03-13
Firstpost
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal services, which fits the definition of an AI system. The lawsuit alleges that the AI system's use has directly led to harm—users received substandard legal documents and were misled about the qualifications behind the service, constituting a violation of legal rights and consumer harm. The involvement of AI in providing these services and the resulting legal complaint about unauthorized practice and harm to users meets the criteria for an AI Incident. The event is not merely a hazard or complementary information, as actual harm and legal action have occurred.

'Robot lawyer' that never got court day sued for practicing without license

2023-03-13
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay's chatbot) was used to provide legal advice and document drafting, which is an AI system performing complex decision-making and content generation. The lawsuit alleges that the AI's outputs caused harm to users, such as increased fines and continued charges despite account cancellation, which are direct harms to individuals and violations of legal rights. The involvement of the AI system in causing these harms qualifies this event as an AI Incident under the framework, as the AI's use directly led to harm to persons and legal violations.

'Robot lawyer' DoNotPay not fit for purpose, says complaint

2023-03-13
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article describes a lawsuit against DoNotPay, an AI system marketed as a 'robot lawyer,' for providing inadequate legal services that caused harm to customers. The AI system's outputs (legal documents and filings) were faulty or undelivered, leading to potential legal and financial harm. This constitutes an AI Incident because the AI system's use directly led to harm (violation of rights and financial loss). The involvement of the AI system is explicit, and the harms are realized, not merely potential. Therefore, this event fits the definition of an AI Incident.

Can A Robot Lawyer Defend Itself Against Class Action Lawsuit For Unauthorized Practice Of Law

2023-03-13
Techdirt
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (DoNotPay's "robot lawyer") used in the provision of legal services. The lawsuit and complaint detail actual harms caused by the AI system's outputs and services, including legal and financial harm to users, which constitutes violations of legal rights and consumer protection laws. The AI system's malfunction or misuse (providing substandard legal documents and failing to perform as promised) has directly led to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly caused harm to persons and breaches of legal obligations.

Human man sues 'robot lawyer' company DoNotPay over 'substandard' legal documents

2023-03-15
Law & Crime
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DoNotPay's AI-powered legal document generation and 'robot lawyer' services). The harm arises from the use of this AI system to provide unauthorized legal services that are substandard and have caused real harm to consumers, such as ineffective legal documents and potential loss of legal rights due to delays. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to persons and violations of legal protections. The lawsuit and the described harms confirm that the AI system's involvement is material and harmful, not merely a potential risk or complementary information.

World's First Robot Lawyer Under Attack: AI-Powered Defendant 'DoNotPlay' Sued by Firm Due to Lack of Law Degree

2023-03-15
Science Times
Why's our monitor labelling this an incident or hazard?
The AI system DoNotPay (misspelled 'DoNotPlay' in the headline) is explicitly described as an AI-powered legal assistant providing legal documents and advice. Its use has directly led to harm: users received substandard legal documents, faced financial losses, and were misled into believing they were receiving competent legal services. The lawsuit highlights violations of legal practice laws and consumer harm, which fall under violations of applicable law and harm to individuals. The AI system's development and use are central to the incident, meeting the criteria for an AI Incident rather than a hazard or complementary information.

This Robot Lawyer Is Being Sued By A Law Firm For Not Having A Law Degree - Wonderful Engineering

2023-03-13
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DoNotPay) used for legal advice and document drafting, which falls under AI system involvement. The lawsuit alleges unauthorized practice of law, which is a violation of legal frameworks and could lead to harm to consumers if unlicensed legal advice is given. However, since no actual harm or injury has been reported or confirmed, and the event centers on the potential legal and regulatory issues, this constitutes an AI Hazard rather than an AI Incident. The article highlights plausible future harm and regulatory challenges but does not document realized harm.

Setback for the start-up DoNotPay: the lawyer-robot is sued for not having a university degree

2023-03-15
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DoNotPay chatbot) used for legal services. The lawsuit claims that the AI's use has caused harm by providing deficient legal documents and unauthorized legal practice, which can be seen as a violation of legal rights and consumer protection. This constitutes an AI Incident because the AI system's use has directly led to harm (deficient legal assistance and potential legal violations).

In the Dock: The Robot Lawyer Faces a Lawsuit, and Here's Why

2023-03-15
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DoNotPay chatbot) whose use is alleged to have caused harm to users through inadequate legal advice, which can be considered a violation of legal and professional standards potentially harming individuals relying on it. This constitutes an AI Incident because the AI system's use has directly led to harm (poor legal advice causing negative results) and legal action is underway.

After Deceiving Its Clients, the World's First Robot Lawyer Is Sued

2023-03-17
JawharaFM (Jawhara FM)
Why's our monitor labelling this an incident or hazard?
The AI system (robot lawyer) is explicitly described as providing legal advice and participating in court proceedings. The harm has materialized as clients received substandard advice leading to adverse results, which is a direct harm to individuals' legal rights and interests. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to persons).

US Company Sues the World's First 'Robot Lawyer'; Daily Mail Reveals the Details - Youm7

2023-03-16
Youm7
Why's our monitor labelling this an incident or hazard?
The AI system DoNotPay is explicitly mentioned as providing legal advice and services without a license, which is illegal and harmful to users who relied on it. The harm includes users receiving substandard legal advice leading to adverse results, which is a direct consequence of the AI system's use. The event involves the use of an AI system and the resulting legal and consumer harm, fitting the definition of an AI Incident rather than a hazard or complementary information. The lawsuit and user complaints confirm realized harm rather than potential harm.

A Joy Cut Short: US Company Sues the World's First 'Robot Lawyer' (Video) - Youm7

2023-03-16
Youm7
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay) is explicitly described as an AI-powered legal assistant providing legal advice and services. The lawsuit alleges that the AI's outputs were substandard and caused harm to users, which constitutes harm to individuals relying on its advice. Additionally, the AI system is accused of operating without proper legal authorization, which is a violation of applicable law protecting professional rights. These factors meet the criteria for an AI Incident because the AI system's use has directly led to harm and legal violations.

For Practicing the Profession Without a License, the First Robot Lawyer Faces Jail - Hibapress

2023-03-13
Hibapress
Why's our monitor labelling this an incident or hazard?
The AI system (robot lawyer) is explicitly mentioned and is alleged to have caused harm by practicing law without a license, which constitutes a violation of legal rights and professional regulations. This fits the definition of an AI Incident because the AI's use has directly led to a legal dispute involving harm to rights and legal obligations. The event is not merely a potential risk but an ongoing legal incident with claims of harm and misuse of the AI system.

Practicing the Profession Without a License: The World's First Robot Lawyer Is Sued

2023-03-16
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay chatbot) is explicitly described as providing legal services without proper licensing, which is a misuse of AI in a regulated professional domain. The harm includes violation of legal practice regulations and harm to clients due to poor advice, fulfilling the criteria for an AI Incident involving violation of obligations under applicable law and harm to individuals. Therefore, this qualifies as an AI Incident.

Lawsuit Filed Against the World's First Robot Lawyer

2023-03-16
Al Bayan
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay) is explicitly mentioned as providing legal advice without proper licensing, leading to client complaints about inadequate and harmful advice. This constitutes a violation of legal professional standards and potentially harms users relying on the AI's outputs. The event describes actual harm caused by the AI system's use, meeting the criteria for an AI Incident due to violations of legal and professional obligations and harm to users.

The First Robot Lawyer Faces Justice

2023-03-16
Al Bayan
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay chatbot) is explicitly mentioned and is used in a legal advisory role. The lawsuit alleges that the AI's use without proper certification is illegal and that its advice caused harm to clients, which constitutes a violation of legal and professional standards, thus causing harm to individuals relying on it. This fits the definition of an AI Incident because the AI system's use has directly led to harm (unsatisfactory legal advice and negative consequences for clients).

The World's First Robot Lawyer on Trial for Practicing the Profession Without a License

2023-03-16
Al-Sharq
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay chatbot) is directly involved in providing legal services without proper licensing, which is a violation of legal professional regulations. The reported harm includes users receiving inadequate or harmful legal advice, which can be considered harm to individuals (harm to persons) and a violation of legal rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm and legal violations.

Believe It or Not: The First Robot Before the 'Court' - Variety

2023-03-17
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay) is explicitly mentioned as providing legal services, which is a use of AI. The lawsuit alleges that the AI's outputs caused harm to users by providing inadequate or harmful legal advice, which is a direct harm to individuals relying on the system. This fits the definition of an AI Incident because the AI system's use has directly led to harm (legal and possibly financial harm to users) and breaches legal obligations regarding professional licensing and practice. Therefore, this event is classified as an AI Incident.

Lawsuit Against the World's First Robot Lawyer

2023-03-16
Alrai-media
Why's our monitor labelling this an incident or hazard?
The AI system (robot lawyer) is explicitly described as using AI to provide legal advice and participate in court proceedings. The lawsuit alleges that the AI's outputs caused harm by providing inadequate legal guidance, which is a direct harm to users (harm to persons). The event involves the use of an AI system and the harm caused by its outputs, meeting the criteria for an AI Incident. The legal challenge also highlights violations of professional licensing laws, which relate to breach of obligations under applicable law protecting professional rights. Hence, this is not merely a potential hazard or complementary information but a realized incident involving harm and legal violations.

The First Robot Lawyer Faces Jail for Practicing the Profession Without a License

2023-03-13
almodon
Why's our monitor labelling this an incident or hazard?
The robot lawyer is an AI system providing legal services. The lawsuit alleges that it practiced law without a license and produced poor-quality legal documents, which harmed users. This is a direct harm linked to the AI system's use, violating legal frameworks protecting professional rights. The event involves realized harm and legal violations caused by the AI system's use, fitting the definition of an AI Incident rather than a hazard or complementary information.

The World's First Robot Lawyer Is Sued... 'It Has No Law Degree!'

2023-03-16
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the AI legal chatbot 'DoNotPay') whose use has directly led to harm—clients receiving poor legal advice resulting in adverse outcomes. The lawsuit alleges unauthorized practice of law, which is a violation of legal obligations and professional rights. The AI system's role is pivotal as it provided the contested services. Hence, this is an AI Incident rather than a hazard or complementary information.

A Robot Worked as a Lawyer Without a License and May Go to Jail

2023-03-13
Al-Bashayer
Why's our monitor labelling this an incident or hazard?
The DoNotPay robot lawyer is an AI system providing legal services autonomously. The lawsuit alleges it practiced law without a license, which is a breach of legal obligations and professional regulations, constituting a violation of applicable law intended to protect fundamental and professional rights. The AI system's use has directly led to harm by providing substandard legal documents and unauthorized legal advice, which can harm users. Therefore, this event meets the criteria for an AI Incident due to the AI system's use causing legal violations and harm to users.

Lawsuit Against the World's First Robot Lawyer

2023-03-16
Al-Watan
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as it provides legal services, but the event centers on a legal complaint about unauthorized practice rather than a realized harm or direct threat. There is no indication of injury, rights violation, or other harm caused by the AI system's use. Therefore, this is not an AI Incident or AI Hazard but rather a legal/governance response to the AI system's deployment, fitting the definition of Complementary Information.

The First Robot Lawyer Faces Jail for Practicing the Profession Without a License

2023-03-14
Al-Baath
Why's our monitor labelling this an incident or hazard?
The AI system (robot lawyer) is explicitly mentioned and is being used to provide legal services. The lawsuit alleges unauthorized practice of law, which constitutes a violation of legal frameworks intended to regulate professional conduct. This is a clear example of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law protecting fundamental rights related to professional licensing. Although no physical harm is reported, the legal violation and potential harm to the legal profession's integrity qualify as an AI Incident under the framework.

Lawsuit Against the World's First Robot Lawyer

2023-03-16
El Waha (news outlet, southern Algeria)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DoNotPay) functioning as a legal advisor, which is an AI system by definition. The lawsuit alleges that the AI's outputs caused harm by providing substandard legal advice, which led to negative consequences for clients. This is a direct harm to individuals (harm to persons) and a violation of legal practice regulations (a breach of applicable law). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm and legal violations.

Lawsuit Against the World's First Robot Lawyer | Tawasul News

2023-03-16
Tawasul
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay) is explicitly described as providing legal advice and courtroom assistance, which is the use of an AI system. The lawsuit alleges that the AI system operates without proper legal certification and that its advice caused negative outcomes for users, indicating realized harm to individuals and a breach of legal obligations. This fits the definition of an AI Incident as the AI system's use has directly led to harm and violations of applicable law protecting professional and intellectual property rights.

The World's First Robot Lawyer Is Sued

2023-03-15
Akhbar El-Youm Gateway
Why's our monitor labelling this an incident or hazard?
The AI system (DoNotPay) is explicitly mentioned as providing legal assistance and is being legally challenged for unauthorized practice of law. This involves the use of an AI system in a way that allegedly violates legal regulations protecting professional and intellectual property rights. Although no direct physical harm is reported, the incident involves a violation of legal frameworks and professional standards, which fits the definition of an AI Incident under violations of applicable law and rights. Therefore, this event qualifies as an AI Incident.

A Lawyer With No University Degree: The World's First Case Against an Intelligent Robot

2023-03-15
Masrawy.com
Why's our monitor labelling this an incident or hazard?
The AI system DoNotPay is explicitly mentioned and is used in a legal advisory capacity. The lawsuit and client complaints indicate that the AI's use has directly caused harm to users by providing inadequate legal help, which can be considered harm to individuals and a violation of legal rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm and legal violations.

Lawsuit filed against the world's first robot lawyer

2023-03-15
صحيفة الاقتصادية
Why's our monitor labelling this an incident or hazard?
The AI system (Donotpay) is explicitly mentioned as providing legal assistance, which is an AI system performing complex decision-making and content generation. The lawsuit alleges that the AI system is operating without proper legal certification and that its use has caused harm to clients by providing poor-quality legal help, worsening their situations. This constitutes direct harm to persons and a violation of legal rights, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's use.

First robot lawyer faces jail and suspension on charges of working without a license

2023-03-16
صحيفة صدى الالكترونية
Why's our monitor labelling this an incident or hazard?
The AI system (robot lawyer) is explicitly mentioned and is central to the event. The event involves the use of the AI system in a legal context without proper licensing, which is a breach of legal obligations protecting professional rights. This fits the definition of an AI Incident as it involves a violation of applicable law intended to protect fundamental and professional rights. The event is not merely a potential risk but an actual legal case alleging harm through unauthorized practice, thus qualifying as an AI Incident rather than a hazard or complementary information.

Can you believe it? A lawsuit against the world's first robot

2023-03-16
فلسطين أون لاين
Why's our monitor labelling this an incident or hazard?
The event describes a legal challenge against an AI system (DoNotPay) for operating without a license and providing legal advice, which is a governance and regulatory issue. There is no report of actual harm caused by the AI system to individuals or groups, nor disruption or violation of rights as defined under AI Incidents. The main focus is on the legal proceedings and regulatory scrutiny, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI use rather than reporting a new harm or plausible future harm.

Setback for the startup DoNotPay: the robot lawyer is sued for not having a university degree

2023-03-15
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DoNotPay chatbot) used for legal services. The system's use has directly led to harm, as users received deficient legal documents and potentially unauthorized legal practice, which can harm users' legal rights and interests. The legal action and complaints indicate realized harm, not just potential. Hence, this is an AI Incident due to violation of legal practice norms and harm to users relying on the AI for legal assistance.

DoNotPay, the 'robot lawyer', sued for providing "unauthorized legal services"

2023-03-14
20 minutos
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal services through automated document generation and advice. The lawsuit alleges that the system's use has caused harm by misleading consumers and violating laws against unauthorized legal practice. This constitutes a breach of obligations under applicable law protecting professional and consumer rights, fitting the definition of an AI Incident. The harm is realized through legal and consumer rights violations, not merely potential or hypothetical, so it is not a hazard or complementary information.

A robot lawyer that uses AI is being sued for not having a law degree

2023-03-14
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal advice through a chatbot interface. The lawsuit alleges that its use constitutes unauthorized practice of law, violating legal frameworks intended to protect fundamental and professional rights. This is a direct legal harm linked to the AI system's use, fitting the definition of an AI Incident under violations of human rights or breach of applicable law.

First AI-powered robot lawyer sued for not having a law degree

2023-03-14
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
The AI system DoNotPay is explicitly mentioned and is central to the event. The lawsuit is based on the AI system's use without proper legal credentials, which is a regulatory and legal issue. However, the article does not report any actual harm caused by the AI system, only a legal challenge to its operation. Therefore, this event represents a plausible risk or regulatory hazard rather than an incident with realized harm. It is best classified as an AI Hazard because the use of an unlicensed AI legal assistant could plausibly lead to harm or legal violations in the future, but no such harm is reported yet.

DoNotPay's 'AI lawyer' sued for unauthorized professional practice after being barred from appearing in court

2023-03-14
Genbeta
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (DoNotPay's AI-powered legal chatbot) in providing legal services, which has led to a lawsuit accusing the company of unauthorized practice of law and misleading consumers. The AI system's involvement has indirectly caused harm to users by providing substandard legal assistance, violating consumer rights and legal protections. The presence of harm (legal and consumer rights violations) linked to the AI system's use meets the criteria for an AI Incident rather than a hazard or complementary information.

A 'robot lawyer' is sued for not having a law degree

2023-03-17
Red Uno
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal advice, which is the use of an AI system. The lawsuit alleges it is practicing law without a license, a violation of legal obligations protecting professional rights. This constitutes a violation of applicable law (c) as defined in the framework. Since the lawsuit is active and the AI system is currently providing services, this is a realized issue, not just a potential hazard. Therefore, this qualifies as an AI Incident due to violation of legal obligations related to professional licensing. There is no indication of physical harm or other types of harm, but the legal violation is sufficient for classification as an AI Incident.

After deceiving clients, the world's first robot lawyer faces a lawsuit

2023-03-15
Aljazeera
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the AI 'robot lawyer') whose use has directly led to harm in the form of misleading customers and violating legal rights. The lawsuit alleges that the AI system was falsely represented as a qualified legal advisor, which constitutes a violation of legal obligations and consumer rights. This is a direct harm caused by the AI system's use and the company's misrepresentation, fitting the definition of an AI Incident rather than a hazard or complementary information.

World's first robot lawyer appears before the courts on charges of misleading clients

2023-03-17
موقع قناة المنار - لبنان
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the AI-powered legal assistant) whose use has directly led to harm in the form of misleading clients and legal violations. The lawsuit alleges that the company falsely represented the AI as a qualified legal advisor, which is a breach of legal and consumer rights. This meets the criteria for an AI Incident because the AI system's use has directly caused harm related to violations of legal rights and obligations. The presence of a lawsuit and formal legal complaints confirms that harm has materialized, not just a potential risk.

The world's first "robot lawyer"... a "fraudster"

2023-03-16
An-Nahar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('robot lawyer') whose use has directly led to harm by misleading customers into believing they were receiving legitimate legal advice, which they were not. This misrepresentation can cause legal and financial harm to clients, constituting a violation of rights and harm to individuals. The AI system's development and use are central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

World's first robot lawyer faces a lawsuit

2023-03-16
الرسالة نت
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('DoNotPay' robot lawyer) whose development and use have directly led to harm in the form of misleading customers and potential legal rights violations. The AI system's role is pivotal as it is the basis for the claims and the lawsuit. The harm is realized, not just potential, as customers were misled and may have suffered from inadequate legal assistance. Hence, this is an AI Incident rather than a hazard or complementary information.

After deceiving clients, the world's first robot lawyer faces a lawsuit

2023-03-17
الانتباهة أون لاين
Why's our monitor labelling this an incident or hazard?
The article describes a legal challenge against a company using an AI system, but the harm alleged is about misleading advertising rather than harm caused by the AI system's operation or outputs. There is no evidence of injury, rights violations, or other harms directly or indirectly caused by the AI system. The event is about legal and societal responses to AI deployment and claims, which fits the definition of Complementary Information rather than an Incident or Hazard.

World's first robot lawyer sued

2023-03-18
Mustaqbal Web
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'robot lawyer') whose use has directly led to harm by misleading customers about the nature and quality of legal advice, which constitutes a violation of rights and consumer harm. The lawsuit alleges that the AI system's outputs are not legally valid and that the company misrepresented the system's capabilities, leading to harm. This fits the definition of an AI Incident because the AI system's use has directly caused harm to people (customers) through misinformation and potential legal detriment.

First robot lawyer appears in court as a defendant!

2023-03-18
MEO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the robot lawyer) whose use has directly caused harm to customers by providing misleading and unlicensed legal advice, leading to negative consequences for those customers. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (harm to individuals relying on legal advice). The legal complaint and the described consequences confirm realized harm rather than just potential harm. Hence, the classification is AI Incident.

Robot lawyer sued in the US: it has no law degree

2023-03-15
TRT haber
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system (a chatbot providing legal advice). The lawsuit claims that its use has led to harm by misleading customers into believing they receive qualified legal services when they do not, which can be considered a violation of legal and consumer rights. This constitutes a violation of obligations under applicable law protecting consumer and professional standards, fitting the definition of an AI Incident. Therefore, this event is classified as an AI Incident due to the direct harm caused by the AI system's use in legal advice without proper qualifications or oversight.

Robot lawyer sued: "It has no attorney's license"

2023-03-12
tamindir.com
Why's our monitor labelling this an incident or hazard?
The article describes a lawsuit against DoNotPay, an AI legal assistant, for allegedly providing unauthorized legal advice. While the AI system is clearly involved, there is no indication that its use has directly or indirectly caused harm as defined by the framework (e.g., injury, rights violations, or community harm). The event is about the legal challenge and regulatory concerns, which are responses to the AI's deployment rather than evidence of harm. Hence, it fits best as Complementary Information, providing context on governance and societal responses to AI use in legal services.

Licensing lawsuit against the robot lawyer! Taken to court by a law firm

2023-03-16
Akşam
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal consultancy without proper licensing, which has led to a lawsuit due to its unauthorized practice of law and misleading advice. The AI's involvement in giving legal advice without oversight has directly caused harm by misleading users and violating legal frameworks. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm related to legal rights and consumer protection.

Law firm sues the 'robot lawyer': it has no degree

2023-03-14
www.gercekgundem.com
Why's our monitor labelling this an incident or hazard?
DoNotPay is an AI system providing legal advice through a chatbot interface. The lawsuit alleges that it operates without proper licensing and provides inadequate legal counsel, which has harmed users by misguiding them legally. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm, specifically a violation of legal and professional rights and obligations. The harm is realized as at least one user received insufficient legal service, justifying classification as an AI Incident rather than a hazard or complementary information.

Robot lawyer sued in the US

2023-03-15
Yeni Alanya Gazetesi
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (DoNotPay) providing legal services and the legal dispute over its unauthorized practice of law. However, there is no indication that the AI system has caused direct or indirect harm such as injury, rights violations, or other harms defined under AI Incident. The event is about a legal challenge and regulatory compliance, not about realized or imminent harm. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it is best classified as Complementary Information because it provides context on societal and legal responses to AI systems in the legal domain.

Robot lawyer sued in the US

2023-03-15
Halkın Sesi Gazetesi KKTC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a chatbot providing legal advice) whose use is being legally contested due to unauthorized practice and misleading claims. This relates to violations of legal and consumer rights, which fall under violations of applicable law protecting fundamental and intellectual property rights. Since the AI system's use has directly led to legal complaints and potential harm to consumers (misleading them about legal services), this qualifies as an AI Incident under the framework.

Indore News: Professor and students jointly build special robot "Eklavya the Sniper", capable of surgical strikes

2023-03-16
News18 India
Why's our monitor labelling this an incident or hazard?
The robot described is an AI-enabled system with autonomous or semi-autonomous capabilities for targeting and surveillance in military operations. Its development and potential use in surgical strikes imply a direct link to possible harm to persons and property, fulfilling the criteria for an AI Incident. The article indicates the robot is already developed and capable of use, not just a theoretical hazard. Therefore, this event qualifies as an AI Incident due to the direct potential for harm through its use in military operations.

World's first 'robot lawyer' becomes the accused... now it will face a case in court

2023-03-17
hindi
Why's our monitor labelling this an incident or hazard?
The AI system (robot lawyer) is clearly involved and its use is under legal scrutiny. However, there is no indication that the AI system has directly or indirectly caused harm as defined by the AI Incident criteria. The event centers on a legal challenge and potential regulatory consequences, which constitute a governance or societal response to AI use rather than an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context on legal and governance issues related to AI without describing realized or plausible harm.

World's first AI-equipped 'robot lawyer' itself becomes the accused, standing in the dock before its court appearance

2023-03-16
Hindustan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (robot lawyer) involved in unauthorized legal practice, leading to a lawsuit alleging harm through provision of poor-quality legal documents. This is a direct consequence of the AI system's use without proper credentials, causing legal and professional harm. Therefore, it meets the criteria for an AI Incident due to violation of legal obligations and potential harm to clients relying on the AI's outputs.

World's first robot lawyer itself in the dock, law firm files suit in court

2023-03-16
Patrika News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('robot lawyer') being used to provide legal services without proper licensing or qualifications, leading to a lawsuit. This involves the use of an AI system causing harm through substandard legal documents and unauthorized practice, which fits the definition of an AI Incident due to violation of legal obligations and potential harm to users. Therefore, this event qualifies as an AI Incident.

First Robot Advocate: The country's first robot lawyer has itself become the accused; here are the legal twists and turns

2023-03-16
Good News Today
Why's our monitor labelling this an incident or hazard?
The robot lawyer is an AI system providing legal advice, and its use has led to a legal dispute about unauthorized practice of law. This is a governance and regulatory issue concerning AI deployment and compliance with legal frameworks. There is no reported harm to individuals or communities, nor disruption or violation of rights caused by the AI system itself. Therefore, this event is best classified as Complementary Information, as it provides important context on societal and legal responses to AI use in law, rather than describing an AI Incident or AI Hazard.