Tesla's FSD 'Mad Max' Mode Promotes Aggressive, Law-Breaking AI Driving

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla released a new 'Mad Max' mode for its Full Self-Driving (FSD) AI system, enabling vehicles to drive aggressively, frequently exceed speed limits, and perform risky maneuvers. This has led to regulatory investigations and at least one wrongful death lawsuit, raising significant safety and legal concerns in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system, Tesla's Full Self-Driving (FSD) system, which is an autonomous driving AI system. The use of the 'Mad Max' mode has directly led to unsafe driving behaviors that pose a risk to human safety, fulfilling the criteria for harm to persons. The ongoing investigation and reported crashes further confirm that harm has occurred or is occurring. Therefore, this qualifies as an AI Incident due to the AI system's use leading to direct harm and safety violations.[AI generated]
AI principles
Safety
Robustness & digital security
Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers
General public

Harm types
Physical (death)

Severity
AI incident

Business function:
Manufacturing

AI system task:
Goal-driven organisation


Articles about this incident or hazard

Tesla reintroduces 'Mad Max' Full Self-Driving mode that breaks speed limits

2025-10-16
engadget
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, Tesla's Full Self-Driving (FSD) system, which is an autonomous driving AI system. The use of the 'Mad Max' mode has directly led to unsafe driving behaviors that pose a risk to human safety, fulfilling the criteria for harm to persons. The ongoing investigation and reported crashes further confirm that harm has occurred or is occurring. Therefore, this qualifies as an AI Incident due to the AI system's use leading to direct harm and safety violations.

Tesla finally outs truly aggressive FSD driving mode that it calls Mad Max

2025-10-16
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system controlling vehicle behavior. The new Mad Max mode changes the AI's driving style to be more aggressive, which could plausibly increase the risk of accidents or harm. Since no actual harm or incident is reported, but the update introduces a credible risk of future harm, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the new AI feature and its potential implications rather than describing any realized harm.

Tesla brings back 'Mad Max' 'Full Self-Driving' mode that ignores speed limits

2025-10-16
Electrek
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system involved in autonomous driving decisions. The reintroduction of a mode that encourages speeding and rolling stop signs directly increases the risk of injury or harm to persons, fulfilling the criteria for an AI Incident. The article reports actual use of the system with these risky behaviors, and ongoing investigations and lawsuits indicate realized or ongoing harm or risk. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and potential or actual harm to human health and safety.

Tesla's Controversial 'Mad Max' Full Self-Driving Mode Is Back, For Some Inexplicable Reason

2025-10-16
BGR
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system controlling vehicle navigation and decision-making. The 'Mad Max' mode increases aggressive driving behavior, which has previously been linked to incidents causing or nearly causing harm, including wrongful death lawsuits and dangerous driving behaviors. Although the article does not report a new incident, the reintroduction of this mode amidst ongoing safety concerns plausibly increases the risk of injury or harm to persons. Therefore, this event constitutes an AI Hazard, as it could plausibly lead to an AI Incident involving harm to people.

Tesla FSD Trolls Safety Regulators With a New "Mad Max" Profile That Ignores Speed Limits

2025-10-16
autoevolution
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system controlling autonomous driving behavior. The new "Mad Max" profile causes the vehicle to exceed speed limits and drive aggressively, which directly increases the risk of accidents and harm to people. The article provides evidence of the AI system's use leading to unsafe driving behavior, which is a direct cause of potential injury or harm. This meets the criteria for an AI Incident as the AI system's use has directly led to harm or increased risk of harm to persons. The involvement is in the use phase, and the harm is related to safety and health risks. Hence, the event is classified as an AI Incident.

Tesla Lets FSD Go Rogue With New "Mad Max" Mode

2025-10-16
HotCars
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system controlling vehicle driving behavior. The new "Mad Max" mode increases speed and lane changes beyond legal limits, which could plausibly lead to traffic accidents or legal violations (harms to persons and potential regulatory breaches). Since the article does not report any actual accidents or injuries caused by this mode, but highlights concerns and potential risks, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the new mode's risk profile, not on responses or updates to past incidents. It is not Unrelated because the AI system's use is central to the event and its potential harms.

Tesla's "Mad Max" Mode Breaks the Law While Regulators Watch

2025-10-16
Gadget Review
Why's our monitor labelling this an incident or hazard?
The Tesla 'Mad Max' mode is an AI system involved in autonomous driving assistance. Its use has directly led to traffic law violations and a wrongful death case, indicating realized harm to people and communities. The system's aggressive behavior and failure to comply with legal frameworks have triggered regulatory investigations and litigation. These factors meet the criteria for an AI Incident, as the AI system's use has directly caused harm and legal violations.

Tesla Announces New FSD Mad Max Mode

2025-10-16
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Mad Max mode is an AI system involved in autonomous vehicle control. The event concerns the use and deployment of this AI system. Although no harm has been reported, experts raise concerns about potential safety risks from aggressive AI driving behavior, which could plausibly lead to accidents or injury. Therefore, this event fits the definition of an AI Hazard, as it describes a circumstance where the AI system's use could plausibly lead to harm in the future. It is not an AI Incident because no harm has occurred yet, nor is it Complementary Information or Unrelated, as the focus is on the new AI system's potential risk rather than a general product announcement or response to past incidents.

Tesla's "Mad Max" driving mode is back, and it's already bending the road rules

2025-10-17
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving software) whose use (activation of 'Mad Max' mode) leads to the vehicle breaking traffic laws and driving aggressively, which directly increases the risk of injury or death. The article mentions ongoing wrongful-death lawsuits and regulatory investigations, indicating that harm has already occurred or is highly likely. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons and violations of legal obligations.

Tesla's Latest Update Is Literally Dystopian

2025-10-17
InsideHook
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving software) whose use (the aggressive 'Mad Max Mode') could plausibly lead to harm such as traffic accidents or injury due to its assertive driving behavior. Although there is no direct report of harm or accident resulting from this mode, the context of a federal investigation and the nature of the mode suggest a credible risk of future harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the update itself introduces a potential risk, and it is not unrelated since the AI system's behavior is central to the potential hazard.

Tesla Revives Mad Max Mode in FSD, Allows 15 MPH Over Speed Limits

2025-10-17
WebProNews
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system controlling vehicle driving behavior. The "Mad Max" mode explicitly encourages speeding and aggressive driving, which increases the risk of accidents and harm to people. The article references ongoing federal investigations and lawsuits related to crashes involving Tesla's driver-assistance systems, indicating that harm has occurred or is ongoing. The AI system's use in this mode directly or indirectly leads to harm (injury or death), fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but involves realized or ongoing harm linked to the AI system's operation.

Tesla's "Mad Max" mode returns, stirring excitement and concern

2025-10-17
ArenaEV.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving software) whose use is directly linked to potentially unsafe driving behaviors such as speeding and rolling stops, which are violations of traffic laws and pose risks to human safety. The system's aggressive driving mode increases the likelihood of incidents causing injury or harm to people, fulfilling the criteria for an AI Incident. The article highlights ongoing regulatory investigations and user reports indicating realized or ongoing harm risks, not just potential future harm. Therefore, this qualifies as an AI Incident rather than merely a hazard or complementary information.

Tesla reintroduces 'Mad Max' self-driving mode set to ignore speed limits: 'The definition of reckless driving'

2025-10-18
The Cool Down
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system involved in autonomous driving tasks. The 'Mad Max' mode increases risk by encouraging aggressive driving behaviors that can lead to accidents and fatalities, as evidenced by past lawsuits and ongoing investigations. The article reports actual harm and safety incidents linked to Tesla's AI driving features, fulfilling the criteria for an AI Incident. The AI system's use and potential malfunction have directly or indirectly caused harm to people, meeting the definition of an AI Incident rather than a hazard or complementary information.

Tesla Deploys Self-Driving Mode That Ignores Speed Limits

2025-10-18
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, Tesla's Full Self-Driving software, which is explicitly described as operating vehicles autonomously and making real-time driving decisions. The new 'Mad Max' mode increases speeds beyond legal limits and performs aggressive maneuvers, which has already been observed in real-world use. Given the history of accidents and regulatory investigations linked to Tesla's FSD, the deployment of a mode that encourages speeding and aggressive driving directly increases the risk of injury or death to persons, fulfilling the criteria for an AI Incident. The harm is direct and materialized or highly likely, as speeding and reckless driving are well-known causes of traffic accidents and fatalities. Therefore, this event qualifies as an AI Incident due to the AI system's use leading to harm or significant risk thereof.

Tesla Introduces 'Mad Max' Full Self Driving Mode That Ignores Speed Limits

2025-10-21
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically Tesla's Full Self-Driving autonomous driving system, which uses AI to make real-time driving decisions. The 'Mad Max' mode permits the AI to drive aggressively, including breaking speed limits, which could plausibly lead to harm such as traffic accidents or injury. Although no harm is reported as having occurred yet, the mode's design inherently increases risk, making it a credible potential source of harm. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving injury or harm to persons or groups.

Tesla Adds "Mad Max" Mode to Its Self-Driving System - Details Inside

2025-10-21
TimesNow
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system involved in autonomous vehicle control. The new "Mad Max" mode increases aggressiveness, which could plausibly lead to safety risks or incidents in the future. However, since no harm or incident has occurred or is reported, and the article focuses on the feature introduction rather than harm or risk assessment, this is best classified as an AI Hazard due to plausible future harm from aggressive autonomous driving behavior.

Tesla reactivates Mad Max mode on FSD

2025-10-17
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) whose new aggressive driving mode could plausibly lead to harm (e.g., accidents or injury) due to higher speeds and assertive maneuvers. However, no actual harm or incident is reported in the article. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of future harm from the AI system's use, but not an AI Incident since no harm has occurred yet.

Tesla revives the "crazy" self-driving mode that blows past speed limits

2025-10-17
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving system) whose use (reactivation of an aggressive driving mode) directly increases the risk of harm to persons by encouraging speeding and risky maneuvers. Given that the mode allows exceeding speed limits and incomplete stops, it plausibly leads to injury or harm, fulfilling the criteria for an AI Incident. The ongoing investigations and reported user observations confirm that the AI system's use has already led to concerns about safety and potential harm, not just a hypothetical risk. Therefore, this event qualifies as an AI Incident.

Is Tesla's "Mad Max" mode pushing things too far?

2025-10-17
Numerama.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system involved in autonomous driving decisions. The "Mad Max" mode modifies the AI's driving style to be more aggressive, which has been observed to sometimes violate traffic laws and create potentially dangerous situations. This constitutes a direct link between the AI system's use and risks of harm to persons and regulatory violations. The article references ongoing investigations and lawsuits related to accidents involving Tesla's autonomous driving, indicating realized or at least credible harm. Hence, this qualifies as an AI Incident due to direct or indirect harm caused by the AI system's use and malfunction in safety-critical contexts.

FSD "Mad Max": Tesla finally unleashes its autonomous driving!

2025-10-17
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Tesla's FSD) whose update introduces a more aggressive driving profile. Although this could plausibly lead to harm (e.g., accidents due to aggressive maneuvers), no actual harm or incident is reported. The article focuses on the update and its potential implications, including regulatory concerns, but does not describe any realized injury, property damage, or rights violations. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of future harm from the AI system's new behavior.

Tesla launches a "Mad Max" mode for autonomous driving ... with no regard for speed limits?

2025-10-17
KultureGeek
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system enabling autonomous driving. The "Mad Max" mode explicitly encourages driving behaviors that violate traffic laws (speeding and aggressive lane changes), which have been observed in practice. This increases the risk of accidents and harm to people, fulfilling the criteria for harm to health and safety. The ongoing regulatory investigation and the potential for increased accidents confirm the direct or indirect link between the AI system's use and harm. Hence, this is an AI Incident due to realized or ongoing harm linked to the AI system's deployment and behavior.

Tesla adds a "Mad Max" mode to its cars: a more aggressive autonomous driving style that divides users

2025-10-18
Daily Geek Show
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving software) whose new mode changes the vehicle's autonomous driving behavior. While the mode could plausibly lead to safety risks or accidents due to its more assertive driving style, the article does not mention any realized harm or incidents caused by this mode. Therefore, it represents a credible potential risk (AI Hazard) rather than an AI Incident. The article focuses on the implications and user reactions to this new mode, emphasizing the balance between autonomy and responsibility, but no direct harm has occurred as per the description.

Tesla's self-driving cars under fire again

2025-10-18
Fox News
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system involved in autonomous vehicle operation. The reported incidents include actual crashes and injuries caused by the system's failure to recognize traffic signals and safely navigate intersections and railroad crossings. The NHTSA investigation and prior lawsuits confirm that the AI system's malfunction or misuse has directly led to harm. Hence, this event meets the criteria for an AI Incident due to direct harm to persons and property caused by the AI system's use and malfunction.

Tesla brings back Mad Max Full Self-Driving mode that ignores speed limits

2025-10-17
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving system) whose use has been associated with numerous crashes and fatalities, as documented by lawsuits and regulatory investigations. The 'Mad Max' mode explicitly encourages behaviors that violate traffic laws and increase risk, such as speeding and rolling stop signs, which can directly lead to injury or death. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident under the OECD framework.

Tesla is heading into multi-billion-dollar iceberg of its own making

2025-10-20
Electrek
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Tesla's FSD) whose development and deployment have directly caused harm to customers through unmet promises and misleading marketing. The financial harm to customers and legal actions represent violations of consumer rights and contractual obligations, fitting the definition of an AI Incident. The article details realized harm rather than potential harm, and the AI system's role is pivotal in causing these harms. Therefore, this event qualifies as an AI Incident.

US probes driver assistance software in 2.9 million Tesla vehicles after reported crashes

2025-10-19
RNZ
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system designed to assist driving with some level of autonomy. The reported crashes, traffic violations, and injuries are directly linked to the use of this AI system, fulfilling the criteria for harm to persons and property. The investigation by NHTSA is a response to these realized harms caused by the AI system's malfunction or failure to perform safely. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Tesla clears major hurdle in effort to expand Full Self-Driving tech: 'Surprising to see'

2025-10-21
The Cool Down
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving technology) and its use (testing on Swedish roads). However, there is no indication of any realized harm (such as accidents, injuries, or rights violations) or a credible risk of harm described in the article. The focus is on regulatory approval and expansion of testing, which is a factual update about AI deployment. Therefore, this qualifies as Complementary Information, as it provides context and updates on AI system deployment without reporting an incident or hazard.

Tesla's FSD software rolls out "Mad Max" feature that can ignore speed limits

2025-10-17
新浪财经
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system controlling vehicle driving behavior. The 'Mad Max' mode enables the AI to ignore speed limits and stop signs, which directly causes unsafe driving and potential harm to road users. The fact that NHTSA has previously recalled FSD Beta for similar issues and is investigating the system further confirms the direct link between the AI system's use and realized or imminent harm. Therefore, this event qualifies as an AI Incident due to the direct harm to safety and violation of traffic regulations caused by the AI system's operation.

Tesla's FSD software rolls out "Mad Max" feature that can ignore speed limits

2025-10-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system controlling vehicle behavior. The 'Mad Max' mode enables the AI to ignore traffic rules, such as stop signs and speed limits, which directly risks injury or harm to people and property. The article references past regulatory recalls and investigations due to these behaviors, confirming that harm or risk of harm has materialized or is ongoing. Hence, this is an AI Incident involving the use and malfunction of an AI system leading to safety hazards.

Tesla's "Mad Max" mode draws NHTSA attention over speed concerns

2025-10-24
Investing.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system providing driver assistance with autonomous capabilities. The "Mad Max" mode operates at higher speeds, and reports of traffic violations and crashes linked to this system show that its use has caused harm. The NHTSA investigation confirms the seriousness of these incidents. Since the AI system's use has directly or indirectly led to harm (traffic crashes and safety violations), this event meets the criteria for an AI Incident rather than a hazard or complementary information.

US agency asking Tesla about 'Mad Max' driver assistance mode

2025-10-24
Reuters
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system that makes real-time driving decisions. The 'Mad Max' mode is a more aggressive version of this system, reportedly enabling speeds above legal limits, which has led to traffic safety violations and crashes. Since the AI system's use has directly or indirectly led to harm (traffic violations and crashes), this qualifies as an AI Incident under the framework. The investigation by NHTSA confirms the seriousness of the issue and the realized harm associated with the AI system's operation.

US agency asks Tesla about 'Mad Max' driver assistance mode

2025-10-25
ETAuto.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system providing autonomous driving assistance. The reports of the 'Mad Max' mode operating above speed limits and causing traffic violations and crashes demonstrate direct harm to people (injuries and crashes). The NHTSA investigation confirms the system's involvement in these harms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use and malfunction.

US investigates Tesla's 'Mad Max' high-speed driver assistance mode

2025-10-24
The Guardian
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system providing driver assistance with autonomous features. The 'Mad Max' mode reportedly causes vehicles to operate above speed limits and has been linked to multiple crashes and injuries, which are harms to persons. The NHTSA investigation and reports of traffic safety violations confirm that the AI system's use has directly or indirectly caused harm. This meets the criteria for an AI Incident as the AI system's malfunction or use has led to injury and safety violations.

U.S. Regulator Probes Tesla's 'Mad Max' Mode Over Safety Concerns

2025-10-24
Forbes
Why's our monitor labelling this an incident or hazard?
Tesla's 'Mad Max' mode is an AI-driven driver-assistance system that controls vehicle behavior, including speed. The regulatory probe is due to concerns that this mode enables driving at unsafe speeds, which could plausibly lead to accidents or injuries. Since the article describes an ongoing investigation without reporting actual crashes or injuries caused by this mode, the event is best classified as an AI Hazard, reflecting a credible potential for harm stemming from the AI system's use.

US Agency Asking Tesla About 'Mad Max' Driver Assistance Mode

2025-10-24
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system providing autonomous driving assistance. The 'Mad Max' mode is a more aggressive AI driving mode that reportedly causes vehicles to exceed speed limits, leading to traffic safety violations and crashes. The NHTSA investigation into millions of vehicles and multiple reports of crashes indicates that harm has occurred or is occurring due to the AI system's use. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to persons. The investigation and reports confirm realized harm rather than just potential harm, so it is not merely an AI Hazard or Complementary Information.

'Crazy': Fury over Musk's 'Mad Max' move

2025-10-27
News.com.au
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system that controls vehicle behavior including speed and lane changes. The 'Mad Max' mode explicitly encourages aggressive driving that exceeds speed limits and weaves through traffic, which is a direct risk to road safety and human health. The involvement of the US highway safety authority investigating the feature confirms that the AI system's use is linked to potential or actual harm. This meets the criteria for an AI Incident as the AI system's use has directly led to a situation with realized or imminent harm to people (road safety risks).

US agency acts over reports of Tesla 'Mad Max' driver assistance mode

2025-10-24
The Independent
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system qualifies as an AI system because it provides advanced driver assistance with autonomous capabilities. The reported 'Mad Max' mode involves aggressive driving behavior controlled by the AI system, which has directly led to traffic safety violations, crashes, and injuries. The NHTSA investigation and the reported incidents confirm that harm has occurred due to the AI system's use. Hence, this event meets the definition of an AI Incident as the AI system's use has directly led to harm to people and violations of safety laws.

Tesla 'Mad Max' mode reportedly being probed by feds

2025-10-24
KRON4
Why's our monitor labelling this an incident or hazard?
The Tesla 'Mad Max' mode is an AI system feature for autonomous driving. The reported incidents of vehicles running red lights and causing crashes demonstrate direct harm resulting from the AI system's malfunction or unsafe behavior. The federal probe and complaints confirm that the AI system's use has led to injury or harm to people and property damage, fitting the definition of an AI Incident.

Tesla's 'Mad Max' Mode is Now Being Investigated by U.S. Regulators

2025-10-25
Gizmodo
Why's our monitor labelling this an incident or hazard?
The 'Mad Max' mode is an AI system (automated driving mode) whose use is being investigated due to reports of speeding and unsafe driving behavior. While no confirmed incidents of harm are reported in this article, the potential for harm (e.g., traffic accidents due to speeding) is credible and plausible. The investigation by NHTSA indicates regulatory concern about the AI system's safety and compliance with traffic laws. Since the harm is potential and not yet realized or confirmed, this event fits the definition of an AI Hazard rather than an AI Incident. The article also references prior investigations and lawsuits related to Tesla's AI driving systems, providing context but not changing the classification of this specific event.

Tesla's New 'Mad Max' Self-Driving Mode Keeps Blowing Speed Limits

2025-10-24
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system that automates driving tasks including acceleration and navigation. The 'Mad Max' mode causes the vehicle to drive at speeds exceeding legal limits, which is a direct violation of traffic laws and poses a clear risk of injury to drivers and others on the road. The article references actual incidents, investigations, and lawsuits related to accidents involving Tesla vehicles with FSD engaged, indicating realized harm. The AI system's behavior is a contributing factor to these harms, fulfilling the criteria for an AI Incident under the framework. The presence of ongoing regulatory scrutiny and legal consequences further supports this classification.

NHTSA Reportedly Probing Tesla's 'Mad Max' Driver Assist Mode For Safety Details

2025-10-24
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The 'Mad Max' mode is an AI system feature within Tesla's FSD technology, designed to assist driving at higher speeds. The NHTSA investigation highlights that this AI system's outputs have directly or indirectly led to traffic safety law violations, crashes, and injuries, fulfilling the criteria for harm to persons and disruption of safe vehicle operation. The report of 58 incidents, including injuries and crashes, confirms realized harm caused by the AI system's use. Hence, this event is classified as an AI Incident.

Tesla Revives Controversial 'Mad Max' Mode Despite Safety Scrutiny

2025-10-24
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that controls vehicle behavior. The 'Mad Max' mode encourages aggressive driving, which has already resulted in unsafe behaviors like rolling stops and speeding, leading to accidents and wrongful death claims. Regulatory investigations and recalls indicate that harms have materialized. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident involving injury or harm to people and disruption of safe infrastructure operation.

Tesla's "Mad Max" mode is now under federal scrutiny

2025-10-24
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's full self-driving feature) whose use and behavior ('Mad Max' mode) have directly led to safety hazards and reported incidents that threaten or have caused harm to people. The federal investigation and prior lawsuit confirm that the AI system's malfunction or risky behavior has resulted in or contributed to injury or death risks, fulfilling the criteria for an AI Incident. Therefore, this event is classified as an AI Incident due to the realized harm and ongoing safety concerns linked to the AI system's operation.

Tesla driver avoids plane crash caught on video, fans falsely credit self-driving

2025-10-26
Electrek
Why's our monitor labelling this an incident or hazard?
The event does not describe an AI Incident because no harm has directly or indirectly resulted from the AI system's use or malfunction. The Tesla vehicle was manually driven during the incident, and the AI system was not involved in avoiding the crash. However, the spread of misinformation about the AI system's role in preventing the crash could plausibly lead to future harm by encouraging overreliance or misuse of the system. Therefore, this event qualifies as an AI Hazard, as it highlights a credible risk stemming from the development and use of AI-based driver assistance systems and the societal response to them.

Tesla FSD "Mad Max" Profile Under NHTSA Scrutiny After Reports of Ignoring Speed Limits

2025-10-27
autoevolution
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system that controls vehicle driving behavior. The 'Mad Max' profile causes the vehicle to speed and drive aggressively, ignoring traffic laws, which has already resulted in reported crashes and injuries. The NHTSA's investigations and reports of harm confirm that the AI system's use has directly led to harm to people and disruption of safe vehicle operation. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's malfunction or unsafe behavior.

Tesla 'Mad Max' gets its first bit of regulatory attention

2025-10-24
TESLARATI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving AI) whose new aggressive driving mode could plausibly lead to harm (e.g., traffic accidents) due to its higher speeds and frequent lane changes. The regulatory attention and information request by NHTSA reflect concern about potential safety hazards. Since no actual harm or incident has been reported yet, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risk and regulatory scrutiny rather than a realized harm or incident.

Mad Max Mode: Tesla's New High-Speed Challenge | Technology

2025-10-24
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system providing autonomous driving assistance. The new "Mad Max" mode enabling higher speeds beyond legal limits has been linked to multiple traffic violations and crashes, indicating direct or indirect harm to human safety. The NHTSA investigation confirms the seriousness of these harms. Since the AI system's use has contributed to realized harm (traffic crashes and violations), this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Why US government is probing Tesla's 'Mad Max' driving mode

2025-10-25
NewsBytes
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that controls vehicle behavior autonomously or semi-autonomously. The 'Mad Max' mode, a more aggressive version of FSD, has been associated with multiple crashes, injuries, and traffic violations, including running red lights and collisions. These harms fall under injury to persons and disruption of safe traffic management, directly linked to the AI system's use. The ongoing investigation by NHTSA into these incidents confirms the materialized harm caused by the AI system's malfunction or misuse, meeting the criteria for an AI Incident.

Tesla Revives Controversial 'Mad Max' Mode Despite Safety Scrutiny

2025-10-24
finanzen.ch
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system that controls vehicle behavior autonomously or semi-autonomously. The 'Mad Max' mode modifies the AI's driving style to be more aggressive, which has been observed to cause unsafe driving actions that have led to accidents and wrongful death claims. The regulatory investigations and recalls confirm that harm has occurred or is ongoing. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons and is under official scrutiny for safety violations.

Tesla's 'Mad Max' mode draws scrutiny as safety regulators probe aggressive driving behavior

2025-10-24
Insurance Business
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system providing driver-assistance with autonomous features. The 'Mad Max' mode encourages aggressive driving that has already led to crashes and injuries, constituting harm to persons. The NHTSA investigation and prior recalls confirm that the AI system's use has directly or indirectly led to harm. The event is not merely a potential hazard or complementary information but a current incident involving realized harm linked to AI system behavior.

Tesla under scrutiny as it slips out 'Mad Max' driving mode

2025-10-24
Insurance Business
Why's our monitor labelling this an incident or hazard?
The 'Mad Max' mode is an AI-driven driver-assistance system that influences vehicle behavior in real-time, thus qualifying as an AI system. The regulator's inquiry and the reported aggressive driving behaviors suggest a credible risk of harm (e.g., traffic accidents, injury) stemming from the AI system's use. Since no actual harm has been reported yet, but plausible future harm exists due to the system's capabilities and observed behavior, this event fits the definition of an AI Hazard rather than an AI Incident. The focus is on potential risk rather than realized harm.

Tesla FSD Revenue Declines in Q3 2025 Amid Safety and Adoption Woes

2025-10-25
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system involved in autonomous driving. The article details a near-incident (ignoring street signs and nearly hitting a mannequin) and widespread safety and adoption concerns, regulatory scrutiny, and challenges in achieving full autonomy. While no actual injury or damage is reported, the described issues indicate plausible risks of harm in the future if the system malfunctions or is used unsupervised. The decline in revenue and adoption reflects consumer skepticism tied to these safety concerns. Since the article focuses on potential risks and challenges rather than realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

NHTSA opens new Tesla investigation over Full Self-Driving -- 2.9 million vehicles under scrutiny

2025-10-26
ECOticias.com
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving (Supervised) technology is an AI system providing advanced driver-assistance features requiring driver supervision. The investigation by NHTSA is due to reports of unlawful driving behavior while using this AI system and errors made by the system itself, such as turning into oncoming traffic, which have led to crashes. These incidents involve direct or indirect harm to people (injury or risk thereof) and thus meet the criteria for an AI Incident. The scale of the investigation (2.9 million vehicles) and prior related investigations reinforce the significance of the harm. The event is not merely a potential hazard or complementary information but a current investigation into realized or ongoing harms linked to the AI system's use and malfunction.

'Mad Max' mode returns to Tesla -- It ignores speed limits, but drivers need to know this first

2025-10-23
ECOticias.com
Why's our monitor labelling this an incident or hazard?
The Full Self-Driving system is an AI system that makes real-time driving decisions. The Mad Max mode modifies the AI's behavior to ignore speed limits and stop signs, which has been observed to cause reckless driving. This directly implicates the AI system's use in causing or enabling harm to road users, fulfilling the criteria for an AI Incident. The article also mentions ongoing investigations and legal challenges related to safety and regulatory compliance, reinforcing the seriousness of the harm. The harm is not merely potential but already evidenced by reckless driving footage and safety concerns, thus prioritizing classification as an AI Incident over AI Hazard or Complementary Information.

US investigates Tesla's 'Mad Max' high-speed driver assistance mode

2025-10-24
News Flash
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed to assist driving by making real-time decisions. The 'Mad Max' mode is a variant of this system that reportedly encourages aggressive driving behavior, including speeding and running stop signs or red lights. The NHTSA investigation cites multiple crashes and injuries linked to the use of FSD, indicating direct harm caused by the AI system's operation. Therefore, this event meets the criteria for an AI Incident due to the AI system's use directly leading to injury and traffic safety violations.

Tesla's 'Mad Max' driver assistance mode sparks probe by feds after cars seen operating at higher speeds

2025-10-24
News Flash
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system providing driver assistance with autonomous features. The 'Mad Max' mode's operation above speed limits and failure to comply with traffic signals has directly caused crashes and injuries, as reported by NHTSA. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to persons and violations of traffic safety laws. The investigation and reports confirm realized harm rather than just potential risk, so this is not merely a hazard or complementary information.

Tesla Driver Assistance Mode Under Scrutiny: US Agency Investigation - News Directory 3

2025-10-24
News Directory 3
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system involved in autonomous driving assistance. The reported 'Mad Max' mode allegedly enables speeding beyond legal limits, which could plausibly lead to traffic accidents or injuries. Since the investigation is ongoing and no actual harm has been reported, the event is best classified as an AI Hazard due to the credible risk of harm from the AI system's use. There is no indication of realized harm or legal violations yet, so it is not an AI Incident. The article focuses on the investigation and potential risks, not on a response or update to a past incident, so it is not Complementary Information.

Tesla driver avoids plane crash, fans falsely credit self-driving

2025-10-27
Skeptic Society Magazine
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Tesla's FSD and Autopilot) but does not describe an incident where the AI system caused or prevented harm. Instead, it focuses on misinformation about the system's involvement in avoiding a crash. There is no realized harm or plausible future harm directly linked to the AI system in this event. The misinformation and public reaction constitute complementary information about societal responses and perceptions of AI systems rather than a new incident or hazard. Therefore, this event is best classified as Complementary Information.

Tesla in trouble again: aggressive 'Mad Max' driving mode under federal investigation

2025-10-24
Les Numériques
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that controls vehicle behavior autonomously. The 'Mad Max' mode is a setting within this AI system that encourages aggressive driving, including speeding, which has led to multiple traffic violations and accidents. The NHTSA's investigation indicates that harm to people and public safety has occurred or is ongoing, directly linked to the AI system's use. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (accidents and traffic violations).

Teslas now have a 'Mad Max' driving mode, and it worries US authorities

2025-10-26
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system enabling autonomous driving decisions. The 'Mad Max' mode increases risk by encouraging aggressive driving that has already resulted in traffic violations such as ignoring stop signs and speeding. These behaviors pose direct risks of injury or harm to people and property, fulfilling the criteria for an AI Incident. The involvement of the NHTSA and ongoing investigations further confirm the seriousness and realized nature of the harm. Although the system requires human supervision, the AI's outputs have directly contributed to unsafe driving behavior, making this an AI Incident rather than a hazard or complementary information.

Tesla reveals the secrets of its new 'FSD' autonomous driving, and it's truly mind-blowing

2025-10-28
Numerama.com
Why's our monitor labelling this an incident or hazard?
The article focuses on explaining the AI technology behind Tesla's FSD system and its advancements, without reporting any realized harm or direct/indirect incidents caused by the AI system. It does not describe any event where the AI system led to injury, property damage, rights violations, or other harms. Therefore, it does not qualify as an AI Incident or AI Hazard. The content serves as informative background and context about the AI system's development and capabilities, fitting the definition of Complementary Information.

Tesla: Too dangerous, the new 'Mad Max' autonomous driving mode in the crosshairs of US authorities

2025-10-27
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system controlling vehicle driving behavior. The new "Mad Max" mode promotes aggressive driving that violates traffic laws, leading to real-world harm including collisions and injuries. The NHTSA investigation confirms the seriousness and direct link between the AI system's use and the harm caused. This meets the criteria for an AI Incident because the AI system's use has directly led to injury and harm to people, as well as violations of legal obligations related to road safety.

Tesla's 'Mad Max' mode draws NHTSA attention over speed concerns - By Investing.com

2025-10-24
Investing.com France
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system providing autonomous driving assistance. The 'Mad Max' mode is a variant of this AI system operating at higher speeds, which has been linked to traffic violations and accidents. The NHTSA's investigation and the reported accidents demonstrate that the AI system's use has directly or indirectly led to harm to persons, fulfilling the criteria for an AI Incident. Therefore, this event qualifies as an AI Incident due to realized harm associated with the AI system's operation.

Tesla revives 'Mad Max' mode in Full Self-Driving

2025-10-27
Fox News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving software) whose use (the Mad Max mode) influences vehicle behavior in ways that have already been reported to include risky driving actions such as rolling stop signs and speeding. These behaviors can directly lead to injury or harm to persons and communities, fulfilling the criteria for harm under the AI Incident definition. The article also notes ongoing investigations and lawsuits, underscoring the seriousness of the safety concerns. Since harm is occurring or highly plausible and linked to the AI system's use, this is classified as an AI Incident rather than a hazard or complementary information.

Tesla: Autopilot is improving... but it has more accidents

2025-10-26
Motor1.com
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system involved in vehicle control. The article reports that accidents have occurred while using Autopilot, indicating realized harm linked to the AI system's use. Although Tesla claims safety improvements, the data shows a trend of increased accidents, which implies the AI system's malfunction or limitations have contributed to harm. The lack of detailed transparency does not negate the fact that accidents have happened. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in driving.

Federal investigators are looking into Tesla's Mad Max mode, which reportedly defies speed limits

2025-10-27
engadget
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system, an AI system for autonomous driving, is reported to have induced vehicle behaviors that violate traffic laws, including speeding and running red lights. These behaviors directly risk injury or harm to people and disrupt safe traffic management, fulfilling the criteria for harm under AI Incident definitions. The investigation by NHTSA confirms the seriousness of these harms. Hence, the event is classified as an AI Incident due to the realized harm caused by the AI system's use.

Feds investigating Tesla's 'Mad Max' mode

2025-10-27
Mashable
Why's our monitor labelling this an incident or hazard?
Tesla's 'Mad Max' mode is a feature of an AI system enabling autonomous driving with aggressive behaviors that violate traffic laws and pose safety risks. The NHTSA investigation and reports of vehicles speeding excessively and running stop signs demonstrate realized harm or risk to public safety. The AI system's use has directly or indirectly led to potential injury or harm to people, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but involves actual use of, and a regulatory response to, harmful AI behavior.

Feds investigating Tesla's 'Mad Max' mode

2025-10-27
Mashable SEA
Why's our monitor labelling this an incident or hazard?
Tesla's 'Mad Max' mode is an AI system feature (full self-driving mode) that autonomously controls vehicle behavior, including speeding and aggressive lane changes. The article reports actual instances of traffic violations (speeding, running stop signs) caused by the AI system's operation, which pose direct harm to road safety and potentially to people. The NHTSA investigation confirms the regulatory concern over these harms. Since the AI system's use has directly led to these harms or violations, this qualifies as an AI Incident rather than a hazard or complementary information.

Tesla's 'Mad Max' Mode Is Being Investigated by the Feds. Here's Why.

2025-10-28
VICE
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system that controls vehicle driving behavior. The 'Mad Max' mode reportedly causes the AI to ignore speed limits and drive recklessly, which has directly contributed to fatal and severe injuries, as evidenced by the wrongful death lawsuit and multiple incident reports. The federal investigation further confirms the system's role in causing harm. Therefore, this event qualifies as an AI Incident due to direct harm to human health caused by the AI system's use and malfunction.

Teslas' new autonomous 'Mad Max' mode worries the authorities

2025-10-27
20minutes
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that controls vehicle navigation and driving decisions. The new "Mad Max" mode increases speed and aggressive maneuvers, resulting in traffic violations and accidents causing injuries. The NHTSA's investigation and the reported collisions with injuries confirm that harm has occurred due to the AI system's use. The AI system's malfunction or risky behavior is a contributing factor to these harms, meeting the criteria for an AI Incident under the framework.

Teslas have a 'Mad Max' driving mode, and it worries US authorities

2025-10-26
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system that controls vehicle driving behavior. The new "Mad Max" mode encourages aggressive driving, leading to observed traffic violations such as ignoring stop signs and speeding, which are direct harms to public safety (harm to persons). The NHTSA's investigation and multiple accident reports linked to FSD further confirm realized harm or risk of harm. Since the AI system's use has directly led to unsafe driving behavior and accidents, this qualifies as an AI Incident under the framework's criteria for harm to persons and disruption of critical infrastructure (road safety).

This video shows a Tesla narrowly avoiding a plane crash; don't believe the fans who overestimate its self-driving

2025-10-27
PhonAndroid
Why's our monitor labelling this an incident or hazard?
An AI system (Tesla's Autopilot and Full Self-Driving) is involved as an advanced driver assistance system. The event involves the use of the AI system (or rather the misconception of its use) and the spread of misinformation about its capabilities. Although no direct harm occurred, the misinformation could plausibly lead to harm if drivers overtrust the system and reduce vigilance, which is a recognized risk. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm related to the AI system's use, but no harm has yet materialized.

NHTSA Investigates Tesla's Mad Max Mode in Full Self-Driving Amid Safety Concerns

2025-10-27
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that autonomously controls vehicle behavior. The 'Mad Max' mode modifies this AI system to drive aggressively, exceeding speed limits and weaving through traffic, which has been reported by users and shown in videos. These behaviors pose direct safety risks to people, fulfilling the harm criteria. The NHTSA investigation confirms that these risks are taken seriously by regulators. Since the AI system's use has already led to safety concerns and potential harm, this is an AI Incident rather than a mere hazard or complementary information. The event centrally involves an AI system causing or enabling harm, so it is not unrelated.

Tesla FSD Advances in Urban Driving Amid Safety Probes and Expansion

2025-10-27
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system involved in autonomous driving decisions. The article references multiple federal investigations into safety incidents linked to FSD's operation, such as traffic violations and dangerous driving behaviors, which have directly or indirectly led to potential or actual harm to people. The system's malfunction or limitations have caused or contributed to these harms, fulfilling the criteria for an AI Incident. While improvements and updates are ongoing, the documented safety issues and regulatory probes confirm that harm has occurred or is occurring, rather than being merely potential. Hence, the classification as AI Incident is appropriate.

US regulators examine Tesla's controversial 'Mad Max' driving mode

2025-10-26
Fredzone
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's automated driving mode) whose use has led to vehicles exceeding speed limits, a clear safety hazard that can cause injury or harm to people. The NHTSA's investigation confirms regulatory concern about the AI system's performance and safety implications. The harm has materialized, or is at least ongoing, as users have reported the behavior and regulatory bodies are responding. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm (risk to health and safety).

Tesla's 'Mad Max' Feature Lets Cars Speed and Weave -- Feds Want Answers

2025-10-28
Technology Org
Why's our monitor labelling this an incident or hazard?
The 'Mad Max' feature is part of Tesla's AI-based Full Self-Driving system, which autonomously controls vehicle behavior such as speed and lane changes. The feature's aggressive driving mode has been shown in real-world footage to exceed speed limits and disregard stop signs, directly implicating the AI system in behaviors that can cause injury or harm to people. The federal safety agency's active investigation and concern further support that the AI system's use has led or could lead to harm. Since the AI system's use is directly linked to potentially unsafe driving and legal violations, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Tesla's 'Mad Max' mode triggers a new federal investigation

2025-10-27
KultureGeek
Why's our monitor labelling this an incident or hazard?
Tesla's "Mad Max" mode is a semi-autonomous driving AI system that reportedly causes vehicles to exceed speed limits and engage in dangerous driving, which directly relates to harm to public safety. The NHTSA's investigation confirms official concern about the AI system's role in these unsafe behaviors. Prior incidents and legal actions against Tesla's AI driving features further support the classification as an AI Incident. The AI system's use has directly or indirectly led to safety risks and potential harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Tesla revives 'Mad Max' mode in Full Self-Driving

2025-10-27
Fox Wilmington
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as Tesla's Full Self-Driving system with the 'Mad Max' mode, which modifies driving behavior through AI-based decision-making. The article reports observed aggressive driving behaviors that violate traffic rules, indicating a malfunction or risky use of the AI system. While no actual harm is reported, the potential for accidents and injury is credible and plausible. The AI system's development and use are central to the event, and the risk of harm to persons and communities is significant. Since no realized harm is described, the classification as an AI Hazard is appropriate rather than an AI Incident. The article does not focus on responses, legal proceedings, or broader ecosystem context, so it is not Complementary Information. It is clearly related to an AI system and its potential risks, so it is not Unrelated.

Tesla's fully autonomous 'Mad Max' driving mode, which encourages aggressive driving that ignores speed limits, is under investigation by US regulators

2025-10-27
Developpez.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system controlling vehicle driving behavior. The 'Mad Max' mode encourages aggressive, illegal driving, which has led to multiple incidents including accidents and injuries. The involvement of the AI system in causing these harms is direct and documented, with regulatory investigations and reported damages. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to persons and property.

Tesla Deploys 'Mad Max' Mode, Immediately Triggers NHTSA Investigation - Jalopnik

2025-10-29
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system that makes real-time driving decisions. The 'Mad Max' mode explicitly encourages aggressive and risky driving behaviors that have been observed, including speeding and running stop signs, which are violations of traffic laws and increase the likelihood of crashes. These behaviors have already prompted a government safety investigation, indicating that harm or risk of harm is materializing or imminent. Therefore, the event qualifies as an AI Incident because the AI system's use has directly led to potential or actual harm to people through unsafe driving practices.

Larry Magid: Tesla's FSD no longer lets drivers set their own speed

2025-10-30
San Jose Mercury News
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system involved in autonomous vehicle control. The article describes the system's use and changes in its functionality that could lead to harm, such as speeding in school zones, which poses a risk to public safety. However, no actual harm or incident has been reported; the concerns are about plausible future harm due to the system's behavior and design choices. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to injury or harm, but no direct harm has yet occurred according to the article.

What is Tesla's 'Mad Max Mode' and is it safe?

2025-10-30
The Tennessean
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, namely Tesla's Full Self-Driving software, which is a semi-autonomous driving AI system. The use of the 'Mad Max Mode' profile, which modifies the AI's driving behavior to be more aggressive, has been linked to real-world harms including crashes and injuries. The NHTSA investigation and reported incidents indicate that the AI system's use has directly or indirectly led to harm to persons (injuries) and potentially to property (crashes). Therefore, this qualifies as an AI Incident because the AI system's use has caused actual harm, not just potential harm.

U.S. Road Safety Agency Probes Tesla About 'Mad Max' Driver Assistance Mode - Carrier Management

2025-10-31
Carrier Management
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system providing driver assistance with autonomous features. The 'Mad Max' mode is a variant of this system operating aggressively and reportedly causing traffic violations and crashes, which have resulted in injuries. The NHTSA's investigation and the reported incidents demonstrate that the AI system's use has directly or indirectly caused harm to people and traffic safety, fulfilling the criteria for an AI Incident. The presence of actual crashes and injuries linked to the AI system's operation confirms realized harm rather than potential harm, thus excluding classification as a hazard or complementary information.

Tesla revives 'Mad Max' mode in Full Self-Driving

2025-10-27
Yahoo!
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system controlling vehicle driving behavior. The 'Mad Max' mode increases aggressive driving, and there are reports of traffic violations (running stop signs, speeding) directly linked to this mode. Such behavior can cause injury or harm to people, fulfilling the harm criteria for an AI Incident. The AI system's use has directly led to these risky behaviors, and regulatory investigations underscore the seriousness of the issue. Hence, this event is classified as an AI Incident rather than a hazard or complementary information.

Tesla's "Mad Max" mode draws NHTSA attention over speed concerns - By Investing.com

2025-10-24
Investing.com Español
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system providing autonomous driving assistance. The 'Mad Max' mode is a variant of this system operating at higher speeds, which has led to reports of speeding and accidents. The NHTSA's investigation and the reported safety violations indicate that the AI system's use has directly or indirectly led to harm or safety risks. Therefore, this qualifies as an AI Incident due to the realized harm and regulatory scrutiny arising from the AI system's operation.

Tesla has launched the Mad Max mode for its cars. After 14 accidents, a federal investigation has put it under scrutiny

2025-10-27
3D Juegos
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system for autonomous driving. The introduction of the 'Mad Max' mode, which promotes aggressive driving, has been linked to 14 accidents and complaints about traffic violations. The NHTSA's investigation indicates regulatory concern about safety risks directly caused or exacerbated by the AI system's behavior. The article clearly describes realized harm (accidents) connected to the AI system's use and programming, fulfilling the criteria for an AI Incident. The aggressive driving mode's design and deployment have directly led to harm to people and disruption of safe road operation, meeting the definition of an AI Incident.

U.S. agency investigates Tesla over new 'Mad Max' driver assistance mode

2025-10-24
Forbes México
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system providing autonomous driving assistance. The 'Mad Max' mode operates at higher speeds and aggressive maneuvers, which have been linked to traffic violations, accidents, and injuries. The NHTSA investigation and reports of crashes and injuries confirm that the AI system's use has directly led to harm to people and disruption of safe vehicle operation. This meets the criteria for an AI Incident as the AI system's use has caused injury and safety violations. The event is not merely a potential hazard or complementary information but documents realized harm associated with the AI system's operation.

Tesla's controversial "Mad Max" mode prompts an investigation

2025-10-24
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system designed for autonomous or semi-autonomous driving. The 'Mad Max' mode modifies its behavior to be more aggressive, which users have reported leads to risky driving maneuvers. The NHTSA investigation and reports of accidents and injuries confirm that the AI system's use has caused harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to injury and harm to people.

Tesla's "Mad Max" mode isn't so fun anymore: the authorities are investigating it

2025-10-27
Urban Tecno
Why's our monitor labelling this an incident or hazard?
Tesla's "Mad Max" mode is an AI-driven driving assistance system that modifies vehicle behavior to be more aggressive and exceed legal speed limits. The NHTSA investigation and prior reports of accidents and injuries linked to Tesla's FSD system demonstrate that the AI system's use has directly or indirectly caused harm to people. The article explicitly mentions injuries and accidents related to the AI system's behavior, fulfilling the criteria for an AI Incident. The investigation focuses on the system's malfunction or misuse leading to safety violations and harm, not just potential future harm, so it is not merely a hazard or complementary information.

"Mad Max": the Tesla mode that speeds, weaves between lanes, and worries the authorities | Teknófilo

2025-10-25
Teknófilo
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system that makes real-time driving decisions. The "Mad Max" mode is a new AI-driven driving configuration that has led to complaints about dangerous driving behaviors such as running stop signs and crossing into oncoming traffic. The NHTSA's investigation and the legal case highlight that these behaviors have caused or could cause harm to people, fulfilling the criteria for an AI Incident. The AI system's use phase is implicated, and the harms fall under injury or harm to persons (harm category a). The article does not merely discuss potential future harm but reports ongoing concerns and complaints, indicating realized or imminent harm. Hence, the classification is AI Incident.

Tesla held to account: U.S. investigation over the "Mad Max" driving mode

2025-10-27
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system designed to assist or automate driving tasks. The 'Mad Max' mode is a specific AI-driven driving behavior that prioritizes aggressive maneuvers and higher speeds. The article reports multiple accidents and injuries linked to this mode, indicating that the AI system's use has directly led to harm to people (injury and risk to health). The NHTSA investigation confirms that the AI system induced behaviors violating traffic safety laws, which caused real incidents. This meets the criteria for an AI Incident because the AI system's malfunction or risky use has directly caused harm. The article does not merely warn of potential harm but documents actual harm and ongoing investigations into these incidents.

U.S. authorities question Tesla over the "Mad Max" driving system - HotNews.ro

2025-10-26
HotNews.ro
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system that autonomously controls driving functions. The reported incidents include traffic safety violations, accidents, and injuries linked to the system's operation, indicating direct or indirect harm caused by the AI system's use. The NHTSA investigation and multiple reports of accidents and injuries confirm that harm has occurred. Hence, this event qualifies as an AI Incident due to realized harm from the AI system's use.

U.S. authorities investigate the "Mad Max" mode of Tesla's driver assistance system

2025-10-26
News.ro
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system providing autonomous driving assistance. The 'Mad Max' mode's aggressive driving and speeding have been linked to multiple traffic incidents causing injuries, which constitutes harm to people. The NHTSA investigation confirms the system's role in these incidents. Hence, the event involves the use and malfunction of an AI system that has directly or indirectly led to harm, fitting the definition of an AI Incident.

U.S. authorities investigate the "Mad Max" mode of Tesla's driver assistance system, which encourages aggressive road behavior

2025-10-26
G4Media.ro
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system, including the 'Mad Max' mode, is an AI system providing autonomous driving assistance. The reported aggressive driving behavior and traffic violations have directly led to accidents and injuries, fulfilling the criteria for harm to persons. The NHTSA investigation and the reported incidents confirm that the AI system's use has caused actual harm, not just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.

The "Mad Max" mode of Tesla's driver assistance system, reviewed by U.S. authorities - Economica.net

2025-10-26
Economica.net
Why's our monitor labelling this an incident or hazard?
The 'Mad Max' mode is an AI system component of Tesla's Full Self-Driving suite, involving autonomous driving assistance. The NHTSA's request for information indicates a regulatory review due to potential safety concerns, suggesting plausible risk of harm. However, the article does not describe any actual harm or incident caused by the system so far, only the potential for such harm. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to an incident but no incident has been reported yet.

Investigation at Tesla

2025-10-26
Profit.ro
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system that makes real-time driving decisions, including acceleration and lane changes. The 'Mad Max' mode represents a new, more aggressive AI driving behavior. The NHTSA investigation is based on multiple reported incidents where the AI system's behavior has led to traffic accidents and injuries, indicating direct or indirect harm to people. This meets the criteria for an AI Incident because the AI system's use has caused injury and safety violations. The investigation and reports confirm realized harm rather than just potential risk, so it is not merely an AI Hazard or Complementary Information.

U.S. traffic safety authorities investigate the 'Mad Max' mode of Tesla's driver assistance system - Stiripesurse.md

2025-10-27
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system providing autonomous driving assistance. The 'Mad Max' mode's aggressive driving and speeding behavior raise credible concerns about potential traffic safety risks. The NHTSA investigation indicates regulatory concern about plausible future harm. Since no actual harm or incident is reported, but the risk is credible and linked to the AI system's use, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the potential risk and investigation, not just an update or response to a past incident.

U.S. authorities launch investigation into Tesla's speeding 'Mad Max' feature | 연합뉴스

2025-10-24
연합뉴스
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to assist driving by making real-time decisions. The 'Mad Max' mode allows the AI to operate at higher speeds, which has resulted in reported traffic violations and accidents. This constitutes direct or indirect harm to public safety and to persons, fulfilling the criteria for an AI Incident. The NHTSA investigation confirms that harm has occurred or is ongoing, not just a potential risk. Therefore, this event is classified as an AI Incident.

Ignoring stop signals, speeding violations... U.S. authorities launch investigation into Tesla's 'Mad Max' feature | 한국일보

2025-10-26
한국일보
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system for autonomous driving assistance. The 'Mad Max' mode causes vehicles to violate traffic laws by ignoring stop signals and speeding, which directly risks injury or harm to people. The U.S. transportation authority's investigation confirms the seriousness of these harms. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the OECD framework.

U.S. authorities launch investigation into Tesla's speeding 'Mad Max' feature

2025-10-24
Wow TV
Why's our monitor labelling this an incident or hazard?
The Tesla FSD is an AI system that controls vehicle driving behavior. The new 'Mad Max' mode enables or encourages speeding and traffic violations, which have been reported and are under official investigation. This indicates that the AI system's use has directly led to harm or risk of harm to people and public safety. The investigation by NHTSA confirms the seriousness of the issue. Hence, this is an AI Incident due to realized harm linked to the AI system's use.

Tesla's speeding 'Mad Max' feature... U.S. transportation authorities launch investigation

2025-10-24
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The Tesla FSD is an AI system used for autonomous driving. The new 'Mad Max' mode encourages or enables speeding and traffic violations, which have been reported and are under investigation by U.S. authorities. These behaviors directly relate to harm to people and public safety (harm category a). Since the AI system's use has directly led to these harms, this event qualifies as an AI Incident rather than a hazard or complementary information.

U.S. investigation into Tesla's speeding 'Mad Max' feature - 전파신문

2025-10-24
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system for autonomous driving. The new 'Mad Max' mode encourages higher speeds, which has led to reports of speeding. The U.S. authorities' investigation suggests that the AI system's use could directly or indirectly lead to harm (e.g., traffic accidents or injuries) due to speeding. Since the article does not report actual harm but highlights an ongoing investigation into potentially dangerous behavior caused by the AI system, this qualifies as an AI Hazard rather than an AI Incident.