Apple to Launch AI Health Coach in iOS Health App

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Apple is developing an AI health coach, code-named Project Mulberry, for the Health app in iOS 19.4. Drawing on Apple Watch and iPhone data, the AI ‘doctor’ will offer personalized medical advice shaped by input from internal and external experts. The rollout, expected next spring, will include expert-led educational videos to guide users’ health decisions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly describes an AI system under development (AI doctor analyzing health data and providing advice). There is no mention of any actual harm, injury, rights violation, or other negative outcomes caused by this AI system so far. The focus is on the planned launch and capabilities, which could plausibly lead to harm in the future if the AI provides incorrect or harmful medical advice. Hence, it fits the definition of an AI Hazard (potential for harm) rather than an AI Incident (actual harm). The patent dispute mentioned is unrelated to AI harm and does not affect the classification.[AI generated]
AI principles
Safety, Privacy & data governance, Transparency & explainability, Accountability, Robustness & digital security, Fairness, Respect of human rights, Democracy & human autonomy, Human wellbeing

Industries
Healthcare, drugs, and biotechnology; Consumer products; Digital security

Harm types
Physical (injury), Physical (death), Psychological, Human or fundamental rights, Reputational, Economic/Property

Severity
AI hazard

Business function:
Citizen/customer service, Monitoring and quality control, Research and development

AI system task:
Interaction support/chatbots, Content generation, Organisation/recommenders, Forecasting/prediction, Event/anomaly detection


Articles about this incident or hazard

苹果"Project Mulberry"揭秘:AI医生颠覆你的健康管理!

2025-04-01
ZOL (Zhongguancun Online)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI health coach) under development and planned deployment, but there is no indication of any harm or malfunction caused by the AI system so far. The article is primarily about the upcoming AI health service and its potential to transform health management, without reporting any direct or indirect harm or plausible future harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context and updates on AI developments in healthcare.
Apple Reportedly Developing an AI Doctor, Launching as Early as Next Year

2025-03-31
Eastmoney
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system under development (AI doctor analyzing health data and providing advice). There is no mention of any actual harm, injury, rights violation, or other negative outcomes caused by this AI system so far. The focus is on the planned launch and capabilities, which could plausibly lead to harm in the future if the AI provides incorrect or harmful medical advice. Hence, it fits the definition of an AI Hazard (potential for harm) rather than an AI Incident (actual harm). The patent dispute mentioned is unrelated to AI harm and does not affect the classification.
Apple May Launch M5 iPad Pro in the Second Half of the Year / User Criticizes Xiaomi Car Test Drive, Lei Jun Apologizes Immediately / Pang Dong Lai Store Managers Average Over 70,000 Yuan a Month

2025-03-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system under development (the AI doctor in Apple's health app) that processes health data and provides personalized advice. However, there is no mention of any harm, injury, rights violation, or disruption caused by this AI system. The AI system's role is prospective and developmental, aiming to improve health outcomes. Therefore, the event does not meet the criteria for an AI Incident (no realized harm) or an AI Hazard (no credible risk of harm described). It is best classified as Complementary Information because it provides detailed context and updates about AI development and its potential impact on health technology without reporting any harm or risk of harm.
Apple's Medical Empire Sets Sail: AI Doctor and Health Coach Features Reportedly on the Way

2025-03-31
MyDrivers
Why's our monitor labelling this an incident or hazard?
The article focuses on the planned introduction of AI health services by Apple, describing the AI system's intended use and development. There is no mention of any realized harm, injury, rights violations, or disruptions caused by the AI system. The AI system's involvement is prospective, and the article discusses potential benefits and features rather than incidents or hazards. Therefore, this event qualifies as Complementary Information, providing context and updates on AI development and deployment in healthcare without reporting an AI Incident or AI Hazard.
Cook's New Big Move: Apple's AI Doctor Is Coming

2025-04-02
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (the AI doctor and health coach) that could plausibly impact users' health and privacy. However, no actual harm or incident has been reported yet. The article discusses future deployment and challenges, which implies a credible potential for harm if issues arise, but no direct or indirect harm has occurred at this stage. Therefore, this qualifies as an AI Hazard, reflecting plausible future risks associated with the AI health system's deployment and use.
With an Error Rate as High as 33%, Apple's AI Has Rivals Laughing

2025-03-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Siri with large language models, Apple Intelligence) and their development and use issues, including high error rates and delays. However, it does not describe any realized harm such as injury, rights violations, disruption, or significant community or property harm caused by these AI systems. Nor does it present a credible risk of future harm stemming from these AI issues. The content is primarily about Apple's internal struggles, management changes, and competitive positioning in AI, which fits the definition of Complementary Information as it provides context and updates on AI development and corporate responses without reporting an AI Incident or AI Hazard.
Morning Report | Apple May Launch M5 iPad Pro in the Second Half of the Year / User Criticizes Xiaomi Car Test Drive, Lei Jun Apologizes Immediately / Pang Dong Lai Store Managers Average Over 70,000 Yuan a Month

2025-03-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article mentions AI systems explicitly, such as Apple's AI doctor analyzing health data, Kuaishou's video generation AI, RoboOS and RoboBrain frameworks for embodied AI, and AI assistants in cars. However, there is no indication that these AI systems have caused any injury, rights violations, property damage, or other harms. The OpenAI governance issues relate to management and corporate governance rather than AI-caused harm. The article mainly provides updates on AI development, corporate strategies, and ecosystem progress, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Apple Plans to Launch Project Mulberry Next Year, Building a "Virtual Doctor" Service

2025-03-31
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article describes the development and planned use of an AI system that will provide personalized health recommendations by analyzing user health data. There is no indication that any harm has occurred or that the system has malfunctioned. The event describes a future AI system with potential health impacts but does not report any realized harm or incident. Therefore, it qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (e.g., incorrect health advice) in the future, but no incident has yet occurred.
Apple Reportedly Developing an AI Doctor: A New Breakthrough for the Health App

2025-03-31
China.com Technology
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (AI doctor) analyzing health data and providing health advice, which fits the definition of an AI system. The AI system is under development and intended for use in health decision-making, which could plausibly lead to harm if incorrect or misleading advice is given, affecting users' health. However, there is no indication that any harm has yet occurred, nor any malfunction or misuse reported. The patent dispute mentioned is unrelated to AI harm. Thus, the event is best classified as an AI Hazard, reflecting the plausible future risk of harm from the AI health application.
Morning Report | Apple May Launch M5 iPad Pro in the Second Half of the Year / User Criticizes Xiaomi Car Test Drive, Lei Jun Apologizes Immediately / Pang Dong Lai Store Managers Average Over 70,000 Yuan a Month

2025-03-31
ifanr
Why's our monitor labelling this an incident or hazard?
The AI system described (Apple's AI doctor) is clearly an AI system as it analyzes health data and provides personalized recommendations. However, the article does not report any realized harm or incidents caused by this AI system. It is a development and research update about a new AI health application. There is no mention of any direct or indirect harm, malfunction, or violation of rights. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides supporting context about AI development and its potential impact on health technology.
Apple (AAPL.US) Plans AI Health Assistant Health+ in a Major Push into Healthcare

2025-03-31
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system in healthcare, which could plausibly lead to harm if the AI provides incorrect medical advice or malfunctions. However, since the AI health assistant is still in development and no harm or incident has been reported, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses to past incidents or broader governance issues, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems in a health context with potential risks.
Apple Working on an AI Doctor and Health Application

2025-03-31
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems under development for health-related applications, which could plausibly lead to future harms if the AI provides incorrect medical advice or misguides users, but no actual harm or incident is reported at this stage. Therefore, this qualifies as an AI Hazard because the AI systems' use could plausibly lead to harm in the future, but no direct or indirect harm has yet occurred. It is not Complementary Information because the article is not about responses or updates to past incidents, nor is it unrelated as it clearly involves AI systems with potential health impacts.
Apple Again Embroiled in False-Advertising Controversy: iPhone 16 AI Features Face Class-Action Lawsuit in Canada

2025-04-01
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI features of iPhone 16, including upgraded Siri) and their use in marketing. The lawsuit alleges that Apple falsely advertised these AI capabilities, which were not present at the time of purchase, leading to consumer harm through deception and breach of contract. This constitutes a violation of consumer protection laws, a breach of legal obligations, and harm to consumers. The AI system's development and use (or lack thereof) directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Apple Plans AI Health Assistant Health+ in a Major Push into Healthcare

2025-03-31
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article describes Apple's development and planned deployment of an AI health assistant system that will analyze user health data and provide personalized advice. While the AI system is clearly involved and intended for medical use, there is no indication that any harm, malfunction, or violation of rights has occurred. The article focuses on the development progress, future plans, and potential capabilities of the AI system, without reporting any realized harm or incidents. Therefore, this event represents a plausible future risk scenario but not an actual incident or harm at this time. It is best classified as an AI Hazard because the AI system's use in healthcare could plausibly lead to harm in the future, but no harm has yet materialized.
Apple Again Embroiled in False-Advertising Controversy: iPhone 16 AI Features Face Class-Action Lawsuit

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article describes a realized harm where Apple's marketing of AI features that were not yet available misled consumers, leading to a class-action lawsuit alleging fraud and breach of consumer protection laws. The AI system's development and deployment status is pivotal to the harm. This is a direct violation of legal obligations related to truthful advertising and consumer rights, fitting the definition of an AI Incident. The harm is not just potential but has materialized in the form of consumer deception and legal claims.
Apple Revives the "AI Doctor"

2025-04-02
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the AI doctor) that will analyze personal health data and provide personalized health management plans. Although no harm has yet occurred, the AI system's role in health decision-making implies a credible risk of injury or harm to users if the system malfunctions or provides inaccurate recommendations. Since the AI system is under development and not yet causing harm, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is not unrelated because it clearly involves an AI system with potential health impacts.
Apple Is Developing a Health App Code-Named "Mulberry"

2025-03-30
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (an AI health coach) in a sensitive domain (healthcare) where incorrect advice could lead to harm to users' health. However, the article only discusses the development and planned deployment of this system, with no indication that harm has occurred. Therefore, it represents a plausible future risk of harm (AI Hazard) rather than an actual incident. The mention of past AI setbacks and lawsuits related to Siri is background context and does not indicate a current incident with this health app.
苹果重拾"AI医生"-钛媒体官方网站

2025-04-02
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the AI doctor) that will analyze personal health data and provide health recommendations, fitting the definition of an AI system. The event concerns the development and intended use of this AI system, with no current harm reported. However, given the sensitive nature of health data and the potential consequences of incorrect AI health advice, there is a credible risk that the AI system could plausibly lead to harm in the future. Since no actual harm has occurred yet, this does not qualify as an AI Incident. It is not Complementary Information because the article is not updating or responding to a past incident but announcing a new development. It is not Unrelated because the AI system and its potential impacts are central to the article. Thus, the classification is AI Hazard.