Sovereign AI for assistive and public technologies: How policymakers can reinforce critical digital capacities

Along with many AI roboticists and other EU tech experts, I recently joined an open study group organised by the Commission's DG CNECT on AI and robotics, focused on policy and the deployment of critical digital capacities. The group reflects the geopolitical context and the intensifying global AI race, echoing the EU's and the Commission's efforts to strengthen digital resilience and technological sovereignty. These efforts concern data, AI value chains, and supply chains, and include initiatives such as Eurostack, the AI Continent Action Plan, AI Factories, InvestAI, and the EU Chips Act. They also respond to action plans from the United States and China, spanning from competition in high-performance computing and foundation models to AI infrastructure, semiconductors, and export controls.

Our input helped expand this perspective to cover public and assistive systems, healthcare, education, and vulnerable and emerging humanitarian contexts, including robust, energy-resilient models that can operate offline, on-device, or in embodied systems, supported by sensors, 3D models, spatial data, and specialised chips and materials.

In fact, the taxonomies of assistive AI technologies continue to expand. The OECD's repository of AI systems supporting workers with disabilities in the workplace includes over 140 entries. Cities like Barcelona deploy NaviLens, a computer vision-based system that provides real-time guidance for visually impaired travellers, while Singapore's SiLViA instantly translates spoken and written words into sign language. Furthermore, the World Health Organization projects that the number of people needing assistive technology will grow from 2.5 billion to 3.5 billion (approximately 36% of the population) by 2050, yet only 10% currently have access to such technologies.

Why sovereign AI for assistive technologies is different

Public and assistive AI models operate in high-stakes environments, such as hospitals, homes, schools, and transportation systems, where even minor errors can harm vulnerable populations or make services inaccessible. Yet these systems often rely on fragile supply chains and datasets that may fail to represent specific contexts. Consequently, assistive AI systems introduce distinct requirements that often fall outside the scope of policy discourse.

First, assistive technologies operate in fragmented infrastructures and supply chains. Tools for captioning, speech recognition, cognitive support, or navigation often integrate components from multiple closed ecosystems—external APIs for speech-to-text, closed models for vision tasks, and overseas cloud services.

Second, they face compute and power constraints. Unlike enterprise AI running on high-end GPUs, assistive devices such as smart prosthetics, wheelchairs, or home robots run on low-power systems. They therefore prioritise offline-capable, energy-efficient AI architectures and reliable, cost-effective chips that perform well under hardware limitations, without constant cloud connectivity.

Third, there is a systemic data gap across text, visual, and 3D spatial data, as well as simulation environments. Foundation models powering visual or linguistic AI are often trained on large, generic datasets that lack representation of real-world assistive contexts or multilingual environments. In fact, few models are trained on data from people with disabilities, and most clinical datasets come from the USA and China, limiting representation of regional diseases, conditions and disability patterns.

Finally, assistive deployments are vulnerable to both physical and digital threats. Unlike software-only AI systems, assistive technologies require backup systems and safety measures, as device failures can physically harm users. They are also vulnerable to malfunctions, adversarial attacks or privacy breaches, exposing sensitive biometric and health data.

For instance, researchers documented incidents where adversarial stickers placed on crosswalk signs caused an AI-powered navigation app for blind users to misclassify 'STOP' signals as 'WALK' commands. In another case, a cyberattack on Change Healthcare disrupted a system processing 15 billion health transactions annually, affecting one in three patient records and costing $2.87 billion.

Sovereign capacities and recommendations for inclusive AI

Building sovereign, inclusive AI for assistive and public applications requires an approach that spans multiple technical and policy layers. It should encompass perception, reasoning, interaction, testing, safety, energy resilience, critical materials, interoperability, and procurement. Below are technologies that can be used for assistive purposes and suggested policy actions to optimise their use.

Vision-Language Models (VLMs) for accessibility and perception

VLMs enable real-time environmental understanding through combined visual and textual reasoning. For instance, they can deliver real-time audio descriptions of street obstacles for blind pedestrians or provide contextual visual information to support individuals with autism in crowded public spaces. Some 285 million people worldwide are visually impaired, and approximately 1% of the global population is autistic, yet very few computer vision datasets include accessibility-specific annotations.

Policy action: Governments could develop data spaces for accessibility-focused multimodal datasets, especially those representing low-resource languages and urban and indoor scenes, and fund the training of specialised annotators with accessibility expertise (e.g. sign language). In parallel, they should develop API standards to ensure interoperability and developer access. In addition, VLMs must support underrepresented EU languages and dialects through cross-lingual transfer learning.

3D foundation models for spatial reasoning and navigation

With 75 million people requiring wheelchairs daily, 3D models are critical for spatial understanding, object manipulation, and navigation for mobility aids, smart homes, and next-generation prosthetics. For example, 3D models enable smart home systems to understand how furniture placement affects wheelchair accessibility, or help prosthetic hands calculate the precise grip force needed for different object textures and shapes.

Policy action: Governments could assemble datasets of real-life assistive scenarios, with semantic annotations and standardised formats, that capture not just objects but affordances: whether and how someone with a particular physical or cognitive condition can use a space or item. For example, can a cabinet handle be reached from a wheelchair? Can a prosthetic hand grasp a toothbrush?
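As a minimal illustration of what a machine-readable affordance annotation could look like, the sketch below encodes a seated reach envelope and checks whether an object lies inside it. The class, function, and threshold values are hypothetical placeholders for this article, not figures from any accessibility standard.

```python
from dataclasses import dataclass

@dataclass
class ReachProfile:
    """Hypothetical reach envelope (metres) for a seated wheelchair user."""
    min_height: float   # lowest comfortable grasp height
    max_height: float   # highest comfortable grasp height
    max_forward: float  # maximum forward reach over an obstruction

def is_reachable(profile: ReachProfile, object_height: float, forward_offset: float) -> bool:
    """Return True if an object (e.g. a cabinet handle) lies inside the reach envelope."""
    return (profile.min_height <= object_height <= profile.max_height
            and forward_offset <= profile.max_forward)

# Example: a handle 1.1 m high, set back 0.3 m behind a counter edge
seated = ReachProfile(min_height=0.4, max_height=1.2, max_forward=0.45)
print(is_reachable(seated, object_height=1.1, forward_offset=0.3))
```

Annotating scenes with such affordance records, rather than object labels alone, would let navigation and manipulation models reason directly about usability for a given user profile.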

Compact and small language models (SLMs) for interaction

Language models—both compressed large models and purpose-built small models (under 7B parameters)—can power conversational agents that assist with healthcare triage, education support, or chronic care monitoring. For instance, compact language models can provide medication reminders that adapt to individual communication styles or offer cognitive support for people with dementia by maintaining contextual awareness of daily routines.

Policy action: Governments could prioritise investment in energy-efficient, offline-capable models optimised for low-power environments through compression techniques, fine-tuning for specific assistive applications, and specialised architectures. These approaches are well suited to hospitals, rural clinics, or home care devices with limited connectivity. Moreover, adaptive dialogue systems must be robust and support correction for fragmented input, which is common among users with speech impairments.
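One widely used compression technique is post-training quantisation. The toy sketch below applies symmetric int8 quantisation to a weight matrix, cutting its memory footprint fourfold in exchange for a small, bounded rounding error; it is an illustration of the principle, not a production pipeline.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantisation: float32 weights -> int8 codes plus one scale."""
    scale = float(np.max(np.abs(weights))) / 127.0
    scale = scale if scale > 0 else 1.0  # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = float(np.max(np.abs(dequantize(q, s) - w)))
print(w.nbytes // q.nbytes)  # 4: int8 storage is a quarter of float32
```

For on-device assistive workloads, the 4x memory saving translates directly into smaller models, lower memory bandwidth, and longer battery life.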

Embodied AI, haptics and tactile sensing for control

Embodied AI systems make intelligent decisions in the physical world. Assistive robotics, wearable exoskeletons, and responsive home environments rely on multi-sensor fusion, real-time control, and actuator feedback. Examples include prosthetic arms that safely grasp fragile objects, smart wheelchairs that navigate unexpected obstacles, or robotic kitchen assistants that adapt to users with limited mobility. These systems depend on specialised sensors—pressure sensors for prosthetic grip, LiDAR and proximity sensors for navigation, and tactile arrays for caregiving—requiring integrated hardware and software capabilities.
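A minimal sketch of the multi-sensor fusion idea, using hypothetical confidence-weighted range readings: redundant sensors vote on the obstacle distance, so a single glitching sensor cannot trigger an unsafe decision on its own. The sensor list, weights, and safety margin are illustrative assumptions.

```python
def fuse_distance(readings):
    """Confidence-weighted fusion of redundant range readings (metres).

    readings: list of (distance, confidence) pairs, e.g. from LiDAR,
    ultrasonic, and infrared proximity sensors.
    """
    total_confidence = sum(c for _, c in readings)
    if total_confidence == 0:
        raise ValueError("no confident reading available; fall back to a safe stop")
    return sum(d * c for d, c in readings) / total_confidence

def should_stop(readings, safety_margin=0.5):
    """Stop the platform if the fused obstacle distance is inside the margin."""
    return fuse_distance(readings) < safety_margin

# A glitching infrared sensor (1.8 m, low confidence) is outvoted by two
# confident close-range readings, so the wheelchair still stops.
obstacle = [(0.42, 0.9), (0.40, 0.6), (1.8, 0.1)]
print(should_stop(obstacle))
```

Real controllers layer filtering, fault detection, and hard safety interlocks on top of any such fusion step, which is precisely why the policy text calls for real-world testing of combined AI and physical systems.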

Policy action: Governments could back projects to develop modular robotic components, mobile robotics platforms, and open-source sensor systems that can be deployed in affordable, flexible ways. AI development should encourage real-world testing that combines AI decision-making with physical systems in assistive settings—fall prevention, assistance, or mobility support.

Simulation and training environments

Training AI systems in the real world is often costly, dangerous and unscalable. Simulation platforms and digital twin models enable scalable and safe training of assistive agents across diverse scenarios. These might include virtual hospital environments for training healthcare robots, simulated public transport scenarios for mobility aid testing, or emergency evacuation simulations for assistive navigation systems.

Policy action: Governments could encourage public simulation environments focused specifically on assistive applications: navigation, prosthetic usage, emergency response, or public transport support. These would be accessible through cloud and hybrid options with standardised evaluation metrics. Simulators must include varied scenarios and realistic human behaviour patterns to prepare AI systems for real-world deployment.

Edge AI and energy efficiency

Most assistive AI devices operate continuously on battery power: smart wheelchairs, environmental sensors, speech agents, or mobility robots that require all-day operation on limited power. Yet the power most AI chips consume is incompatible with these requirements. The constraint is even more critical in disaster zones or humanitarian contexts, where cloud connectivity may be disrupted and offline capability is mandatory.

Policy action: Governments could encourage the development of power-efficient AI models, including architectures that adapt model size to battery level or workload. Predictive power management and model offloading between edge and cloud ensure devices can balance real-time performance with long-term operation.
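A hypothetical sketch of such battery-adaptive model selection: the device steps down to smaller, more heavily quantised models as charge drops, while safety-critical tasks always get a fixed small model with predictable latency and power draw. The model names and thresholds are illustrative placeholders, not real products.

```python
def select_model(battery_pct: float, workload: str) -> str:
    """Pick a model variant from battery level and task urgency.

    Tier names and thresholds are illustrative assumptions.
    """
    # (minimum battery %, model variant), ordered from largest to smallest model
    tiers = [(50.0, "slm-3b-int8"), (20.0, "slm-1b-int4"), (0.0, "keyword-spotter")]
    if workload == "safety-critical":
        # Safety tasks use a fixed small model with bounded latency, regardless of charge.
        return "slm-1b-int4"
    for min_battery, model in tiers:
        if battery_pct >= min_battery:
            return model
    return tiers[-1][1]  # fallback for invalid (negative) battery readings
```

In practice, this policy function would sit beside predictive power management, and a cloud offload path could replace the smallest tier whenever connectivity and privacy constraints allow.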

Chips and materials for assistive applications

Assistive technologies typically run on mature semiconductor nodes (28 nm to 180 nm and above) that prioritise reliability, cost, and long product lifecycles over peak performance. More advanced nodes, down to 14 nm and below, are used in high-performance applications, including AI-enabled prosthetics with edge computing, smart wheelchairs equipped with computer vision, and advanced hearing aids with neural signal processing.

Beyond semiconductors, assistive devices require specialised materials distinct from mainstream electronics: soft robotics employs flexible polymers for safe human interaction, while haptic interfaces rely on piezoelectric materials for tactile feedback. Other specialised materials may include lightweight composites for prosthetics and energy-efficient displays for visual aids.

Policy action: Rather than aiming for universal chip or materials sovereignty, governments could adopt a tiered procurement approach that prioritises supply continuity, long-term availability, affordability, reliability, and certification for human-critical applications over peak performance. (related legislation – EU Chips Act, Critical Raw Materials Act)

Sandboxes, testbeds and interoperability

Assistive AI often falls into regulatory grey areas—spanning healthcare, accessibility, and consumer electronics. For example, a smart prosthetic may require approval as both a medical device and an AI system, necessitating coordination across multiple regulatory frameworks.

Policy action: Governments could develop sandboxes for safe experimentation with real users and assistive technologies in controlled environments, and regulatory testbeds for formal compliance testing. Sandbox environments should enable full-cycle testing with users and communities and cross-manufacturer collaboration. Regulatory testbed experiments must assess parameters such as model weights, dataset provenance, open-source component verification, cybersecurity protections, and adversarial content detection.

Both frameworks must mandate interoperability standards and privacy safeguards to prevent supply chain dependencies and ensure long-term public value. For instance, ensuring that speech-generating devices work seamlessly with navigation apps, or that mobility aids interface with smart home systems regardless of manufacturer. This includes compliance with accessibility standards and incident reporting mechanisms (related legislation – EU AI Act Articles 53-55).

Public AI infrastructure for all

Countries are actively developing their approaches to improve public infrastructure accessibility. The EU’s AI Act requires high-risk AI systems to meet accessibility standards under Article 16, ensuring people with disabilities are not excluded or discriminated against, while the EU Accessibility Act (effective June 2025) mandates that digital products and services, including AI-powered assistive devices, must be compatible with existing assistive technologies and meet specific accessibility requirements. In comparison, in the U.S., frameworks such as Section 504 of the Rehabilitation Act and IDEA already mandate assistive technology as a reasonable accommodation, providing a foundation for AI-enhanced systems.

Initiatives like ALT-EDIC help to create a common European data infrastructure for language technologies, while the European Health Data Space strengthens how individuals control their electronic health data, and both support the inclusive datasets needed for assistive AI development. Recently doubled budgets in the Horizon programmes allow for improved development of components, sensors and emerging systems. Finally, the AI Act (Article 15) and the Cyber Resilience Act (CRA) better address cybersecurity requirements for digital products, including IoT devices and embedded systems.

By addressing the technical complexity of assistive AI, nations can establish policies that serve those who need technology most, while building resilient, public-interest AI infrastructure for everyone.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.