AI Act and disability-centred policy: how can we stop perpetuating social exclusion?
In the same way that AI systems may discriminate against people of a particular ethnicity or skin tone, the algorithms behind computer vision, facial recognition, speech recognition, law enforcement or autonomous systems may discriminate against individuals with facial differences or asymmetry, atypical gestures or gesticulation, speech impairments, or different communication patterns. This especially affects people with disabilities, cognitive and sensory impairments, and autism spectrum disorders. As a result, it may lead to inaccurate identification, discrimination or even life-threatening scenarios.
For instance, several hiring and job-search platforms have allegedly discriminated against older people and individuals with disabilities. Social networks are known to mistakenly identify people with disabilities as “non-human” and block their accounts due to differences in behaviour or action patterns. Automated systems may attach negative sentiment to “disability” keywords in resumes, exams or personal information. Banking systems may fail to properly process uploaded documents or automated video interviews. Along with discrimination, there are examples of directly life-threatening scenarios: police and autonomous security systems and military AI may falsely recognise assistive devices as weapons or dangerous objects, or misidentify facial or speech patterns.
Despite general advancements in this field, including the EU's Strategy for the Rights of Persons with Disabilities 2021-2030 and specialised frameworks such as Unicef's Accessible and Inclusive Digital Solutions for Girls with Disabilities, this area is not sufficiently covered in national AI policies. In particular, since the inception of the EU AI Act, disability organisations such as the EU Disability Forum, along with communities and public entities, have been vocal about the need to bring more focus to disability-specific cases, vocabulary and legal frameworks; to ensure fairness, transparency and explainability for these groups; and to address silos, negative scenarios and misuse of high-risk systems, as well as the prohibition of specific unacceptable-risk systems. The UN Special Rapporteur has raised similar concerns about autonomous systems, disability and warfare.
The EU AI Act and disabilities
AI systems mirror the societies that create them. Historically, individuals with disabilities have been excluded from the workplace, the educational system, and sufficient medical support. For instance, around 50-80% of people with disabilities are not employed full-time, while 50% of children with disabilities in low- and middle-income countries are still not enrolled in school. Urban public spaces meet only 41.28% to 95% of the expectations of people with disabilities, and only 10% of the population has access to assistive technologies. For cognitive disabilities, the level of discrimination is even higher. The unemployment rate among those with autism may reach 85%, depending on the country; among people with severe mental health disorders, it can be between 68% and 83%, and for those with Down's syndrome, 43%.
Algorithms may perpetuate this exclusion and discrimination further due to the lack of data on target populations, unconscious or conscious bias, or existing social practices. Bias can enter at different stages of an AI system's development and deployment, including data sets, algorithms, and whole systems. The potential for harm is even greater in the case of “high-risk” systems, such as policing, autonomous weapons or law-enforcement systems.
The European AI Act is a good example of an approach that categorises AI systems based on risk and related compliance frameworks. It defines four categories: unacceptable, high, limited and minimal risk. Unacceptable-risk systems are to be fully prohibited, or prohibited with narrow exceptions; they include government social scoring and real-time biometric identification systems in public spaces. High-risk systems require a conformity assessment and an audit of the system's safety, privacy, robustness and impact; they include critical infrastructure and transportation, educational and hiring platforms, private and public services, and law-enforcement systems that may interfere with people's fundamental rights.
Limited-risk systems require confirmation of the system's transparency and user consent: users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back. Examples include chatbots and conversational AI, emotion recognition systems, and face filters. Minimal-risk systems are less regulated and rely on voluntary codes of conduct. This category mostly covers systems that don't involve sensitive data collection or interfere with human rights, such as AI-enabled video games or spam filters.
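To make the tiering easier to reason about, the sketch below models it as a simple lookup from risk tier to example obligations. This is a minimal illustration, not the Act's actual legal tests: the class names (`RiskTier`, `AISystemProfile`), the `OBLIGATIONS` table and its wording are all hypothetical simplifications introduced here for clarity.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"  # e.g. government social scoring
    HIGH = "high"                  # e.g. hiring platforms, law enforcement
    LIMITED = "limited"            # e.g. chatbots, emotion recognition
    MINIMAL = "minimal"            # e.g. video games, spam filters

@dataclass
class AISystemProfile:
    name: str
    tier: RiskTier

# Hypothetical, simplified mapping of obligations per tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited (narrow exceptions only)"],
    RiskTier.HIGH: ["conformity assessment",
                    "audit of safety, privacy, robustness and impact"],
    RiskTier.LIMITED: ["disclose that users interact with a machine",
                       "obtain informed user consent"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}

def obligations_for(system: AISystemProfile) -> list[str]:
    """Return the illustrative compliance obligations for a system."""
    return OBLIGATIONS[system.tier]

if __name__ == "__main__":
    hiring_platform = AISystemProfile("CV screening tool", RiskTier.HIGH)
    print(hiring_platform.name, "->", obligations_for(hiring_platform))
```

The point of the sketch is that the same system can shift tiers depending on context and use; any real classification would follow the Act's definitions rather than a static table like this one.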
National AI policies and disability-centred frameworks
Although some of the discussion of the EU AI Act has centred on “limitations” and “regulations”, it's important to remember that the objective of the AI Act (frequently connected to the “Brussels Effect”) is both to facilitate and to regulate the development of AI systems, by providing clear rules and frameworks on how different stakeholders can come together to build more human-centred technologies and ecosystems. This logic has become the cornerstone of other national AI policies and frameworks, including recently presented AI policy updates from the governments of Japan and China and from the US White House.
As the EU AI Act's draft nears completion, we can summarise which areas require further updates and development to better address the needs of groups with disabilities, and how national and global policymakers can learn from it to ensure human-centred policy for their countries. These suggestions fall into two groups: parameters that facilitate assistive technologies and algorithms, and parameters that regulate them.
Steps to facilitate disability-centred AI systems
Legal status, stakeholders and caregivers
Technology addressing individuals with disabilities frequently involves not a single end-user but an “ecosystem” of stakeholders, such as family members, caregivers, counsellors, and educators, making it necessary to understand everyone's involvement. For instance, apps supporting individuals with autism might have two interfaces – one for the parent and another for the child.
Spectrums, comorbidities, gender and age groups
Disabilities are not monoliths but spectrums shaped by multiple parameters, including intersectionality, underlying health conditions, and socioeconomic status. For instance, individuals with learning disabilities more frequently experience mental health problems, girls are often misdiagnosed, and different age groups require different research approaches (e.g. Unicef's AI for Children framework).
Accessible vocabulary and knowledge frameworks
With the World Health Organization highlighting the need for an evolving digital health competency framework, similar concerns need to be addressed for accessibility technology. For instance, cognitive disabilities and autism-related conditions bring specific terminology around neurodivergent individuals, and similar changes are driven by the development of specific assistive technologies, such as social and emotional AI used for learning purposes.
Adoption, curriculums and stakeholders
As the adoption cycle becomes more complex, there is a greater need for stakeholder education and knowledge addressing children's rights, safety, and privacy. For instance, companies in the area of assistive robotics sometimes highlight their role as “learning companies”, since a critical part of their work is curriculum building and education.
Feedback loop and impact assessment
Although existing policies define risk categories, little in them addresses disability-specific risks or the “impact assessment” of particular systems and algorithms, such as emotion recognition or law enforcement.
Technical fixes and error automation
With more tech companies (e.g. Meta) using automation to identify and fix issues, a vicious cycle of “automating the automation” emerges. It's important to ensure genuine human involvement, including involving the target population in beta-testing and in identifying and inspecting existing issues.
Steps to regulate disability-centred AI systems: safety, risks and misuse
High and unacceptable risk systems
Although existing AI policies categorise and describe systems presenting higher risks for the general population, they typically ignore some truly high-risk scenarios affecting individuals with disabilities. These include police, security and military systems that may falsely recognise assistive devices as dangerous objects, and disability-specific cases of social discrimination by work or screening platforms.
Low-risk systems and emotion recognition
Emotion recognition systems are especially prone to bias. Although such systems are widely used as part of some assistive technologies, such as social AI and robotics, it's important to keep them safe and aligned with the Convention on the Rights of Persons with Disabilities.
Silos, echo chambers and human involvement
Forty per cent of adults with a debilitating disability or chronic condition report feeling socially isolated. As assistive technologies and the support ecosystems around them become more complex, it's important to provide policy guidance on human involvement and on how technology may serve as a “tool” rather than a replacement for authentic social interaction.
Misuse scenarios and abuse
Individuals with disabilities are nearly 2.2 times more likely to become victims of violence, social attacks, abuse or manipulation. Moreover, existing social network algorithms are known to discriminate against such individuals, identify them as “non-human”, or block their accounts due to differences in behaviour or action patterns. It's important to identify and address such scenarios.
Omissions, non-actions and accountability
Not only actions but also non-actions (omissions) may cause harm and should be considered. These include cases where a system creates silos, excludes particular groups, or relies on inauthentic or inconsistent data or sources that may lead to manipulative scenarios.
Cognitive, patient and disability data privacy
Some believe that susceptibility to depression can be inferred from social media data, and that online search data may make it possible to predict Parkinson's or Alzheimer's disease. In fact, patient data is frequently used in the areas of mental health, autoimmune disorders and other conditions. Unfortunately, this collection and processing may violate the data privacy of people with disabilities. For instance, the telehealth startup Cerebral shared millions of patients' data with advertisers.
Data creation and ownership
Not only data creation but also ownership should be protected. For instance, individuals with cognitive disabilities may be supported by conversational AI, adaptive learning or other AI-based tools in educational, work or creative processes. It's important to identify the stakeholders involved in this process, along with their rights and ownership.
Other criteria include disability-specific audits and assessments to ensure fairness, transparency, explainability and accountability.
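One small piece of such an audit can be sketched in a few lines. The example below is a hypothetical illustration: it assumes a hiring platform records whether applicants disclosed a disability and whether they advanced in screening, and it borrows the “four-fifths” (0.8) convention from employment-testing practice as a flagging threshold. Neither the data nor the threshold comes from the AI Act itself.

```python
# Minimal sketch of a disability-specific fairness check: compare selection
# rates between applicants who did and did not disclose a disability.
# The 0.8 threshold follows the "four-fifths" convention used in
# employment-testing guidance; it is an illustration, not a legal test.

def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of applicants in a group who received a positive outcome."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(disclosed: list[bool], not_disclosed: list[bool]) -> float:
    """Ratio of selection rates; values well below 1.0 suggest adverse impact."""
    base = selection_rate(not_disclosed)
    return selection_rate(disclosed) / base if base else 0.0

if __name__ == "__main__":
    # Hypothetical screening outcomes (True = advanced to interview).
    disclosed_disability = [True, False, False, False, True]          # 40% selected
    no_disclosed_disability = [True, True, False, True, True, False]  # ~67% selected

    ratio = disparate_impact_ratio(disclosed_disability, no_disclosed_disability)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Flag for human review: possible adverse impact on disabled applicants.")
```

A real audit would need far richer, intersectional breakdowns (disability spectrums, comorbidities, gender and age groups, as discussed above) and human review of flagged cases, but even a simple ratio like this makes the presence or absence of disability-specific checks visible.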
The way forward for disability-centred national policies
Along with other rapidly emerging regulations and policies globally, the European AI Act is an important example of legislation attempting to categorise AI systems based on risk and to provide related compliance frameworks and explanations. Such mechanisms aim both to facilitate and to regulate a human-centred policy approach to AI systems.
However, such regulatory documents are a reminder of the need to highlight the needs and representation of historically excluded and discriminated groups, such as individuals with disabilities. As more AI systems become involved in critical social processes such as law enforcement, education or the workforce, it becomes more urgent to address perpetuated institutional, structural and social biases, distortions and exclusions, and to ensure that disability representation becomes a cornerstone of any discussion or development.
Nothing about us without us.