Civil society

For people with disabilities and designated groups, the Digital Services and Digital Markets Acts may complement AI and data policies to ensure algorithmic safety and accountability


Following the Bletchley Declaration, governments are looking to adopt a risk-based approach to algorithmic safety, focusing on specific areas, types, cases and affected populations. While there is general agreement, countries are still at different stages of implementation. Crucial steps include forming oversight entities, establishing the required capacities, implementing risk-based assessments and infrastructure, connecting existing legislation, directives and frameworks, and defining approaches to law enforcement, authorities, courts and cooperation amongst member states.

An even bigger tendency is to recognise that risks and impacts go beyond the algorithms themselves, reaching into national, economic and social strategies. In particular, the US AI executive order requires safety assessments, civil rights guidance and research on labour market impact, accompanied by the launch of the AI Safety Institute. In parallel, the UK government introduced its own AI Safety Institute and the Online Safety Act, echoing the European Union’s approach in the Digital Services Act, with a stronger focus on the protection of minors.

These moves were followed by efforts from multilateral agencies and institutions such as Unesco, the WHO and the OECD, which are working on area-specific guidelines addressing algorithms in education, healthcare and the labour market, as well as literacy- and capacity-oriented recommendations. Examples include Unesco’s AI competence frameworks for students and teachers and its recommendation to set 13 as the minimum age at which generative AI can be used. Moreover, Unesco’s recent action plan to address disinformation and social media harms, including those involving generative AI, collected responses from 134 countries, including many in Africa and Latin America. Similarly, governments of 193 countries committed to effectively implementing children’s rights in the digital environment through a text adopted by the United Nations General Assembly’s Third Committee. The OECD issued a report and technology repository reflecting how “AI supports people with disability in the labour market.”

This multifaceted approach to algorithmic governance, recommendations and frameworks highlights specific high-risk areas such as health, education, labour, policing, and the justice and legal systems, as well as impacts on minors and on designated and vulnerable groups.

For instance, it is known that models behind legal and judicial systems were trained without the participation of specific populations, leading to higher error rates for those groups. In some countries, government agencies have been accused of using data from social media without consent to confirm patients’ disability status for pension programmes. Immigrants tend to avoid medical examinations and tests for fear of being deported or facing unaffordable medical costs. Thus, statistics and public data sets simply do not reflect them.

Finally, algorithms may not properly identify individuals who lack limbs, have facial differences or asymmetry, speech impairments, different communication styles or gesticulations, or who use assistive devices. In another example, facial recognition systems may use ear shape or the presence of an ear canal to determine whether or not an image includes a human face. Yet this approach may fail for groups with craniofacial syndromes or without these features.

Since the initial proposal of the EU AI Act in 2021, the European Commission has received appeals and comments addressing algorithms and disability rights, the use of biometric, facial and emotion recognition systems, and cases affecting refugees and immigrants, including automated risk assessment and profiling systems.

However, the research and development of disability-centred AI systems is still a complex task from a technology and policy perspective. The complexity stems from disability’s intersectional nature; its condition-, age-, gender- and spectrum-specific parameters; and the multiple legal frameworks that must be involved to address and protect it properly.

This increases the role of non-AI-specific frameworks such as the Accessibility Act (with a further iteration expected in 2025), the EU Digital Services and Digital Markets Acts, the Convention on the Rights of Persons with Disabilities, equality and child-protection laws, and the involvement of specialised institutions and frameworks, thus going beyond merely forming generalised “Algorithmic Safety Institutes”.

In particular, the Digital Services and Digital Markets Acts cover the “gatekeepers” – big technology companies and platforms. These acts contain specific articles to address fair competition, minimise silos, and improve accountability and reporting systems. For user protection, they address algorithmic transparency, outcomes and user consent, and protection for minors and designated groups. They also look at identifying dark patterns and manipulation.

Can these frameworks, along with the upcoming Accessibility Act 2025, complement AI and data regulation to better protect groups with disabilities? And can they bring more transparency and accountability while minimising technological and economic silos?

AI systems, designated groups and regulation

The regulation of AI systems as it concerns designated groups or persons with disabilities is not confined to one legal document but spans a spectrum of legal frameworks, laws, conventions and policies. In particular, such cases can be regulated and affected by AI-specific acts, related data, consumer and human rights frameworks, memorandums and conventions.

For instance, assistive technology used to support dyslexia or autism can be affected by articles in AI and data regulation, by specific laws protecting children and designated groups such as the Convention on the Rights of Persons with Disabilities, and by country-specific accessibility, equality and non-discrimination laws.

In particular, the US is known for its Americans with Disabilities Act, the UK for its Equality Act, France for Law No. 2005-102 “for equal rights and equality of opportunities and the inclusion and citizenship of persons with disabilities”, and Germany for its General Equal Treatment Act. There are similar examples in other countries.

However, the emerging Digital Services and Digital Markets Acts and related regulations in other countries, such as the UK’s Online Safety Act, could have an even bigger impact as they aim to “create a safer digital space where the fundamental rights of users are protected and to establish a level playing field for organisations”. This has already been a focus of public attention and “misinformation” cases.

In particular, these acts introduce requirements for online platforms related to transparency, accountability and the explainability of the algorithms used; the use of “dark patterns”; the protection of minors; targeting and profiling; privacy and consent; manipulation; the introduction of “trusted flaggers” and moderators; the feedback loop between platforms and stakeholders; and designated digital services coordinators in member states. These mechanisms help to better address user protection, cross-member-state cooperation, investigations and legal frameworks, including the involvement of the relevant courts and authorities.

Digital Services and Digital Markets Acts – digital space and fundamental rights

The parallel rules and obligations introduced by these Acts are set to apply in full to all regulated entities from the beginning of 2024. The DSA addresses user protection, feedback loop mechanisms, transparency, accountability, flagging and mitigation. The DMA looks at the organisational and economic side, including reporting mechanisms, fair participation, competition and compliance.

The DSA and DMA will work with the EU AI Act and data regulation to better address digital platforms and their associated algorithms, for several reasons.

They designate high-influence, high-risk companies and platforms that may unfairly use their market advantage, create data silos and lack transparency. In particular, the DSA identifies four tiers of digital platforms according to the number of users. Currently, Tier 4 designates platforms with over 45 million users, 19 platforms in all. There are eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat; five online marketplaces, including Amazon, Booking.com, China’s Alibaba AliExpress and Germany’s Zalando; and two search engines, Bing and Google Search. Tiers 1-3 are overseen by member states and “competent authorities”, while Tier 4 is directly under the Commission’s supervision. At the same time, the DMA introduces the definition of “gatekeepers”, using market influence, business leverage and economic position, measured through turnover, number of users and business users, as thresholds to identify particular market players such as Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft.
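
To make this threshold logic concrete, the sketch below classifies hypothetical platforms by monthly active EU users. Only the 45 million figure comes from the DSA’s “very large platform” designation; the lower cut-offs, tier labels, names and data structures are illustrative assumptions, not the legal tests.

```python
# Illustrative sketch only: the 45 million threshold comes from the DSA's
# "very large platform" designation; the lower cut-offs, tier labels and data
# structures are hypothetical assumptions, not the legal tests.
from dataclasses import dataclass

DSA_VLOP_THRESHOLD = 45_000_000  # monthly active users in the EU

@dataclass
class Platform:
    name: str
    monthly_active_eu_users: int

def dsa_tier(platform: Platform) -> int:
    """Map a platform to the four-tier framing described above by user count."""
    users = platform.monthly_active_eu_users
    if users >= DSA_VLOP_THRESHOLD:
        return 4  # "very large" platforms, supervised directly by the Commission
    if users >= 10_000_000:  # hypothetical cut-off
        return 3
    if users >= 1_000_000:   # hypothetical cut-off
        return 2
    return 1  # overseen by member states and competent authorities

if __name__ == "__main__":
    for p in (Platform("ExampleSocial", 60_000_000), Platform("NicheForum", 250_000)):
        print(f"{p.name}: Tier {dsa_tier(p)}")
```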

They bring a legal and oversight framework parallel to the AI Act and data regulation. In particular, they aim to achieve similar objectives, including fair competition and accountability, protecting users’ rights and vulnerable groups such as minors, avoiding misinformation and manipulation, and maintaining transparency and user consent. They use the four-tier logic to identify levels of platforms, risks, monitoring, assessment and compliance measures. Similar to data regulation, they keep a role for the member states, specifically for local cases. Still, the Commission is directly involved with systemic issues, large-scale cases, anti-trust proceedings, the DSA’s Tier 4 and the DMA’s gatekeeper list.

They do not replace existing legislation. They do not define what is illegal, since what constitutes illegal content or actions is determined by other laws, either at the EU or at the national level. However, they propose a framework that helps identify risks, rules and actions. In particular, the DSA leverages the concept of “trusted flaggers” – specialised entities that may identify illegal content or actions. It also provides users with better mechanisms to complain to the platform, seek out-of-court settlements, complain to their national authority in their own language, or seek compensation for breaches of the rules.
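
As an illustration of how a platform might operationalise notices and trusted flaggers, here is a minimal sketch of a review queue in which trusted-flagger notices are handled with priority, as the DSA requires. The class names, fields and simple priority policy are assumptions for the example, not the Act’s procedure.

```python
# Minimal sketch of a notice-and-action review queue in which notices submitted
# by "trusted flaggers" are processed with priority, as the DSA requires platforms
# to do. Class names, fields and the queueing policy are illustrative assumptions.
import heapq
from dataclasses import dataclass, field
from itertools import count
from typing import Optional

@dataclass(order=True)
class Notice:
    priority: int    # 0 = trusted flagger, 1 = ordinary user
    order: int       # submission order, keeps the sort stable
    content_id: str = field(compare=False)
    reason: str = field(compare=False)

class NoticeQueue:
    def __init__(self) -> None:
        self._heap: list[Notice] = []
        self._counter = count()

    def submit(self, content_id: str, reason: str, from_trusted_flagger: bool = False) -> None:
        priority = 0 if from_trusted_flagger else 1
        heapq.heappush(self._heap, Notice(priority, next(self._counter), content_id, reason))

    def next_for_review(self) -> Optional[Notice]:
        return heapq.heappop(self._heap) if self._heap else None

if __name__ == "__main__":
    q = NoticeQueue()
    q.submit("post-123", "suspected scam")
    q.submit("post-456", "illegal content", from_trusted_flagger=True)
    print(q.next_for_review())  # the trusted-flagger notice is reviewed first
```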

They address specific AI-based mechanisms and algorithms, including recommendation engines, personalisation, profiling and targeting. In particular, they require an explanation of why a particular piece of content or product was recommended. They also ban targeted advertising on online platforms based on the profiling of children or on special categories of personal data such as ethnicity, political views or gender.
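
A minimal sketch of how such a restriction could be checked before serving an ad is shown below; the list of special categories, the age threshold and the User structure are assumptions for illustration, not the legal definitions.

```python
# Illustrative compliance check reflecting the restrictions described above: no
# targeted advertising based on the profiling of minors or on special categories
# of personal data. The category list, age threshold and User type are assumptions.
from dataclasses import dataclass

SPECIAL_CATEGORIES = {"ethnicity", "political_views", "religion", "health", "sexual_orientation"}

@dataclass
class User:
    age: int
    profiling_attributes: set[str]  # attributes an ad system would like to target on

def may_serve_targeted_ad(user: User) -> bool:
    """Return False when targeting would rely on a minor's profile or special-category data."""
    if user.age < 18:
        return False  # no profiling-based advertising aimed at minors
    if user.profiling_attributes & SPECIAL_CATEGORIES:
        return False  # no targeting on special categories of personal data
    return True

if __name__ == "__main__":
    print(may_serve_targeted_ad(User(16, {"sports"})))             # False: minor
    print(may_serve_targeted_ad(User(34, {"political_views"})))    # False: special category
    print(may_serve_targeted_ad(User(34, {"cooking", "travel"})))  # True
```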

They highlight less visible but high-impact violations such as misinformation, manipulation or privacy breaches. They also bring more focus to dark patterns in the interfaces of online platforms, referring to misleading tricks that manipulate users into choices they do not intend to make, which can also be connected to the use of generative AI.

They help not only to “regulate” but also to “facilitate” the digital ecosystem, giving shape to user rights, agency, oversight and reporting. Very Large Online Platforms and Search Engines (Tier 4) have already started to publish transparency reports under the DSA. At least five platforms participated in a “stress test” related to harmful content and actions. Users, experts and institutions have provided feedback related to flagging and feedback loops.

The DSA, DMA and other frameworks may bring more protection to the disability-centred algorithmic ecosystem

Disabilities present combinations of spectrums, conditions, stakeholders and technologies, making them difficult to regulate with general AI regulation alone. However, that regulation can be complemented by consumer, digital and data-protection acts overseeing violations behind the algorithms. In the case of the Digital Services and Digital Markets Acts and their future iterations, they could improve mitigation in specific areas:

Multiple stakeholders and the protection of minors. Assistive technologies and digital platforms for individuals with disabilities may be used by several actors, including caregivers. The DSA/DMA recognise different categories of actors, provide an opportunity to flag harmful actions and include additional protections for minors.

Invisible risks, misuse and manipulation. Generative AI can fuel speech-to-text or image-to-speech systems, education platforms for assistive accommodation, social protection and micro-learning, equality training and policing. However, similar algorithms can create deepfakes or manipulate users. The DSA/DMA’s articles highlight “dark patterns”, misinformation and other harmful techniques.

Privacy breaches, transparency and consent. Codes of conduct or consent mechanisms used by social networks and by learning, assistive and other digital platforms that lack transparency or may mislead users and their decisions would be regulated under the DSA/DMA.

Profiling and generalisations. Medical and social services are known to use “profiling”, grouping people by their interests instead of their personal traits, which may lead to discriminatory outcomes. Similar to the GDPR’s articles addressing profiling, the DSA/DMA introduce additional requirements to restrict targeting and profiling and to protect minors and designated groups.

Participation. Finally, algorithmic distortions and errors related to disabilities largely stem from historical and statistical underrepresentation. For instance, some facial recognition systems were created without acknowledging certain facial impairments. Models behind legal and judicial systems have been trained on publicly available data sets, overlooking the participation of particular groups and populations. Designating more specific actors and stakeholders, with active participation and agency, could bring more opportunities to protect groups with disabilities.

Accessibility Act and the way forward

Algorithms do not create biases but mirror the social and historical distortions present in society, statistics, existing practices and approaches. Mitigating algorithmic risks to designated groups is a complex process, one that increases the role of non-AI-specific legislation and goes beyond merely forming “Algorithmic Safety Institutes”.

Thus, the DSA and DMA logically complement AI and data policies by categorising the platforms, organisations and market players behind the algorithms. To do so, they weigh economic position and influence, the potential scale of risks and responsibility, and the corresponding oversight mechanisms.

The Acts present an opportunity to address non-algorithmic distortions better, including social and market factors, competition and silos. They also bring a critical component: the participation of different types of designated stakeholders and the feedback loop between developers and users, creating a consistent set of values and expectations and transparent reporting. 

Finally, to better address the needs of the population with disabilities, it is important to see algorithms, platforms and the topology of assistance and accessibility as connected ecosystems. This objective could be better achieved with the European Accessibility Act, a directive that aims to improve how the internal market for accessible products and services works by removing barriers created by divergent rules in EU member states. It covers the products and services identified as most important for persons with disabilities. The directive could be complemented by specialised frameworks, guidelines and repositories developed by Unesco, the WHO, the OECD and other multilateral organisations and institutions.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.