Making digital regulation work – The crucial role technical standards play in implementing the EU AI Act
While revising key EU digital acts is essential, properly implementing existing laws such as the EU AI Act is equally important for meaningfully reducing the regulatory burden on EU companies. Without balanced technical standards and inclusive processes, European companies risk falling behind in the global digital race.

As governments and other stakeholders harmonise the EU digital acts to restore strength to Europe’s companies, they must also focus on properly implementing existing tech laws. The considerable implementation leeway the EU legislator has provided holds enormous potential that remains to be exploited.
Technical standards are the cornerstone of EU AI compliance and the single most important instrument for implementing the EU AI Act in practice. Developed by the European Standardisation Organisations CEN and CENELEC at the request of the European Commission, they translate the Act’s legal requirements for high-risk AI systems into concrete specifications. Through this mechanism, ambitious requirements for AI developers and deployers are being set across sectors.
Done right, they could serve as a blueprint for innovation-friendly global regulation of high-risk AI systems. Done wrong, they could effectively create a presumption of non-conformity and lock AI providers, deployers, and suppliers into costly, never-ending compliance efforts.
Yet, as Europe’s AI landscape is reshaped by the anticipated new rules and a more innovation-friendly EU policy framework, a critical tension emerges: can businesses keep pace with extensive compliance demands without innovation being stifled?
A study by the German AI Association and the General Catalyst Institute now reveals the challenges AI providers face with the forthcoming technical standardisation by CEN-CENELEC under the EU AI Act. Drawing on interviews with 23 leading EU AI providers, among them the defence specialist Helsing and the EU’s leading GPAI provider Mistral, the study outlines the current draft status, highlights technical and policy challenges, and offers corresponding recommendations.
Scope and Timelines
The current timelines for implementing harmonised standards once they are finalised are unrealistic for most organisations, especially smaller ones. Most standards will only be finalised by the end of 2025 or in early 2026, yet most of the EU AI Act’s high-risk requirements become applicable in August 2026. Realistically, implementing even a single technical standard can take 6 to 12 months, and CEN-CENELEC currently contemplates approximately 35 technical standards (some of them referenced standards) covering the EU AI Act. This sheer number has already drawn criticism from the European Commission, as seen in its latest assessment, which was leaked in February but has not yet been published. Reducing the number of contemplated standards and extending the AI Act’s implementation deadlines by at least 12 months are therefore essential.
Significant Costs
Additionally, implementing EU AI Act requirements through harmonised standards will likely create substantial annual compliance costs, starting at approximately €200k per year for SMEs and reaching significantly larger sums for corporates. The volume of required standards alone can overwhelm smaller companies that lack the resources to purchase, interpret, and implement each one. Without adequate support, SMEs face a decisive disadvantage, struggling to bear costs that larger competitors can absorb far more easily. Nor are these burdens purely financial: they also demand substantial investment in skilled personnel to navigate and operationalise complex technical documents. Reducing the number of standards and establishing well-funded, targeted subsidy programmes could provide much-needed relief.
Participation Imbalance
This resource imbalance is mirrored by an influence disparity. SMEs and startups have significantly less representation and voice in the committees developing these standards, while large multinational corporations, often non-European, dominate them and shape the standards that smaller firms must follow. This underrepresentation can produce standards that disproportionately favour established enterprises, hampering innovation and raising barriers to market entry for SMEs. Without diversified stakeholder engagement, there is a genuine risk that the standards will lack the practical insight needed to make them feasible and fair across different operational scales, and that they will fail to properly represent European values.
Implementation Crossroads
Looking ahead, the EU faces a critical balancing act. Through rigorous technical standards, the EU AI Act can establish Europe as a global leader in trustworthy AI. To realise this vision without undermining Europe’s vibrant startup ecosystem, however, policymakers must actively address the challenges and imbalances that current research has revealed for AI providers.
Ultimately, harmonised standards should not merely add another layer of bureaucracy; they should provide clear, fair, and practical pathways to safe AI innovation. If Europe’s policymakers can successfully manage the creation and implementation of these standards, they could set a global benchmark, not just for safe and ethical AI but for regulatory excellence that fosters both responsibility and innovation. The coming months and years will be crucial: Europe’s evolving approach to digital regulation, particularly in AI, could well define the global direction of AI governance, establishing standards and other implementation-based tools that shape technology’s role in society for generations to come.