Why insurance companies should encourage solid AI risk management instead of excluding AI risks

AI is deployed in nearly every industry, and many of its associated risks are now sharply in focus. As these risks become more consequential, businesses are looking to insurance for coverage. Yet some major insurance providers, such as AIG, Great American, and WR Berkley, have recently sought permission from U.S. regulators to explore excluding liabilities tied to business use of AI tools.

AI systems can lead to unexpected operational failures, such as biased hiring processes, discriminatory credit decisions, or misdiagnoses in healthcare. Such outcomes may expose businesses to legal and regulatory sanctions. Other failures, such as a customer support chatbot giving hallucinated responses, may expose businesses to liability for damages. These risks are neither theoretical nor trivial. Companies deploying AI systems should be expected to act prudently and responsibly and not put their own brand, customers, or suppliers at risk – especially when most of these risks can be avoided with good governance practices.

Insurers are preparing for the risks of AI

Although no regulatory guidance or decisions have been finalised, and no exclusions have yet been formally implemented, these actions send a clear signal: insurers are preparing for an era in which AI risk will fundamentally reshape the commercial insurance landscape, and they are looking to hedge their own exposure.

Yet rather than exclude AI-related risks and leave businesses exposed, insurers should adopt a more proactive, constructive approach by requiring their clients to implement robust AI risk management and governance frameworks as a prerequisite for full policy coverage. This model, analogous to longstanding practices in cybersecurity insurance, offers win-win outcomes.

But there are uncertainties. Insurance companies are still struggling to quantify AI risk effectively. This leads to a lack of clarity about the scope of liability policies, exposure to unforeseen losses, and growing hesitation to underwrite coverage without exclusions. In the United States, insurance companies that provided coverage for financial instruments were burned in the 2008 financial crisis. These companies suffered heavy losses in their stock values, and some even had to be bailed out. They could not properly assess the risks of opaque, black-box subprime mortgage derivatives and instead became part of the domino line of systemic risk. Rather than demanding more transparency or responsible behaviour from the financial sector, many insurance companies contributed to the failure. In fact, an OECD report notes the “insurance sector played an important supporting role in the financial crisis by virtue of the role played by financial guarantee insurance in wrapping, and elevating the credit standing of, complex structured products and thus making these products more attractive to investors and globally ubiquitous.”

With AI, the risk is not limited to a single sector. AI products are adopted across every industry. Therefore, insurers need to do better. However, instead of complete exclusions, insurers can require their clients to adopt AI risk management practices in return for coverage of AI-based risks. This approach can protect the insurance company, the insured business, and individual customers simultaneously. This approach is not an alternative to regulation. It simply complements and operationalises existing protections where they exist, and also demands that businesses be more proactive in responding to risks when gaps arise.

Why the cybersecurity model makes sense for AI insurance

The precedent set by cybersecurity insurance is instructive. Insurers require clients to implement robust cybersecurity controls. Apply for a cybersecurity policy today, and you will receive a lengthy questionnaire about your organisational processes and safeguards. Premiums are reduced for organisations with strong safeguards and mature cybersecurity postures. Today, cyber insurance frameworks are central motivators for improved security across countless enterprises. These requirements have improved protection across the whole industry, lifting best practices for everyone.

A similar trajectory for AI risk insurance is feasible and essential. Like cyber risk, AI risk can be systemic. One small flaw in an AI model could ripple across many decisions. Generative and now agentic AI are adding new challenges to the risk spectrum. By demanding rigorous risk management, insurers reduce the likelihood of systemic, widespread claims stemming from shared vulnerabilities. AI risk management involves data quality assurance, model documentation, ex-ante and ongoing risk assessments, bias detection, adversarial testing, compliance monitoring, and incident response planning. These are analogous to basic safety protocols in other industries (think seatbelts and smoke detectors) but tailored to AI products. They are considered best practices for data scientists, developers and governance employees.

Existing AI governance and risk management frameworks can lead the way

Insurance companies do not need to reinvent the wheel. There are established AI governance and risk management frameworks that they can prioritise. OECD AI Principles, adopted in 2019 and revised in 2024, provide a great starting point for a risk-based approach, and the U.S. contributed significantly to these Principles. The first Trump administration published guidance on governing AI development in the private sector and on federal AI use. The National Institute of Standards and Technology published the NIST AI Risk Management Framework (AI RMF) – a voluntary risk framework now globally acknowledged for its guidance on how to incorporate trustworthiness into the design, development, use, and evaluation of AI products, services, and systems. The Federal Reserve’s SR 11-7 Guidance on Model Risk Management, in place since 2011, has been effective and is widely adopted. The National Association of Insurance Commissioners (NAIC) even published a bulletin on the use of AI in insurance, offering governance insights. NAIC also recently met with OECD partners to discuss oversight of third-party AI systems and enhancing data privacy.

Businesses can start with the basics and mature their practices over time. Just as cybersecurity insurance demands minimum requirements, AI risk management for insurance purposes could start with documentation or other evidence of governance mechanisms in place.

AI risk management-based insurance is good business for everyone

By tying insurance coverage and pricing to demonstrable AI governance efforts, insurers effectively nudge businesses toward trustworthy AI deployment. Insurers can help their most innovative clients stand out by making robust AI governance frameworks a badge of trustworthiness and resilience.

For insurers, requiring AI risk management bolsters the ability to assess and underwrite risk credibly. Reviewing client documentation, testing procedures, governance structures, and mitigation protocols can also provide insurers with better insights into risks.

Sophisticated AI risk management not only reduces the likelihood of claims but also enables more accurate pricing, potentially rewarding advanced compliance with lower premiums. This approach is familiar to insurance professionals who price auto policies by considering factors such as driver history and safety features.
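To make the pricing analogy concrete, the discount logic could work roughly as follows. This is a minimal, purely illustrative sketch: the control names, base premium, discount rates, and cap are all assumptions for the sake of the example, not actuarial guidance or any insurer's actual method.

```python
# Hypothetical sketch: adjusting an AI liability premium for governance maturity.
# All names, weights, and the discount cap below are illustrative assumptions.

BASE_PREMIUM = 10_000  # annual premium before governance adjustments (USD)

# Illustrative governance controls and the discount each earns when evidenced,
# loosely mirroring how auto insurers credit driver history and safety features.
GOVERNANCE_DISCOUNTS = {
    "model_documentation": 0.05,
    "bias_testing": 0.07,
    "incident_response_plan": 0.04,
    "ongoing_monitoring": 0.06,
}

def adjusted_premium(controls_in_place: set[str]) -> float:
    """Apply a discount for each documented control, capped at 20% in total."""
    discount = sum(
        rate for control, rate in GOVERNANCE_DISCOUNTS.items()
        if control in controls_in_place
    )
    discount = min(discount, 0.20)  # cap the total governance discount
    return BASE_PREMIUM * (1 - discount)

print(adjusted_premium({"model_documentation", "bias_testing"}))  # 8800.0
```

The point of the cap is that governance reduces, but never eliminates, residual AI risk; an insurer would calibrate the actual weights from claims experience rather than fixed constants like these.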

The governance landscape shifts with new legislation and the push-pull of AI politics. However, insurance companies can help set baseline safeguard expectations while protecting themselves against an increasingly diverse set of risks. Insurers should not exclude AI risks; they should evolve their practices and foster trustworthy AI for all.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.