
Insurers to Pull Back From AI Liability Coverage

AI promises efficiency and cost savings, but it also introduces new legal questions about responsibility when algorithms go wrong.

Nov 24, 2025

Major insurers are reassessing their appetite for artificial intelligence–related risks, seeking to limit exposure amid concerns that flawed or misused AI tools could trigger lawsuits worth billions.

The Financial Times reports that in recent filings, AIG, Great American, and WR Berkley have asked regulators for approval to restrict their liability for claims tied to AI agents, chatbots, and other automated systems.

Their coordinated move signals a turning point in how the insurance market views the rapidly expanding AI sector.

Why insurers are reassessing AI risk

The filings come as businesses across industries deploy AI tools to handle critical operations, from automated customer support to financial advice and medical triage. These technologies promise efficiency and cost savings, but they also introduce new legal questions about responsibility when algorithms go wrong.

Errors by AI systems can propagate quickly and at scale: a faulty customer-service chatbot can provide harmful guidance to thousands of users; a biased automated underwriting tool can expose a company to discrimination claims; a malfunctioning AI broker could mismanage millions in client assets.

Insurers fear that such failures could spark class actions or regulatory penalties far larger than today’s typical liability claims. As one underwriter put it in early market discussions, AI risk represents a convergence of human reliance, opaque decision-making, and legal ambiguity—a combination capable of producing unprecedented losses.

Key players and industry impact

AIG, Great American, and WR Berkley are among the most established names in commercial insurance, which makes their retreat especially significant. When market leaders step back from coverage, smaller carriers often follow, shrinking the availability of protection across the industry.

Their actions suggest that insurers are uncertain whether traditional liability frameworks—built around human employees, predictable processes, and clear chains of command—can be applied to complex systems that learn, adapt, and sometimes behave unpredictably.

If regulators approve the requests, businesses that depend on AI tools may face reduced coverage, higher premiums, or policy exclusions that shift more risk back onto the companies themselves. The result could be slower adoption of AI technologies, particularly among mid-sized firms that cannot self-insure against major claims.

Unresolved questions about AI accountability

The insurers’ filings underscore how far the legal system lags behind technological change. Courts are still determining who is responsible when an AI system generates harmful or inaccurate outputs—the developer, the business using the tool, or the employees overseeing it.

This uncertainty complicates underwriting. Without historical data, insurers cannot accurately price the risk. Without clear legal standards, they cannot reliably determine whether a company is genuinely at fault.

The insurers’ move also puts pressure on lawmakers and regulators to accelerate work on AI liability frameworks. Businesses need clarity, insurers need predictability, and consumers need assurance that someone can be held accountable if AI causes harm.

Implications for businesses using AI

Organizations relying on AI are likely to face new strategic decisions. They may need to:

• Reassess whether existing insurance policies still protect them.
• Negotiate more specialized coverage with higher deductibles.
• Implement stronger oversight and documentation of AI systems.
• Prepare for increased regulatory attention, especially in consumer-facing applications.

For companies that build AI tools, the insurers’ filings could influence product design. Developers may need to invest more heavily in auditing, explainability features, and compliance frameworks to reassure risk-averse customers—and to keep their own liability in check.

What happens next

Regulators have not yet made a final determination on the insurers' proposals. But the filings themselves highlight a critical inflection point: the insurance industry, which has historically absorbed risk to enable innovation, is now signaling that unchecked AI development may outpace its ability to provide coverage.
