As generative artificial intelligence (gen AI) becomes embedded in day-to-day commercial operations across virtually every sector, businesses are confronting a parallel rise in litigation and regulatory risk tied to AI development, deployment, and disclosure. Insurers are responding in kind. Perhaps in recognition that many traditionally worded liability policies may otherwise respond to AI-related claims, a growing number of carriers have begun introducing exclusions and endorsements aimed at narrowing or eliminating coverage for these exposures.
While these provisions are often drafted in sweeping terms, they are not necessarily the final word on coverage. It remains to be seen how insistently insurers will require language narrowing or eliminating coverage for liability arising out of AI risks; indeed, some insurers have begun issuing policies offering affirmative AI-specific coverage. It is also uncertain how courts will apply AI exclusions that, at least for some policyholders, would render their coverage illusory.
The Expanding Landscape of AI-Related Litigation
The legal risks associated with AI are multifaceted, and plaintiffs have advanced a wide array of theories in recent filings, including:
- Copyright and IP claims arising from the training of large language models on allegedly protected works (e.g., Bartz v. Anthropic in the U.S. District Court for the Northern District of California, which reportedly settled for $1.5 billion);
- Product liability and negligence claims alleging that AI systems caused real-world harm (e.g., Raine v. OpenAI, Inc., in California Superior Court, San Francisco County);
- Privacy and data-use claims challenging the scraping or use of user data for AI training (e.g., Reddit, Inc. v. Anthropic PBC, also in California Superior Court, San Francisco County);
- Antitrust claims alleging misuse of proprietary data in AI development (e.g., Chegg, Inc. v. Google LLC in the U.S. District Court for the District of Columbia);
- Discrimination and algorithmic bias claims alleging that AI systems produce discriminatory outcomes (e.g., Mobley v. Workday, Inc., No. 23-CV-00770-RFL, 2026 WL 636719 (N.D. Cal. Mar. 6, 2026), a class action in the U.S. District Court for the Northern District of California); and
- AI-related securities class actions, where plaintiffs allege misleading statements about AI capabilities or prospects (e.g., D’Agostino v. Innodata Inc. in the U.S. District Court for the District of New Jersey).
Given this breadth of potential exposure, it is unsurprising that insurers are attempting to cabin their risk through increasingly expansive exclusionary language.
The Rise of AI Exclusions
Some carriers—such as Berkley—have introduced what they characterize as “absolute” AI exclusions in D&O, E&O and fiduciary liability policies. These exclusions purport to bar coverage for claims that are “based upon, arising out of, or attributable to”:
- any actual or alleged use, deployment, or development of Artificial Intelligence;
- any statements, disclosures or representations concerning AI;
- any alleged violation of laws regulating AI or AI disclosures; and
- any demand or regulatory requirement to investigate, monitor or respond to AI-related risks.
Further, Berkley defines “Artificial Intelligence” extraordinarily broadly for purposes of this exclusion, as “any machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, including, without limitation, any system that can emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text, and other digital content.” Read literally, this exclusion could encompass activities for which many companies have used machine assistance for years, well before the recent explosion in gen AI tools.
Other insurers have adopted arguably more targeted but still sweeping provisions. For example, Hamilton Insurance Group has used language seeking to preclude coverage for claims “based upon, arising out of, or in any way involving” the use of “generative artificial intelligence,” defined broadly to include systems that produce text, imagery, audio or synthetic data in response to user prompts—including, but not limited to, tools such as ChatGPT, Bard, Midjourney or DALL-E.
Late last year, the Insurance Services Office (ISO) (which issues standard forms for use in insurance policies) introduced three AI-related endorsements, CG 40 47, CG 40 48 and CG 35 08 for optional use in commercial general liability policies.
According to ISO, the first endorsement operates as a full exclusion:
CG 40 47 Exclusion – Generative Artificial Intelligence. For use with the ISO Commercial General Liability Coverage Part, both the occurrence and claims-made versions, this optional endorsement excludes coverage under Coverage A and Coverage B with respect to bodily injury, property damage or personal and advertising injury arising out of generative artificial intelligence.
The second purports to exclude coverage for personal and advertising injury:
CG 40 48 Exclusion – Generative Artificial Intelligence (Coverage B). For use with the ISO Commercial General Liability Coverage Part, both the occurrence and claims-made versions, this optional endorsement excludes coverage under Coverage B with respect to personal and advertising injury arising out of generative artificial intelligence.
The third purports to exclude coverage otherwise available under the Products/Completed Operations Liability coverage part of general liability policies:
CG 35 08 Exclusion – Generative Artificial Intelligence. For use with the ISO Products/Completed Operations Liability Coverage Part, this optional endorsement excludes coverage under Section I with respect to bodily injury or property damage arising out of generative artificial intelligence.
Taken together, these endorsements appear designed to eliminate coverage for nearly any claim with a connection to AI. But exclusionary language, even when broadly phrased, is not interpreted in a vacuum and may give way to coverage depending on the facts of a particular claim. In fact, policyholders have powerful arguments at their disposal that even “absolute” AI exclusions do not bar coverage for all suits involving AI in some respect. Policyholders should carefully consider these arguments as applied to the facts of each claim and pursue coverage where it may be available.
Absolute AI Exclusions Should Be Interpreted Narrowly
First, courts consistently hold that coverage grants are construed broadly, while exclusions are interpreted narrowly and against the insurer. Although phrases such as “based upon,” “arising out of,” or “attributable to” can be read expansively, courts have rejected insurer attempts to apply them as automatic bars to any claim with a remote connection to excluded conduct. Courts have found coverage where:
- The alleged injury could have occurred independent of the excluded conduct;
- The complaint includes allegations that do not unambiguously fall within the exclusion; or
- The action presents “mixed” claims—some potentially excluded and others potentially covered.
Under well-established duty-to-defend principles, if any claim in the underlying action is potentially covered, the insurer generally must defend the entire action. Thus, in a suit alleging both AI-related conduct and non-AI-related wrongful acts (for example, traditional mismanagement, contractual disputes or other operational errors), an AI exclusion may not eliminate the insurer’s defense obligation. In assessing whether coverage is available despite an AI exclusion, the critical inquiry is not whether AI appears somewhere in the factual narrative of a claim, but whether the claim “plainly and clearly” falls within the exclusion.
AI Exclusions Cannot “Swallow” Coverage
Policyholders also retain arguments grounded in the doctrine against illusory coverage. Courts have often held that exclusions may not be construed so broadly that they “swallow” the coverage promised by the policy, as such a reading would defeat the reasonable expectations of the insured when purchasing the policy.
As AI becomes more deeply integrated into ordinary business functions—marketing, customer service, logistics, HR screening, financial forecasting—virtually every aspect of a policyholder’s operations may at least remotely involve some AI tool. If insurers’ preferred interpretation of “arising out of AI” were adopted in its broadest form, almost any claim in the not-too-distant future would arguably be connected, however indirectly, to AI use. Such a reading could eliminate coverage for most or all of the insured’s operations, defeating the very purpose of an insurance policy, which is intended to cover the insured’s operations broadly, subject to limited exclusions. Courts have been clear that exclusions may not “swallow” the promised coverage and render it illusory in this manner. This argument may be particularly compelling for technology companies or data-driven enterprises whose business models inherently involve AI; an exclusion that eliminates coverage for a company’s primary line of business may conflict with the insured’s reasonable expectations.
Earlier Policies May Provide Broader Coverage
AI exclusions are a relatively recent development, and many policyholders maintain multiyear “occurrence-based” general liability insurance programs that predate the introduction of these provisions. The earlier policies in these programs, which may be issued by the same insurer as later policies containing an AI exclusion, generally contain no language limiting coverage for AI-related claims. Policyholders may still face AI-related suits that implicate these earlier policies, and they may argue that the introduction of AI exclusions in later policies actually supports broader coverage—including for AI claims—under the earlier ones.
Courts have recognized that changes in policy language over time can be relevant to interpreting earlier forms. When insurers add new exclusions or endorsements narrowing coverage, courts sometimes view that addition as evidence that earlier policies were broader. In other words, if an AI exclusion was necessary in 2026, that may suggest that a 2022 or 2023 policy—lacking such language—did not already exclude AI-related claims. Policyholders facing AI-related litigation should carefully examine prior policy years, particularly if the alleged wrongful acts span multiple periods.
Practical Takeaways for Policyholders
- Review New and Renewal Policies Carefully. AI exclusions are evolving rapidly. Policyholders should scrutinize new endorsements and work with their brokers to negotiate policies without such endorsements, or with narrower definitions, carve-backs or clarifications.
- Be Thoughtful in Insurance Applications. Representations concerning AI capabilities, governance and controls should be scrutinized carefully, both in underwriting/applications and in later coverage disputes.
- Do Not Assume an AI Exclusion Ends the Inquiry. Mixed allegations, independent causes of injury and narrow-construction principles may preserve defense and indemnity rights.
- Consider the “Swallowing” Argument Where Appropriate. For businesses deeply integrated with AI, overly broad exclusions may conflict with the basic purpose of the policy.
- Assess Prior Policy Years. Policies issued before the advent of AI-specific exclusions may provide meaningful coverage for current litigation.
Even in this world of AI, the human element is crucial. Experienced coverage counsel can assist in evaluating these issues at placement, renewal and in assessing potential claims for coverage.