Generative AI is transforming the economy in previously unimagined ways, with Goldman Sachs estimating that this technology could raise global GDP by $7 trillion (7%). Insurance is but one sector that will be impacted, with new products, services and opportunities for efficiencies being the most obvious benefits. For insight into the insurance implications of this technology, we asked AI oracle du jour ChatGPT-4 the top three ways it believes generative AI will impact policyholders.
Here’s what it said:
- By analyzing a broad range of data, AI could create personalized policies with coverage and pricing that accurately reflect each policyholder’s risk profile. This could lead to better coverage and potentially lower costs for many policyholders.
- The use of AI in claims processing could significantly speed up the time it takes for policyholders to receive payouts. Automated systems could review claims, assess damages, and approve payouts quickly and accurately.
- Generative AI could aid in understanding and interpreting policy language and terms. The AI could be trained on insurance law and policy documents to generate explanations and clarifications in plain language.
While there may ultimately be some opportunities for policyholders to use generative AI directly in their policy underwriting and insurance claims, our focus here is on the insurable risks posed by generative AI. Any policyholder starting to incorporate generative AI into its business operations (or whose employees may be doing so) should take a step back to evaluate its exposures and existing insurance programs and to assess any insurable gaps in coverage.
Organized by generally applicable lines of insurance, some of the chief exposures posed by generative AI include the following:
Professional Liability/Errors & Omissions Insurance
Using generative AI to answer questions or develop work product is of course not without risk. As has become well-known, current generative AI models may create erroneous or “hallucinatory” output that lacks a factual basis—and may even justify its output with additional fabricated sources. An attorney already learned this lesson the hard way when he filed briefs citing non-existent legal authorities provided by ChatGPT. The attorney was admonished by the Court and ultimately sanctioned for using the program and failing to review its work.
Hallucinations aside, generative AI can of course also make mistakes that lead to serious adverse consequences. For example, numerous companies are replacing their programmer employees with generative AI tools that write code. Coding errors can not only cause internal problems but also create vulnerabilities that wrongdoers could exploit to access the company’s network or those of third parties.
Given this potential for serious errors, any company actively using generative AI in its business operations should evaluate its existing coverage for professional liability, also known as “Errors & Omissions” (E&O) insurance, and its applicability to generative AI-related claims. For example, policyholders should confirm that work performed by generative AI falls within the scope of “professional services” covered under their E&O policy. Policyholders should also carefully evaluate any new proposed policies or renewals for exclusions or limitations related to generative AI. Further, every company should have an internal policy governing employee use of generative AI, regardless of whether the company is making intentional use of such technology, both to safeguard the company from unsanctioned employee use and to prepare for the questions insurers are beginning to ask in E&O policy applications.
Commercial General Liability/Media Liability Insurance
Generative AI is also raising issues regarding intellectual property infringement and publicity rights. Lawsuits have already been filed against generative AI platforms challenging their use of original works to train their AI and alleging improper use of copyrighted images or open source code. Such claims are likely to extend to companies that use allegedly infringing generative AI output.
Certain such claims may be covered under existing insurance policies in a company’s program. For example, Coverage B in many CGL policies covers “personal and advertising injury,” which is commonly defined to include offenses such as “infringing upon another’s copyright, trade dress or slogan in your ‘advertisement’” and “oral or written publication, in any manner, of material that violates a person’s right of privacy.” However, there could be claims relating to generative AI’s use of IP that fall outside the scope of those insuring agreements.
Another coverage that may respond to generative AI-related IP claims is Media Liability coverage—whether within a company’s cyber insurance policy or through a specialty Technology or Media Liability insurance product. Media Liability coverage may cover claims for, inter alia, copyright infringement, plagiarism, trademark infringement or invasion of the right of privacy or publicity.
Policyholders that are currently using or considering using generative AI in their business operations should evaluate their current policy programs, particularly their CGL and cyber policies, to determine what coverage may apply to IP claims, and should consider whether additional coverage is necessary.
Employment Practices Liability Insurance (EPLI)
Concerns are also being raised as to generative AI’s contribution to biased outcomes and its potential to produce offensive content. The company behind ChatGPT, OpenAI, has acknowledged that “valid” concerns have been raised regarding outputs deemed “politically biased, offensive, or otherwise objectionable.” OpenAI has made commitments to addressing these issues, as have other leading generative AI platforms. But risks remain that a routine “chatbot” exchange, or other AI-generated content, could go seriously off the rails.
While this is a more nascent issue than mistakes or IP theft, policyholders should consider, depending on how they use generative AI, their potential exposure to claims alleging, for example, disparate treatment of protected classes. There is also the possibility of employee claims, such as hostile work environment claims, if generative AI systems produce offensive content. Policyholders should review their EPLI insurance with such potential claims in mind.
Advances in generative AI open new opportunities and also bring new liability exposures. Policyholders should work with their brokers and experienced coverage counsel now to best position their insurance to cover possible claims arising out of generative AI. The insurance industry will also be reacting to developments in generative AI, and so it is paramount that companies evaluate their new and renewing insurance policies for changes impacting coverage of generative AI-related claims.