Artificial intelligence’s (AI’s) potential impact on the future of so many industries frightens some people and excites others—and the insurance industry is not immune. At CLM’s 2023 Focus Conference in Manhattan, in the session titled, “Apocalypse Now: Artificial Intelligence’s Impact on Insurance,” presenters Robyn Lauber, vice president, Berkshire Hathaway Specialty Insurance; Jonathan Meer, partner, Wilson Elser; and Joseph Stephenson, director, digital intelligence, INTERTEL, Inc., showed that AI can produce both negative and positive outcomes in the context of the insurance industry.
Compared to last year, noted Meer, there is much more discussion surrounding AI and its potential impact on the insurance industry, and it was ChatGPT that brought AI to the forefront of the conversation. “If it wasn’t for ChatGPT, people would not be thinking about AI today,” he said. “But ChatGPT is just one version of AI. AI has been involved in our lives for many years…it’s nothing new. But what ChatGPT did is called the democratization of AI. The fact is that AI [came into] the hands of the people…. That’s what made ChatGPT so unique.”
The Scary Part of AI
Many in the insurance industry are nervous about the potential lawsuits that could arise from AI—particularly due to a lack of trust and understanding of how it works. Stephenson explained some concerns from a fraud perspective: “What AI has essentially done is put the tools to manipulate the system in the hands of the common person,” he said. “In fraud, you have opportunistic fraud, and you have…organized fraud…[and] these are merging….
“Now you have a case where [you] can get online and there’s tons of free apps out there that use AI. [You] can generate fake faces. There’s a site called No Face…and what it does is it takes all the images off the internet it can, and it makes layers to create new images of people.”
Similarly, voices can be manipulated to sound like a completely different person over the phone. Stephenson played the audience an example from his smartphone of two different voices saying the same sentence. “In the insurance world, we’re pushing for everything online…. So, the ability to create fake policyholders…is through the roof. But now, if I do have to speak to somebody on the phone…it’s not difficult for me to…create [a new] persona.” He added that the software is free and allows users to add layers, refine the text, add pauses, and change the inflection to make the voice sound as natural as possible.
Lauber mentioned that, as humans, we all have implicit and explicit biases, which AI learns from us. There have been cases in which AI screened out certain resumes based on race, disability, and gender. For instance, resumes that used more “feminine language” or had names that sounded more “ethnic” were rejected.
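Lauber’s point about learned bias can be made concrete with a small sketch. The Python example below is not from the session; the data and the “feminine language” feature are synthetic stand-ins. It trains a toy screening model on historically biased hiring labels and shows the model reproducing that penalty for otherwise identical candidates.

```python
# Minimal sketch (not from the session) of how a screening model can
# inherit bias from historical hiring data. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: years of experience (a legitimate signal).
# Feature 1: a hypothetical proxy for "feminine language" in resume text.
experience = rng.normal(5, 2, n)
feminine_language = rng.integers(0, 2, n)

# Historical labels: past reviewers favored experience but also penalized
# the proxy feature -- exactly the bias we would not want a model to learn.
hired = ((experience + rng.normal(0, 1, n) > 5)
         & (feminine_language == 0)).astype(int)

X = np.column_stack([experience, feminine_language])
model = LogisticRegression().fit(X, hired)

# The model reproduces the historical penalty: identical experience,
# very different predicted outcome depending on the proxy feature.
print(model.predict_proba([[6.0, 0], [6.0, 1]])[:, 1])
```

Note that the model never sees gender directly; it learns the penalty from a proxy feature in the training data, which is typically how bias leaks into automated screening.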
Stephenson added, “It’s really important for the company that’s [looking] into using [AI or other technology]—[they] have to understand it, and [they] have to make sure that it’s an ethical use of data.”
These examples represent “the scary part of AI,” noted Stephenson. However, the presenters showed that the future is not all gloom and doom.
The Positives of AI
Stephenson described software that can pull attachments from emails—for example, medical records—and sort through them, examine data within them, cross-reference information, and identify patterns in writing to make it easier to detect potential fraud in the future. “Things that we do manually in that process that take days and weeks—or maybe [don’t] get done because we have too much of a case load—now, we get done in a matter of minutes or hours, depending on the volume.”
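As a rough illustration of the triage Stephenson described, the Python sketch below assumes a hypothetical setup in which claim emails are saved to disk as .eml files. It extracts attachments and flags byte-identical documents reused across different claims, one simple form of cross-referencing; production tools layer OCR, language analysis, and far richer pattern matching on top.

```python
# Minimal sketch of automated attachment triage, assuming claim emails
# saved as .eml files in a "claims_inbox" directory (a hypothetical setup).
import email
import hashlib
from email import policy
from pathlib import Path

seen = {}  # content hash -> first claim file it appeared in

for eml_path in Path("claims_inbox").glob("*.eml"):
    msg = email.message_from_bytes(eml_path.read_bytes(),
                                   policy=policy.default)
    for part in msg.iter_attachments():
        payload = part.get_payload(decode=True)
        if not payload:
            continue
        digest = hashlib.sha256(payload).hexdigest()
        if digest in seen and seen[digest] != eml_path.name:
            # The same document attached to two different claims is a
            # classic reuse pattern worth a human look.
            print(f"Possible reuse: {part.get_filename()} in "
                  f"{eml_path.name} matches {seen[digest]}")
        else:
            seen.setdefault(digest, eml_path.name)
```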
Stephenson also mentioned an AI program specifically used for identifying fraudulent documents. Although the technology is not yet at a point where he would trust it completely without double-checking its work, instead of sifting through 3,000 documents himself, he would now only need to review the 10 the program flagged. “I think I see the applications for AI on the insurance side being more of something where it’s going to do much better predictive analysis…. It’s going to speed up the claims process…free up reserves…[and] make better predictions on what your claims settlements have to be; where your reserves have to be.”
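That “score everything, review only what is flagged” workflow can be sketched with a generic anomaly detector. The example below is illustrative only: the features are synthetic, and this is not the program Stephenson referenced.

```python
# Hedged sketch of document triage: score 3,000 documents and surface
# only the most suspicious few for human review. Features are synthetic
# stand-ins (e.g., metadata inconsistencies, font count, edit flags);
# a production system would use far richer signals and a trained model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic feature matrix: 3,000 documents x 4 numeric features.
docs = rng.normal(0, 1, size=(3000, 4))
docs[:10] += 4  # plant 10 anomalous documents

scores = IsolationForest(random_state=0).fit(docs).score_samples(docs)

# Flag the 10 lowest-scoring (most anomalous) documents for manual review.
flagged = np.argsort(scores)[:10]
print("Review these document indices first:", sorted(flagged.tolist()))
```

In practice, the cutoff (here, the 10 lowest scores) is tuned so reviewers see a manageable queue without missing too many true positives.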
Meer noted that when people say, “AI is going to take my job,” his response is, “[I believe] it [will be] more like AI is going to take your job…to let you have the free time to focus on something else.” In other words, he explained, AI will cover the mundane tasks so human employees can shine within their expertise rather than replace people completely.
Government Regulation
At present, there is no comprehensive government regulation of AI, which leaves open questions around privacy, ethics, and more. However, Lauber discussed an executive order signed by President Joe Biden in October 2023 aimed at “mitigating the risks associated with AI while, at the same time, encouraging innovation.” The order followed the White House’s 2022 Blueprint for an AI Bill of Rights.
Biden also called upon Congress to pass bipartisan data privacy legislation, as there is currently no national data privacy statute. “States have passed privacy laws,” noted Meer, pointing to laws already in effect, such as the California Privacy Rights Act and the Colorado Privacy Act. “Connecticut’s just went into effect this summer; Florida’s and Indiana’s are going to go into effect in 2026. Montana’s, Oregon’s, Tennessee’s, and Texas’ are all going into effect next year. The closest thing right now that there is to regulating in connection with AI is protecting people’s privacy from what AI can do.”

There are also various state laws regarding employment, along with guidance issued by the U.S. Equal Employment Opportunity Commission (EEOC), intended to ensure that AI does not cause discrimination and bias during the hiring process. Furthermore, Lauber added, “Advancing equity and civil rights is a big concern of the [Biden] administration. Irresponsible uses of AI can…exacerbate discrimination and bias that is already present in the justice system.”