Caution Ahead on Generative AI

View this technology as a supplement, not a substitute

November 16, 2023

Like it or not, artificial intelligence (AI) will become a significant component of our industry. As this technology continues to progress, the question for many has become: should we trust AI, and to what extent?

Unfortunately, some lawyers have learned the hard way that generative AI platforms like ChatGPT may not be as trustworthy as some would like to think. Steven A. Schwartz, of the firm Levidow, Levidow & Oberman, “threw himself on the mercy of the court,” The New York Times reported, after “saying in an affidavit that he had used the artificial intelligence program to do his legal research—‘a source that has revealed itself to be unreliable.’”

Schwartz was representing Roberto Mata, who sued the airline Avianca claiming that he was injured when a serving cart struck his knee during a flight to Kennedy International Airport in New York. When Avianca asked a Manhattan federal judge to toss out the case, Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. However, unbeknownst to Schwartz, ChatGPT had invented every case.

Judge P. Kevin Castel said in an order that he had been presented with "an unprecedented circumstance": a legal submission replete with "bogus judicial decisions, with bogus quotes and bogus internal citations," according to The Times. Judge Castel ordered Schwartz and another attorney each to pay $5,000. Both lawyers were also required to send copies of the sanctions ruling to Mata within two weeks and to forward it to each judge ChatGPT falsely identified as an author of the six fabricated opinions, according to Courthouse News Service. Stories like this raise the question of how generative AI actually works and where it gets its information.

Defining Generative AI

AI is the term used to describe how computers can perform tasks normally viewed as requiring human intelligence, such as recognizing speech and objects and making decisions based on data. Machine learning is an application of AI in which computers use algorithms (rules) to learn from data. Machine learning adapts with experience. In other words, the algorithm can change as more data is fed into it.
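To make "adapts with experience" concrete, here is a minimal sketch in Python. It trains the same simple algorithm on progressively larger slices of a synthetic data set and reports test accuracy; the data, model, and figures are illustrative assumptions only, not how any commercial AI platform is actually built.

```python
# Illustrative only: the same learning algorithm improves as it is fed
# more data, which is the "adapts with experience" idea in a nutshell.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; no real claims or legal data involved.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, len(X_train)):  # progressively larger training sets
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n} examples -> "
          f"test accuracy {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Run as written, the accuracy typically climbs as the training slice grows, which is the whole point: the algorithm itself never changes, but the model it produces does.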

Generative AI is a type of artificial intelligence that uses machine learning algorithms to create new and original content such as images, videos, text, and sound. The most commonly used generative AI platforms, like ChatGPT, draw on a wide range of publicly available data. Some have been trained on massive data sets comprising many terabytes of text, including everything from books and articles to social media posts and online forums.
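The idea of producing "new" content from learned patterns can be illustrated with a toy example. The sketch below builds a crude word-level model from a few sample sentences and samples a fresh sequence from it; real platforms like ChatGPT use vastly larger models and data sets, so treat this strictly as a teaching analogy.

```python
# Toy word-level generator: it tabulates which word tends to follow which
# in the training text, then samples new sequences from that table. A
# crude analogy for how generative models produce "new" content from
# patterns in data; real systems are vastly more sophisticated.
import random
from collections import defaultdict

corpus = ("the claim was filed the claim was denied "
          "the appeal was filed the appeal was granted").split()

follows = defaultdict(list)            # word -> words seen to follow it
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word, output = "the", ["the"]
for _ in range(8):                     # sample up to 8 more words
    candidates = follows.get(word)
    if not candidates:                 # dead end: no observed successor
        break
    word = random.choice(candidates)
    output.append(word)
print(" ".join(output))
```

Notice that the output can be a sentence that never appeared in the training text. That is also why such systems can produce fluent statements that are simply untrue, as the Avianca brief demonstrated.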

Claims and legal professionals, by contrast, rely on curated data sets to ensure that content is accurate, relevant, and compliant with legal requirements. Even minor errors or inaccuracies in the content these professionals rely on can have significant consequences. The faux pas described above clearly illustrates the flaws of relying on generative AI: ChatGPT lacks human qualities such as experience, consideration, and judgment. Instead, it relies on data, treating the material it retrieves as reliable seemingly without any due diligence.

In the insurance industry, the focus is squarely on risk, claims, and litigation. While generative AI platforms offer solutions to some challenges, such as document management, analyzing loss events and trends, and evaluating total cost of risk (TCOR), organizations must carefully weigh the benefits against the potential risks of this still-experimental technology.
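As a point of reference, TCOR is often described as the sum of insurance premiums, retained losses, and risk-management administrative costs. The sketch below simply computes that sum; the component names and dollar figures are illustrative assumptions, and actual TCOR formulas vary by organization.

```python
# Hedged sketch: TCOR expressed as premiums + retained losses +
# risk-management administration. Component names and dollar figures
# are illustrative assumptions, not a standard or anyone's actual data.
def total_cost_of_risk(premiums: float,
                       retained_losses: float,
                       admin_costs: float) -> float:
    """Sum the major cost-of-risk components."""
    return premiums + retained_losses + admin_costs

tcor = total_cost_of_risk(premiums=1_200_000,
                          retained_losses=450_000,
                          admin_costs=180_000)
print(f"TCOR: ${tcor:,.0f}")          # -> TCOR: $1,830,000
```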

While AI has proven useful for improving automation and analysis in basic claims, there is genuine industry concern about its use in assessing high-severity cases and those with intricate, challenging liability scenarios. AI is not capable of making complex decisions that require human judgment, particularly in medical malpractice claims, where emotions such as empathy and sympathy often run high and are beyond AI's ability to evaluate.

New technologies such as AI have “tremendous promise for our industry but if misused could be certainly damaging and counterproductive,” Joe Powell, senior vice president for data and analytics with Gallagher Bassett Services Inc., said in a recent Business Insurance article. Powell drew distinctions between the different forms of AI, including “narrow” AI, or machine learning, where models are built or “trained” on data targeting a specific desired decision outcome; and generative AI, which can involve taking a “single, very general model and (applying) it to a whole host of use cases.” He said, “Those two are not substitutes for each other. It’s not like we’ve moved from one to the other. I think the new generative AI models are going to complement our narrow AI models in a lot of ways.” He stressed the importance of embracing the evolution of this technology in a responsible way. 

Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, warned in a December 2022 post on Twitter (now X), shortly after ChatGPT's release, that it would be a mistake to rely on the platform for "anything important right now." For now, most would likely agree that generative AI should remain a supplement to, not a substitute for, the necessary human element in the legal and insurance industries.

About The Authors
Multiple Contributors
Gary Leonard

Gary Leonard, MA, AIC, CCP, is executive vice president of Gallagher Bassett Specialty. gary_leonard@gbtpa.com

Mamie Stathatos-Fulgieri

Mamie Stathatos-Fulgieri is a partner at Vigorito, Barker, Patterson, Nichols & Porter. m.stathatos@vbpnplaw.com
