While it may seem hard to believe today, there were those who doubted the importance of the internet in its early years. For example, economist Paul Krugman, in a 1998 piece for Red Herring magazine, suggested that, by 2005, the internet’s impact on the economy would be no greater than the fax machine’s. And he was not alone in his assessment. The internet, of course, ended up transforming virtually every corner of the economy.
Today, Grant Little, CEO and Co-Founder of Precedent, believes AI technology may be on a similar trajectory to the internet and may become just as critical to businesses across industries. Little recently joined CLM President Susan Wisbey-Smith for an online discussion about AI technology.
Emphasizing that AI is not a passing trend but a pervasive technology that every company will eventually leverage, Little said, “Everyone is still trying to get their arms around AI and understand how to best leverage it. It strikes me as similar to when the internet started, and it was this new thing that only the computer science folks and people that were on the more ‘nerdy’ side were really into initially. And it transformed everything. With the internet, there were a lot of folks that guessed wrong on that.”
Little cited OpenAI's ChatGPT as an example of AI's potential, highlighting its ability to perform tasks like writing letters and checking grammar. “I think that was the first exposure where general consumers understood: ‘Well, this is a new thing that can do a lot of powerful [tasks] that I would have not guessed it could do.’”
Security
Addressing security concerns related to AI, Little drew parallels to initial apprehensions about cloud technology. He noted that the robust security protocols implemented by major cloud providers eventually alleviated those concerns, and he predicted AI will follow the same path. “It was quickly that the cloud's security posture and technology surpassed what any real private company could do themselves,” noted Little. “So now virtually every company is in the cloud in some fashion or some capacity, and I think it'll be no different with AI.”
He cited some of the security protocols that major companies have around their AI models and said they are “very secure” and go through a great level of scrutiny. He also emphasized the need for companies to conduct their own research, given the evolving AI landscape.
Use Cases
Asked how one could identify good use cases for AI, Little advised considering factors such as the problem's solvability with traditional programming. “A lot of [problems] can be solved with just traditional programming and don't require an AI-based solution. And so, I would say if you can solve a problem with traditional programming, that's the way you should go, because AI inherently has more complexity to maintain and establish.”
He also said companies should examine the likelihood of process changes, “because if you build your AI solution around something that is changing a lot over time, then you're going to have to continually be modifying your AI” to account for that.
“And the third thing I would say is, how comfortable are you with the solution making mistakes?” Little added. “How comfortable are you dealing with the fact that it's going to have false negatives where it under-identifies an answer, or it does false positives where it over-identifies an answer?
“And if it makes a mistake,” Little continued, “are you able to easily spot it, or is it very hard to actually understand when it's been wrong?”
Little said the ability, or lack thereof, to understand when a model is wrong is why some decision-makers get uncomfortable with “black box models,” and noted that other tools, such as Perplexity, include source citations that could alleviate such concerns.
“And then the last thing I'll say is, if you're going to do this with a lot of your own internal resources, do you have the [subject matter] experts to train the system” and accurately identify what is right and what is wrong?
Buy vs. Build
Asked how companies should consider whether to buy or build their AI tools, Little said, “If you are really adamant that you want to build your own AI system, just go in knowing it's going to be a much larger investment than anyone estimates.” He recommended buying solutions unless a 10X return over the estimated build cost is anticipated.
“If you're still saying, ‘I want to build it myself,’” Little added, high-quality data will be needed. He explained that “high quality” does not necessarily mean more data. “I know that folks think, ‘Well, if it just can read more and more data, then that makes it even more accurate.’ And actually, that can be the inverse,” said Little.
He also spoke to the complexity of model training, and to the engineers, data scientists, and analysts needed to build and refine such a system, concluding, “For all of those reasons, I tend to lean toward buying solutions.”
Little also shared insights on selecting AI vendors, emphasizing the importance of assessing the company's stability, understanding the accuracy of their underlying models, and considering their vision for the future of AI. He stressed the value of vendors who genuinely seek to solve problems and are willing to customize solutions.
Reflecting on his experience selling AI solutions to insurance carriers, Little noted their reluctance to adopt new technologies and processes. “Carriers are really hard to sell to,” he noted. “They like to build their own solutions. They also are leaning heavily on their existing solution providers...to build some of that technology for them. ...And carriers, for that reason, are somewhat reluctant to adopt new technologies and processes, and that puts the carriers at a disadvantage because, by their very [nature], they are built to avoid risk.”
That disadvantage is compounded, Little cautioned, because “on the plaintiff side, they're much more risk tolerant, and they're much more willing to explore and try [new] things. And because of that, a lot of the venture capital money has flooded over to that side.”