The Doctor Is Operational

AI in health care and the new liability landscape

August 18, 2020

Artificial intelligence (AI) has captured the world’s headlines, perhaps in health care more than in any other field. From surgical robots to diagnostic systems that can detect disease processes more quickly and accurately (such as identifying diabetic retinopathy by studying images of the eye), AI has the potential to transform health care delivery and improve patient outcomes.

While AI is designed to curb human error through automated systems, machine learning, and neural networks, it introduces a new set of risks into the health care liability landscape. When a health care provider uses AI in treating a patient and the outcome is poor, we anticipate liability suits against both the provider and the AI software company.

What Is Artificial Intelligence?

The term “artificial intelligence” is not universally defined. When people talk about AI, they usually mean “machine learning,” a subset of AI that uses algorithms to detect patterns in data.
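
For readers unfamiliar with the mechanics, the sketch below makes “detecting patterns in data” concrete: a model is fit to labeled examples and then applies the learned pattern to new cases. It is written in Python using the open-source scikit-learn library; the data is synthetic and the three “clinical measurements” are hypothetical, not drawn from any real application.

    # A minimal sketch of machine learning as pattern detection.
    # All data here is synthetic; a real clinical model would require
    # far more features, validation, and regulatory review.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # 200 patients, 3 hypothetical measurements
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # the underlying pattern in the data

    model = LogisticRegression().fit(X, y)         # "learn" the pattern from labeled examples
    print(model.predict(rng.normal(size=(5, 3))))  # apply it to previously unseen cases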

AI’s transformative potential stems from its ability to integrate, parse, and synthesize large quantities of data. AI systems are able to identify patterns and links at a much faster rate than human health care providers. Thus, AI can play a critical role in diagnostics, customizing individual treatment plans, and keeping providers current on the latest medical research.

As highlighted in a recent New York Times article, “A.I. Shows Promise Assisting Physicians,” many organizations, including Google, “are developing and testing systems that analyze electronic health records in an effort to flag medical conditions such as osteoporosis, diabetes, hypertension, and heart failure.” Researchers are also developing technology to automatically detect signs of disease and illness in MRIs and X-rays.

Because of the nature of machine learning, a health AI application is only as good as the training data it works with. If the data used to train the algorithm is flawed, limited, or biased, the outcome will be imperfect. Potential problem areas include the clinical data fed into the algorithm, the algorithm itself, and the population within which the algorithm is used.
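
The sketch below illustrates the point under stated assumptions: a model is trained only on a synthetic “population A,” in which a biomarker predicts the outcome one way, and is then applied to a synthetic “population B,” in which the relationship differs. The populations, effect sizes, and resulting accuracy gap are illustrative assumptions, not clinical findings.

    # A hedged illustration of how unrepresentative training data degrades
    # results for patients outside the training population. Entirely synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)

    def make_population(n, effect):
        """Outcome depends on a biomarker, but the strength/direction varies."""
        X = rng.normal(size=(n, 2))
        y = (effect * X[:, 0] + X[:, 1] > 0).astype(int)
        return X, y

    X_a, y_a = make_population(500, effect=2.0)    # the only population in the training data
    X_b, y_b = make_population(500, effect=-2.0)   # a population the model never saw

    model = LogisticRegression().fit(X_a, y_a)
    print("accuracy on population A:", accuracy_score(y_a, model.predict(X_a)))
    print("accuracy on population B:", accuracy_score(y_b, model.predict(X_b)))
    # Accuracy on B drops sharply: the pattern learned from A does not transfer.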

New AI Legal Liability Landscape

Assume a health care provider is utilizing AI-enabled clinical decision-support software to treat a Black patient for heart disease. Now suppose the AI software recommends a high blood pressure/cardiovascular disease treatment plan that proves ineffective, the patient’s disease process progresses, and he dies from his condition. Upon review, the health care provider learns that the data feeding the algorithm contained within the AI software originated from clinical trials conducted on white subjects only.

While the health care provider followed the recommendation of the software, the data underpinning the algorithm was arguably flawed as it was derived from a distinct clinical population that responds more favorably to a different treatment regimen than what was prescribed for the patient. Let’s assume the patient’s family files a wrongful-death lawsuit for alleged negligence in the provision of care and seeks to recover monetary damages. Who will be the target defendants in this litigation?

The most obvious target would be the treating physician through a theory of medical malpractice liability. While the standard of care in the AI context is evolving, a central issue for the case against the physician will likely be whether relying on AI clinical-software output was a breach of the standard of care. As with any new technology, medical providers will need to understand the benefits and limitations of each health AI application.

Other critical factors the court or jury might consider include whether the provider substituted the AI clinical-software output for her own medical judgment, the provider’s familiarity with the data feeding the algorithm, and her confidence in its application to her patient population. We anticipate health care professionals will experience a learning curve similar to that of surgeons with the advent of laparoscopic/robotic surgery as they acclimate to incorporating AI effectively into their medical practices.

In addition to the individual physician, the physician’s employer (either a group or hospital) could face vicarious liability for the acts of its employed physician. It could also encounter liability independent of the physician for failing to sufficiently vet or investigate the AI company that relied on flawed data, and for endorsing the use of the AI software by its providers before it had been put through a rigorous credentialing process. Liability will depend on many factors, including whether the hospital directed the physician to utilize the software or whether the physician independently decided to adopt the program and abide by its outputs.

The AI software company itself could face a litany of claims, including products liability, false advertising, and negligent training and supervision. Liability against the software company will often require a determination as to whether the AI software and its “algorithmic bias” constitute a “product,” and are therefore subject to product liability law, or a “service,” which would require analysis under a tort theory of liability. In an article, “What Is Product Liability?” FindLaw notes that a plaintiff in a product liability case typically needs to prove two things: that the product that caused the injury was defective, and that the defect made the product unreasonably dangerous.

Since health care software has been generally regarded as a support tool to assist providers in making treatment decisions, courts have so far been reluctant to apply product liability law to software developers, as outlined in “Artificial Intelligence in Health Care: Applications and Legal Implications,” published in The SciTech Lawyer. But Nature Biomedical Engineering’s “Artificial Intelligence in Healthcare” notes that this might change in the future when it comes to “black box” algorithms where even the physician has difficulties in interpreting the results reached by the AI.

In addition, the competition among AI companies to be first to market with product capabilities is intense, thus setting the stage for plaintiffs’ attorneys to argue that the AI technology has not been adequately vetted and that AI companies put profits over people, prematurely releasing a defective product to the health care community.

AI software companies will also likely be defending against claims of negligent misrepresentation and/or false advertising. AI company websites may include statements advertising their products as providing “new levels of diagnostic certainty” or as “proven to be effective.” Plaintiffs will cite such promises as treatment and outcome guarantees in support of false-advertising allegations.

Because AI works through machine “learning,” the program is continually evolving. Thus, there may be a duty to continuously test the algorithm to ensure that its results remain sound as it learns from additional data. Plaintiffs will likely argue that software companies failed to test their algorithms in a timely manner and to make necessary updates.
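
What satisfying such a duty might look like operationally is sketched below: after each retraining cycle, the updated model is re-scored against a fixed, clinically validated benchmark set, and any regression below a minimum threshold triggers a rollback. The benchmark set, the 0.90 threshold, and the rollback step are illustrative assumptions on our part, not regulatory standards.

    # A minimal sketch of periodic revalidation for a continuously learning model.
    # The threshold and benchmark data are hypothetical assumptions.
    from sklearn.metrics import accuracy_score

    MIN_ACCEPTABLE_ACCURACY = 0.90  # hypothetical release criterion

    def revalidate(model, X_benchmark, y_benchmark):
        """Return True if the retrained model still meets the benchmark bar."""
        score = accuracy_score(y_benchmark, model.predict(X_benchmark))
        print(f"benchmark accuracy after update: {score:.3f}")
        return score >= MIN_ACCEPTABLE_ACCURACY

    # After each retraining cycle:
    #   if not revalidate(model, X_benchmark, y_benchmark):
    #       revert to the last validated model and investigate before redeploying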

Plaintiffs may also pursue AI software companies for failing to ensure that the providers leasing their software are trained in how to incorporate the technology into their medical practices. How will AI companies ensure that health care professionals undergo formal training on the diagnostic capabilities, and the limitations, of their AI software applications? Is there a legal duty to do so? AI software companies should consider investing in health care training both to promote patient safety and to avoid allegations similar to those levied against other manufacturers of new health care technology.

Insurance Considerations

In light of the evolving AI liability landscape, health care providers and the companies specializing in AI health care diagnostics and predictive analytics should evaluate coverage under their current insurance programs.

Since it is unclear how the courts will analyze health care AI liability, AI companies may want to ensure that their current insurance programs respond to patient bodily injury claims pursued under either product or tort law. Typically, a product liability policy will cover damages due to a claim alleging bodily injury or property damage caused by an occurrence. Although policy language varies by company, the International Risk Management Institute’s (IRMI) glossary of terms defines “occurrence” as “[A]n accident, including continuous or repeated exposure to substantially the same general harmful conditions.” In addition, IRMI notes that an occurrence cannot result in injury or damage expected or intended by the insured.

Does a claim against an AI company constitute an “occurrence”? If so, when is the occurrence deemed to have occurred: the date the patient suffers the bodily injury or adverse event, the date the software runs and produces the erroneous treatment recommendation, or the date the AI software incorporated the “biased” clinical data into its algorithm? The occurrence date is key to determining which policy period is potentially implicated and how any policy retroactive dates apply.

Medical professionals, too, will want to consider the impact of AI when assessing the adequacy of their insurance programs. Medical professional liability insurance for physicians and other health care entities generally covers claims for “wrongful acts” in the rendering or failure to render professional services. Should the physician’s use of AI-enabled software be considered a “professional service”? And, similar to the occurrence issue, when will the wrongful act be deemed to have occurred?

Allocation issues arise as well. Will AI software companies require providers to indemnify and/or hold them harmless if they are named as co-defendants with the provider in a patient bodily injury suit? How will a court or jury apportion liability between a co-defendant AI company and the treating provider if it concludes both that the software was defective and that the provider was negligent?

By targeting AI companies instead of health care providers, plaintiffs may be able to circumvent medical malpractice damages caps. Thus, the AI software company may become the deep pocket in any litigation with no limit on recoverable damages, thereby prompting it to increase the liability limits of coverage purchased to protect itself from anticipated litigation.

While the use of AI in health care will ultimately improve patient care through more timely and accurate diagnosis, treatment, and even prevention of disease, it will create new areas of liability for clinicians, provider systems, and AI companies. As the lines between “machine learning” and a provider’s individual clinical judgment blur, who will be held liable by the courts in the event of a missed diagnosis or adverse outcome? Which types of insurance policies will respond to claims arising out of an adverse health outcome caused by the AI?

It is an opportune time for regulators to weigh in on these AI health care issues before the first spate of patient injury claims is filed. New liability regulations tailored to health AI applications would bring greater transparency and security for stakeholders in the field. Insurers could customize their policies and offer coverage solutions as appropriate. In the interim, to the extent AI companies and health care providers are implementing new diagnostic or predictive software, they should consult with their brokers and insurance partners on how their insurance programs would respond in the event of claims activity.

About The Authors
Kristin McMahon

Kristin McMahon is head of global claims at Ironshore Inc. kristin.mcmahon@ironshore.com

Alicia Bromfield

Alicia Bromfield is claims product leader, North America specialty, at Ironshore Inc. alicia.bromfield@ironshore.com
