Walking the AI Tightrope

Avoiding professional liability pitfalls while leveraging the many benefits of artificial intelligence in health care

April 10, 2024

There is an argument that generative artificial intelligence (GenAI), a form of artificial intelligence (AI) capable of producing text, images, or other data using generative models, often in response to prompts, has already been woven into the fabric of health care administration, certainly in the United States' fragmented provider systems. Third-party vendors that provide back-office services to private and public hospitals and physician offices have already built and deployed GenAI platforms that catalog and maintain data and provide algorithm-driven financial reporting tools.

GenAI: A First Step Toward AI Implementation

The World Health Organization (WHO) recently pronounced that AI holds tremendous promise for the “delivery of health care and medicine worldwide,” cementing the mission-critical nature of AI utility. Given the rising financial cost of health care to patients, managed care organizations, and taxpayers, AI promises to create efficiencies and reduce costs for consumers and the federal government. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP). These capabilities go beyond the content generation of GenAI in that they can analyze data and deliver a conclusion almost instantly.

Importantly, the health care AI that has evolved from hospital GenAI models should include offerings that can make decisions and identify patterns in the administrative work of health care professionals. The result may be less time spent on charting, billing, and, in light of recent regulatory changes, compliance checks.

For example, the Centers for Medicare & Medicaid Services (CMS) recently finalized enhanced hospital price-transparency requirements for 2024. With this update, hospital price-transparency mandates become stricter, reinforcing regulations established in 2021: hospitals must now disclose charge information using a more prescriptive template. Once an AI system has learned that template, it can immediately check patient billing against CMS regulations.

The Medicare billing Form CMS-1450, the 837 Institutional claim format, and the Medicare Benefit Policy Manual are all available on CMS.gov. In theory, machine learning technology can read the manual and related federal guidance and implement a compliant submission process, all while remaining HIPAA compliant. From there, the AI can learn to anticipate and evolve with changes issued by CMS. The resulting reduction in administrative cost and time would have an immediate, measurable impact.
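To make that concrete, the short sketch below is a purely hypothetical illustration of rule-based billing verification, not a description of any CMS tool or vendor product; the field names, posted charges, and tolerance are assumptions.

```python
# Hypothetical sketch: flag billed charges that diverge from a hospital's
# published price-transparency file. Field names, amounts, and the tolerance
# are illustrative assumptions, not CMS-defined requirements.
from dataclasses import dataclass


@dataclass
class ClaimLine:
    code: str            # e.g., a CPT/HCPCS code appearing on the claim
    billed_amount: float


def load_price_file() -> dict[str, float]:
    """Stand-in for parsing the hospital's machine-readable charge file."""
    return {"99213": 135.00, "80053": 48.50}


def flag_discrepancies(claim: list[ClaimLine], tolerance: float = 0.05) -> list[str]:
    """Return human-readable flags where billed amounts drift from posted charges."""
    prices = load_price_file()
    flags = []
    for line in claim:
        posted = prices.get(line.code)
        if posted is None:
            flags.append(f"{line.code}: no posted charge found")
        elif abs(line.billed_amount - posted) / posted > tolerance:
            flags.append(f"{line.code}: billed {line.billed_amount:.2f} vs posted {posted:.2f}")
    return flags


if __name__ == "__main__":
    claim = [ClaimLine("99213", 150.00), ClaimLine("80053", 48.50)]
    for flag in flag_discrepancies(claim):
        print(flag)
```

In practice, a learning system would also have to track CMS template updates over time, but the core idea is the same: every claim line is checked against the published charge data before submission.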

AI in the Patient Care Space

Where the implementation of AI becomes more controversial is in the patient care space, particularly when laid out like this: computer vision AI, which is already used in self-driving vehicles and other computer technology, initially screens a patient from home and leverages machine learning with access to every medical textbook, research study, and image on the internet. Then, a decision can be made about the next step of patient care. Even in an emergency, with HIPAA-compliant disclosures obtained, an ambulance could use the technology to prepare an ER or OR for more precise treatment.

Here is the issue: What about a hallucination? In a hallucination, AI generates something that is not grounded in its underlying data at all; it essentially makes something up in an effort to answer a query or prompt. There have already been public instances of lawyers being sanctioned for using AI and citing fake cases, which, while bad for their clients, does not involve treatment of a health condition. There should be a baseline expectation that professionals using AI remain actively involved in the briefing or diagnostic process. It should also be noted that legal and medical malpractice happens without the use of AI.
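One way to operationalize that expectation of professional involvement, sketched below as a purely hypothetical illustration (the thresholds, labels, and data fields are assumptions), is to route any low-confidence or poorly grounded AI output to a clinician rather than acting on it automatically.

```python
# Hypothetical sketch: AI triage suggestions are only surfaced automatically
# when confidence is high and the output is grounded in citable sources;
# everything else is escalated to a clinician for review.
from dataclasses import dataclass


@dataclass
class AISuggestion:
    condition: str      # the model's suggested diagnosis or next step
    confidence: float   # model-reported confidence, 0.0 to 1.0
    cited_sources: int  # number of verifiable references attached to the output


def route(suggestion: AISuggestion, threshold: float = 0.90) -> str:
    """Decide whether the suggestion goes forward or to human review first."""
    if suggestion.confidence < threshold or suggestion.cited_sources == 0:
        return "escalate_to_clinician"  # possible hallucination or weak grounding
    return "present_to_clinician_with_recommendation"  # clinician still signs off


if __name__ == "__main__":
    print(route(AISuggestion("pulmonary embolism", confidence=0.72, cited_sources=3)))
```

The point of the sketch is that, regardless of how the thresholds are set, no pathway lets the model's output reach a patient without a professional in the loop.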

AI To Lessen Disease Misdiagnoses

A July 2023 Johns Hopkins study found an estimated 795,000 Americans become permanently disabled or die annually across care settings because dangerous diseases are misdiagnosed. (See “Burden of serious harms from diagnostic error in the USA,” published in BMJ Quality & Safety). This research points to 15 commonly misdiagnosed health conditions (including the big three: vascular events, infection, and cancer) that are responsible for over half of the annual deaths and severe disabilities—including brain damage, blindness, and limb amputations—related to diagnostic errors.

Thus, it is hard not to imagine a reduction in misdiagnoses with machine learning that has the access and ability to analyze research, historical case studies, and other professional experience within seconds. Even with this prospect, there is warranted concern surrounding patient privacy. Once queries are inputted into an AI system, the information they contain may be retained, reused, or exposed outside the provider's control. Any health care system considering the implementation of AI must therefore have a policy in writing.
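One common safeguard such a policy might require, sketched below purely as an illustration (the patterns and function names are assumptions, and the example does not cover every HIPAA identifier category), is to strip obvious identifiers from a query before it leaves the provider's environment.

```python
# Illustrative sketch only: remove a few obvious identifiers from a prompt
# before it is sent to an external AI service. The regex patterns below are
# assumptions and are not a complete or certified de-identification method.
import re

REDACTION_PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[DOB]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "[MRN]": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}


def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt


if __name__ == "__main__":
    query = "Patient DOB 4/12/1961, MRN 884321, reports chest pain; suggest differential."
    print(redact(query))  # identifiers replaced before the query leaves the network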

Patient Privacy and Cyber Concerns

A written AI policy should spell out compliance with HIPAA as well as with codified data privacy statutes (e.g., the California Consumer Privacy Act and the Children’s Online Privacy Protection Act). Because HIPAA has been in existence for over 20 years, the contours of patient privacy protection are already well established. The utility of AI for administrative efficiencies and streamlined patient diagnoses should not create an undue privacy-protection burden for providers.

Another common concern is the potential for disruption from a cyber event. Cyber events must be reported under HIPAA, and with the growth of zero-trust security expected in 2024, health care organizations are adopting a more proactive, preventive approach to cybersecurity that assumes no trust for any user, device, or network. Zero-trust security involves implementing multiple layers of security controls and verification mechanisms, such as multifactor authentication (MFA), encryption, segmentation, micro-perimeters, identity and access management (IAM), endpoint detection and response (EDR), and continuous monitoring.
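As a rough illustration of the zero-trust principle (the checks, roles, and names below are hypothetical, not any specific product's API), every request is evaluated on its own merits, with no implicit trust based on who or where it comes from.

```python
# Hypothetical sketch of a zero-trust access decision: every request must
# pass identity, device posture, and least-privilege checks; nothing is
# trusted by default. Names, roles, and checks are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool      # did the user complete multifactor authentication?
    device_compliant: bool  # is the device managed, patched, and running EDR?
    resource: str           # the system or record being requested
    role: str               # the user's assigned role


# Minimal role-to-resource map standing in for an IAM policy store.
ROLE_PERMISSIONS = {
    "billing_clerk": {"claims_portal"},
    "physician": {"ehr", "claims_portal"},
}


def authorize(req: AccessRequest) -> bool:
    """Grant access only when identity, device, and role checks all pass."""
    if not req.mfa_verified:
        return False  # identity not strongly verified
    if not req.device_compliant:
        return False  # endpoint fails posture check
    return req.resource in ROLE_PERMISSIONS.get(req.role, set())


if __name__ == "__main__":
    req = AccessRequest("u123", mfa_verified=True, device_compliant=True,
                        resource="ehr", role="billing_clerk")
    print(authorize(req))  # False: this role has no access to the EHR
```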

Given the technology and compliance pain points health care already faces, and the steps being taken to actively address them, AI as a partner in efficient administration and patient interaction does not seem like a negative. Rather, it appears to be a productive path forward, one that may yield tremendous positive economic and, most importantly, human health outcomes.

About the Author
Sarah Abrams

Sarah Abrams, Esq., is head of claims at Baleen Specialty, a division of Bowhead Specialty Underwriters, Inc. sarah.abrams@baleenspecialty.com
