In the high-stakes world of claims and litigation management, trust is the cornerstone of every process, every decision, and every resolution. As artificial intelligence (AI) becomes increasingly embedded in the day-to-day workflows of insurance and legal professionals, one question remains top of mind: Can we trust AI to act alone?
According to a survey recently commissioned by Wisedocs in collaboration with PropertyCasualty360, the answer from the market is clear: not quite. However, a solution is gaining momentum across the industry: Human-in-the-Loop (HITL) AI.
Also known as AI with human oversight or expert-validated AI, HITL refers to a model where expert professionals are embedded at key points in the AI workflow, such as training, validating, or correcting outputs. Rather than replacing human judgment, this approach ensures that AI decisions are guided, reviewed, and made trustworthy by real experts.
Human-In-The-Loop Is the Future of Claims
The benefits of AI in the claims process are no longer theoretical. Survey data reveals that 75% of claims professionals believe AI can significantly improve efficiency by accelerating tasks and optimizing resource allocation. Yet only 41% of respondents indicate their organization is actively using AI today, with another 38% still in the consideration phase. With widespread discussion of AI in the industry, one may wonder what is holding back large-scale adoption. The answer is trust.
Without expert oversight and HITL, AI systems can and do make mistakes. An astounding 89% of engineers working with large language models (LLMs) report encountering hallucinations, in which the model produces inconsistent or outright false results, and according to a study by IDC, a surprising 75% of companies face data quality challenges that hold back their AI efforts.
From hallucinated outputs to compliance missteps, the risks are real. But when human oversight is added to the mix, the dynamic changes dramatically. Wisedocs' survey found that trust in AI nearly quadruples when humans are introduced to validate outputs, jumping from 16% to 60% among claims professionals who expressed a clear opinion. That "4x Trust Effect" is why HITL is no longer optional; it is becoming core to modern claims operations.
Oversight Matters: The Problem With AI Agents
The rise of agentic AI, self-directed systems that make decisions independently, is drawing attention from VC firms and innovators alike. But when applied to claims and litigation, the agentic model quickly shows its limitations. Legal decisions, insurance denials, and compensation assessments are not areas where automation can be left to run unchecked.
For those new to the concept, I prefer to summarize it this way: AI is the autopilot. But in high-stakes environments like claims, the human pilot remains in the cockpit, ensuring a safe landing.
Consider the aviation analogy more closely: autopilot systems are quite capable of handling the routine phases of flight, such as maintaining a stable flight path, but pilots are required for the higher-risk moments, such as takeoff, landing, and emergency decisions. Why? Because humans provide judgment, empathy, and accountability. Claims and litigation workflows deserve the same attention and care.
Scaling Responsibly With Human-In-The-Loop
Beyond trust and regulatory compliance, the economic need for HITL-focused solutions is clear. With 22% of property and casualty insurance professionals projected to retire by 2026, the industry faces an urgent need to embrace automation amid rising attrition. We can soon expect a bottleneck as experienced adjusters exit and institutional expertise quickly disappears. Without scalable, automated systems to absorb this loss and support growth, organizations risk stalling under the pressure of rising claims volume and shrinking expertise. Enter AI: a helpful tool for the claims workforce, but only if it's paired with oversight.
Business leaders are beginning to catch on. According to Reuters Events Special Report: The Future of AI in Insurance, profitability, accuracy, and efficiency are now the top metrics for AI success in claims. Organizations that deploy HITL AI systems report faster processing times, reduced errors, and improved customer satisfaction, without sacrificing regulatory compliance or decision defensibility. Wisedocs' own survey supports this finding, with respondents citing efficiency, productivity, and accuracy as the most impactful areas where AI can transform the claims document review process.
Moreover, when AI is configured with upstream HITL practices, such as training on domain-specific data, and downstream HITL oversight, such as expert review of outputs, outcomes improve significantly.
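To make the downstream half of that idea concrete, here is a minimal sketch in Python. The names (`AiResult`, `route_result`) and the 0.90 confidence threshold are illustrative assumptions, not any particular vendor's API: output the model is confident about flows through automatically, while uncertain output is routed to an expert for validation or correction.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical cutoff; in practice it is tuned per document type and risk level.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class AiResult:
    document_id: str
    summary: str        # AI-generated summary of the claims document
    confidence: float   # model's self-reported confidence, 0.0 to 1.0

def route_result(result: AiResult,
                 auto_accept: Callable[[AiResult], None],
                 send_to_reviewer: Callable[[AiResult], None]) -> None:
    """Downstream HITL: only high-confidence output bypasses human review."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        auto_accept(result)        # fast path: AI output accepted as-is
    else:
        send_to_reviewer(result)   # expert validates or corrects the output

# Example wiring with stand-in handlers.
if __name__ == "__main__":
    results = [
        AiResult("claim-001", "Lumbar strain; treatment ongoing.", 0.97),
        AiResult("claim-002", "Possible pre-existing condition.", 0.62),
    ]
    for r in results:
        route_result(
            r,
            auto_accept=lambda x: print(f"{x.document_id}: auto-accepted"),
            send_to_reviewer=lambda x: print(f"{x.document_id}: queued for expert review"),
        )
```

The specific threshold matters less than the principle: the cutoff, and the decision about which document types ever qualify for the fast path, remain under human control.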
Barriers To AI Adoption—and How Human-In-The-Loop Solves Them
While the promise of AI is vast, skepticism persists among professionals. The top barriers to AI adoption cited by claims professionals include:
- Accuracy concerns (54%)
- Compliance and regulatory risks (49%)
- Integration challenges (44%)
- Lack of trust in AI output (44%)
Each of these concerns points to the same root cause: uncertainty. In a field where uncertainty can mean denying a claim and withholding care from someone in need, it’s a risk we simply can’t afford. HITL approaches mitigate this risk by embedding a continuous feedback loop into the AI lifecycle, ensuring that real-world context, nuance, ethical considerations, and legal compliance are baked into every decision.
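As a rough illustration of that feedback loop (again a sketch with hypothetical function and file names, not a specific product's interface), each expert review can be captured as a record that pairs what the AI produced with what the reviewer approved, giving the organization both an audit trail and labeled examples for future model improvement.

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only store; a production system would use a database.
FEEDBACK_LOG = "hitl_feedback.jsonl"

def record_review(document_id: str, ai_output: str,
                  reviewer_output: str, reviewer_id: str) -> dict:
    """Log the AI's output alongside the expert's approved or corrected version.

    The record serves two purposes: a defensible audit trail for compliance,
    and a labeled example for later evaluation or fine-tuning.
    """
    record = {
        "document_id": document_id,
        "ai_output": ai_output,
        "reviewer_output": reviewer_output,
        "reviewer_id": reviewer_id,
        "was_corrected": ai_output.strip() != reviewer_output.strip(),
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Over time, the share of records in which the reviewer had to make a correction becomes a simple, auditable measure of how much the system still depends on human judgment.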
As regulatory bodies like the Centers for Medicare & Medicaid Services (CMS) and the North Carolina Department of Insurance move toward mandates for human review in AI-driven decisions, HITL isn’t just good practice; it’s quickly becoming the standard. As of May 2025, state governments are increasingly writing that requirement into law: California, Colorado, and Utah have already passed legislation, with additional bills underway in Connecticut, Illinois, and Rhode Island.
Human-In-The-Loop Is a Trust Multiplier
The industry has spoken: Trust is the gatekeeper of AI adoption. Without it, even the most sophisticated tools will sit idle. About 58% of claims professionals reported their organization still isn’t using AI in its document review processes. And it’s not because they don’t see the potential. Instead, they’re waiting for proof that AI can be secure, accurate, and defensible. HITL delivers that proof by aligning automation with expert judgment.
It’s also no surprise that AI adoption is already happening, with or without your company’s approval, and many organizations are seeing rapid, widespread use across the business. For example, Hill Dickinson staff accessed ChatGPT over 32,000 times in one week, highlighting how deeply integrated these tools have become in day-to-day operations, even without organizational AI policies.
This informal use of AI is not slowing down, and employees using agentic AI tools without formal processes in place increasingly expose organizations to risks like data security breaches, compliance issues, and inaccuracies. With the generative AI market growing at a CAGR of 41.52%, this adoption isn’t going away; organizations must establish responsible AI policies to ensure the secure and effective use of AI. As organizations race to integrate it, the ones that succeed will be those that prioritize trust, oversight, and accuracy.
Leading the Charge in Claims Transformation
The road to AI maturity in claims is not about abandoning human roles. It’s about equipping those roles with better tools so professionals can work smarter and keep pace with current workloads. HITL isn’t a compromise between automation and expertise; it’s a collaboration that ensures both thrive.
For claims executives, risk managers, and legal leaders, the next step is clear:
- Evaluate current processes for AI readiness.
- Identify high-value, low-risk use cases for HITL AI such as document summaries and processing workflows.
- Invest in secure, configurable platforms that are backed by clinical quality assurance teams and HITL systems.
As Bain & Company advises, the best place to start for any organization is with manageable AI applications where human oversight mitigates risk and accelerates results.
HITL Has Become the Industry Gold Standard
In an industry built on trust, accuracy, and accountability, AI alone cannot meet the mark, and we shouldn’t expect it to. Human-in-the-Loop AI offers a realistic, proven path forward.
The future of claims management isn’t machine vs. human—it’s machine and human. HITL empowers professionals to do more, with greater precision and less risk, ensuring that every decision is not only fast, but fair. As AI adoption accelerates, those who adopt responsibly will lead the way, and those who lead with HITL will earn not just efficiencies, but trust.
About the Author:
Connor Atchison is the founder & CEO of Wisedocs. connor@wisedocs.ai