
V3: A Framework for GenAI Success in Infusion Operations

December 18, 2025

Operations

How Verification, Validation, and Value reporting can turn GenAI pilots into sustainable operational improvements

Every infusion operator I know is thinking about AI right now. There are vendor demos, pilot projects, and internal experiments to see if Generative AI (GenAI) tools, specifically those using large language models (LLMs), can reduce friction across referrals, intake operations, and revenue cycle workflows. The good news is that many of those early pilots are working well, which opens the door to a promising future in which infusion practices improve their operational efficiency through technology, ultimately increasing patients' access to care and decreasing time to treatment.

However, scaling GenAI tools to messier workflows, larger teams, and less structured edge cases, while ensuring patient safety and compliance, is significantly harder. The success of these efforts depends less on the tool itself and more on how the AI workflows are evaluated, managed, and continually improved through human-in-the-loop feedback.

To help ensure GenAI success, there’s a simple framework called V3: Verification, Validation, and Value reporting.

It is a simple way to assess whether your AI investments are actually performing the work as designed, solving a real problem, and delivering measurable operational value.

Why GenAI Needs a Structured Evaluation Framework

GenAI is not traditional software. Each time it performs a task there is some kind of cost. How these costs eventually show up varies by vendor and your internal team structure, but it’s some combination of token usage, platform fees, configuration work, compliance risk, quality control, task rework, and ongoing support. These costs accumulate. To justify the investment, you must measure whether GenAI is delivering outcomes that exceed the all-in costs.

Most workflows today are hybrid, meaning they operate alongside human teams. For example, GenAI may complete 80 percent of a task, while a human finishes the other 20 percent. That is not failure; rather, it speaks to how versatile the technology is: it can offer value across many different workflows, even if it only completes part of them.

However, this versatility also means measuring performance becomes more complex. How do you compare the accuracy, productivity, and value of a task completed entirely by a person versus one where GenAI handled part of it? Furthermore, what if the GenAI claimed to have completed the task, but it was done incorrectly and a human had to clean it up later?

More specifically, what if the AI claimed the task was complete, but a human discovers a month later that it inserted the wrong modifier, leading to a denied claim? How is that cost factored into the equation?

This latter point is an example of a hallucination: the GenAI tool generates false, nonsensical, or factually incorrect information and presents it as fact. Hallucinations are a known issue with GenAI and LLMs, and they remain a problem several years into the technology's maturity.

The complexity grows further when you use more than one AI vendor, which is what we are starting to see as the technologies mature. One vendor may specialize in referral intake. Another may excel at benefits investigation. A third may focus on claim scrubbing. Each may be best in class at what they do, but this creates a fragmented operating environment. To understand your true workflow performance, you now need to reconcile not just your EHR, PMS, and billing systems, but also reporting feeds from two or more AI platforms.

Invest in the Proper Data Foundation

Given these layers of complexity (cost variability, human-in-the-loop processes, and multi-vendor fragmentation), most operators are flying blind when it comes to performance. They cannot evaluate GenAI accurately without reliable baseline data, consistent definitions, and unified reporting.

This is why a strong data foundation is essential. Without it, the output of any AI tool becomes untraceable, and you are left managing opinions rather than metrics.

At a minimum, this foundation should include:

  • A single source of truth across referrals, intake, and revenue cycle
  • Standardized task and status definitions across sites and teams
    (For example, does everyone agree on the discrete steps that move a patient from referral to appointment? Do the intake and RCM teams define “complete” the same way?)
  • A centralized reporting layer or data lake that pulls from all major systems, including EHR, PMS, billing platforms, and GenAI tools, to support auditing and reconciliation
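
To make this concrete, here is a minimal sketch of the kind of normalized task record such a reporting layer could reconcile feeds into. The field names and status values are illustrative assumptions, not a standard schema or any particular vendor's format:

```python
# A minimal sketch of a normalized task record that a centralized reporting
# layer could reconcile EHR, PMS, billing, and GenAI vendor feeds into.
# Field names and status values are illustrative assumptions, not a standard.
from dataclasses import dataclass
from datetime import datetime

# Standardized statuses agreed on across sites and teams (illustrative).
STATUSES = {"pending", "in_progress", "complete", "rework_needed"}

@dataclass
class TaskRecord:
    patient_id: str       # shared internal identifier across systems
    task_type: str        # e.g., "referral_intake", "benefit_investigation"
    status: str           # one of STATUSES, defined the same way everywhere
    source_system: str    # "EHR", "PMS", "billing", or an AI vendor name
    performed_by: str     # "human", "genai", or "hybrid"
    event_time: datetime  # when the source system recorded this status

# The same benefit investigation as reported by an AI vendor and by the PMS.
ai_row = TaskRecord("P100", "benefit_investigation", "complete",
                    "GenAI Vendor A", "genai", datetime(2025, 12, 1, 5, 0))
pms_row = TaskRecord("P100", "benefit_investigation", "complete",
                     "PMS", "hybrid", datetime(2025, 12, 1, 9, 15))
```

The specific fields matter less than the principle: every system, human or AI, reports into the same shared definitions, so performance can be audited and reconciled in one place.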

With this structure in place, your organization can move beyond pilot enthusiasm and begin applying a more disciplined, outcomes-focused framework for AI oversight.

That is where the V3 Framework comes in: a simple way to assess whether your AI investments are actually doing the work, doing it in a way that solves a business need, and delivering measurable impact.

V3: Verification, Validation, and Value

V3 is a helpful framework for GenAI engagements in operational healthcare settings. It is not technical. It is managerial. It gives you a way to evaluate whether your AI tooling is producing meaningful outcomes.

Verification: Was the task performed on the intended items, at the intended time, to the intended specification?
Example: Automated benefit investigations completed for in-scope patients at 5 AM, according to defined business rules.

Validation: Was the task done accurately, correctly, and traceably in a way that solves business needs?
Example: Automated benefit investigations applied payer- and drug-specific rules correctly and produced auditable records showing how results were generated.

Value: Did automating the task deliver measurable leverage compared to a well-run human process?
Example: Automated benefit investigations require less time, cost, or rework, and generate fewer denials after accounting for platform fees and error rates.

Verification

Was the task actually performed to the intended specification? Not just marked “complete.” Did the referral get pushed to the EMR? Did the benefit investigation return a usable result? Many GenAI tools present output summaries, but without audit workflows in place, it is easy for tasks to appear done when they were not meaningfully completed.
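
As an illustration, verification can often be reduced to a reconciliation check: compare what the GenAI platform reports as complete against what actually landed in the system of record. A minimal sketch, assuming both feeds can be keyed by patient and task type (field names are illustrative):

```python
# Minimal verification check: surface AI-reported completions that have no
# matching record in the system of record (EHR, PMS, or billing platform).
# The feed formats and field names are illustrative assumptions.

def verify_completions(ai_feed: list[dict], system_of_record: list[dict]) -> list[dict]:
    """Return AI tasks marked 'complete' that cannot be confirmed downstream."""
    confirmed = {
        (row["patient_id"], row["task_type"])
        for row in system_of_record
        if row["status"] == "complete"
    }
    return [
        task for task in ai_feed
        if task["status"] == "complete"
        and (task["patient_id"], task["task_type"]) not in confirmed
    ]

# Example: a benefit investigation claimed complete, but no result ever reached
# the PMS, so it surfaces for human review.
ai_feed = [{"patient_id": "P100", "task_type": "benefit_investigation", "status": "complete"}]
print(verify_completions(ai_feed, system_of_record=[]))
```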

Validation

Was the task done correctly and traceably in a way that solves for the business needs? Did the tool apply payer rules accurately? Did it account for edge cases, like authorization exceptions for specific drugs? If this task were reviewed in an audit tomorrow, would you have a clear record of how it was performed and why, and is this record easily accessible?
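
One way to make this concrete is an audit record written alongside every AI-performed task, capturing the inputs, the rules applied, and the output. A minimal sketch, with illustrative field names and rule names:

```python
# Minimal sketch of an audit record written alongside each AI-performed task,
# capturing the inputs, the rules applied, and the output so a reviewer can
# trace how the result was produced. Field names are illustrative assumptions.
import json
from datetime import datetime, timezone

def write_audit_record(task_id: str, inputs: dict, rules_applied: list[str],
                       output: dict, path: str = "audit_log.jsonl") -> None:
    record = {
        "task_id": task_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,                # e.g., the payer, drug, and plan details used
        "rules_applied": rules_applied,  # e.g., payer- or drug-specific rules
        "output": output,                # the result the AI produced
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a benefit investigation that applied a drug-specific prior auth rule.
write_audit_record(
    task_id="BI-2041",
    inputs={"payer": "Example Payer", "drug": "infliximab"},
    rules_applied=["payer_prior_auth_required"],
    output={"prior_auth_required": True},
)
```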

This is especially critical in hybrid workflows. If staff stop trusting the outputs, they will revert to doing everything manually. And when that happens, the ROI disappears.

Value

Evaluating value means looking beyond whether a task was automated. It requires summing the total cost of automation, including platform fees, usage fees, configuration time, QA overhead, and SME audits, and then comparing that against the performance and cost of a well-run human team.

It also means factoring in error rates and rework. If the AI handles a task but introduces billing errors or misses payer-specific logic, those downstream costs reduce any initial gains. Value should be measurable at a granular level, such as by payer, referral type, drug, or whether a prior authorization is required. This allows you to identify where automation creates leverage and where it does not.
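
In practice, this comparison reduces to a per-segment cost equation: the all-in cost of the automated workflow against the cost of the same work performed by a well-run human team. A minimal sketch with entirely illustrative numbers:

```python
# Minimal sketch of a per-segment value comparison. All figures are
# illustrative assumptions; in practice they come from your reporting layer.

def net_value_per_task(human_cost: float, ai_platform_cost: float, qa_cost: float,
                       rework_rate: float, rework_cost: float,
                       denial_rate: float, denial_cost: float) -> float:
    """Positive result means automation beats the human baseline for this segment."""
    ai_all_in = (
        ai_platform_cost
        + qa_cost
        + rework_rate * rework_cost   # expected cost of human cleanup
        + denial_rate * denial_cost   # expected cost of downstream denials
    )
    return human_cost - ai_all_in

# Example segment: benefit investigations for one payer and drug combination.
print(net_value_per_task(
    human_cost=18.00,       # fully loaded staff cost per investigation
    ai_platform_cost=4.50,  # platform plus usage fees per task
    qa_cost=2.00,           # spot checks and SME audit overhead per task
    rework_rate=0.10, rework_cost=18.00,
    denial_rate=0.02, denial_cost=120.00,
))  # roughly 18.00 - 10.70 = 7.30 per task in this illustration
```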

A task can be automated and still represent a loss if it costs more, creates more noise, or underperforms a team that knows the work. This type of performance reporting is standard practice for human-only teams, so it is reasonable to hold AI-enabled workflows to the same level of accountability.

Patient Safety, Compliance, and Risk Management

As operators expand their use of GenAI, safety and compliance need to be treated with the same rigor as any other part of the clinical workflow. This becomes even more important now that many GenAI platforms are generating outbound communication to providers, patients, and payers through automated summaries, reports, and follow-up calls. Once an AI system is authoring information that leaves the four walls of the infusion center, the stakes increase.

When evaluating vendors, it is reasonable to look for signs that they are building with compliance and accountability in mind. Some examples include clear documentation of how PHI flows through their system (including encryption and data retention practices), audit logs that allow you to trace any AI-generated output back to its source inputs, and configurable templates or logic so outbound communication is not governed by a black box. Many operators also look for basic safeguards, such as accuracy checks or exception queues that prevent questionable outputs from being sent externally.
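
One practical shape these safeguards can take is an exception queue: outbound AI-generated communication is held for human review unless it passes basic checks. A minimal sketch, with illustrative checks and field names:

```python
# Minimal sketch of an exception-queue gate for outbound AI-generated
# communication. The checks and field names are illustrative assumptions;
# real rules would reflect your payer, drug, and compliance requirements.

def route_outbound_message(message: dict) -> str:
    """Return 'send' only if every basic safeguard passes; otherwise hold for review."""
    checks = [
        bool(message.get("source_inputs")),                   # traceable to source data
        bool(message.get("template_id")),                     # built from an approved template
        not message.get("contains_unverified_claims", True),  # flagged by accuracy checks
    ]
    return "send" if all(checks) else "exception_queue"

# Example: a payer-facing summary with no linked source inputs is held for review.
print(route_outbound_message({
    "template_id": "benefit_summary_v2",
    "contains_unverified_claims": False,
}))  # -> exception_queue
```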

These safeguards are not separate from the verification, validation, and value framework. They support verification by ensuring tasks are truly completed and auditable, reinforce validation by giving operators confidence that outputs are accurate and traceable, and strengthen value by reducing rework, downstream denials, and compliance risk. No single approach is perfect, but vendors that invest in transparency, traceability, and controlled workflows generally enable safer and more sustainable scaling of GenAI in regulated healthcare environments.

How Strong Operations Unlock GenAI’s Impact on Care

Too many teams are chasing tools. The infusion operators who get GenAI right will be the ones who treat it like any other operational investment. They will:

  • Build foundational data infrastructure first
  • Expect hybrid, human-in-the-loop workflows, not magic
  • Use a structured framework like V3 to evaluate performance task by task
  • Reconcile, audit, and refine continuously

This is not about being skeptical of new technology. It is about building the muscle to manage it well. The operators who build that muscle will be the ones who see real, scalable results.

The impact goes beyond operational efficiency. When GenAI is implemented with discipline, it can shorten the time from prescription to treatment, reduce avoidable denials, and ease bottlenecks that limit patient access. The result is not only a more resilient operation, but a faster, more reliable path to therapy for the patients who depend on it.


About the Author
Chris Hilger is the CEO of SolisRx, a boutique healthcare analytics consultancy specializing in multi-site infusion and specialty pharmacy organizations. Since its launch in 2023, SolisRx has delivered 50+ analytics and workflow automation solutions designed to accelerate performance across four core growth areas: referrals, intake operations, revenue cycle, and market expansion.

Chris holds a Master’s in Health Data Science from Harvard and was selected to present at NICA 2025 and NHIA 2026 on topics including infusion analytics, intake workflow visibility, and automation. His insights were featured in Bourne Partners’ 2025 Infusion Market Update, and he continues to work closely with operators and investors to drive sustainable growth through better data.
