AI Won’t End Healthcare’s Admin Busywork

Why investors should think twice before betting on AI for healthcare admin

If a16z is a bellwether for venture capital, then AI in healthcare is the inevitable future.

Specifically, a16z highlights the immense potential of AI to reduce administrative burdens in healthcare, such as streamlining prior authorizations and automating other time-consuming tasks. The promise is efficiency, cost savings, and smoother operations.

While we won’t claim to know more than a16z, we question whether this technological shift will truly deliver relief from administrative work for clinicians—or whether it will meaningfully improve patient experiences and outcomes.

Our skepticism stems from the nature of healthcare itself: trust, not efficiency, is its foundation. Administrative processes, while frustrating, often serve as proxies to build that trust.

For this article, we take inspiration from "AI Cannot Fix Healthcare" by Russell Pekala to explore why administrative challenges in healthcare are far more nuanced than technology alone can solve.

Side note: My original prompt was “Doctor overrun by AI bots”, but ChatGPT refused to create that image because it was “promoting fear or harm” between AI and humans!

The nature of healthcare interactions is driven by trust

As we and others have noted before, patient behavior is in large part driven by trust, and specifically trust in their clinicians. Patients must believe in the competence and intentions of their healthcare providers to follow through with treatments, share sensitive information, and engage in their care plans. Without trust, adherence to medical advice erodes, outcomes worsen, and the system falters.

The problem is that the incentives in the business of healthcare are driven by profit, not by trust. Healthcare organizations—whether payors or providers—are often structured to prioritize financial outcomes. This focus can unintentionally undermine the relational dynamics between patients and providers that are so crucial for effective care.

It’s important to note that many other industries are also driven by profit, but they compete on convenience rather than trust, and convenience is far easier to engineer. For example, the tech industry prioritizes usability and efficiency, creating tools that simplify communication or automate tasks. Trust matters there too, but it tends to be secondary, unlike healthcare, where trust is the cornerstone of every interaction.

And when healthcare is structured around payors and providers whose relationship is often adversarial, there have to be processes in place to reach a common level of trust. These processes are essential for resolving conflicts of interest, ensuring fair resource allocation, and fostering a system that enables collaboration instead of competition.

Why do payors and providers act as adversaries?

Game theory helps partially explain why payors and providers often find themselves in an adversarial relationship. In healthcare, each party operates with competing incentives—providers aim to maximize revenue by delivering services, while payors seek to minimize costs and prevent unnecessary spending.

As the payoff matrix sketched below illustrates, if either party assumes that the other is trying to maximize its own gains, it will default to “playing dirty” to protect against the downside. For providers, that means billing the highest prices to see how much insurance might pay (billed amount vs. adjusted amount); for payors, that means aggressive prior authorization.

Either way, the patient loses: through balance billing (a loophole the No Surprises Act is slowly closing) or by having life-saving treatment delayed or denied.
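Since the payoff image from the original post isn’t reproduced here, here is a minimal Python sketch of that dynamic: a standard prisoner’s dilemma with payoff numbers we made up purely for illustration (only their ordering matters). It checks that “playing dirty” is each side’s dominant strategy:

```python
# A toy prisoner's dilemma between a provider and a payor.
# Payoff numbers are illustrative assumptions; only their ordering matters.
# "cooperate" = bill fairly / approve promptly
# "defect"    = inflate bills / aggressively deny

PAYOFFS = {
    # (provider_move, payor_move): (provider_payoff, payor_payoff)
    ("cooperate", "cooperate"): (3, 3),  # fair billing, prompt approval
    ("cooperate", "defect"):    (0, 5),  # provider underpaid
    ("defect",    "cooperate"): (5, 0),  # inflated bills get paid
    ("defect",    "defect"):    (1, 1),  # the admin arms race: both lose
}

def best_response(player: int, opponent_move: str) -> str:
    """Best move for `player` (0 = provider, 1 = payor) given the opponent's move."""
    def payoff(move: str) -> int:
        pair = (move, opponent_move) if player == 0 else (opponent_move, move)
        return PAYOFFS[pair][player]
    return max(("cooperate", "defect"), key=payoff)

# "Defect" is each side's best response no matter what the other does,
# so mutual defection is the equilibrium despite (3, 3) being available.
for opponent_move in ("cooperate", "defect"):
    assert best_response(0, opponent_move) == "defect"  # provider plays dirty
    assert best_response(1, opponent_move) == "defect"  # payor plays dirty

print("Equilibrium payoffs:", PAYOFFS[("defect", "defect")])  # (1, 1)
```

The (1, 1) equilibrium is the administrative arms race: both sides end up worse off than the cooperative (3, 3) outcome, and the gap between the two is exactly what trust-building admin work tries to bridge.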

The why of healthcare admin: solving the issue of trust through work

While healthcare is making a meaningful shift toward value-based care, much of the system is still built on a fee-for-service model. Since payment is tied to services performed rather than outcomes achieved, healthcare administration is designed to deter the consumption of resources unless absolutely necessary, serving as a gatekeeper against overutilization.

Whether it’s documentation requirements, prior authorizations, or complex systems to navigate, these barriers serve as a means for payors and providers to gauge the legitimacy of requests and ensure resources are allocated appropriately. While frustrating, these processes are designed to create a baseline level of trust in a system fraught with competing interests.

As Russell puts it:

If both payers and providers add more admin and optimization to their processes by collecting information that they need from each other and consulting common standards, they will eventually get to a decision about a claim that is from a legal perspective "objectively correct" and thus a weak form of trust can be formed by performing and completing this process.

So why won’t AI solve the plague of healthcare admin?

The most common use of AI is in automating manual and repetitive tasks. Healthcare is such a massive industry that it can be difficult to draw parallels, but there are simpler, relatable examples:

Take CAPTCHA as the simplest example. It has evolved from typing distorted numbers to identifying pictures of objects, and now into multi-step challenges designed to outsmart automation. With AI advancements, it’s not hard to imagine CAPTCHAs evolving further to deter AI-based interactions, creating even more hurdles for users.

Here’s another example. My credit card company no longer verifies me with just a six-digit code texted to my phone; now it also demands that I identify the correct four-letter sequence attached to the text. It’s not hard to see why: the power of automating software-based tasks with AI benefits not only legitimate actors but also bad ones.

Back in healthcare, payors and providers will only put more barriers in place to maintain the level of work needed to establish trust. Again from Russell:

This model cannot be automated, since if it were automated then it would not work: the trust only came because the work was expensive to produce.

Instead, in response to automation payers and providers will "hack around" the automation by inserting manual steps that are designed to not be automated (kind of like inserting Captchas).

Providers can try to "automate getting a prior auth approved" and payers will respond by "automating reviewing a prior auth" and at the end of the day very little actual knowledge will be exchanged between parties if each party can do these tasks efficiently.

These increasingly sophisticated barriers reflect the escalating difficulty of building trust in an age of AI.

What does this mean for investors?

While AI integration in healthcare is full steam ahead, we urge caution when considering investments in non-clinical applications of AI. The core of healthcare has always been about improving patient outcomes. Blindly chasing operational efficiencies to claim a piece of the $4 trillion healthcare spend—without staying aligned with the foundational purpose of healthcare—risks producing undesirable results both for patients and investors.

At The Healthcare Syndicate, we believe that sustainable success lies in aligning investments with the true mission of healthcare: better outcomes for patients. That’s why investing in solutions that directly improve patient outcomes remains a core part of our investment thesis. By staying true to this principle, we aim to back innovations that not only thrive financially but also make a lasting, positive impact on the healthcare system.

If you haven’t already, please subscribe and share this newsletter with a friend. Stay tuned for more insights into healthcare innovation!

Join us at The Healthcare Syndicate as we back the most ambitious founders 10Xing the standard of healthcare!
