Why enterprise AI is finally growing up — and why healthcare must stay pragmatic

For much of the past three years, the impact of AI has been most visible among consumers.

We saw it first in places where experimentation was cheap and forgiveness was high:

chatbots that write emails, generate vacation itineraries, summarize meeting notes, or help you argue with your boss more politely. AI felt impressive, sometimes magical — but also, at times, like a clever intern who works fast and confidently, yet needs constant supervision.

Enterprise AI, however, has been slower to show its cards.

That appears to be changing.

According to OpenAI’s recently published whitepaper, The State of Enterprise AI 2025, many of the world’s largest and most complex organizations are now beginning to use AI not as a novelty or side project, but as core infrastructure. In other words, AI is quietly moving from “innovation lab” to “production system” — from demo to dependency.

And nowhere is this shift more visible than in healthcare.


Enterprise AI is entering its “ERP moment”

One of the strongest signals in OpenAI’s report is not about model performance or benchmarks, but about behavior.

Across OpenAI’s one million+ business customers, AI is no longer being used as a standalone chatbot. Instead, it is being embedded directly into workflows, products, and internal systems — the same way databases, ERP systems, and identity platforms were embedded in earlier technology waves.

That distinction matters.

When AI sits outside workflows, it is optional.

When AI sits inside workflows, it becomes unavoidable.

This mirrors what many of us lived through in earlier SaaS waves. Email alone did not transform enterprises. CRM did. ERP did. Secure identity systems did. Each succeeded not because the technology was flashy, but because it became operationally unavoidable.

AI now appears to be heading in the same direction.


Healthcare: fastest growth, smaller base

The OpenAI report highlights that healthcare shows the fastest enterprise AI growth after technology, with approximately 8× year-over-year growth.

That sounds impressive — and it is — but context matters.

Healthcare is starting from a much smaller base compared to technology, finance, or retail. A jump from “almost nothing” to “something meaningful” will naturally show eye-catching growth rates.

That said, this growth is not theoretical.

A 2025 report from Menlo Ventures shows that in just two years, healthcare AI adoption jumped from 3% to 22% of healthcare organizations deploying commercial AI solutions. By comparison, the average AI adoption rate across the broader U.S. economy is around 9%.

In other words, healthcare has quietly become one of America’s AI power users, even if it doesn’t always advertise it loudly.


Why healthcare is adopting AI faster than it admits

Healthcare is often portrayed as resistant to change. That stereotype is convenient, but incomplete.

Healthcare resists risk, not value.

When AI reduces clinician burnout, improves documentation accuracy, accelerates revenue cycle workflows, or helps patients navigate complex systems — adoption happens quietly and decisively.

What healthcare resists is something else:

AI pretending to be smarter than it really is.

Which brings us to an important point.


The AI pilots problem (HBR gets this right)

Harvard Business Review recently warned that many organizations eager to adopt generative AI are launching too many pilots across too many departments, chasing quick wins and marginal efficiencies.

The result?

Lots of demos.

Lots of enthusiasm.

Very little transformation.

HBR’s recommendation is refreshingly unsexy — and exactly right:

Organizations should resist the urge to experiment broadly and instead go deep and narrow, concentrating efforts where scale and synergy can drive meaningful change.

In plain English:

Don’t sprinkle AI like seasoning. Build it like plumbing.

This advice resonates deeply with healthcare IT, where fragmented pilots often create more noise than value. I’ve seen this firsthand — AI tools introduced without integration, governance, or ownership quickly become shelfware.

Technology success is rarely about brilliance. It’s about discipline.


Where AI is working in healthcare today

Let’s be clear about something that often gets blurred.

Most successful AI use cases in healthcare today are operational, not autonomous clinical decision-making.

And that is perfectly fine.

In fact, it is exactly where AI shines.

Operational wins include:

  • Ambient documentation and note generation
  • Clinical summarization and chart navigation
  • Coding and documentation improvement
  • Prior authorization support
  • Revenue cycle optimization
  • Patient communication and triage
  • Knowledge retrieval across policies, guidelines, and historical records

These are not glamorous tasks — but they are the ones draining clinicians and administrators every day.

This is also why ambient AI has taken off.


Ambient AI: operational relief, not clinical replacement

Ambient documentation sits firmly on the operational side of healthcare, not the clinical side.

As Epic formally entered the ambient space, companies like Abridge and Suki were forced (in a good way) to expand beyond transcription into adjacent areas such as:

  • Dictation and documentation quality
  • Coding support
  • Order staging
  • Patient summaries
  • Even early prior-authorization workflows

This is a crowded market, and it remains to be seen who ultimately wins.

Just this week, I spoke with the founder-CEO of a Y Combinator–funded startup working on AI agents for the RCM (Revenue Cycle Management) space. Like many others, he is deeply optimistic — and not without reason.

But optimism alone won’t be enough.

Healthcare is not a winner-takes-all market. It’s a “who integrates best, governs best, and survives longest” market.


Global contrast: China shows what scale looks like

While U.S. healthcare remains cautious about autonomous clinical AI, global examples show what happens when guardrails are different.

In China, Ant Afu, a health app from Ant Group (affiliated with Alibaba), reportedly answers over 5 million health-related questions per day.

The app offers:

  • Health tracking and goal reminders
  • Smart device integration
  • AI clinic follow-ups
  • Report interpretation
  • Access to over 300,000 doctors for online consultation and appointment booking

This is not an argument that the U.S. should replicate China’s model. Regulatory environments, legal frameworks, and cultural expectations are fundamentally different.

But it does demonstrate something important:

At scale, AI becomes less about intelligence and more about coordination.


Why LLMs struggle beyond operational tasks

Now for the cautionary part — and it’s an important one.

Multiple peer-reviewed studies, including recent work from MIT and research published in Nature Medicine, confirm what many healthcare technologists already know:

Large Language Models are not yet reliable for autonomous clinical decision-making.

MIT researchers found that LLMs making treatment recommendations can be influenced by nonclinical information, including:

  • Typos
  • Extra whitespace
  • Missing demographic markers
  • Informal or emotional language
  • Dramatic phrasing

In other words, the model sometimes reacts not to medical facts, but to how humans talk.

This is not a flaw — it’s a reflection of how LLMs are trained.

They are phenomenal language predictors, not medical reasoners.
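The kind of nonclinical perturbations the MIT team describes can be simulated with a small robustness probe. The sketch below is my own illustration, not the study's actual code: it generates surface-level variants of a clinical note (typos, stray whitespace, dramatic phrasing) while leaving the medical content untouched. In a real test, each variant would be sent to the model and the resulting recommendations compared; large divergence across variants signals exactly the fragility the researchers observed.

```python
import random

def perturb(note: str, seed: int = 0) -> dict:
    """Generate nonclinical variants of a clinical note: same medical
    content, different surface form. Illustrative sketch only."""
    rng = random.Random(seed)  # seeded so perturbations are reproducible

    def with_typos(text: str) -> str:
        # Swap two adjacent characters in a few randomly chosen words.
        words = text.split()
        for _ in range(max(1, len(words) // 10)):
            i = rng.randrange(len(words))
            w = words[i]
            if len(w) > 3:
                j = rng.randrange(len(w) - 1)
                words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
        return " ".join(words)

    def with_whitespace(text: str) -> str:
        # Insert blank lines after sentences and double spaces after commas.
        return text.replace(". ", ".\n\n").replace(", ", ",  ")

    def with_dramatic_tone(text: str) -> str:
        # Append emotionally charged but medically empty phrasing.
        return text + " I'm really scared and this has been absolutely unbearable!!"

    return {
        "original": note,
        "typos": with_typos(note),
        "whitespace": with_whitespace(note),
        "dramatic": with_dramatic_tone(note),
    }

# In a real robustness test, each variant would be sent to the model and the
# treatment recommendations compared across variants.
variants = perturb("Patient reports chest pain on exertion, resolving with rest.")
```

None of these edits change the clinical facts in the note — which is precisely why a model whose recommendations shift across them is reacting to how humans talk, not to medicine.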

This is why I’ve consistently argued (and written previously in Is it fair to call AI “artificial”? — https://sudhakar-musings.org/ai/ai-model-5-says-why-call-me-artificial/) that today’s AI is better described as probabilistic pattern engines, not thinking machines.


The right mental model for healthcare AI

If you take away one idea from the OpenAI report — and from the broader industry evidence — it should be this:

Healthcare AI succeeds when it behaves like infrastructure, not like a doctor.

Infrastructure:

  • Is boring when it works
  • Invisible when successful
  • Audited, governed, and controlled
  • Designed to reduce friction, not replace judgment

The moment AI is framed as “clinical replacement,” resistance is justified.

The moment AI is framed as “clinical force multiplier,” adoption accelerates.


So where does this leave us as 2025 ends?

Enterprise AI is no longer a curiosity.

Healthcare AI is no longer hypothetical.

But maturity is still uneven.

The winners over the next few years will not be those with the best demos, but those who:

  • Embed AI deeply into workflows
  • Respect regulatory and ethical boundaries
  • Focus on operational leverage
  • Design for humans-in-the-loop
  • Measure outcomes, not excitement

In healthcare, progress rarely arrives with fireworks.

It arrives quietly — in minutes saved, errors reduced, clinicians less burned out, and patients slightly less confused.

And honestly, that’s exactly how it should be.

