Do No Harm: Using AI to Navigate Risk, Build Trust, and Drive Healthcare Outcomes

Learn why “do no harm” remains critical in designing ethical, explainable, and scalable AI solutions in clinical and operational environments.
March 20, 2026

Last week at HIMSS, I had the opportunity to speak with healthcare leaders about one of the most pressing questions facing our industry today: how we responsibly bring artificial intelligence into clinical and operational environments.

In preparing for that conversation, I found myself reflecting on something a physician once told me:

“I trust my training. I trust my experience. I trust my colleagues.
But when an algorithm tells me something about my patient, I need to understand why.”

I was struck by how sharply that sentiment describes where healthcare stands today, as if that physician had glimpsed the mounting importance of explainable AI in a crystal ball.

Whether those words were prophetic or not, we can agree that artificial intelligence is no longer a future concept. Right now, it is being embedded into clinical workflows, patient records, and operational systems, and it is beginning to influence decisions that affect real human lives.

And as that shift accelerates, the question facing healthcare organizations has fundamentally changed. It is no longer “Can we build AI?” It is “Can we trust it?”

The Principle That Built Medicine

For centuries, medicine has been guided by a simple principle: first, do no harm.

Every advancement, treatment, and innovation has been evaluated through that lens. Will it help patients? Or could it cause those same patients harm?

These questions are not abstract for me. Before working in data and technology, I was a registered nurse. And in clinical care, one truth becomes clear very quickly: every decision matters. The information you rely on, the signals you trust, and the tools you use all shape outcomes for real people.

That perspective has stayed with me and continues to shape how I think about AI today.

Because when we talk about data platforms, models, and analytics, we are not just talking about systems as they operate in isolation. We are talking about capabilities that will increasingly influence clinical judgment, operational priorities, and ultimately patient outcomes.

Healthcare leaders understand this deeply. They are eager to innovate, to unlock the value of their data, and to apply AI in ways that improve care and efficiency.

But they also recognize what makes healthcare different: every dataset represents a patient, every algorithm carries risk, and every new capability must earn trust.

Now, as AI becomes a core part of healthcare delivery, we are being called to apply that same standard of “do no harm” to an entirely new class of tools. This also means that bringing AI into healthcare is not just a technical challenge, but also an ethical one.

AI Adoption Is Accelerating Fast

Adoption is accelerating at a remarkable pace. Health systems are already using AI to identify disease earlier, predict patient deterioration, reduce administrative burden, and improve operational efficiency.

At the same time, platforms like Snowflake are lowering the barriers to entry, making it possible to build AI applications directly on governed enterprise data.

For the first time, organizations can develop and deploy AI within the same environment where their most trusted data already resides.

This shift is incredibly powerful. That said, power without guardrails, especially in healthcare, introduces considerable risk.

When AI Moves Faster Than Governance

We’ve already seen what happens when innovation outpaces governance. In one well-documented case, a healthcare algorithm designed to identify patients in need of additional care used historical healthcare spending as a proxy for need.

On the surface, that seemed reasonable. In practice, it introduced bias, because spending often reflects access to care and not actual health status. The result was that certain populations were systematically underserved by the model.

The algorithm didn’t fail because of the math. It failed because of the assumptions behind it. That distinction is an important one. AI systems rarely break in obvious ways. More often, they produce results that appear valid but are rooted in incomplete or biased inputs.

We are seeing similar challenges emerge with newer tools, such as AI-powered clinical documentation assistants. While these systems can significantly reduce administrative burden, early implementations have surfaced issues like hallucinated details or incomplete summaries. In each case, organizations have had to quickly introduce review processes and governance controls to ensure accuracy.

The pattern is clear: innovation is moving faster than governance, and healthcare organizations are being forced to close that gap in real time.

A Critical Shift: AI Where Data Lives

One of the most important changes underway is architectural.

Historically, building AI required moving sensitive healthcare data into external environments. This introduced complexity, duplication, and risk. Today, that model is being reversed. Increasingly, AI is being brought to the data.

Modern platforms like Snowflake now allow organizations to build and run AI directly within governed data environments, reducing the need for data movement and enabling stronger oversight. This approach not only improves security, but also creates something even more important: visibility.

Leaders can trace where data originates, how it has been transformed, and how insights are generated. In healthcare, that level of transparency is essential—because trust depends on it.
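
As a hedged illustration rather than a prescribed implementation, the sketch below shows the idea using Snowpark and a Snowflake Cortex LLM function: inference runs as a query inside the governed environment, so the data never has to leave it. The connection parameters and the clinical_notes table are placeholders, not real objects.

    # A minimal sketch of in-platform AI, assuming Snowpark access is configured.
    from snowflake.snowpark import Session

    connection_parameters = {
        "account": "<your_account>",
        "user": "<your_user>",
        "role": "<governed_role>",        # role-based access controls apply here
        "warehouse": "<your_warehouse>",
        "database": "<your_database>",
        "schema": "<your_schema>",
    }
    session = Session.builder.configs(connection_parameters).create()

    # Summarize notes in place with a Cortex LLM function. The query runs
    # where the data lives and remains subject to the platform's access,
    # lineage, and audit controls. clinical_notes is a hypothetical table.
    rows = session.sql("""
        SELECT note_id,
               SNOWFLAKE.CORTEX.COMPLETE(
                   'mistral-large',
                   'Summarize this clinical note in two sentences: ' || note_text
               ) AS summary
        FROM clinical_notes
        LIMIT 10
    """).collect()

    for row in rows:
        print(row["NOTE_ID"], row["SUMMARY"])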

Designing for Responsible, Scalable AI

Building trust requires more than technology; it requires intentional design.

Responsible AI must be embedded into the foundation of how systems are built and operated. That includes:

  • Ensuring data lineage and transparency.
  • Rigorously testing for bias across populations (see the sketch after this list).
  • Implementing role-based access controls.
  • Continuously monitoring model performance over time.
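
For the bias-testing item above, one lightweight starting point is to compare a model's flag rate, and its recall among truly high-need patients, across populations on held-out data. The sketch below assumes records arrive as simple Python dictionaries and is illustrative only:

    from collections import defaultdict

    def audit_by_group(records):
        """records: dicts with 'group', 'flagged' (model output), 'high_need' (ground truth)."""
        stats = defaultdict(lambda: {"n": 0, "flagged": 0, "high_need": 0, "caught": 0})
        for r in records:
            s = stats[r["group"]]
            s["n"] += 1
            s["flagged"] += r["flagged"]
            s["high_need"] += r["high_need"]
            s["caught"] += r["flagged"] and r["high_need"]
        for group, s in sorted(stats.items()):
            flag_rate = s["flagged"] / s["n"]
            recall = s["caught"] / s["high_need"] if s["high_need"] else float("nan")
            print(f"{group}: flag rate {flag_rate:.1%}, recall among high-need {recall:.1%}")

    # Illustrative records: both groups have 30 truly high-need patients,
    # but the model catches only half of Group B's.
    sample = (
        [{"group": "A", "flagged": 1, "high_need": 1}] * 30
        + [{"group": "A", "flagged": 0, "high_need": 0}] * 70
        + [{"group": "B", "flagged": 1, "high_need": 1}] * 15
        + [{"group": "B", "flagged": 0, "high_need": 1}] * 15
        + [{"group": "B", "flagged": 0, "high_need": 0}] * 70
    )
    audit_by_group(sample)

A gap like Group B's lower recall does not prove harm on its own, but it is exactly the kind of signal that should trigger investigation, and the same audit can be rerun on a schedule to catch drift over time.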

Before scaling AI, healthcare organizations must also ensure they are structurally ready.

Clean data, governed access, auditability, and executive visibility are not just technical milestones on the path to production AI. They are the prerequisites for responsible adoption.

When these elements are in place within a unified platform, organizations gain the ability to move quickly without sacrificing control or confidence. That balance is what ultimately matters.

The Path Forward for Ethical, Explainable AI

Healthcare data leaders understand that AI will be the axis around which the next chapter of care revolves. But the organizations that lead in this next era will not simply be the fastest innovators. They will also be the most responsible ones.

If we build AI on foundations of trusted data, transparent architecture, explainable outputs, and strong governance, we set AI adoption up to do real good for patients and practitioners alike.

It will also allow healthcare organizations to expand the art of the possible while staying true to the values that have always defined medicine. Chief among them, of course: do no harm.

As we build the next generation of intelligent systems, that principle must continue to guide us. Because while the future of healthcare will undoubtedly include artificial intelligence, it must always begin with trust.

Looking to build that kind of trust into your AI adoption, but don’t know where to start? Talk to one of Hakkoda’s healthcare data experts today.
