The uncomfortable truth about AI in clinical work

AI

Feb 3, 2026

Sophia Carter

“Build in public” is a popular mantra in tech: share early, ship fast, and iterate loudly. But when you are building tools for clinicians, especially in assessment, case conceptualization, and care, the phrase takes on a very different weight.

Building in public means exposing your judgment, assumptions, and values in a field where people’s credentials, patients, and professional identities are on the line.

For a long time, our team wanted to build quietly. Part of that was respect for the clinical field, which rewards rigor and caution. Another part was fear: fear of getting it wrong publicly, fear of being misunderstood, fear of overstepping into a domain where our team is made up of engineers, not clinicians. But we ran into a truth quickly: we cannot design clinical tools in isolation and expect them to be trustworthy.

Why clinician feedback isn’t a “nice to have”

In clinical technology, feedback is epistemological. For example: where does automation subtly shift responsibility away from the clinician? What would you never want auto-generated, even if it were statistically accurate? These aren’t questions you can answer with benchmarks or demos; they live in the lived experience of practice. That’s why, early on, we released a free tool simply to measure how long report writing actually takes. Hundreds of neuropsychologists used it. It was a simple exercise in asking a respectful question: where is your time actually going?

The uncomfortable truth about AI in clinical work

Machine learning is extremely good at pattern-matching language, which is also what makes it dangerous. In clinical contexts, the risk is plausible overreach: language that sounds confident but subtly exceeds the evidence in the data presented (a typical failure mode of large language models). This is why building in public matters: when clinicians can see how a system thinks, not just what it outputs, they can push back.

"Building with clinicians, not for them". One of the biggest mistakes tech makes in healthcare is assuming that “made by clinicians, for clinicians” is enough. We do not believe it's enough.

Clinical tools need to be built with clinicians, in real workflows, under real constraints, with full acknowledgment of professional risk. That means slower iteration. Smaller pilots. More “no’s” than “yes’s.” And a willingness to hear things that complicate the product vision.

Right now, we’re working with a small number of clinics to design workflows where AI is:

1. Transparent
2. Citation-based
3. Subordinate to clinical judgment

If clinical AI is going to exist at all, it has to earn its place. And the only way to do that is in conversation with the people whose names, licenses, and patients are affected.