
TrustRx: How to Overcome AI Resistance in Digital Health?

By Wayan Vota on June 24, 2025

[Image: trust in doctor AI diagnosis]

I’ve spent the last week grappling with two questions that should keep every development professional awake at night:

  • As a patient, would you accept treatment from Dr. AI?
  • As a doctor, would you accept Dr. AI as a smarter peer?

The answers reveal a profound disconnect between what we know works and what we're willing to accept: a disconnect that's literally costing lives in the places where we work.

AI Will Be Better Than Doctors Soon

In the USA, doctors correctly diagnose diseases 89% of the time. That figure drops to 83% for strokes, which can present in many confusing ways. In India, doctors are right 76-89% of the time, depending on region and disease.

Now compare those realities with AI-intermediated healthcare:

  • ChatGPT alone achieved a median diagnostic accuracy of more than 92%, while physicians using conventional approaches scored only 73.7%.
  • AI-powered tools have reached a 93% match rate with expert tumor board recommendations.
  • In radiology, AI systems achieved 94% accuracy in detecting lung nodules, significantly outperforming human radiologists who scored 65% accuracy on the same task.
  • For breast cancer detection, AI-based diagnosis achieved 90% sensitivity compared to radiologists’ 78%.
  • A randomized trial in Nature showed that physicians using LLMs had higher diagnostic accuracy than those without AI support.
  • Another trial demonstrated that an AI interface outperformed physicians in simulated cases, achieving 80% diagnostic accuracy and reducing consultation time by 44.6%.
  • We are experimenting with LLMs at Intelehealth, and are finding that AI shows the correct differential diagnosis 89% of the time already.
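
How do we score a result like that 89%? Here is a minimal sketch in Python of one common approach: counting how often the reference diagnosis appears in the model's ranked differential (a top-k hit rate). The case data and function names below are hypothetical illustrations, not Intelehealth's actual evaluation pipeline.

```python
# Minimal sketch of scoring an LLM's differential-diagnosis accuracy.
# All data and names here are hypothetical, not Intelehealth's pipeline.

def differential_accuracy(cases, top_k=5):
    """Fraction of cases where the reference diagnosis appears in the
    model's top-k differential (a "top-k hit rate")."""
    hits = 0
    for case in cases:
        # case["differential"] is the model's ranked list of candidate
        # diagnoses; case["reference"] is the clinician-confirmed diagnosis.
        predicted = [d.lower() for d in case["differential"][:top_k]]
        if case["reference"].lower() in predicted:
            hits += 1
    return hits / len(cases)

# Toy example: 2 of 3 cases contain the reference diagnosis, so the
# score is ~0.67. An "89%" result means 89 hits per 100 cases.
cases = [
    {"differential": ["malaria", "dengue", "typhoid"], "reference": "dengue"},
    {"differential": ["migraine", "tension headache"], "reference": "stroke"},
    {"differential": ["pneumonia", "TB", "bronchitis"], "reference": "TB"},
]
print(differential_accuracy(cases))  # 0.666...
```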

AI will never be worse than this at disease diagnosis. It will soon be much better.

How to Introduce Dr. AI to LMIC Patients?

The greatest barrier to improving healthcare outcomes in low- and middle-income countries (LMICs) isn't technology. It's human psychology. We're facing two distinct but related resistance patterns that threaten to derail the AI revolution just when we need it most.

1. Doctor Acceptance of AI Support

Recent AMA data shows that while 66% of physicians now use AI (a 78% increase over 2023), concerns about AI still exceeded enthusiasm for 25% of doctors in 2024. This resistance manifests in two problematic ways.

Many doctors are reluctant to take advice from AI, even when it can improve their effectiveness and efficiency. This is a normal human reaction to new technology, rooted in a fear of losing autonomy or even relevance. One can argue that the reaction is misplaced, yet it remains common.

At the other end of the acceptance spectrum, we are also concerned that doctors could default to the AI answer simply because it is easier. In some settings, the system could be designed to discourage doctors from reflexively accepting an AI-recommended diagnosis or treatment. That is not our process, but it does happen elsewhere.
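
One illustrative safeguard, sketched below in Python, is to require the clinician to commit to their own working diagnosis before the AI suggestion is revealed, so that agreeing with the AI is a deliberate choice rather than the path of least resistance. This is a hypothetical design sketch, not a description of any deployed system.

```python
# Sketch of an "independent judgment first" workflow: the clinician must
# record their own diagnosis before seeing the AI recommendation.
# Hypothetical design, not any vendor's actual interface.

from dataclasses import dataclass, field

@dataclass
class Consultation:
    clinician_diagnosis: str | None = None
    ai_diagnosis: str = ""
    audit_log: list[str] = field(default_factory=list)

    def record_clinician_diagnosis(self, diagnosis: str) -> None:
        self.clinician_diagnosis = diagnosis
        self.audit_log.append(f"clinician committed: {diagnosis}")

    def reveal_ai_diagnosis(self) -> str:
        # Gate: the AI suggestion stays hidden until the clinician commits.
        if self.clinician_diagnosis is None:
            raise PermissionError("Record your own diagnosis before viewing AI output.")
        self.audit_log.append(f"AI suggestion revealed: {self.ai_diagnosis}")
        return self.ai_diagnosis

c = Consultation(ai_diagnosis="dengue fever")
c.record_clinician_diagnosis("malaria")   # must happen first
print(c.reveal_ai_diagnosis())            # now the AI view unlocks
```

The audit log also makes it possible to measure, after the fact, how often clinicians simply converged on the AI answer.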

Interestingly, more than half of physicians (57%) in an AMA study in the USA said reducing administrative burdens through automation was the biggest area of opportunity for AI. They want AI to handle the paperwork but resist its clinical insights. This is backwards thinking that prioritizes physician comfort over patient outcomes.

2. Patient Acceptance of AI Support

Pew Research found that 60% of U.S. adults say they would feel uncomfortable if their healthcare provider relied on AI for diagnosis and treatment recommendations. Even more concerning, 70% of patients want a human doctor deciding their care, even if AI makes fewer mistakes.

How will patients feel if they learn that their diagnosis and treatment were AI-influenced? Could they distrust or even reject them? A 2024 study in BMC Medical Ethics found that 96% of patients insist AI must remain under continuous physician control, with many expressing concerns about data privacy and the loss of human touch in medical care.

What if patients are not educated about how the AI is trained and deployed? Or worse, what if learning about it gives them even more concern? This resistance is understandable, but it is also dangerous: what happens when insisting on human oversight actually reduces diagnostic accuracy?

We’re creating a system where patient comfort takes precedence over clinical outcomes.

The LMIC Reality: Where AI Matters Most

The Lancet Global Health Commission estimates that almost 9 million lives and $1.6 trillion in productivity are lost each year as a result of poor-quality medical care, with the majority of those losses occurring in LMICs. The challenge is real but solvable: AI models can work in LMIC contexts with thoughtful adaptation.

Yet implementation challenges pale next to the fundamental shortage of skilled healthcare workers. In regions like sub-Saharan Africa, where medical education capacity is severely limited, AI-powered clinical tools represent one of the few scalable ways to increase both the quantity and quality of medical care.

The irony is devastating: the places that most need AI’s diagnostic capabilities are often the least equipped to overcome cultural and technical barriers to implementation.

What can we do?

Filed Under: Healthcare

Written by
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTWorks sponsor.
