
We are quickly reaching an inflection point where withholding Generative AI solutions from healthcare systems in low- and middle-income countries (LMICs) will be as morally indefensible as denying any other life-saving intervention.
Recent research reveals a sobering reality that challenges our traditional ethical frameworks: AI now outperforms doctors in diagnostic accuracy while LMICs face an insurmountable physician shortage that will cost millions of lives.
The evidence and implications are profound.
When Microsoft’s AI diagnostic system achieves 4-fold higher accuracy than physicians at 20% lower cost, we are witnessing the emergence of a new ethical paradigm that demands immediate attention from the global health community.
But let’s be honest about the complexity here.
Healthcare delivery extends far beyond diagnosis, encompassing cultural sensitivity, patient communication, shared decision-making, and nuanced human judgment that current AI systems cannot replicate. We are not talking about AI replacing human caregivers.
I am asking whether we can ethically continue to withhold AI tools that could dramatically improve outcomes while human resources remain catastrophically scarce.
The Depressing Math of Healthcare Scarcity
The numbers alone should keep development practitioners awake at night. The WHO estimates a global shortage of 2.8 million physicians, with LMICs bearing the brunt of this crisis. Consider these realities:
- LMICs will have a 10 million healthcare worker shortage by 2030
- Chad has fewer than one doctor for every 20,000 people
- Mozambique has only 548 doctors for more than 22 million people
- Indonesia looks great by comparison, with 1 doctor per 1,400 people
These statistics represent preventable deaths on an unconscionable scale. How many times have you heard the phrase “we need more doctors” without any realistic plan for actually producing them? The reality: LMICs will never have enough doctors to meet their healthcare needs through traditional training and deployment models.
This scarcity means that in many LMIC contexts, the choice isn’t between AI-assisted care and ideal human care. It’s between AI-assisted care and no care at all.
AI Is Better Than Human Experts
The research landscape has shifted dramatically in the past year. A comprehensive meta-analysis found that AI models demonstrated no significant performance difference compared to physicians overall. But more telling studies reveal AI’s superiority in specific diagnostic contexts:
- ChatGPT alone achieved 90% diagnostic accuracy versus 76% for doctors with AI assistance and 74% for doctors alone
- AI recommendations were optimal 77% of the time compared to 67.1% for treating physicians
- AI chatbots often outperform radiologists but require safeguards against overprescribing
Even more interesting is Stanford research showing that the future lies not in AI replacement but in sophisticated human-AI collaboration. Doctors who collaborate with AI, in systems designed for human-AI reasoning, outperform those who passively consume AI outputs.
Patients Expect AI to Be Better Too!
Most provocatively, recent research published in NEJM AI demonstrates how public perception of medical liability shifts when AI is involved.
The randomized study found that radiologists who failed to find abnormalities detected by AI were viewed as significantly more culpable, with participants 23% more likely to find doctors legally liable when AI disagreed with their diagnosis.
This legal paradigm shift has profound digital health implications.
There is real potential for legal repercussions when doctors fail to diagnose diseases that AI correctly identifies. A doctor who overrides an AI diagnosis can be judged more harshly than one who misses the same finding with no AI support in the first place.
Cultural and Human Dimensions
Healthcare delivery involves human elements that current AI systems simply cannot address.
Cultural competency requires understanding local beliefs about illness, death, family dynamics, and treatment preferences. In many LMIC contexts, healing involves community support systems, traditional medicine integration, and spiritual considerations that AI cannot navigate.
Patient communication remains fundamentally human.
The ability to deliver devastating news with compassion, to navigate family hierarchies in treatment decisions, or to build trust with vulnerable populations requires emotional intelligence and cultural fluency that AI lacks. A patient’s willingness to follow treatment protocols often depends more on their relationship with their healthcare provider than on diagnostic accuracy.
In other words, bedside manner matters most.
Patients of physicians who communicate well have 2.16 times higher odds of adherence than patients of poor communicators. These effect sizes match or exceed those of many standard pharmaceutical interventions, demonstrating that bedside manner is not merely a soft skill but a critical clinical competency.
Moreover, treatment decisions frequently involve complex trade-offs that require human judgment.
Should a family in rural Kenya spend their limited resources on expensive treatments for an elderly grandmother or on preventing childhood malnutrition? These ethical dilemmas require understanding of local contexts, values, and priorities that AI cannot provide.
The most successful health interventions combine technical excellence with deep cultural understanding. AI can dramatically improve the technical component, but it cannot replace the human elements that make healthcare delivery effective and appropriate.
Ethical Imperatives Driving Change
The WHO’s Ethics and Governance of AI for Health guidance acknowledges that AI could also benefit low- and middle-income countries, especially those with significant gaps in healthcare delivery and services where AI could play a role. The guidance emphasizes that AI could help governments extend healthcare services to underserved populations and enable healthcare providers to better attend to patients.
But I believe we must go further while acknowledging these complexities.
The WHO’s six core principles for ethical AI deployment—protecting autonomy, promoting human well-being, ensuring transparency, fostering accountability, ensuring inclusiveness, and promoting sustainability—actually create a moral imperative for AI deployment in resource-constrained settings, provided we design systems that enhance rather than replace human capabilities.
Consider the principle of promoting human well-being: when AI demonstrably saves lives and improves outcomes, withholding it violates this fundamental tenet. The principle of inclusiveness demands that we don’t perpetuate healthcare apartheid where high-income countries benefit from AI advances while LMICs languish with inadequate human resources.
However, the principle of protecting autonomy requires that AI deployment respects local decision-making processes and cultural values. This means AI systems must be designed with input from LMIC communities, not imposed by external actors who assume they know what’s best.
Four Critical Deployment Considerations
We cannot just deploy AI and hope for the best. We must deploy Responsible AI that improves the patient experience. Here are four ways to do that today:
1. Infrastructure and Equity with Cultural Adaptation
The digital divide poses real challenges, but AI tools can be transformative in LMICs battling geographic and economic barriers. The 99DOTS tuberculosis management program exemplifies how low-cost, mobile-based AI approaches can dramatically improve outcomes while reducing costs.
However, successful deployment requires extensive cultural adaptation. AI systems must account for local languages, health beliefs, and communication patterns. A diagnostic AI trained primarily on Western populations may miss conditions common in tropical settings or misinterpret symptoms that have different cultural significance.
2. Professional Standards with Human Oversight
As research from Brown University demonstrates, legal frameworks must evolve to address AI-physician collaboration. The finding that doctors face increased liability when they ignore AI recommendations that prove correct suggests we’re moving toward a new standard of care that incorporates AI insights.
But this standard must preserve space for human judgment, especially regarding treatment decisions that involve cultural, economic, or social factors. The legal framework should protect healthcare workers who override AI recommendations when human judgment suggests a different approach is more appropriate for the specific patient context.
3. Capacity Building with Community Engagement
Successful AI deployment requires robust ethical frameworks that address privacy, bias, transparency, and accountability. This includes ensuring AI models are trained on diverse datasets and validated across different populations to avoid perpetuating health disparities.
Critically, this capacity building must involve LMIC communities as partners in design and deployment, not just as end users. Healthcare workers need training not just in how to use AI tools, but in when to trust them, when to question them, and how to integrate AI insights with cultural and contextual knowledge.
4. Human-AI Collaboration Models
The most promising approach isn’t AI replacing doctors but AI augmenting healthcare workers’ capabilities while preserving essential human relationships. This means designing systems where AI handles pattern recognition and data analysis while humans focus on communication, cultural navigation, and complex decision-making.
For example, an AI system might identify potential tuberculosis cases from chest X-rays, but community health workers would still handle patient education, family engagement, and treatment support. The AI improves diagnostic accuracy while humans ensure culturally appropriate care delivery.
Moral Courage with Human Wisdom
We cannot allow perfect to be the enemy of good when lives hang in the balance, but we also cannot ignore the complexity of healthcare delivery in diverse cultural contexts.
This ethical framework asks whether we can withhold potentially life-saving technology when human alternatives simply don’t exist at scale, and whether we can deploy AI in ways that enhance rather than undermine human dignity and cultural values.
Our sector often gets trapped in endless consultations and pilot programs while real-world suffering continues. Yet there is real damage when technological solutions are imposed without cultural understanding. The challenge is finding the middle path: urgent deployment with careful attention to human factors.
This requires:
- Regulatory frameworks that fast-track AI health tools proven effective in similar contexts while requiring cultural adaptation assessments
- Training programs that prepare healthcare workers to collaborate effectively with AI systems while maintaining their roles as cultural navigators and human advocates
- Legal protections for practitioners who follow established AI-assisted protocols while preserving space for culturally informed human judgment
- Investment mechanisms that make AI tools accessible regardless of ability to pay while funding the human infrastructure needed for appropriate implementation
- Community engagement processes that ensure AI deployment respects local values and decision-making structures
The moral calculus is clear but complex: in contexts where human expertise is scarce and AI demonstrably outperforms available alternatives, deployment becomes an ethical imperative rather than an optional innovation.
But this deployment must be done in ways that honor the irreplaceable human elements of healthcare delivery—the cultural sensitivity, emotional support, and contextual wisdom that no algorithm can provide.
We must have the courage to act on the evidence for AI’s diagnostic capabilities while maintaining the wisdom to preserve what makes healthcare fundamentally human. The time for moral leadership is now, but it must be leadership that embraces both technological possibility and human complexity.

