
5 Key Million-Dollar Questions for Artificial Intelligence in Global Health

By Guest Writer on November 14, 2024


Over the last year, the Audere team has gained multiple insights from our work deploying practical artificial intelligence (AI) solutions in low- and middle-income countries (LMICs). Chief among them: donors and health organizations are increasingly interested in understanding AI’s potential, driven by two key motivations.

  1. AI to improve efficiency and reduce costs, an attractive prospect in today’s climate of dwindling resources for health. Donors are exploring whether AI can help deliver critical health services to more people at a lower cost.
  2. AI to enhance the uptake of health services. Blanket health interventions have struggled to meet people’s unique needs. AI can personalize outreach, predict community-specific needs, and customize care, making health services more accessible and relevant for diverse populations.

The allure of AI is clear. Yet, despite this enthusiasm, many critical questions and misconceptions remain, and they can slow much-needed investment in global health AI at a time when innovation in medicine is booming!

Let’s answer the five biggest questions and show why AI in healthcare is not just hype, but a powerful tool with the potential to transform health systems in LMICs.

Question 1: Is AI Pure Hype or New Reality?

AI is a broad field encompassing many types of technologies with varying levels of maturity and proven impact. Not all AI functionality should be treated under one blanket term; different forms of AI have different degrees of validation, testing, and usability.

For instance, some types of AI—like computer vision for interpreting medical images—have a strong foundation, with years of research supporting their accuracy, reproducibility, and practical applications in healthcare. Tools using computer vision can reliably analyze x-rays, MRIs, and other medical images to support diagnoses, often at a level comparable to human specialists. This type of AI has moved well past the hype phase into genuine impact.

On the other hand, Generative AI is a relatively nascent technology. While LLMs show incredible potential and are advancing rapidly, they present unique challenges around accuracy and consistency. These models occasionally generate incorrect responses, or “hallucinations,” and lack reproducibility, sometimes providing different outputs to similar queries.

This variability raises key questions: What level of error is acceptable in healthcare settings? How can we minimize these errors through added engineering and guardrails? And, critically, can we build robust monitoring and evaluation tools to catch and address inaccuracies in real time?

This is an area of hyperfocus and innovation for us as we adapt these technologies specifically for low-resource settings. For example:

  • AI healthcare solutions can use a combination of health and behavioral data to identify individuals for targeted health screening, counseling, or intervention. These tools are particularly useful for improving the allocation of scarce resources such as limited clinician time or commodities.
  • Jacaranda Health’s PROMPTS system leverages a fine-tuned version of Meta’s Llama model to deliver fluent Swahili interactions over SMS, improving maternal care. Both are examples of AI adding value today.
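
To make the earlier question about engineering and guardrails concrete, here is a minimal, hypothetical Python sketch of how a generative-AI health assistant might screen its own replies before they reach a user. The model call, phrase lists, and escalation rules are illustrative assumptions, not a description of Audere’s or Jacaranda Health’s production systems.

    # Minimal sketch of a pre-send guardrail for a generative-AI health assistant.
    # The model call, phrase lists, and escalation rules are illustrative assumptions.

    UNSAFE_PHRASES = ("stop taking your medication", "no need to visit a clinic")
    URGENT_TERMS = ("bleeding", "unconscious", "seizure")

    def generate_reply(message: str) -> str:
        """Placeholder for a real LLM call (e.g., a fine-tuned open model)."""
        return "Please continue your prescribed treatment and visit your clinic if symptoms worsen."

    def guarded_reply(message: str) -> str:
        """Run the model, then apply simple guardrails before anything reaches the user."""
        draft = generate_reply(message)
        # Guardrail 1: block replies containing known unsafe advice.
        if any(phrase in draft.lower() for phrase in UNSAFE_PHRASES):
            return "I can't advise on that. Please speak with a health worker."
        # Guardrail 2: escalate messages that look like emergencies to a human.
        if any(term in message.lower() for term in URGENT_TERMS):
            return "This sounds urgent. A health worker will contact you; please seek care right away."
        return draft

    if __name__ == "__main__":
        print(guarded_reply("I feel dizzy after taking my malaria tablets."))

Real deployments would layer far more sophisticated checks on top of this (safety classifiers, retrieval of vetted clinical content, human escalation queues), but the basic pattern of gating every model output is the same.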

Question 2: Can AI Overcome Infrastructure Limitations?

A classic misconception is that AI deployment simply isn’t feasible in low-resource environments because of the many infrastructure constraints: limited connectivity, hardware challenges, and interoperability issues.

Mobile banking in Africa in the early 2000s is a great example. Many tech experts believed it would struggle in resource-limited areas, yet M-Pesa became a remarkable success on basic mobile phones. Communities adapt and innovate to make even complex interventions work in resource-limited settings. Infrastructure limitations are real, but they don’t need to halt progress.

  • Urban users can benefit from AI tools on smart devices delivered via WhatsApp, while those in peri-urban areas could access AI features through simpler devices. Jacaranda Health, mentioned earlier, is a good example, delivering health services through SMS on feature phones at low data cost.
  • Deeply rural users see indirect benefits through AI-enhanced tools used by health workers who travel between communities. Emerging connectivity solutions like Starlink and Amazon’s Project Kuiper promise to bring internet access to more rural areas, expanding AI’s reach even further.

Cost remains a substantial hurdle. Current funding mechanisms may struggle to absorb costs sustainably, highlighting the need for cost-sharing models and innovative financing that make AI tools affordable in low-resource settings. Meanwhile, hardware and connectivity are advancing: on-device AI models that work offline are already being developed, allowing even low-cost phones to run diagnostic tools such as rapid malaria test readers.

Successful AI deployment requires coordinated investment in technology and a commitment to building local capacity within health systems. Achieving sustainable impact means strengthening local infrastructure and nurturing human resources for health, empowering communities to leverage AI solutions effectively and independently over the long term.

Large donors like the Global Fund, governments, and other stakeholders should align their priorities and develop a comprehensive, scientifically driven, evidence-based roadmap to guide AI development and deployment.

Question 3: Are AI Solutions Practical Today?

Where are the most practical entry points for AI in LMICs? Do meaningful AI applications even work for these settings? In reality, over 18 high-impact use cases exist today that directly address the unique needs of resource-limited areas.

AI-powered mobile apps, for example, can aid in diagnosing diseases like malaria and tuberculosis. Predictive analytics tools support supply chain management, ensuring critical health supplies reach those who need them most. These applications show that, in specific areas, AI is not only feasible but can make a real difference today.

Yet, the question of AI practicality depends heavily on donor priorities, risk tolerance, and specific mandates. Different types of AI applications exist at various levels of validation and reliability, from administrative support tasks to advanced analytics and clinical decision support. Each use case has its own potential impact and requires the right setting and donor to flourish.

A robust research agenda is essential here, focused on gathering evidence around several key factors. These include:

  • Acceptability and usability: understanding how communities and healthcare providers perceive and interact with these tools.
  • Viability of AI solutions in local contexts: solution adaptability to local infrastructure and resource constraints.
  • Performance benchmarks: AI tools need to meet or exceed existing standards of care, a comparison that often gets overlooked when budgets are tight. Beyond technical performance, it’s essential to evaluate whether these tools genuinely improve health outcomes or simply increase efficiency.
  • Cost-effectiveness: the cost-value ratio for each AI use case can vary significantly by country, impacting both feasibility and long-term sustainability.

Safely adapting AI to global health via careful research and evidence-building demonstration projects can help us decide which AI applications resonate with local contexts, meet practical needs, and offer real value, setting the stage for impactful, scalable AI deployments.

Question 4: Does AI Make Too Many Errors?

AI’s potential for error is real. However, AI need not replace human expertise; it can serve as a decision-support tool, aiding professionals without taking over critical decisions. Combining AI with human oversight often yields the best results—AI and humans make different errors, and together they balance each other’s blind spots.

In resource-constrained settings, human errors often stem from time pressures and high patient loads. AI can handle time-intensive tasks, allowing healthcare workers to focus on critical decisions. Our work with computer vision in telehealth shows that AI and human clinicians form a complementary team, each catching what the other might miss.

AI Oversight Reduces Mistakes

What’s unique about AI is its capacity for ongoing oversight and transparency—qualities that traditional human-led healthcare systems lack to the same degree. Today, human errors in healthcare are often unmeasured and unaddressed due to the lack of data-driven tracking systems.

By contrast, AI offers unprecedented visibility into how and where mistakes occur, allowing for an iterative process of refinement and improvement. Through active monitoring and regular updates, AI systems can evolve over time, capturing insights from their own performance to reduce future errors. This transparency not only helps improve the technology itself but can also set new standards for accountability in healthcare.

Generative AI Needs MEL

Deploying AI responsibly in healthcare requires robust monitoring and evaluation tools both pre-launch and post-launch, particularly with generative AI models, whose outputs may vary each time they’re used. Generative AI tools are designed to be dynamic, mirroring the variability of human responses.

We are extending monitoring and evaluation tools to build trust, confidence, and safety into AI deployments leveraging LLMs. With active oversight, clinicians can feel empowered to intervene when AI flags issues, and systems can track these instances to continually improve performance.
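
As a concrete illustration, here is a minimal, hypothetical Python sketch of the post-launch side of such monitoring: logging each model reply, flagging those that need clinician review, and computing a simple correction rate as an error signal. The flagging rules and record format are assumptions for illustration, not a description of Audere’s actual monitoring stack.

    # Minimal sketch of post-launch monitoring for an LLM-backed health tool.
    # Flagging rules and the record format are illustrative assumptions.
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class Interaction:
        user_message: str
        model_reply: str
        flagged: bool           # needs clinician review
        clinician_verdict: str  # "ok", "corrected", or "pending"
        timestamp: float

    def needs_review(reply: str) -> bool:
        """Very simple rule: anything mentioning dosage or a diagnosis gets a human look."""
        return any(term in reply.lower() for term in ("dose", "dosage", "diagnos"))

    def log_interaction(log_path: str, user_message: str, model_reply: str) -> Interaction:
        """Append one interaction record to a JSON-lines log for later review."""
        flagged = needs_review(model_reply)
        record = Interaction(
            user_message=user_message,
            model_reply=model_reply,
            flagged=flagged,
            clinician_verdict="pending" if flagged else "ok",
            timestamp=time.time(),
        )
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return record

    def correction_rate(log_path: str) -> float:
        """Share of reviewed replies that clinicians had to correct: a basic error signal."""
        with open(log_path, encoding="utf-8") as f:
            records = [json.loads(line) for line in f]
        reviewed = [r for r in records if r["clinician_verdict"] in ("ok", "corrected")]
        return sum(r["clinician_verdict"] == "corrected" for r in reviewed) / len(reviewed) if reviewed else 0.0

In practice, flagged interactions would feed a clinician review queue, and the correction rate would be tracked over time to decide when a model or its guardrails need tightening.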

This commitment to transparency, adaptability, and iterative learning is critical for creating a safer, more reliable roadmap for healthcare AI, ultimately helping AI reach its full potential as a partner—not a replacement—in patient care.

Question 5: Can AI Investments Scale and Have Impact?

Some donors fear that AI is too nascent to offer scalable, impactful solutions in the near term. While AI technology continues to evolve, we are already seeing applications that achieve significant results.

Today, AI funding is mostly focused on pilot projects, with nonprofits, for-profits, educational institutions, and local organizations all vying for small, separate, disjointed grants. To truly unlock AI’s potential in low- and middle-income countries, catalytic tech investments are needed:

  • Real, sustained R&D funding that allows for rigorous testing, iterative refinement, and scaling of the most promising solutions.
  • Substantial support for partnerships that build a broader foundation, bringing together tech developers, local health experts, and implementation scientists to collectively design robust, adaptable solutions for LMICs.
  • Long-term investment in last-mile internet access, high-throughput backbone infrastructure, and the technical expertise to deploy and extend computer technologies.

How can we catalyze this meaningful R&D in LMICs for global health? We need donors committed to bringing transformative AI to scale where it is needed most.

We also need donors willing to invest in responsible AI. Thoughtful integration of AI can help address critical healthcare challenges in low-resource settings, like workforce shortages.

Standing still would be a missed opportunity; the many types of AI offer transformative potential to strengthen health systems and build a more equitable global health future. Let’s proceed with purpose, addressing risks and seizing the chance to achieve real impact.

By Dino Rech, CEO of Audere Africa


