
When the WHO Science Council released its 2025 report Advancing the Responsible Use of Digital Technologies in Global Health, I saw digital health practitioners nod approvingly at the familiar recommendations.
The report follows a predictable pattern that echoes decades of global health strategy documents: improve governance frameworks, interoperability standards, and workforce development.
But buried within Recommendation 6 lies a dangerous confusion. The WHO proposes establishing “digital assistants” to bridge gaps in digital health adoption. This recommendation conflates two fundamentally different approaches under the same misleading term.
The WHO report uses “digital assistants” to describe both:
- Human digital assistants hired to help users navigate complex systems
- Digital assistant software that makes systems intuitive from the start
This ambiguity represents more than semantic carelessness. It reveals a sector-wide failure to distinguish between compensating for poor system design and actually fixing it.
Understanding this distinction will determine whether low- and middle-income countries build sustainable digital health systems or institutionalize expensive workarounds that drain resources from patient care.
The Fatal “Digital Assistant” Confusion
The WHO report uses “digital assistants” to describe both human staff and AI-powered software. This conflation obscures the most critical choice facing digital health policymakers today:
- Should countries invest in permanent human workforces to compensate for poorly designed systems?
- Or should they demand better-designed systems enhanced by intelligent software?
The report states that human digital assistants (or digital health navigators) can help bridge divisions and training gaps in digital health, assisting providers and clients with tasks such as scheduling, messaging, accessing records and navigating systems. It then notes that with rapid advances in AI, many digital assistants may soon become software that functions as virtual agents via natural language and voice interfaces.
This sequential framing fundamentally misunderstands what each approach represents.
Human digital assistants—staff hired by ministries of health to help health workers and patients use digital systems—are essentially expensive workarounds for bad design.
With only 17% of countries having structured digital health funding, and with workforce training identified as the lowest-performing area in global digital health implementation, the sector cannot afford to institutionalize expensive workarounds.
The World Bank’s 2024 Digital Development Report emphasizes that sustainable digital transformation requires systems that reduce rather than increase the human resources needed for operation. When a health system requires permanent human intermediaries to make digital tools usable, those tools have failed basic user experience standards.
Hidden Economics of Digital Assistants
The distinction matters enormously for three reasons that practitioners consistently overlook.
1. The economic implications are staggering.
Hiring human digital assistants creates permanent operational expenses that scale linearly with population. Every additional clinic requires proportionally more navigators. According to the WHO’s own data, health worker shortages will reach 18 million by 2030, making any strategy that requires additional permanent staff fundamentally unsustainable.
By contrast, AI-powered conversational assistants have high upfront development costs but minimal marginal costs. Once deployed, they can serve millions without requiring additional staff. In countries where 83% lack structured digital health funding, this difference between recurring human costs and scalable software solutions determines whether digital transformation is financially viable.
2. Human digital assistants create a moral hazard.
If governments commit to hiring permanent staff to make unusable systems usable, what incentive do vendors have to invest in genuine user-centered design? The digital assistant workforce becomes a subsidy for poorly designed products, allowing them to persist despite fundamental usability failures.
3. Practitioners are conflating two distinct problems.
We face two distinct problems: genuine digital literacy gaps and poor system design. Some populations genuinely need education about digital health concepts, such as what a patient portal offers and why data sharing matters. But that educational need differs fundamentally from needing ongoing human assistance to navigate poorly designed interfaces.
The former requires time-limited digital literacy campaigns; the latter reveals systems that shouldn’t have been deployed. Yet how many of the latter do we see (and get frustrated with) every day?
The Seductive Appeal of Creating Jobs
We often miss this crucial distinction for interconnected reasons rooted in our institutional biases.
The global health sector emphasizes job creation, making workforce expansion seem inherently beneficial. In contexts with high unemployment and critical health worker shortages, proposing a new cadre of digital assistants appears to address multiple problems simultaneously.
This framing obscures the opportunity cost: every dollar spent on human navigators is unavailable for nurses, community health workers, essential medicines, or the software solutions that could eliminate the navigation problem entirely.
The AI literacy gap among health policymakers compounds this problem.
Many decision-makers’ mental models of “AI assistants” may have been shaped by frustrating encounters with early-generation chatbots. They don’t realize that modern AI-enhanced tools achieve high patient engagement rates precisely because they provide immediate, accessible support that human-staffed systems cannot match at scale.
Learning From Other Sectors
The evidence from other sectors starkly illustrates what’s missing from the WHO’s analysis.
India’s Unified Payments Interface revolutionized digital finance for hundreds of millions of users—without deploying armies of “payment navigators” to help citizens use the system. The success came from establishing clear standards and systems designed to be intuitive enough that smartphone users could transact independently.
Natural language chatbots now serve as “digital front doors for health systems,” facilitating engagement without requiring human staffing at each interaction point. This is precisely the model that should be replicated, not replaced by human intermediaries.
The Implementation Evidence
Emerging implementation evidence reveals the fundamental limitations of human digital navigator programs. A 2023 primary care implementation study found that while digital navigators could help patients enroll in portals, the rollout was seriously handicapped by the absence of anticipated provincial interoperability standards and by system integration issues.
The navigators couldn’t fix the underlying problems. They could only help individual patients work around them, one interaction at a time. This is precisely the expensive, non-scalable model that the WHO’s recommendation threatens to institutionalize.
By contrast, healthcare organizations implementing AI-powered conversational assistants report transformative outcomes. AI operational assistants can analyze healthcare data in real time and complete administrative tasks autonomously, delivering productivity improvements across operational roles.
These AI solutions don’t just compensate for poor design—they actively improve it. Natural language interfaces eliminate the need for users to navigate complex menu structures or learn specialized terminology.
What We Must Do Differently
The WHO’s “digital assistant” recommendation should be read as an urgent call for clarity, not a unified implementation roadmap. When implementing it, digital health policymakers must choose the path that overcomes current limitations rather than perpetuating them.
1. Reject human digital assistant programs.
Ministries of health should view human digital navigators only as temporary bridge measures during system transitions, never as permanent career paths. If a digital health system requires ongoing human intermediaries to function, it has failed basic usability standards and should be redesigned or replaced.
2. Prioritize AI-powered conversational interfaces.
Instead of budgeting for human digital assistant salaries, countries should invest in software-based AI assistants powered by natural language processing. Development partners should condition funding on demonstrated usability improvements through AI rather than human workforce expansion.
3. Establish mandatory usability standards.
No system should be deployed if end users cannot operate it successfully. If vendors claim their systems are “too complex” for AI assistance, or, worse, require human digital assistants to support end users, then those systems are too complex for any human users and should be rejected.
The WHO’s recommendation, properly interpreted, points toward digital assistant software that enhances an already well-designed user experience, an expectation that can be codified in strong usability standards. Let us invest in better digital experiences, not in humans hired to accommodate poor digital design choices.

