
Stop Development Project Failure! Use This People-Centered AI Playbook

By Wayan Vota on January 27, 2026


I’ve lost count of how many AI-for-development projects I’ve seen crash and burn. Not because the technology wasn’t impressive or the intentions weren’t noble, but because teams fundamentally misunderstood what makes Generative AI work in real-world development contexts.

The latest evidence comes from Dalberg Data Insights’ People-Centered AI Playbook, which methodically dismantles the Silicon Valley mindset that’s been imported wholesale into development contexts.

After working with NGOs, social enterprises, and governments across health, agriculture, education, and financial inclusion, Dalberg's message is consistent: organizations need practical support to move from theory to action, and they don't want to reinvent the wheel.


The Problem with Tech-First Thinking

The development sector has fallen into the same trap that plagued early ICT4D initiatives: assuming that importing methodologies from high-resource contexts will somehow work in environments with completely different constraints. The 18 AI applications we highlighted earlier show what’s possible, but they don’t explain why so many similar initiatives stumble.

Dalberg’s framework starts with a radical premise: before considering any technology, teams must ground their ambitions in real user needs, organizational realities, and workflow challenges. Their six-phase approach (Discover, Define, Design, Develop, Pilot, Scale) deliberately front-loads the human research that most teams treat as an afterthought.

Consider their “Discover” phase, which can take weeks of user interviews, workflow mapping, and organizational assessment before a single line of code gets written. This is a fundamental rejection of the “build first, find users later” mentality that dominates mainstream AI development.

3 Critical Insights Challenging Conventional Wisdom

This playbook isn’t just another framework. It’s a direct challenge to how we think about AI adoption in low-resource settings. The core argument is provocative: most AI projects fail because teams skip the human work that makes technology sustainable.

1. AI Readiness Is About People Systems

Most AI readiness assessments focus on technical infrastructure: bandwidth, devices, data pipelines. Dalberg flips this by emphasizing what they call “people readiness”: the extent to which intended users, staff, and partners are willing, skilled, and motivated to adopt and sustain an AI solution.

The playbook references diagnostic tools that assess strategy, data maturity, ethical considerations, and organizational culture as primary determinants of success. Microsoft’s AI Readiness Assessment and GSMA’s AI Ethics framework get mentions, but Dalberg’s own DART assessment is specifically built for social impact in low-resource settings.

This people-first approach explains why government-led initiatives with existing infrastructure integration consistently outperform standalone digital solutions, as we’ve observed in our analysis of AI governance challenges.

2. Problem Definition Beats Solution Innovation

The playbook’s most contrarian element is its Define phase, which systematically tests whether AI is even the right tool for identified challenges. They include a decision framework asking whether tasks are high volume, repetitive, or pattern-based, and whether simpler tools could solve them just as effectively.

This represents a fundamental philosophical shift. Instead of starting with AI capabilities and seeking applications, teams begin with specific workflow challenges and test whether AI offers measurable performance gains over alternatives like workflow redesign, basic digital tools, training, or policy changes.

The framework includes explicit guidance to flag challenges for non-AI approaches and avoid building AI for its own sake. In resource-constrained contexts, deploying AI without clear fit can waste time, introduce risk, or make systems more fragile.
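To make the Define-phase logic concrete, here is a minimal sketch of that kind of go/no-go screen. This is an illustration in the spirit of the decision framework described above, not Dalberg's actual tool; all names and criteria thresholds are my own assumptions.

```python
# Illustrative sketch (not Dalberg's actual tool): a go/no-go screen
# mirroring the Define phase's questions about task profile and
# whether simpler alternatives would work as well.
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str
    high_volume: bool            # many cases per day/week?
    repetitive: bool             # same steps each time?
    pattern_based: bool          # decisions follow learnable patterns?
    simpler_tool_suffices: bool  # would workflow redesign, basic digital
                                 # tools, training, or a policy change
                                 # work as effectively?

def ai_fit(c: Challenge) -> str:
    """Flag challenges for non-AI approaches unless the task
    profile actually favors AI."""
    if c.simpler_tool_suffices:
        return "non-AI approach: simpler tools can solve this as effectively"
    if c.high_volume and (c.repetitive or c.pattern_based):
        return "candidate for AI: test for measurable gains over alternatives"
    return "non-AI approach: task profile does not favor AI"

triage = Challenge("patient intake triage", True, True, True, False)
print(ai_fit(triage))
# candidate for AI: test for measurable gains over alternatives
```

The point of encoding the check this way is that "simpler tool suffices" short-circuits everything else, which is exactly the playbook's ordering: alternatives are ruled out before AI is considered.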

3. Scaling Means Building Robust Systems

The final insight challenges how we think about successful AI deployment. Dalberg’s Scale phase isn’t about user acquisition; it’s about institutionalization, continuous development, and contextual adaptation.

Their framework recognizes that what works in one setting may not work in another, and it requires teams to re-examine assumptions, language, and data flows as solutions expand across geographies or user groups. This adaptive approach stands in stark contrast to platform thinking that assumes universal applicability.

The playbook emphasizes that scaling requires shifting from activity tracking to impact evaluation using proportionate, credible methods to validate performance, equity, and cost-effectiveness. This evidence-based approach to expansion explains why USAID’s AI implementation guidance emphasized continuous learning and iterative development.

Cross-Cutting Enablers: Where Real Work Happens

Perhaps most importantly, the playbook identifies three cross-cutting enablers that run through all six phases: People, Equity & Inclusion, and Data Governance. These aren’t add-on considerations—they’re fundamental design requirements.

  • The People dimension recognizes that success depends on building trust, aligning leadership, and equipping teams with the skills and confidence to use AI responsibly. This human-centered approach keeps people at the center: engaged, trained, and supported throughout adoption.
  • The Equity & Inclusion framework requires teams to examine who is represented in their data, who participates in testing, and who faces barriers such as limited connectivity, literacy, language, or access to devices. This systematic attention to inclusion helps prevent unintended harms and ensures AI delivers value across different needs and contexts.
  • Data Governance encompasses data quality, access, privacy, security, and compliance throughout all phases, ensuring AI systems are ethical, reliable, and contextually appropriate.

Implementation Reality Check

The playbook acknowledges what practitioners already know: few teams have every skill in-house.

Their pragmatic approach suggests partnerships with universities, local tech-for-good groups, or global networks for specialized support, while outsourcing short-term tasks like data labeling and training internal teams for core functions.

This collaborative model aligns with our observation that successful AI initiatives require interdisciplinary approaches that combine technical expertise with domain knowledge. As Stanford’s Human-Centered AI Institute demonstrates, bringing together computer scientists, ethicists, social scientists, and domain experts produces more robust and sustainable solutions.

The framework also provides practical templates for user persona development, problem statement framing, use case definition, and feasibility assessment. These tools translate abstract methodologies into actionable workflows that teams can implement immediately.

What This Means for Our Development Practice

This playbook matters because it offers a methodologically rigorous alternative to both AI evangelism and AI skepticism.

It neither dismisses AI’s potential nor accepts uncritical adoption. Instead, it provides a systematic approach to determine when, where, and how AI can create meaningful value in development contexts.

The framework’s emphasis on iteration and evidence-based decision-making reflects what we know about successful technology adoption in resource-constrained settings: solutions must be designed for local conditions, validated through real-world testing, and adapted based on user feedback.

Most significantly, the playbook positions AI as one tool among many, not as an inherent good. By requiring teams to justify AI solutions against alternatives and measure impact against defined value drivers, it promotes responsible innovation that serves user needs rather than technological possibilities.

For development organizations considering AI initiatives, this framework offers both roadmap and reality check.

The future of AI in development won’t be determined by algorithm advances or funding announcements. It will be shaped by whether we’re willing to do the human-centered work that makes technology truly useful. This playbook shows us how.

Filed Under: Management

Written by
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, Career Pivot, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTWorks sponsor.