
Where Should International NGOs Start Their Generative AI Journey?

By Matt Haikin on February 3, 2026


A few weeks back I had a really energising conversation with Bert Maerten, the International Program Director for Oxfam Denmark. Like many in his position, Bert was looking to make sense of the fast-moving world of Generative AI – not the hype, but what other NGOs are actually doing and what it means for Oxfam:

“We’re all on our own AI journey. Many of us are already experimenting with it, and everyone has a learning curve. For Oxfam Denmark, the big question is: how do we move forward as an organisation? Where should we focus our energy? What’s worth embracing, what’s just hype?”

This blog summarises that conversation, lightly expanded with some thoughts and examples I’ve pulled in since, which I think are helpful for international NGOs to consider.


If you’re working in aid/development and feeling a bit like everyone else has AI figured out while you’re still trying to get ChatGPT to stop lying to you, this may be an interesting and reassuring read.




Everyone Feels Behind on AI

One of the first things I said to Bert was that he wasn’t behind. Everyone just thinks they are thanks to the incredible GenAI FOMO.

It’s a weirdly universal feeling. Some donors are writing grand AI strategies. Some INGOs are issuing press releases and LinkedIn posts at pace. But scratch the surface, and you’ll usually find that what’s actually happening is cautious experimentation, a few quietly enthusiastic individuals, and a lot of internal confusion.

In fact, a Coursera webinar I joined a few days after our call (on navigating the EU AI Act) shared a model that made this point nicely: most organisations are still firmly in the “Experimentation” phase. I suspect that in aid/development, many haven’t even made it that far and are stuck at “Awareness” – knowing they need to do something, but unclear on where to start.

It reminds me of the state of INGOs with mobile/digital in 2011/12 – there was the same sense of panic – “everyone’s doing it, we’re late, what’s our strategy?” – and the same uneven reality. Sure, some were full steam ahead on mHealth apps, but most were still figuring out how to get reliable 3G in their country offices.

There is one big difference this time: Speed!

With digital and mobile, you could afford a 2–3-year learning curve. With AI, the cycles are more like 3-6 months. A tool you haven’t heard of today could be making headlines (or causing a scandal) next quarter. The public expectations, policy debates and practical applications are all accelerating, and that puts real pressure on INGOs to respond. Not perfectly. But fast.

While this pace of change makes AI feel like a revolution unlike any that has come before, in reality the technology is an evolution that has been underway for decades. It’s the latest tech in a long story of digital transformation – it adds complexity, opacity, and new risks, but the foundations are familiar: user needs, new use-cases, data ethics, responsible practice, capacity building, governance. All things the sector has grappled with before.

What Are Other iNGOs Doing?

It can be hard to get a clear picture of what’s actually happening beyond the hype, the tech-evangelists, the tech-doomsayers and the vendor-driven ‘success’ stories. So, combining my personal experience, the proceedings from this year’s AI4D conference, and a bit of help from ChatGPT, I shared some ideas for Bert and Oxfam Denmark to consider.

I find it useful to break this down into five overlapping layers. Not a formal framework – just a rough mental map that’s helped me make sense of different types of activity.

1. Personal use

The most active layer – but perhaps the least visible. Your teams are definitely already using ChatGPT, Claude, Gemini, Copilot etc. to draft reports, summarise meeting notes, write ToRs, clean up data, structure workshop agendas, rewrite clunky text, debug code, analyse data, translate documents… the list goes on.

It’s often informal, ad-hoc, and under the radar (see Wayan’s LinkedIn articles on Shadow AI), which means organisations are benefiting from AI productivity and/or suffering from AI harms without necessarily knowing it, guiding it, learning from it, or managing the risks that come with it.

If you’re sitting there thinking: “we’re not doing anything in AI now”, I guarantee someone in your team is already quietly using AI to reword that donor report. So, you may be further along than you think.

2. Institutional use

Many organisations have intentionally adopted AI tools for internal processes – Knowledge Management, MEL, HR support, document summaries for reporting etc. Some are experimenting further, with tools that extract insights from collections of long-form narrative reports, or auto-summarise learnings from complex evaluations.

There’s also growing use of more advanced approaches: Retrieval Augmented Generation (RAG) systems that apply the reasoning power of Large Language Models (LLMs) to your own proprietary and internal documents; custom GPTs and chatbots trained on organisational FAQs to help clients find information or speed up onboarding for new staff; and Agentic AI that interfaces directly with other systems and databases.
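
To make the RAG idea concrete, here is a minimal sketch of the pattern in Python: embed a handful of internal document snippets, retrieve the ones most relevant to a staff question, and pass only those to an LLM. It assumes the sentence-transformers library is available; the document snippets and the ask_llm helper are hypothetical placeholders for your own content and whichever model provider your organisation has approved.

```python
# Minimal RAG sketch (assumptions flagged in the lead-in above).
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical chunks pulled from policy manuals, reports, SOPs etc.
documents = [
    "Per diem rates for field travel are reviewed every January.",
    "All partner agreements must include a safeguarding clause.",
    "Monitoring data must be anonymised before leaving the country office.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)


def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k document chunks most similar to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity, since vectors are normalised
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]


def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in your organisation's approved LLM client."""
    raise NotImplementedError


def answer(question: str) -> str:
    """Ground the model's answer in retrieved internal content only."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

In practice you would use a vector database and a proper chunking pipeline rather than an in-memory list, but the flow is the same: retrieve first, then generate.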

It’s patchy, and often led by a few enterprising individuals or a specific team driving things forward, rather than a coordinated organisational strategy.

Some of the applications most discussed include:

  • Knowledge Management & Search: AI-assisted search over internal documents using RAG; staff query policy manuals, reports and SOPs in natural language
  • Document Workflows & Donor Reporting: AI synthesises programme data, generates report drafts, flags anomalies in monitoring data
  • Fraud & Risk Monitoring: Machine learning for fraud detection across operations (e.g. GiveDirectly)
  • MEL Automation & Augmentation: Automated evidence mapping, AI-assisted qualitative analysis (e.g. NVivo or MAXQDA), real-time signal detection, geospatial early warning (e.g. WFP’s Prism or UNICEF’s GeoSight), remote evaluation
  • Responsible AI Governance: Drafting policies, risk assessments, metadata standards, AI literacy training, red lines

3. Customer-facing use

Some NGOs are already applying AI externally – with communities, partners, or service users. It’s rarer, and many are being more cautious, but useful examples and pilots are emerging: translation tools embedded in field apps; WhatsApp bots for information access; AI-driven diagnostics for humanitarian response.

This layer raises bigger ethical and governance questions – bias, transparency, data security, informed consent – but it also offers the most potential for positive or negative impact (with robust evidence of impact on outcomes or equity still relatively scarce).

Some of the experiments that are published or discussed at conferences include:

  • Chatbots & Advisory Tools: WhatsApp/Messenger bots for Q&A, agricultural advice (e.g. UlangiziAI in Malawi), refugee support chatbots, behaviour change, mental health support
  • Predictive Targeting & Eligibility: ML models for poverty targeting, crisis vulnerability scoring, prioritization
  • Anticipatory Action & Early Warning: AI-driven flood forecasting, hazard models triggering cash transfers before crisis hits
  • Social Listening & Rumour Tracking: Processing multilingual community feedback to detect patterns, rumours, emerging needs
  • Geospatial Advisory: Satellite-based crop/land analysis, exposure mapping, remote advisory
  • Accessibility Tools: Voice interfaces, speech-to-text, text-to-speech, local-language support for low-literacy users

4. Governance

Finally, there’s the question of how organisations manage all this – whether by publishing guidance, setting guardrails, offering training, or engaging in sector-level initiatives.

Despite the flurry of AI ethics principles and responsible AI toolkits in the last year or so (many borrowed or adapted from the private sector or academia), few INGOs have yet turned these into clear, operational guidance and policies – let alone meaningful conversations with staff about what’s allowed, what’s risky, what’s encouraged, what training is needed etc.

5. Ecosystem level

We didn’t discuss this level much. Coalition-building, regulatory advocacy, sector standards, shared infrastructure etc. It’s messy, and slow-moving, but essential if we want the sector to align coherently!

5 Key GenAI Challenges for iNGOs

My déjà vu returned when we looked at the challenges people are talking about – if you drop the term AI, it could be a conversation around digital from 2011. Different tech, same underlying worries:

  • Do we have the right skills?
  • What are the risks?
  • Are we being ethical?
  • Who owns this stuff internally?
  • Are we already too far behind?
  • Are we just chasing vendors?
  • Are we piloting things we can’t scale?

I’ve wrestled our notes into the five top AI-adoption issues I see facing INGOs today:

1. Slow systems vs fast tech: Governance is falling behind

AI is moving at a pace that simply doesn’t fit the way most INGOs are structured to work. Foundation models update in 3–6-month cycles, meaning assumptions, skills, and policies can become outdated as soon as, or even before, they’re finalised.

A very practical example is good-practice advice on writing prompts.

Just a few months ago, prompt engineering advice was to write carefully structured multi-shot prompts to get good results. Today, while good prompts remain important, most frontier models (ChatGPT, Claude, Gemini etc.) handle plain-English instructions well in most situations, and advice has evolved towards context engineering across platforms.
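
To make that shift concrete, here is a minimal, hedged sketch contrasting the two styles. It uses the OpenAI Python client purely as an example; the model name, the sample sentences and the task are illustrative assumptions, not recommendations.

```python
# A hedged sketch contrasting an older multi-shot prompt with a plain-English
# instruction. The model name and sample sentences are assumptions; swap in
# whatever provider and model your organisation has approved.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Older advice: show the model several worked examples (multi-shot prompting).
few_shot_prompt = """Rewrite donor-report sentences in plain English.

Input: "The intervention facilitated enhanced livelihood outcomes."
Output: "The project helped people earn more."

Input: "Stakeholder engagement modalities were operationalised."
Output: "We met regularly with partners and communities."

Input: "Synergies were leveraged across thematic verticals."
Output:"""

# Current advice: a clear plain-English instruction, plus the relevant context,
# is usually enough.
plain_prompt = (
    "Rewrite this donor-report sentence in plain English for a general reader: "
    "'Synergies were leveraged across thematic verticals.'"
)

for prompt in (few_shot_prompt, plain_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

With current frontier models both prompts tend to produce usable output; the plain instruction is simply easier to write, read and maintain.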

Concerns over harms have shifted just as rapidly – where bias and hallucinations once dominated, more complex challenges like AI bullying, manipulation and gaslighting have now emerged.

Leaders are asking the right questions – What’s safe to pilot? Where are our red lines? When do we involve communities? – but few have frameworks that can match the speed and complexity of what’s unfolding.

This is not for lack of templates. The Charity AI Governance Framework and Vera Solutions’ 9 principles offer starting points, and the EU AI Act will make aspects of this mandatory for relevant organisations. The hard part is operationalising them:

  • What to pilot,
  • What to prohibit,
  • When to pause,
  • How to monitor impact,
  • How to evaluate whether an AI system is doing what it claims.

This rapid pace isn’t a reason to avoid engaging with AI – but it does demand a different kind of organisational response. Policy cycles aren’t designed to keep up.

One of the clearest structural issues across the sector is how to operationalise this quickly. For decades, many have been calling for more iterative and adaptive approaches to NGO governance. With the speed of AI, such adaptive governance is no longer optional but a necessity.

2. Shadow AI: Use without oversight

The majority of AI use in INGOs is informal, invisible, and often unacknowledged. Which means that the biggest productivity gains – and the biggest risks – are emerging outside any formal strategy, support or safeguards.

Widely shared surveys suggest over 60% of NGO staff are already using GenAI tools in some way: summarising reports, translating content, drafting emails, brainstorming ideas. Often they’re doing this without formal approval – and sometimes deliberately hiding it from managers.

This isn’t necessarily bad. People are solving real problems. The problem is the secrecy. If staff don’t feel safe talking about their AI use, then organisations can’t learn from it, can’t govern it, and can’t build on it.

Instead, Generative AI quietly starts reshaping how knowledge is created, how decisions are made, and how risks are handled – without ever showing up in an “AI strategy”.

3. Skills gaps that ripple upwards and outwards

Across the sector, staff are already being asked to make AI-related decisions – often without the skills or confidence to do so. From spotting hallucinations in a draft, to judging whether a model might reinforce bias, to knowing when to raise ethical concerns – these aren’t traditional parts of a programme officer’s job description. But suddenly, they are part of their job.

The capability gap isn’t just on the frontlines. Boards feel responsible for AI risks, but many lack the background – or trusted internal advisors – to help them make informed choices. This leads to a kind of paralysis: cautious interest with no one quite sure what action is safe to take.

An added consequence of this gap is that decision-making often defaults to whoever sounds most confident – often external vendors. Let’s be honest, some development actors have a poor history of relying heavily on tech providers to help them define the problems – ending up with the vendor also setting the agenda to match the solutions they are pitching.

There are emerging resources to help – like MERLTech’s tool for assessing AI vendors – but if organisations don’t have enough internal capacity to challenge, adapt, or even fully understand vendor claims, then the risks of solutionism multiply.

4. Weak data, weaker evidence

Even when AI models are strong, their outputs are only as good as the data we feed them – and most INGOs aren’t ready on that front.

Internal data is often messy, unstructured, fragmented across teams, or locked away in PDFs and legacy systems. Labels are inconsistent, documentation is thin, and metadata is often an afterthought. Which means even if you plug in a great model, the results can be underwhelming – or actively misleading.
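
As a small illustration of the “locked away in PDFs” point, here is a hedged sketch using the pypdf library to pull text and metadata out of a legacy report; the file name is hypothetical.

```python
# Illustrative only: how much structure survives when a legacy report is
# extracted from PDF. The file name is hypothetical.
from pypdf import PdfReader

reader = PdfReader("annual_report_2019.pdf")

# Metadata is often blank or auto-generated, which makes AI-assisted search
# and governance (who wrote this? when? for whom?) much harder.
print(reader.metadata)

# Extracted text usually arrives as one unstructured block per page, with
# tables flattened and headings lost; this is the raw material an LLM inherits.
for page in reader.pages[:2]:
    print(page.extract_text()[:500])
```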

It’s not just inputs that are the problem – there is a real lack of robust evidence around outputs too.

Case studies are often anecdotal, vendor-driven or written for donor visibility. That’s not the same as rigorous evaluation.  Few pilots have been independently assessed. Fewer still have had the time needed to look at long-term outcomes, cost-effectiveness, or unintended harms.

This puts INGOs in a difficult spot: trying to make responsible, strategic decisions in the dark – or, worse, based on stories written by the companies selling the tools.

5. Real risks of harm and exclusion

We know that AI systems can reinforce bias, misinterpret context, or exclude marginalised groups – especially where tools don’t work well in local languages or low-literacy settings.  These are well-documented risks. But most INGOs aren’t yet equipped to spot them, let alone address them.

Many of these harms are subtle, structural or cumulative; they don’t always trigger obvious red flags – which means they can slip through even the most well-meaning governance process.

At the same time, the perceived urgency to get to grips with AI risks northern actors treating the Global South as their testing ground – if we’re serious about shifting power in the sector, AI has to be part of conversations about locally-led development and Southern partners need to be part of the conversations about AI.

4 Suggestions for iNGO AI Leadership


I don’t claim to have all the answers – and Bert wasn’t asking for a blueprint – but based on our conversation, and other chats I’ve had with INGOs over the past year, four suggestions surfaced as perhaps the best places to focus first.

These aren’t about becoming an AI-first organisation or launching a flashy innovation lab. They’re more about creating the right conditions for thoughtful, useful progress.

1. Turn Shadow AI into a learning opportunity

Pretending your staff aren’t already using AI isn’t just pointless, it’s risky. The real choice isn’t “To AI or Not To AI”, it’s between unmanaged, invisible use and visible, supported use. The latter is far safer.

The first step is simply to bring it into the open. Encourage staff to share what they’re already doing – the prompts that worked, the ones that didn’t, the weird behaviours they’ve spotted, the results they’ve improved.

When people “show their workings,” you get a goldmine of rapid and actionable insights: what’s delivering value, where people are struggling, where skills are lacking, and what needs governance.

This isn’t about encouraging reckless use. It’s about accepting reality, and making it safer. By starting with the day-to-day – how people actually do their work – you can surface what’s useful, what’s risky, and where your real organisational needs are.

2. Get your house in order before going external

Before launching anything community-facing, INGOs should focus first on using AI where risk of harm is lower and learning opportunities are high – this is most often internal-facing work. This might mean:

  • Retrieval Augmented Generation (RAG) search across internal documents,
  • Improved knowledge management,
  • Synthesising donor reports,
  • Translating internal communications.

These are relatively safe spaces to experiment – and immediately useful! More importantly, they surface tough questions early, such as:

  • Who owns which data?
  • Who has access to the data or the models?
  • What metadata is missing?
  • What assumptions are baked into the documents we rely on?

Of course, the pace of change means things aren’t entirely sequential – it’s just not realistic to test everything rigorously internally before even starting any external-facing work. Be adaptive, be iterative, but use internal work to build the competence and guardrails that external-facing tools will need, without exposing communities to risks of harm as you learn.

Don’t run your first experiments on people’s lives when you don’t yet know what you’re doing yourself.

3. Build governance that can keep up

Many INGOs still approach AI governance the way they would any other policy: write it, review it in 18 months, maybe update a training module or two. But AI doesn’t work on that timeline – and neither can we.

With core models shifting every few months, assumptions break quickly, pilots become outdated fast, and governance needs to adapt in real time. That means governance must support:

  • Red lines that are clear but flexible
  • Pilots that are short, time-bound and iterative
  • Success/failure criteria defined upfront (and the adage of ‘fail fast’ really taken to heart!)
  • MEL involvement from the start
  • Community input wherever possible, Southern partners involved as a bare minimum
  • Regular reviews as the tech evolves

In this context, governance needs to be living, learning, and looped into decision-making. Not just a PDF on your intranet. Not just a compliance box. But something that actively shapes – and is shaped by – what’s happening on the ground.

4. Upskill everyone – AI literacy is entry-level now

AI isn’t just for specialists. It’s fast becoming a basic skill – much like email and spreadsheets became 10–15 years ago.  While most staff don’t need in-depth understanding of how Large Language Models are built, they do need enough firsthand experience to:

  • Write effective prompts
  • Spot dodgy or biased outputs
  • Know when AI shouldn’t be used
  • Escalate issues when they’re unsure
  • Contribute meaningfully to programme design, MEL and governance

A framework I’ve found helpful is adapted from early thinking around IT skills (via e-Skills UK), which categorised skills into Users, Professionals, and Strategists. The labels need updating, but the logic still holds – different roles need different skills.

  • AI Users
    Example roles: Most staff across programmes, operations, MEL, comms, admin
    Skills needed: Use AI safely for tasks like summarising, translating, drafting, analysing; spot hallucinations; iterate prompts; escalate when needed; know when not to use AI; apply basic judgement when outputs look plausible but feel “off.”
    Why it matters: Without this, staff can’t meaningfully contribute to conversations about risk, MEL, or programme design. It also reduces the knowledge gap between “AI people” and everyone else.
  • AI Integrators / Professionals
    Example roles: Programme leads, MEL teams, data staff, innovation teams
    Skills needed: Understand data governance, bias and evaluation risks, how to ask good vendor questions, participatory approaches; grasp basics of RAG, embeddings, open-source vs. proprietary; adapt methods as tools and risks shift.
    Why it matters: These are the bridge-builders – translating programme needs into technically viable solutions, and avoiding bad assumptions or over-engineered solutions.
  • AI Leaders / Strategists
    Example roles: Senior leadership, boards, digital/strategic leads
    Skills needed: Set direction and red lines; judge which problems need AI (and which don’t); understand regulatory pressures (EU AI Act, GDPR); oversee risk; fund or stop pilots; make decisions under uncertainty as models evolve.
    Why it matters: Strategic decisions shape everything – where AI is used, who is impacted, what gets piloted, and who bears the risk. Leaders don’t need to be technical, but they do need to be literate.

Plus a few important caveats…

  • AI developers are missing from the list – for now. Most INGOs currently rely on external contractors for AI development, but as tools become more accessible (e.g. via “vibe coding”), internal development capacity will become more important.
  • These skills are not static. With core models updating frequently, this needs to be ongoing, not a one-off training sprint!
  • The EU AI Act raises the bar. It introduces explicit expectations around organisational competence and oversight. I haven’t unpacked that fully here, but it’s something INGOs operating in or funded from Europe will need to take seriously.
  • It’s critical that country-office and implementing-partner skills develop alongside those of HQ staff – how this happens will be different for every organisation, but it mustn’t be overlooked.

The good news? You don’t have to build all this from scratch. There are hundreds of online courses available – in multiple languages – covering core AI literacy, including some sector-specific options.

Where Should iNGOs Start Their AI Journey?

Looking back at this conversation and similar ones I’ve had – there is clearly no lack of interest in AI, but there is a lack of certainty over where and how to start.

In reality, most organisations have already started, whether they know it or not: people are curious, they’ve been experimenting, and some are probably building things that are genuinely useful. But often this is happening without structure, leadership or shared learning.

Not every INGO needs a shiny AI tool, a glossy AI vision document or their own chatbot.  Most simply need to embrace the uncertainty, accept that the gate is already open and won’t be closed again, and have open conversations across all teams (not just IT!) about how to move forward – cautiously, creatively, and collectively.

Our conversation covered a fraction of a very broad topic. Does it resonate with your own experience, or are things very different from where you are sitting?

How are you seeing AI journeys being started – whether within your own organisation or with your donors, implementing partners, etc.?

Filed Under: Featured, Opportunity

Written by
Matt Haikin is a Digital Transformation consultant, practitioner and published researcher specialising in participatory approaches to technology. He has worked at all levels and led teams for INGOs, multi-laterals, civil society and community organisations in the UK and the global South.
