
Is Generative AI Revolutionizing Social and Behaviour Change Programming? 

By Guest Writer on July 4, 2024


Content generated by Generative AI (GenAI) comes close to mimicking content created by humans, opening up many new possibilities for Social and Behaviour Change (SBC) programming while also producing new challenges and reinforcing existing ones.

Social and Behaviour Change practitioners and researchers face many questions before they can decide whether to leverage GenAI as part of their programming, let alone how – just as they did when other digital technologies and tools such as SMS/text messaging, Interactive Voice Response (IVR), mobile web, apps, social media and chatbots first emerged.


The MERL Tech Initiative, on behalf of iMedia Associates, convened SBC and digital development practitioners from over 30 organizations for a 1-day workshop where we began collaboratively developing a research agenda related to SBC and GenAI.

How Could GenAI Benefit SBC?

We are wrestling with our own personal and internal uses of the technology while we figure out how to use it within programming. In the rush to adopt GenAI, we risk failing to pause and reflect on the many ways a new technology can be used, focusing only on the most hyped use cases.

During our consultation, we first looked at how GenAI could be used across the whole spectrum of programme activities, and how practitioners were already using it, or planning to use it.


Examples of use (sometimes actual, but mostly potential) covered:

  • programme development & operations (proposal development, managing documentation to support decision-making, research support & gap identification, internal working groups and governance, content generation);
  • community-facing products (conversational assistants/interfaces, for example a voter education chatbot to support free elections and combat disinformation; triage support before signposting to human-led services, for example Sexual and Reproductive Health services; misinformation/fake news tracking; education support for school children, e.g. Rori by Rising Academies);
  • language-based applications (creating models in native languages, for example by Data Science Nigeria, and using GenAI for agile translation); and
  • MERL (social media and feedback data analysis, including sentiment analysis as sketched below; general data analysis; performance analysis of products or interventions; voice data analysis of call center interactions, for example Family Planning support conversations; automated services for post-intervention follow-up; managing feedback and accountability to programme participants; developing M&E frameworks and designing M&E plans).
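To make the sentiment-analysis use case above concrete, here is a minimal sketch of how community feedback data might be scored, assuming the Hugging Face transformers library is available; the feedback messages are invented and the pipeline's default model is just one of many possible choices.

```python
# Minimal sketch: sentiment analysis of community feedback messages.
# Assumes the Hugging Face `transformers` library; the messages below are invented.
from collections import Counter

from transformers import pipeline

feedback = [
    "The voice messages helped me remember my clinic appointment.",
    "The chatbot did not understand my question about side effects.",
    "Very useful information, thank you.",
]

# Default sentiment-analysis pipeline (downloads a small English model on first run).
classifier = pipeline("sentiment-analysis")

results = classifier(feedback)
summary = Counter(result["label"] for result in results)

for message, result in zip(feedback, results):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {message}")

print("Overall:", dict(summary))
```

In practice the interesting work starts after a sketch like this: choosing a model that handles the languages your participants actually use, and deciding how the aggregate scores feed back into programme decisions.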

Interestingly, many participants did not have a strong understanding of Generative AI beyond the existence of ChatGPT and other conversational assistants, signposting the potential for both missed opportunities and unmanaged risks.

Questions to Ask: GenAI and SBC

Although the workshop was convened for the purpose of developing a research agenda, what emerged quickly was that most practitioners and researchers were not ready to articulate complex research questions. Rather, they were seeking guidance, training and capacity building on if, when, and how to effectively and responsibly implement GenAI.

They were also interested in accessing case studies of existing applications, as well as data that could be used to design and benchmark their own indicators of adoption, engagement and impact. Interest in organizational guidance (on when and when not to use GenAI) and in “how would we get started” figured heavily in our discussions. Tellingly, very few examples of live products supported or powered by GenAI were flagged.

We did identify a range of research and evidence needs and big questions, particularly around data rights and privacy, and the risks posed by the bias and potentially unreliable content generated by LLMs. It’s worth noting that many questions were similar to those practitioners have asked at the peak of earlier ICT hype cycles.

Guidance and capacity building requirements raised by participants included the need for resources on:

  1. Getting started with GenAI within SBC programming: What could I be using it for? What’s the first step?
  2. Using GenAI within programmes: how do I implement it? What skills do I need? What external organizations might I need to work with? How long will it take to be up and running? How much is it going to cost?
  3. Data, privacy and wider safeguarding aspects: What do I need to know? What do I need to plan for? How will my existing policies and processes need to change? What could go wrong and how do I mitigate the possible risks? What data can I put into different kinds of chatbots?
  4. Ethics: How can I implement a GenAI intervention ethically? What ethical considerations do I need to know about? What guidance exists already? Who is accountable for GenAI interventions?
  5. Localisation, relevance, trust: How do I localize and contextualize my GenAI intervention to mitigate bias and improve relevance for my end users? How much does this influence uptake and impact? How do I build trust and confidence in my GenAI service?
  6. M&E: How do I evaluate a GenAI powered intervention? How can I use GenAI within evaluation activities?
  7. In-house uses of GenAI: How can I support staff in their own use of GenAI in their work? How do I need to regulate this? What policies should be in place internally? What about for partners, grantees, subcontractors, and vendors? How do I know if GenAI has been used for something?

GenAI Issues in SBC

One issue in using GenAI in SBC is cost: cost-benefit analyses comparing the time and budgetary efficiencies of GenAI versus non-GenAI interventions were seen as crucial.

One of our speakers, Maria Dyhsel from Tangible AI, shared that whilst GenAI responses within their Syndee chatbot pilot were more flexible and responsive than those provided by their predictive model, leading to longer conversations, the engagement was not necessarily more productive (based, for example, on exercise completion). And whilst time was saved on conversation design and scripting, more time needed to be spent on guardrails, moderation and red-teaming. Small insights such as these were greeted with much enthusiasm, underlining (again) the need for a culture of data transparency.
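As a small illustration of why guardrails and moderation eat into the time saved on scripting, here is a toy pre-filter that screens user messages before they ever reach a generative model. The blocked-topic list, the canned replies, and the `call_llm` stub are hypothetical placeholders for illustration, not Tangible AI's implementation.

```python
# Toy guardrail sketch: screen user input before passing it to a generative model.
# The topics, canned replies, and `call_llm` stub are illustrative assumptions only.

BLOCKED_TOPICS = {
    "self-harm": "Please contact a trained counsellor: <helpline number>.",
    "medical dosage": "For dosage questions, please speak to a health worker.",
}

def guardrail_check(user_message: str) -> str | None:
    """Return a safe canned response if the message touches a blocked topic."""
    lowered = user_message.lower()
    for topic, safe_reply in BLOCKED_TOPICS.items():
        if topic in lowered:
            return safe_reply
    return None

def call_llm(user_message: str) -> str:
    # Placeholder for a real model call (e.g. a hosted LLM API).
    return f"[generated reply to: {user_message}]"

def respond(user_message: str) -> str:
    # Guardrail first, generation second; a production service would also need
    # moderation of the generated output, logging, and regular red-teaming.
    return guardrail_check(user_message) or call_llm(user_message)

print(respond("What medical dosage should I take?"))
print(respond("Can you remind me when the clinic opens?"))
```

Even a crude filter like this has to be written, tested against real conversations, and maintained as new risky topics emerge, which is where the extra effort reported in the pilot tends to accumulate.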

Another issue is the urgent need for cross-sectoral benchmarking studies and for the fostering (by funders especially!) of a transparent data and knowledge-sharing environment to support the development of realistic indicators of reach, adoption, engagement and impact. At the level of risk assessment, we also discussed the need for case studies examining the financial, environmental and ethical costs of using GenAI, to help funders and grantees alike decide whether it is advisable to proceed.

Beyond these requirements for case studies, some interesting early-stage research questions did emerge. These were broadly focused on two workstreams, namely programme/service/content design, and MERL activities.

Questions that felt particularly pressing concerned community readiness and digital literacy for GenAI-powered or GenAI-supported services (How do users understand and conceptualise GenAI? Do they trust it? Can there be such a thing as informed consent relating to data privacy in the age of GenAI?), with an emphasis on understanding how this may differ along generational, gender, socio-economic and geographic lines.

Other research areas relating to impact evaluation would gather evidence on whether the increased personalisation, and therefore relevance, of GenAI-powered services such as chatbots would actually increase adoption and impact compared to, say, a decision-tree style chatbot (the sketch below illustrates the contrast). Similarly, MERL practitioners working in SBC programming would like to see hard evidence on the extent to which GenAI really speeds up the emergence of reliable insights from programme data – an important cost and efficiency question that would have significant ramifications for day-to-day practices.
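To make that comparison concrete, here is a simplified sketch contrasting a decision-tree chatbot, where every path is authored in advance and fully predictable, with a generative one that produces free-text replies. The menu content and the `generative_reply` stub are illustrative assumptions, not any particular organization's product.

```python
# Simplified contrast: decision-tree chatbot vs generative chatbot.
# Menu text and the generative_reply stub are illustrative assumptions.

# Decision-tree style: every path is authored in advance and fully predictable.
DECISION_TREE = {
    "start": {
        "prompt": "Reply 1 for clinic hours, 2 for family planning info.",
        "options": {"1": "clinic_hours", "2": "family_planning"},
    },
    "clinic_hours": {"prompt": "The clinic is open 8am-4pm, Monday to Friday.", "options": {}},
    "family_planning": {"prompt": "Here is an overview of available methods...", "options": {}},
}

def tree_reply(state: str, user_input: str) -> tuple[str, str]:
    """Follow the scripted tree; anything unexpected falls back to the menu."""
    node = DECISION_TREE[state]
    next_state = node["options"].get(user_input.strip(), "start")
    return next_state, DECISION_TREE[next_state]["prompt"]

# Generative style: one free-text call, more personalised but less predictable,
# which is why the evaluation and benchmarking questions above matter so much.
def generative_reply(user_message: str) -> str:
    return f"[model-generated, personalised reply to: {user_message!r}]"

print(tree_reply("start", "1")[1])
print(generative_reply("My clinic is far away, what are my options?"))
```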

Join the SBC Working Group

Have your own thoughts to add on this topic? You can join the NLP CoP SBC Working Group’s next meeting, where we’ll be hearing from two organizations already using GenAI within their work and opening the floor to further input into this research agenda.

You’ll be joining over 600 development and humanitarian practitioners in the NLP CoP convened by The MERL Tech Initiative to help us to develop capacity building resources to address the “GenAI and SBC 101” type questions raised here.

By Isabelle Amazon-Brown, who designs chatbots & AI for positive social impact.

Filed Under: Marketing


3 Comments to “Is Generative AI Revolutionizing Social and Behaviour Change Programming?”

  1. Ben Bellows says:

    Interesting article. Diving into it, I was hoping to see a discussion on hybrid approaches that bring genAI into specific points of behavior change journeys. LLMs bring certain strengths to the SBC space, but also risks. Assessing where and how to use LLMs could help organizations that are struggling with “why use LLMs”.

    • Isabelle says:

      Great point Ben – in recent guidance we’ve put together for orgs grappling with these questions this is exactly the starting point we’ve recommended. As with all use of digital for dev, the first step should ideally be a pragmatic assessment of whether and where the tech can actually be most useful – with a mandate to decide not to use it if the projected cost/benefits just don’t stack up! Would love to hear more about this hybridized approach if this is something you’re taking at Nivi!

  2. How do I get involved?