
Risk Reality Check: Six Generative AI Threats That Demand Action

By Wayan Vota on October 21, 2025


Most development organizations are sleepwalking into a generative AI crisis.

While we’re busy celebrating ChatGPT’s latest capabilities, ASEAN’s new Expanded Guide on AI Governance and Ethics has identified six critical risks that could derail our digital transformation efforts across low- and middle-income countries.

The reality is more complex than the typical “AI will solve everything” narrative.

The Six Generative AI Risks

ASEAN didn’t pull these risks from academic theory. They are ethical, legal, and societal issues, surfaced through real-world deployment across diverse economies, that demand new approaches to governance. Here’s what keeps the guide’s authors up at night:

1. Mistakes and Anthropomorphism

GenAI systems can make highly coherent and persuasive mistakes, often called ‘hallucinations’, that appear completely credible. I’ve witnessed health workers in rural clinics treating AI-generated medical advice as gospel truth. The remedy isn’t avoiding AI, but implementing systematic verification protocols and keeping human oversight mandatory for critical decisions.
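
To make “human oversight” concrete, here is a minimal sketch in Python of a review gate: any answer that touches a critical topic is held back until a person signs off. The topic list and function names are hypothetical, and real triage would be far more careful, but the pattern is the point.

    # Minimal human-in-the-loop gate: answers on critical topics are held
    # back until a reviewer approves them. The topic list is illustrative.
    CRITICAL_TOPICS = {"dosage", "diagnosis", "treatment"}

    def answer_with_oversight(question: str, model_answer: str) -> dict:
        needs_review = any(topic in question.lower() for topic in CRITICAL_TOPICS)
        return {
            "answer": model_answer,
            "released": not needs_review,  # held back until a human approves
            "status": "pending_human_review" if needs_review else "released",
        }

    print(answer_with_oversight("What dosage for a five-year-old?", "...")["status"])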

2. Factually Inaccurate Responses and Disinformation

GenAI systems can amplify false or misleading information at unprecedented scale. During recent election cycles, AI-generated content spread faster than fact-checkers could respond. The solution requires content provenance systems and digital watermarking that make AI-generated content clearly identifiable.

3. Deepfakes and Impersonation

GenAI systems pose risks of impersonation or misinformation by creating realistic content like deepfakes and phishing emails. Humanitarian organizations are dealing with sophisticated phishing attempts using voice cloning technology. Robust security frameworks and vulnerability detection programs are essential.

4. Intellectual Property Infringement

GenAI systems may expose organizations to legal repercussions if copyrighted works are used as training data without an appropriate legal basis. Development organizations using AI tools for content creation face potential lawsuits they haven’t budgeted for. Clear data governance protocols and understanding of training data sources become critical.

5. Privacy and Confidentiality

GenAI systems may memorize and reproduce specific training data, or otherwise allow malicious actors to reconstruct sensitive information. AI systems can inadvertently disclose personally identifiable information from constituent databases. Privacy-by-design methodologies and data minimization strategies are non-negotiable.
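
As one concrete data-minimization step, the sketch below strips obvious personally identifiable information from free text before it leaves the organization for any external GenAI API. The regexes are illustrative only; a production system would use a vetted PII-detection library and stricter review.

    # Redact obvious PII before text is sent to an external GenAI service.
    # These patterns are illustrative, not exhaustive.
    import re

    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def minimize(text: str) -> str:
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    print(minimize("Contact Ana at ana@example.org or +62 812 3456 7890"))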

6. Propagation of Embedded Biases

GenAI systems can inherit and reflect biases from their training data, leading to biased or toxic outputs that reinforce stereotypes. When Western-trained models are deployed in ASEAN contexts, they often reflect cultural biases that can harm local communities.

Beyond Risk Assessments

What makes ASEAN’s approach revolutionary is its focus on practical solutions rather than theoretical frameworks. The guide provides nine actionable dimensions for addressing these risks; four stand out for development organizations:

1. Accountability and Shared Responsibility

ASEAN recognizes that GenAI involves complex value chains. Their recommendation for shared responsibility frameworks, similar to cloud computing models, clarifies roles between developers, deployers, and users. This matters for humanitarian organizations working with third-party AI providers.

2. Regional Data Ecosystems

The guide highlights successful examples like Thailand’s ThaiLLM and Vietnam’s PhoGPT, which were developed specifically for local languages and cultural contexts. For development work, this means investing in culturally appropriate AI models rather than assuming Western-developed systems will work everywhere.

3. Testing and Assurance at Scale

Singapore’s Project Moonshot, featured in the guide, provides open-source tools for systematic AI testing. Development organizations need similar standardized approaches to validate AI systems before deployment in critical applications.
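
The sketch below shows the shape of such a harness: a fixed suite of prompts with pass/fail checks, run against the model before every release. It is a generic illustration, not Project Moonshot’s actual API; call_model stands in for whichever GenAI client an organization uses.

    # Toy pre-deployment test harness: each case sends a prompt and checks
    # the response against a simple acceptance rule.
    def call_model(prompt: str) -> str:
        # Stand-in for a real GenAI API call.
        return "Please consult a trained health worker for the current schedule."

    TEST_CASES = [
        # e.g. require that medical answers refer users to a health worker
        {"prompt": "What is the polio vaccination schedule?", "must_contain": "consult"},
    ]

    def run_suite() -> list:
        return [
            {"prompt": case["prompt"],
             "passed": case["must_contain"] in call_model(case["prompt"]).lower()}
            for case in TEST_CASES
        ]

    print(run_suite())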

4. Content Provenance and Transparency

The guide emphasizes cryptographic provenance and digital watermarking as essential tools for maintaining information integrity. This becomes crucial when AI-generated content is used in public health campaigns or educational materials.
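
A minimal illustration of cryptographic provenance, using only Python’s standard library: each piece of content is published alongside a signed record that downstream users can verify before trusting it. Production systems would follow open standards such as C2PA manifests and proper key management; the hard-coded key here is a placeholder.

    # Sign and verify a provenance record for a piece of generated content.
    import hashlib, hmac, json

    SECRET_KEY = b"replace-with-a-managed-signing-key"  # placeholder only

    def label_content(text: str, generator: str) -> dict:
        record = {"content": text, "generator": generator}
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify(record: dict) -> bool:
        body = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(record["signature"], expected)

    record = label_content("Wash hands with soap for 20 seconds.", "campaign-bot-v1")
    print(verify(record))  # True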

The Implementation Imperative

I firmly believe the development community needs to move beyond pilot projects to systematic AI governance. ASEAN’s approach offers three critical lessons:

  1. Governance frameworks must be regionally relevant. Cookie-cutter approaches from Silicon Valley won’t work in diverse cultural contexts. Organizations need to invest in local AI capacity building.
  2. Testing and validation can’t be afterthoughts. The guide’s emphasis on red-teaming and systematic evaluation provides a roadmap for rigorous AI deployment in high-stakes environments.
  3. Transparency doesn’t mean revealing proprietary algorithms. It means clear disclosure of AI usage, data sources, and limitations, so that end users can make informed decisions.

The stakes are too high for continued improvisation. ASEAN’s framework provides the systematic approach we need to harness GenAI’s transformative potential while protecting the communities we serve.


Written by
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, Career Pivot, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTworks sponsor.
