
New Toolkit: How to Apply Artificial Intelligence Ethically in Your Programs

By Wayan Vota on March 10, 2021


As decision making and recommendations become increasingly automated by algorithms, it is critical that we are intentional about producing fairer development outcomes and proactive in mitigating the harmful effects of bias in humanitarian assistance.

Nonprofits play an important role in ensuring just and equitable outcomes for individuals and communities we support. We have a responsibility to understand the risks of artificial intelligence in order to guide the design and use of this emerging technology toward optimal public benefit. We should be asking questions like:

  • How might artificial intelligence design and implementation cause disproportionate harm?
  • How well do we understand machine learning results?
  • Would we recognize bias or inequities when (or before) they occur?
  • What happens when algorithms go wrong?

We should learn how to anticipate consequences of machine learning solutions we’re creating and using, and to take deliberate steps to address the risks. This is why NetHope’s AI Working Group has put together a set of AI Ethics resources for nonprofit organizations focusing on increasing fairness and avoiding bias.

AI Ethics for Nonprofits Toolkit

The AI Ethics for Nonprofits toolkit helps humanitarian organizations explore the ethical impact of artificial intelligence and machine learning solutions on the individuals and communities we support, including:

  • Intentional harms, such as hate speech, misinformation, and weaponization
  • Infringement on rights and values, such as surveillance
  • Unfair outcomes, like discrimination and prejudice stemming from bias

Today, bias is one of the most recurrent harms generated by automated technologies. Bias is the systematic favoring of one group relative to another based on specific categories or attributes such as gender, race, age, or education level. Bias can affect who receives food, health care, education, or any other assistance or support.
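To make "systematically favoring one group" concrete, here is a minimal sketch of one common fairness check, demographic parity, which compares positive-outcome rates across groups. The function name, the two-group data, and the outcomes are all hypothetical illustrations, not part of the NetHope toolkit:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Compare positive-outcome rates across groups.

    records: list of (group, outcome) pairs, where outcome is 1
    (e.g. assistance granted) or 0 (denied). Returns the gap between
    the highest and lowest group rates, plus the per-group rates.
    A large gap suggests the system favors one group over another.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening decisions for two demographic groups
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 — group A is approved three times as often
```

Demographic parity is only one of several fairness definitions, and which one is appropriate depends on the program context; the point is simply that bias can be measured, not just debated.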

This is why the first installment of the toolkit focuses on building capacity to apply the ethical considerations related to the principle of fairness, and to take deliberate steps during the development and implementation of AI systems to mitigate the risk of algorithmic bias on end users.

The toolkit is designed to help you learn some of the fundamentals of AI ethics and then immediately practice applying ethical considerations related to the principle of Fairness in the context of several humanitarian and international development use cases. It includes a step-by-step process that explores Fairness across all stages of a machine learning project, from problem definition and data collection to model creation, implementation, and maintenance.

More Artificial Intelligence Guides

Filed Under: Data

Written by
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTworks sponsor.

2 Comments to “New Toolkit: How to Apply Artificial Intelligence Ethically in Your Programs”

  1. Ronald Denye says:

    Hello, I have read through all the opportunities, but what if I wanted support for an idea different from what I see you posting there? Can I get such support, or can you link me to opportunities like that so I can apply?
    Thanks, and waiting for your reply.

    • Wayan Vota says:

      Ronald, we post new opportunities all the time. However, we do not have any control over their focus, and we do not have funding of our own either. You will need to do your own research to find other opportunities.