While many artificial intelligence (AI) tools originated in the United States, Europe, and China, the development and adoption of AI in lower- and middle-income countries (LMICs) have been accelerating rapidly.
Fueled by the increasing availability of computational power, improved connectivity, and data, AI tools have the potential to help tackle some of the world’s most pressing issues by spurring economic growth, improving agricultural systems, enabling higher-quality education, and addressing health and climate challenges.
Artificial Intelligence Challenges in LMICs
While applications of AI in LMICs are in their early stages, many pilot projects and technology-driven business models demonstrate the potential for AI to benefit underserved populations, better connect local communities and international technology firms, and improve lives. However, as with other emerging technologies—from cryptocurrency to 5G—AI presents challenges as well as new opportunities, especially as it transitions from Western settings to LMICs.
Broadly speaking, these challenges fall into two categories. The first comprises deficiencies in technology capacity and policy making faced by LMICs. The second involves deficiencies inherent to the “architecture” of AI systems and how they are developed.
1. “Artificial Stupidity” and Algorithmic Bias
The first architectural issue involves “artificial stupidity” and algorithmic bias. In the public imagination, AI can often appear to make decisions without the influence of human foibles and misjudgments. However, AI systems are far from infallible. Even a well-designed algorithm must make decisions based on data, which is in turn prone to the same flaws and errors we encounter in all spheres of life.
Algorithms also often make judgment errors when faced with unfamiliar scenarios. This so-called artificial stupidity can extend still further, to the point where AI may make decisions that not only resemble human misjudgment but reproduce human bias and prejudice.
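To illustrate how flawed or skewed data can translate directly into biased decisions, consider a minimal, hypothetical sketch. The groups, labels, and majority-vote “model” below are invented for illustration and not drawn from any real system:

```python
from collections import Counter

# Hypothetical toy "classifier": for each group, it simply predicts the
# majority label seen in its training data. Real systems are far more
# complex, but the data-dependence problem is the same.
def train_majority_classifier(records):
    """records: list of (group, label) pairs. Returns a predict function."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    overall = Counter(label for _, label in records)

    def predict(group):
        # Unseen groups fall back to the overall majority label.
        counts = by_group.get(group, overall)
        return counts.most_common(1)[0][0]

    return predict

# Skewed training data: group "A" is well represented; group "B" barely is.
training = [("A", "approve")] * 90 + [("A", "deny")] * 5 + [("B", "deny")] * 5
model = train_majority_classifier(training)

print(model("A"))  # "approve" -- learned from ample data
print(model("B"))  # "deny" -- 5 samples now drive every future decision for B
```

Whatever imbalance or prejudice is encoded in the training records is reproduced, at scale, in every subsequent prediction.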
2. The “Black Box” Problem
A second ethical issue inherent to the architecture of AI systems is the so-called “black box” problem, faced by high-income nations and LMICs alike. Not only is AI prone to error and bias, but the reasons for faulty decisions are not easily accessed or readily understood by humans—and are therefore difficult to question or probe.
Toward Ethical AI in International Development
As is the case with many emerging technologies, the international development community is faced with a conundrum: on the one hand, we recognize the immense potential that AI tools have in solving some of the more complex development challenges facing LMICs.
Indeed, the development community is already piloting these tools across various sectors. On the other hand, AI tools appear to have ethical challenges built into their foundations. These intrinsic challenges are likely to have a pronounced effect when AI applications are introduced in LMIC settings.
How can we take a balanced approach that moves us toward more ethical uses of AI in international development while still reaping its benefits? To answer this question, we outline four recommendations in Toward Ethical Artificial Intelligence in International Development that are focused on key areas of future investment by bilateral and multilateral donors:
1. Develop ethical AI frameworks with country-specific ethics
AI poses new philosophical and ethical questions that experts, policy makers, and societies at large are only beginning to grapple with. In response, AI researchers in the United States and Europe have developed frameworks through which to examine ethical decision making on AI projects and minimize algorithmic harms.
We recommend that the international development community adapt these frameworks, beginning with research to determine if they are partially or wholly applicable to LMICs and to understand how ethics are construed in countries of interest. Organizations and individuals from and in LMICs should be meaningfully incorporated into this research agenda.
2. Diversify data, designers, and decision makers
A number of architectural issues in AI have their roots in a lack of diversity—especially a lack of diversity in the training data used to develop AI systems and in the backgrounds of people who design AI systems and decide when they’re deployed. The international development community can invest in ways to increase diversity in these areas:
- Data: AI systems—often developed in Western contexts with Western-centric training data—need access to training data from the Global South. Without this information, AI tools used in the Global South will reinforce the norms and biases of the societies in which their source data was collected.
- Designers: Much like the AI research community, the community of AI designers and developers is homogeneous—in terms of both technical and identity group background. Adding more diversity in terms of gender, race, and ethnicity would introduce new perspectives to the AI conversation. Achieving ethical AI will also require an interdisciplinary approach, involving a more diverse group of data scientists, software developers, and statisticians, as well as engaging people who specialize in complementary fields such as history, law, and anthropology.
- Decision makers: While improvements can be made to AI systems and the processes by which they are developed, issues such as the black box problem stem as much from their application as from their inherent architecture. We need more gatekeepers equipped to make informed decisions about when—and when not—to deploy AI.
3. Develop ethical AI metrics for implementations
Frameworks are the first step, but they can only take AI ethics so far. The development community should develop clear metrics to help AI designers and deployers determine if they are taking adequate steps to counter or mitigate AI bias.
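One way to make such metrics concrete is to measure outcome gaps between groups. The sketch below is a hypothetical example, assuming binary model decisions and two groups of interest; it computes the widely used demographic parity gap (all names and data are illustrative, not a prescribed standard):

```python
# Hypothetical sketch of one candidate bias metric: the demographic parity
# gap -- the difference in positive-outcome rates between two groups.
def positive_rate(decisions):
    """decisions: list of 0/1 model outcomes for one group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% positive outcomes
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% positive outcomes
gap = demographic_parity_gap(group_a, group_b)
print(gap)  # 0.5 -- a large gap could flag the system for further review
```

A single number like this cannot settle whether a system is fair, but agreed-upon thresholds for such metrics would give designers and deployers a shared, auditable checkpoint.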
4. Cultivate partnerships between Global South and North
Adapting ethical frameworks, increasing diversity, and developing clear metrics will demand increased partnership between developed and developing countries. Building on existing AI partnerships, especially North-South and South-South relationships, will create a community and nurture conversations that inform foundational research, data sharing, metrics, and technical assistance for governments and policy makers.
This post is a lightly edited version of Toward Ethical Artificial Intelligence in International Development by Gratiana Fu, with contributions from Miriam Stankovich and Anand Varghese.