
10 Takeaways: First Global Index on Responsible Artificial Intelligence

By Guest Writer on July 18, 2024


“Responsible AI” has emerged as the key concept for achieving peaceful and equitable human futures with AI.

Responsible artificial intelligence refers to the design, development, deployment and governance of AI in a way that respects and protects all human rights and upholds the principles of AI ethics through every stage of the AI lifecycle and value chain. It requires all actors involved in the national AI ecosystem to take responsibility for the human, social and environmental impacts of their decisions.


The responsible design, deployment and governance of AI should be proportionate to the purpose of its use and should meet the technological needs of the individuals and societies it seeks to serve.

Global Index on Responsible AI

The Global Index on Responsible AI (GIRAI) is the first tool to set globally relevant benchmarks for responsible AI and assess them in countries around the world. This study constitutes the largest global data collection on responsible AI to date. In its first edition, the Global Index on Responsible AI covers 138 countries, including 41 countries from Africa.

The Global Index on Responsible AI measures 19 thematic areas of responsible artificial intelligence, which are clustered into 3 dimensions: Human Rights and AI, Responsible AI Governance, and Responsible AI Capacities. Each thematic area assesses the performance of 3 different pillars of the responsible AI ecosystem: Government frameworks, Government actions, and Non-state actors’ initiatives.

The Global Index on Responsible AI provides insights into the following questions:

  1. What is the global state of responsible AI?
  2. What actions have countries taken to advance their commitment to practicing responsible governance, use and development of AI?
  3. What are the evident regional and global trends emerging in relation to the implementation – or lack thereof – of responsible AI standards?
  4. What are the major capacity gaps in advancing responsible AI governance and practice around the world?
  5. What does and should responsible AI entail in different regions of the world?

Top 10 Takeaways from GIRAI

The Global Index on Responsible AI adopts a multifaceted approach to measurement in order to generate insights on the performance and competencies of the responsible AI ecosystem within each country across the 19 thematic areas and 3 dimensions.

1. AI governance is not responsible AI

The Index found that while there are many examples and approaches to AI governance, the existence of frameworks governing AI does not necessarily mean that responsible AI is being promoted and advanced, or that human rights are being protected.

Countries that performed well in the Index were able to demonstrate a wide range of governance mechanisms – including sector specific policies and legislative frameworks – to safeguard human rights and advance responsible AI development and use.

2. Limited mechanisms for protecting human rights

Few countries have mechanisms in place to protect human rights at risk from AI. Such mechanisms include AI impact assessments to measure the real and potential harm of AI systems, access to redress and remedy where harm occurs, and public procurement guidelines that address the adoption of AI by the public sector, which often includes the use of AI in the delivery of socio-economic rights and citizen services.

3. International cooperation is important

Across all regions, international cooperation was the highest scoring thematic area, demonstrating the foundations for global solidarity toward responsible AI. The majority of countries assessed were able to demonstrate activity around international cooperation on responsible AI.

A significant finding is that the work of UNESCO constitutes the most significant mechanism to date for building country-level capacity in responsible AI. International cooperation should be leveraged to advance responsible AI around the world and bridge the AI divide.

4. Gender equality remains a critical gap

Despite growing awareness of the importance of gender equality in AI, most countries have not yet made significant efforts to promote it. Gender equality was one of the lowest performing thematic areas of the Index. Only 24 of the countries assessed had government frameworks addressing the intersection of gender and AI. Non-state actors are showing greater activity in this field, specifically civil society organisations and academic institutions.

5. Inclusion and equality are not addressed

Few governments consider inclusion and equality in AI to be a priority. The thematic areas relating to the rights of marginalized or underserved groups performed among the lowest. In addition, the Index found that non-state actors, and particularly civil society groups and academic institutions, were playing a crucial role in pulling up performance in key thematic areas relating to equality and inclusion, including gender equality, labor protections and the right to work, bias and unfair discrimination, and cultural and linguistic diversity.

6. Workers are not adequately protected

Few countries are ensuring the existence of labor rights to protect workers and employees as the use of AI increases in the workplace, and as new AI-driven platforms and gig economies emerge. Efforts to ‘upskill’ workforces do not correlate with sufficient labor protections for workers whose jobs might be at risk of displacement by AI, or for those working in new AI-related industries.

7. Cultural and linguistic diversity must be included

The respect, promotion and advancement of cultural and linguistic diversity as part of a country’s efforts to ensure responsible AI is essential to address some of the major cultural and linguistic imbalances in current AI models, particularly when it comes to large language models (LLMs).

If used responsibly, AI can help promote diversity and protect low resourced languages and cultural heritage. AI applications spanning multiple language groups serve more people and promote inclusivity in AI. However, the results demonstrated that few countries were considering the promotion of cultural and linguistic diversity in their responses to AI.

8. Safety, security and reliability gaps are huge

Only 38 countries (28% of those assessed) have taken steps to address the safety, accuracy and reliability of AI systems, and only 34 (25% of those assessed) have government frameworks in place to enforce technical safety and security standards for AI.

Given the globally interdependent nature of cyber systems and cybersecurity – as well as the growing number of cases of malicious AI use – this finding is deeply concerning. The technical integrity of AI on a global scale is not secure and is at risk.

9. Universities and civil society play crucial roles

Universities take the lead in terms of non-state actors in almost all regions of the world, followed by civil society organizations. During data collection for the Index, more than 500 university and academic institutions worldwide were identified with activities toward responsible AI, along with over 400 civil society organizations and over 350 private sector actors.

10. Responsible AI is still a distant worldwide goal

Despite the global proliferation of AI development and use, the majority of countries around the world are far from adopting responsible AI. 67% of the world’s countries scored 25 or less out of 100 in the Index, and a further 25% scored above 25 but no higher than 50.

This means that nearly 6 billion people across the world are living in countries that do not have adequate measures in place to protect or promote their human rights in the context of AI.

A lightly edited synopsis of the Global Index on Responsible AI 
