Introducing the Artificial Intelligence Ethics Playbook from GSMA

By Wayan Vota on September 21, 2022

Artificial intelligence is a powerful, emerging force that is transforming business and society. The potential of these technologies to unlock benefits for organisations and society is only beginning to be seen. AI can help organisations to improve prediction, optimise operations, allocate resources more efficiently, and personalise digital solutions. PwC estimates AI could contribute $15.7 trillion to the global economy by 2030.

However, artificial intelligence isn’t a futuristic technology; it is present in our everyday lives, used across a wide variety of industries. The mobile industry is no different. AI is at the core of operational and business models for an increasing number of mobile network operators (MNOs). Three common uses are:

  • Core business optimisation: MNOs are using AI to improve efficiency in network optimisation, real-time network monitoring, predictive maintenance and network security.
  • Personalised customer interaction: MNOs are improving communication with customers through robotic process automation, virtual assistance, intelligent pricing and B2B sales optimisation.
  • AI-driven mobile data products for external stakeholders: MNOs are using their data to provide services to third parties, such as predicting customers’ media content preferences, providing location-based marketing insights and assisting with new approaches to credit scoring.

As the adoption of artificial intelligence accelerates, organisations and governments around the world are considering how best to harness this technology for the benefit of people and the planet. AI has the potential to truly change the world, and this represents not only an opportunity but also a risk.

Artificial Intelligence Opportunity and Risk

AI depends on large amounts of data, often relating to individuals, and makes inferences based on this data. These inferences may be used to guide decisions that have a significant impact on the things people care about the most – their health, their employment and their access to resources. It is therefore vital that AI is used in a way that protects our fundamental human rights.

Artificial intelligence is increasingly an essential element of the infrastructure on which our society is built, playing an active role in financial markets, basic services and international supply chains. We need to be able to trust AI to behave in the way we want it to; and when it doesn’t, we need to understand what went wrong and who should be accountable for this.

Ethical AI isn’t just the right thing to do; it can also positively impact companies’ bottom lines. Organisations that proactively choose to act ethically – and tell their stakeholders they are doing so – can generate goodwill, build positive relationships, and increase market share. Research suggests that bias in AI systems can lead to lost revenue. The ethical principles discussed in this playbook can help to ensure AI systems are reliable, reproducible and explainable, which will ultimately increase their value for a company as well as ensure a more positive impact on society.

A huge amount of effort has gone into considering what ethical AI looks like, resulting in many organisations drawing up ethical AI principles. However, putting such principles into practice remains a challenge. With this in mind, a group of MNOs worked with the GSMA to create this practical playbook to support the operationalisation of ethical AI principles into everyday activities.

The Artificial Intelligence Ethics Playbook is intended as a practical tool not only for the mobile industry but also for any other organisation currently grappling with the challenge of designing, developing or deploying AI in an ethical manner. It is a flexible tool that organisations can adapt to their needs.

This is a rapidly evolving space; AI is progressing quickly, and many governments are in the process of drafting regulations relating to AI. This playbook is envisaged as a living document, and we welcome enquiries from organisations that want to help advance it further.

Artificial Intelligence Ethical Principles

  1. Fairness: For an AI system to be fair, it must not discriminate against people or groups in a way that leads to adverse decisions or inferences. Non-discrimination and equality are central features of the major human rights treaties and many countries’ laws (a minimal sketch of how this principle might be checked in practice follows this list).
  2. Human agency and oversight: It’s important to determine an appropriate level of human oversight and control of an AI system. As AI directs decision-making, people may become reliant on a system. Organisations must respect human autonomy.
  3. Privacy and security: AI systems should respect and uphold an individual’s right to privacy and ensure personal data is protected and secure. Organisations using AI should pay special attention to any additional privacy and security risks arising from AI systems.
  4. Safety and robustness: AI systems should be safe, robust and reliably operated in accordance with their intended purpose throughout their lifecycle.
  5. Transparency and explainability: It’s important to be transparent about when an AI system is being used, what kind of data it uses, and its purpose. Explainability is the principle of communicating the reasoning behind a decision in a way that is understandable to a range of people, as it is not always clear how an AI system has arrived at a conclusion.
  6. Accountability: Organisations should have a governance structure that makes it clear who is responsible for reporting and decision-making, and who is thereby ultimately accountable, throughout the AI lifecycle. Accountability is key to complying with regulatory and legal requirements.
  7. Environmental impact: AI systems must be designed, developed and deployed in a way that is mindful of environmental impact throughout their lifecycle and value chain. With careful consideration of systemic consequences, AI can help to secure a sustainable future for all.
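
To make the playbook’s goal of operationalising these principles more concrete, here is a minimal sketch (not taken from the GSMA playbook itself) of how the fairness principle might be checked in practice, using a simple demographic parity gap. The dataset, the column names “group” and “approved”, and the helper function are hypothetical and purely illustrative.

```python
# Minimal, illustrative sketch (not from the GSMA playbook): a demographic
# parity check, one simple way to quantify the "Fairness" principle above.
# The data, column names and numbers below are hypothetical.

import pandas as pd


def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Return the gap between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())


# Made-up decisions from a hypothetical credit-scoring model.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_gap(decisions, outcome="approved", group="group")
print(f"Demographic parity gap: {gap:.2f}")  # ~0.42 here; values near 0 suggest more equal treatment
```

In practice, a metric like this would sit alongside human review, documentation and domain-specific thresholds, which is the kind of everyday activity the playbook aims to support.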

Filed Under: Thought Leadership

Written by
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTworks sponsor.