Since 2016, over thirty countries have passed laws explicitly mentioning Artificial Intelligence, and in 2025 the discussion of AI bills in legislative bodies has continued to grow globally.
UNESCO published its “Consultation Paper on AI Regulation – Emerging Approaches Across the World,” which describes nine regulatory approaches (with examples from around the world) of great interest to anyone working in or studying AI governance and regulation.
The nine AI regulatory approaches are presented in order from less interventionist, light-touch regulatory measures to more coercive, demanding ones. It is important to note that the approaches described below are not mutually exclusive, and AI laws around the world often combine two or more of them.
Principles-Based Approach
Offer stakeholders a set of fundamental propositions (principles) that provide guidance for developing and using AI systems through ethical, responsible, human-centric, and human-rights-abiding processes.
UNESCO’s “Recommendations on the Ethics of AI” and the OECD’s “Recommendation of the Council on Artificial Intelligence” are examples of international instruments promoting AI principles relevant to all stakeholders.
Standards-Based Approach
Delegate (totally or partially) the state’s regulatory powers to organizations that produce technical standards that will guide the interpretation and implementation of mandatory rules.
Recital 121 of the EU’s AI Act, for example, states that “Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the single market.”
Furthermore, the same recital encourages a “balanced representation of interests involving all relevant stakeholders in the development of standards, in particular SMEs, consumer organisations and environmental and social stakeholders.”
Agile and Experimentalist Approach
Generate flexible regulatory schemes, such as regulatory sandboxes and other testbeds, that allow organizations to test new business models, methods, infrastructure, and tools under more flexible regulatory conditions and with the oversight and accompaniment of public authorities.
This is the case of the EU’s AI Act, which establishes a framework for the creation of AI regulatory sandboxes, defined in article 3 as “a concrete and controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.”
Facilitating and Enabling Approach
Facilitate and enable an environment that encourages all stakeholders involved in the AI lifecycle to develop and use responsible, ethical, and human rights-compliant AI systems.
In this vein, for example, UNESCO developed the Readiness Assessment Methodology (RAM), which aims at helping “countries understand where they stand on the scale of preparedness to implement AI ethically and responsibly for all their citizens, in so doing highlighting what institutional and regulatory changes are needed.”
Adapting Existing Laws Approach
Amend sector-specific rules (e.g., health, finance, education, justice) and transversal rules (e.g., criminal codes, public procurement, data protection laws, labor laws) to make incremental improvements to the existing regulatory framework.
With regard to transversal rules pertinent to developing and using AI systems, for example, article 22 of the European Union’s General Data Protection Regulation (GDPR) establishes that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
Access to Information Mandates Approach
Require the deployment of transparency instruments that enable the public to access basic information about AI systems.
A few countries have already adopted algorithmic transparency obligations for public bodies through regulation. In France, for example, article 6 of Law N° 2016-1321 (“for a Digital Republic”) requires public bodies to publish “the rules defining the main algorithmic processes used in the exercise of their functions when they are the basis for individual decisions.”
In Colombia, the ethical framework for AI published by the national government in 2021 was complemented by a decree establishing that public bodies must “Promote the use of open portals of State data during the implementation and management of artificial intelligence projects” (Decree 1263 of 2023).
Risk-Based Approach
Establish obligations and requirements in accordance with an assessment of the risks associated with the deployment and use of certain AI tools in specific contexts.
An example of a current regulation with a risk-based approach is Canada’s Directive on Automated Decision-Making, adopted in March 2021 and amended in April 2023.
Article 4.1 states that the Directive aims at ensuring “that automated decision systems are deployed in a manner that reduces risks to clients, federal institutions and Canadian society, and leads to more efficient, accurate, consistent and interpretable decisions made pursuant to Canadian law.”
Rights-Based Approach
Establish obligations or requirements to protect individuals’ rights and freedoms.
John Cantius Mubangizi proposes a human rights-based approach for African countries “to empower rights-holders (individuals or social groups that have particular entitlements in relation to duty-bearers) to claim and exercise their rights and to strengthen the capacity of duty-bearers (state or non-state actors) who are obliged to respect, protect, promote, and fulfill human rights.”
Liability Approach
Assign responsibility and sanctions to problematic uses of AI systems.
For example, the EU’s AI Act establishes penalties applicable to infringements of the regulation (articles 99–101). Non-compliance with the AI Act’s prohibitions “shall be subject to administrative fines of up to 35 000 000 EUR or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher” (article 99).