
When I read through the 2024 US Federal AI Use Case Inventory, I expected to find a showcase of responsible digital governance. Here was the world’s most powerful government, publishing a public database of every AI system its agencies operate, organized by risk level, rights impact, and deployment status.
A model for transparency. An accountability framework worth emulating. Right?
Not quite. What I found instead should alarm every ICT4D practitioner working with governments in Africa, South Asia, and Latin America.
Of the 1,700-plus AI use cases reported across the US federal government, 227 were flagged as rights-impacting or safety-impacting. Of those 227 systems, 206 received compliance extensions because agencies could not certify that required safeguards were in place.
That is a 91% non-compliance rate among the very systems most capable of harming people. And the United States, unlike most of the governments we work with, at least published a list.
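The 91% figure is simple arithmetic on the inventory's own counts. A minimal sketch, using the numbers reported above:

```python
# Counts as reported in the 2024 US Federal AI Use Case Inventory analysis above
rights_or_safety_impacting = 227   # systems flagged as rights- or safety-impacting
compliance_extensions = 206        # systems that could not certify required safeguards

# Share of high-risk systems that received compliance extensions
non_compliance_rate = compliance_extensions / rights_or_safety_impacting
print(f"{non_compliance_rate:.0%}")  # → 91%
```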
The Transparency We Don’t Have
We spend considerable energy in this sector debating whether governments in the Global South have AI strategies. African AI readiness rankings get published, discussed, and cited. Countries are scored on whether they have published a national vision document.
That framing badly misses the point.
The US inventory demonstrates something our sector has not fully absorbed: the gap between what a government says about AI and what it runs on your communities can be enormous, and the running matters far more than the saying.
When I dug into the specific systems in the US inventory, a pattern emerged that I suspect is replicated across every government operating at scale.
AI is deployed primarily to accelerate decisions that were already being made, about populations who already bore disproportionate scrutiny, using data that carries existing institutional biases.
- ICE uses a Hurricane Score to predict whether detained immigrants will comply with supervision, without any publicly disclosed validation study or demographic performance analysis.
- USCIS uses ARGOS for fraud-risk scoring, whose own inventory entry acknowledges known failure modes including data collection bias and limited generalization across industries.
The governance frameworks for these systems were still in draft when the systems went live.
If the US government, with its mandatory disclosure requirements and formal inventory process, produced this picture, what should we expect when we ask the same question about governments in Kenya, India, or Brazil?
What AI Systems Are Actually in Use?
The OECD’s recent Governing with Artificial Intelligence report found that 70 percent of surveyed countries had used AI to improve internal government processes, but only 33 percent had used it to enhance policy design.
The gap between those numbers is where the harm concentrates: AI deployed for operational volume management, benefits processing, fraud detection, and enforcement, with accountability frameworks running years behind.
Latin America shows the pattern most clearly. A database of AI systems in the public sector in Latin America and the Caribbean found that 70 percent of documented cases were already implemented.
Mexico offers a specific cautionary example: in 2024, the government advanced its digital transformation agenda while simultaneously closing INAI, the independent authority responsible for transparency and data protection oversight. More AI deployment, less accountability architecture.
That combination should be familiar to anyone who has worked on digital health systems in sub-Saharan Africa, where data collection expands steadily while data protection legislation remains incomplete across much of the continent.
In South Asia, India is scaling government AI at a rate few other middle-income countries can match. The IndiaAI Mission positions the country as a global infrastructure player, and India’s Aadhaar-linked biometric system already makes AI-mediated identity verification the gateway to social protection benefits for hundreds of millions of people.
Benefits denial rates from biometric failures have been documented in multiple states, particularly affecting elderly and manual laborers whose fingerprints degrade. The systems run. The error rate data is not publicly disclosed in any form comparable to the US inventory.
The communities most affected (rural health system users, beneficiaries of social protection programs, informal workers) are not consulted about the systems evaluating them.
US Inventory Shows Who Writes AI Policy
I want to be precise about what the US case does and does not tell us.
It does not tell us that AI in government is necessarily harmful. Some of the systems in the US inventory are unremarkable efficiency tools.
- The FDA is building a horizon-scanning platform to aggregate early signals about food supply chemical hazards.
- HHS uses AI to detect fraud in Medicare claims.
- The DOJ is deploying AI to process legal documents faster.
These are defensible applications.
What the inventory reveals is that deployment consistently outruns governance, even under conditions of mandatory public disclosure. The systems described as “rights-impacting” in the US inventory are, in most cases, already running.
The governance frameworks are in development. The demographic performance analyses are absent or incomplete. The affected populations have no mechanism to contest the inferences these systems generate about them.
That structural condition, deployment without accountability infrastructure, is not a US-specific failure. It is the documented norm. And in countries where there is no mandatory disclosure requirement, no public inventory, and no formal rights-impact designation process, we simply do not know what is running.
African governments’ AI policy work tends to frame goals around economic growth while muting questions of algorithmic accountability and distributive equity.
That framing is not neutral. It reflects whose interests the policy was written to serve.
The populations bearing the highest risk from poorly governed government AI (people moving through social welfare systems, migrants, small health providers serving complex patients) are precisely the populations least represented in the rooms where these strategies get drafted.
What You Need to Do Now
Here is the practical implication, and the challenge I want to put directly to everyone reading this.
Find out what AI systems your government is actually operating.
Not what strategy documents say. Not what ministers announced at last year’s AI summit. The actual deployed systems.
- Who built them?
- What data were they trained on?
- Has any demographic performance testing been done?
- What happens to someone who gets a false positive?
The US inventory process, imperfect as it is, shows what questions to ask:
- What is the system’s decision function?
- What populations does it affect?
- Has it been tested for disparate impact?
- What is the process for contesting an output?
- Is there a published validation study?
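These questions can be turned into a working audit record, one entry per deployed system. The sketch below is my own illustration: the field names and the `AISystemAudit` class are hypothetical, not drawn from any official inventory schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemAudit:
    """Hypothetical audit record for one deployed government AI system.
    Field names are illustrative, not part of any official schema."""
    name: str
    decision_function: str            # what decision the system makes or accelerates
    affected_populations: list        # who the system scores or screens
    disparate_impact_tested: bool
    contestation_process: Optional[str]  # how an affected person can challenge an output
    validation_study_published: bool

    def accountability_gaps(self):
        """List the accountability gaps this record reveals."""
        gaps = []
        if not self.disparate_impact_tested:
            gaps.append("no disparate-impact testing")
        if self.contestation_process is None:
            gaps.append("no contestation process")
        if not self.validation_study_published:
            gaps.append("no published validation study")
        return gaps

# Example entry, filled in from the Hurricane Score description earlier in this post
hurricane = AISystemAudit(
    name="Hurricane Score",
    decision_function="predict supervision compliance of detained immigrants",
    affected_populations=["detained immigrants"],
    disparate_impact_tested=False,
    contestation_process=None,
    validation_study_published=False,
)
print(hurricane.accountability_gaps())
# → ['no disparate-impact testing', 'no contestation process', 'no published validation study']
```

A table of such records, one row per system, is the closest a civil-society group can get to the US inventory when no official one exists.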
Most governments in Africa, South Asia, and Latin America do not have a public equivalent of the US AI inventory. Some, like Rwanda and Nigeria, have national AI strategies that address governance in principle.
Fewer have the implementation mechanisms that would answer the practical questions about specific deployed systems. The strategies and the systems are different documents, written by different people, with different levels of public accountability.
The analysis I did on the US federal inventory surprised me. The scale of non-compliance among rights-impacting systems surprised me. The completeness gaps in individual system entries surprised me. And I was working from a public document, in English, published on GitHub.
What will surprise you when you go looking for the equivalent in your context? Please find out and share it with me.

