Politics, Business & Culture in the Americas

AI in Latin America: Smart Cities or Surveillance States?

Artificial intelligence promises unprecedented efficiency, but without safeguards, it threatens to entrench inequality and empower authoritarianism.

As cities deploy artificial intelligence at breakneck speed, they face a fundamental choice: Will AI enhance democratic governance or digitize authoritarian control? More than 80% of Latin America’s population lives in urban areas, and in many cities, citizens are demanding solutions to rampant crime and broken public services. For leaders facing these pressures, AI appears to offer a silver bullet—a seemingly modern, decisive solution that projects an image of control and bypasses the slow, difficult work of institutional reform.

Latin America’s $68.5 billion digital connectivity gap—and its reliance on foreign infrastructure that deepens technological dependency—make it especially tempting to import off-the-shelf surveillance packages rather than building bespoke, rights-respecting systems. But when cities become consumers of AI rather than creators, they lose both technical capacity and democratic control over systems that shape citizens’ lives.

The question isn’t whether Latin American cities should embrace AI, but how to harness its benefits while preserving democratic norms, preventing abuse, and genuinely benefiting constituents. The good news is that Latin America is already producing promising models.

The regulatory landscape adds another layer of complexity. As Eduardo Levy Yeyati has argued, Latin America has a narrow window to develop AI governance on its own terms rather than becoming a “regulatory colony” of Silicon Valley or Brussels. While the global regulatory landscape polarizes between U.S. deregulation and European restriction, the region has an opportunity to define a “third way” that balances innovation with rights protection.

This regulatory urgency is grounded in concrete regional experience. Across five countries—from discriminatory facial recognition systems in Brazil to Argentina’s groundbreaking PROMETEA legal automation—Latin America has already witnessed both AI’s democratic promise and its authoritarian perils. The case studies discussed in this article reveal that the region isn’t starting from scratch: It has real-world lessons about what works, what fails, and what the stakes are when AI governance goes wrong.

AI in Latin America’s cities

The region was at an AI crossroads even before large language models like ChatGPT hit the market. These AI initiatives reflect the region’s complex technological landscape, drawing from U.S., Chinese, and increasingly local sources. While some early systems like Uruguay’s PredPol came directly from U.S. institutions and China has aggressively promoted surveillance technologies across the region, Latin America has also fostered homegrown innovation. Argentina’s PROMETEA system was developed entirely by local prosecutors and technologists, while Chile’s MIRAI project represents a collaborative model—adapting MIT’s breast cancer detection algorithms to local medical data and conditions. Colombia’s MAIIA platform emerged from regional partnership between the Inter-American Development Bank and local developers. This technological diversity reflects broader geopolitical tensions, as the region navigates between competing AI ecosystems while building indigenous capabilities.

In 2019, police in Rio de Janeiro launched a pilot program to use AI-powered facial recognition technology, but it was beset by false positives, according to watchdog group O Panóptico. These mistakes hindered police investigations; in a high-profile case, police detained the wrong woman for murder due to faulty facial recognition. They also disproportionately harmed Black communities, as 90% of the people arrested through the pilot were Black, according to the group.

Meanwhile, Buenos Aires’ PROMETEA system told a different story. Launched in 2017, it automates routine legal document drafting while keeping human lawyers in charge of final decisions. It slashed document drafting time from 90 minutes to one minute while maintaining rigorous human rights safeguards through three key principles: strict ethical codes for developers and implementers; a focus on routine tasks that support rather than replace human judgment; and full transparency with human oversight at every decision point. This success, praised by institutions like the Inter-American Development Bank, inspired similar systems like Colombia’s PretorIA.

Other success stories abound. In Barranquilla, Colombia, the MAIIA platform maps informal settlements with 85% precision so that planners can see precisely where homes lack water access or where a new road could connect a forgotten neighborhood to the city’s economic life. This allows for targeted and humane urban development. In Santiago, Proyecto MIRAI Chile uses AI to analyze mammograms and predict a patient’s risk of developing breast cancer years in the future. Even outside of cities, AI-powered projects are improving efficiency in areas such as irrigation. Argentine startup Kilimo has helped farmers there save 72 billion liters of water, at once mitigating the effects of climate change and supporting the country’s most important industry.

And yet, courts in Buenos Aires recently began phasing out PROMETEA in favor of ChatGPT, citing faster processing times. Argentina’s capital is trading a transparent, locally controlled system designed with human rights safeguards for a commercial product that stores data abroad, exhibits higher error rates, operates as a “black box” without transparency, and lacks a contextual understanding of Argentine law. This shift embodies the kind of techno-quick-fix thinking that undermines democratic AI governance.

The PROMETEA-to-ChatGPT shift reflects a fundamental misunderstanding of how AI works best. Research demonstrates that small, specialized language models are 10-30 times cheaper and often outperform massive general-purpose systems on the repetitive, specialized tasks that characterize municipal AI use.

Intentional AI design

Successful AI systems in the region share a human-centered philosophy that complements rather than replaces human decision-making. These systems prioritize transparency so that their reasoning can be understood and challenged. Crucially, they integrate human rights safeguards from the outset, instead of treating ethics as an afterthought.

PROMETEA’s legal automation keeps prosecutors in control of final decisions in the small number of uses where it hasn’t yet been phased out. MAIIA equips and empowers human decision-makers in urban planning rather than making autonomous decisions. Proyecto MIRAI explicitly combines AI techniques with medical expertise to identify risk rather than make diagnoses.

Failed AI deployments typically suffer from the opposite approach: over-reliance on automated systems without proper human review, opaque operations that prevent scrutiny, and quick-fix thinking that treats AI as a panacea. Uruguay’s PredPol system, for example, which promised to predict crime locations, proved no better than traditional methods and was discontinued within three years. Even worse, Argentina’s Salta province partnered with Microsoft in 2017 to predict teenage pregnancies “five or six years in advance—with the first and last name and address,” without revealing the project’s data sources, model assumptions, or intervention plans. Girls identified as “high-risk” were subjected to physical surveillance by government agents who photographed them and recorded their GPS coordinates, creating a surveillance apparatus that activists argued was designed to monitor and control young women’s reproductive choices.

Seeking local solutions

A democratic AI framework that doesn’t jeopardize civil liberties is achievable. Cities can implement immediate safeguards, test new systems rigorously in pilots before full deployment, and preserve transparent, locally controlled AI.

Banning high-risk technologies like facial recognition or predictive policing when they fail these tests is not an anti-technology stance; it is a pro-democracy guardrail that preserves space for legitimate innovation. This must be paired with radical transparency and public registries, modeled on Chile’s proposed legislation, that detail all AI systems in use. Otherwise, the potential for authoritarian abuse becomes too high.

Rather than starting from zero, cities can adopt tangible models for oversight, like UNESCO’s formal Ethical Impact Assessment. The Montevideo Declaration and the regional Roadmap for AI, adopted at the Second UNESCO Ministerial Summit in 2023, are a good start. Cities can also look to existing initiatives like the Inter-American Development Bank’s fAIr LAC partnership to share best practices and, crucially, collectively bargain with vendors for better prices and rights protections.

AI’s advance is inevitable. Whether it will do more harm than good depends on the policies that will govern it.

ABOUT THE AUTHOR

Natalia Cote-Muñoz

Cote-Muñoz is a geopolitical and policy consultant focused on Latin America, East Asia, and AI. She writes a weekly AI newsletter on Substack, Artificial Inquiry.



Any opinions expressed in this piece do not necessarily reflect those of Americas Quarterly or its publishers.