Latin America faces a choice that will determine its economic future: develop AI governance on its own terms, or become a regulatory colony of Silicon Valley and Brussels.
Across Latin America, policymakers are waking up to the fact that AI isn't just another tech fad; it's a rapid, structural shift that's already affecting jobs, public services, and democratic processes. Synthetic audio impersonating political figures circulated during last month's Buenos Aires municipal election. In Brazil, the government has clashed with Meta over algorithmic transparency. And around the region, education systems are quietly integrating AI tools into classrooms, often without oversight or guidelines.
The numbers tell a sobering story. According to a recent IMF index, Latin America lags behind developed countries and China in AI readiness across four areas: digital infrastructure, human capital and labor market policies, innovation and economic integration, and regulation. If smart and well-timed, regulation can not only rein in the risks associated with AI but also build public trust, attract responsible investment, and protect smaller innovators from being steamrolled by tech giants.
Latin America has a narrow window to shape rules that are both protective and enabling, and that reflect its own values and realities. Countries with clear frameworks, like the U.K. in financial technology and cybersecurity, tend to attract more investment and innovation. Estonia's digital governance model has drawn billions of dollars in tech investment. The cost of inaction is equally clear: countries without regulatory frameworks risk becoming dumping grounds for untested AI systems.
For Latin America, smart AI regulation that balances protection with innovation, adapts quickly, and reflects local social and institutional realities also means positioning the region as a preferred destination for responsible AI investment at a time when other regions are either over- or under-regulating. Latin America shouldn't copy others or start from zero. Instead, it should craft its own AI regulations, drawing from global models but tailoring them to local needs and values.
What’s happening across the region
The ECLAC 2024 Digital Agenda, endorsed by all 33 member countries, calls for regional coordination, shared standards, and cross-border capacity building. However, the regulatory landscape across Latin America remains fragmented, with countries at different stages of development.
Brazil's bill, inspired by the EU AI Act, is the most developed framework in the region, with provisions for civil liability and tiered risk categories covering facial recognition and automated hiring systems. Chile has drafted legislation rooted in transparency, fairness, and human oversight, building on its National AI Policy.
Colombia, Peru and Paraguay are working on proposals focused on data protection, algorithmic fairness, and ethical use in sectors like education and finance. Argentina lacks a formal law, but momentum is building: Recent Congressional hearings have spotlighted issues like electoral manipulation and data privacy.
This is more than legislative noise: It’s a sign that the region is looking for direction and needs a framework that makes sense.
Constructing a smart framework
In a recent primer on AI regulation for Latin American lawmakers, Angeles Cortesi and I proposed a simple tool to help avoid the trap of copying templates from abroad, organized around purpose, risk, approach, and context.
Before adopting any AI regulation, policymakers should be able to answer four key questions clearly.
Is the regulation’s purpose to protect rights, promote innovation, or secure national interests? China’s AI rules prioritize state security, the U.S. AI Bill of Rights focuses on civil liberties, and the U.K.’s framework aims to enable innovation while protecting public safety.
Which AI systems need a higher level of scrutiny to manage risk? An AI scheduling assistant poses different risks than an AI loan officer. A risk-based framework focuses oversight where harm is most likely. New risks related to agentic AI (which can plan, act, and iterate toward a goal) are especially crucial to monitor. The EU AI Act categorizes AI systems by risk level, banning some, strictly regulating others, and leaving low-risk systems largely untouched.
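The tiered logic described above can be sketched as a simple lookup. This is a hypothetical illustration only: the system names, tier assignments, and obligations below are invented for the example and do not reflect any actual statute's classifications.

```python
# Toy risk-tier lookup, loosely inspired by the EU AI Act's tiered approach.
# All entries here are hypothetical examples, not legal classifications.
RISK_TIERS = {
    "social_scoring": "prohibited",     # some uses are banned outright
    "facial_recognition": "high",       # strict obligations before deployment
    "loan_underwriting": "high",        # an AI loan officer warrants scrutiny
    "customer_chatbot": "limited",      # transparency duties only
    "scheduling_assistant": "minimal",  # largely left untouched
}

OBLIGATIONS = {
    "prohibited": "deployment banned",
    "high": "conformity assessment, human oversight, audit logging",
    "limited": "disclose AI use to end users",
    "minimal": "no specific obligations",
}

def required_obligations(system: str) -> str:
    """Return the oversight obligations for a system type.

    Unknown systems default to the 'high' tier: a precautionary choice,
    relevant for fast-moving categories like agentic AI that a static
    list won't have anticipated.
    """
    tier = RISK_TIERS.get(system, "high")
    return f"{tier}: {OBLIGATIONS[tier]}"
```

The design choice worth noting is the default: a risk-based regime has to decide what happens to systems it never anticipated, and defaulting unknown categories to heavier scrutiny is one precautionary option.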
Should the framework focus on high-level ethical guidelines, detailed technical standards, or flexible “sandboxes” (controlled environments where developers can test innovations under supervision)? Singapore’s voluntary Model AI Governance Framework, Canada’s proposed AIDA law, and South Korea’s sandbox-based model each illustrate contrasting approaches.
What local context must be considered? What works in a wealthy, highly digitalized country may fail in a region where informal labor is common and digital infrastructure uneven. AI regulation must reflect linguistic diversity, institutional capacity, and social realities.
A regulatory approach for Latin America
The region's countries don't need to start from scratch, but they also shouldn't settle for copy-pasting international frameworks. Latin America should develop its own regulatory approach, with an emphasis on flexibility, inclusion, regional coordination, and capacity-building.
Given the pace of AI change, regulation must evolve quickly. Pilot programs and regulatory sandboxes can offer a testbed for adaptation.
AI systems trained on English-language or Global North data often miss Latin American realities. Regulation should mandate diversity in training data and support open-source alternatives rooted in regional contexts; governments should also unlock their own local datasets, following the example of the U.K.'s National Data Library.
A fragmented landscape invites regulatory arbitrage (companies shopping for the most permissive jurisdiction). Shared data standards and joint governance bodies could give Latin America a stronger collective voice on the global stage.
Many governments still lack the technical teams to audit, evaluate, or enforce AI rules. Investments in AI literacy, especially among public servants, are essential. A regional training program could build these capabilities efficiently while creating networks for ongoing coordination.
To achieve this, one promising direction is to launch AI oversight labs in partnership with universities, letting local researchers study AI in context. Another is to create regional sandboxes where startups get real-time feedback from regulators while developing new tools.
These strategic steps can help position the region to attract ethical AI investment and protect democratic institutions, as well as to avoid the extractive dynamics of the digital economy, in which data and value are concentrated by a few at the expense of users, workers, and local innovation.
An opportunity to lead
AI is moving fast, but Latin America doesn't have to play catch-up. By learning from other regions and tailoring approaches to its own context, the region can shape AI governance that is inclusive, realistic, and future-proof.
Rather than merely running behind, Latin America has the opportunity to lead by building a regulatory ecosystem that is flexible, context-aware, and grounded in development priorities.
The next 18 months are critical. The global regulatory landscape is rapidly polarizing, creating an unprecedented opportunity for Latin America to define its own approach.
In early 2025, the United States took a decisive turn toward deregulation when President Trump issued executive orders revoking earlier safety-focused guidance and directing federal agencies to prioritize innovation, national security, and economic competitiveness over precautionary oversight. This shift creates a stark counterweight to Europe’s increasingly restrictive AI Act, deepening a regulatory divide that could fragment global AI governance. Europe’s forthcoming Cloud and AI Development Act (CAIDA) is set to institutionalize sovereignty requirements.
Most international AI initiatives—from OECD principles to the G7’s Hiroshima Process, which produced broad commitments but no binding obligations—remain voluntary and aspirational, lacking enforcement mechanisms that could bridge these growing divides. This polarization reinforces why regional frameworks matter more than ever: They offer a path to both innovate locally and coordinate collectively.
Against this backdrop, Latin America can define a third way, crafting AI regulation that evolves quarterly rather than annually, prioritizes transparency over restriction, and builds regional capacity rather than relying on foreign expertise.
Smart AI regulation in Latin America would signal that the region can govern innovation without suffocating it, and protect its people while preparing them for what's coming.