Between Innovation, Digital Colonialism, and Micropolitics
by Alfredo Cuéllar
Introduction
Artificial Intelligence (AI) is developing at an unprecedented speed. Meanwhile, most national bureaucracies lack regulatory frameworks to prevent the risks of misuse: mass disinformation, political manipulation, authoritarian surveillance, or sophisticated fraud. This asymmetry—technology racing ahead while regulation stumbles—opens the way to scenarios of uncertainty and social vulnerability.
As Mariano-Florentino Cuéllar, former Justice of the California Supreme Court and leader at the Carnegie Endowment, has pointed out, the challenge lies in combining trust with verification: building institutions that allow innovation to flourish while ensuring accountability (Cuéllar, 2025).
Regulating to Build Trust
Far from stifling innovation, regulating AI means building public trust. Citizens need to know that algorithms will not be used against them, that they will not be discriminated against due to hidden biases in data, and that they will not be manipulated by deepfakes during election campaigns.
Experience shows that without regulation, the benefits remain concentrated among a few actors while the risks are socialized. In other words: the costs are borne by society as a whole.
The Case of California: What Is SB 53?
In September 2025, Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (SB 53), the first legislation in the United States designed to establish controls over so-called “frontier AI models”—the most advanced AI systems capable of producing amplified social effects.
SB 53 establishes five pillars:
- Transparency: requires large developers to publish on their websites a detailed framework explaining how they comply with national and international safety and ethical standards.
- Innovation: creates a public consortium called CalCompute, aimed at building a public computing cluster to foster ethical and sustainable AI research and deployment.
- Safety: establishes a mechanism for companies and citizens to report critical security incidents to the state’s Office of Emergency Services.
- Accountability: protects whistleblowers who warn of serious risks and grants the Attorney General authority to fine noncompliant companies.
- Continuous updating: orders the California Department of Technology to issue annual recommendations based on technical advances and international standards.
In Newsom’s words: “AI is the new frontier in innovation, and California is not only here for it but stands firm as the national leader by enacting the nation’s first frontier AI safety legislation” (Office of the Governor of California, 2025).
Three Levels of Nations in International Regulation
The global landscape shows uneven progress:
- Nations advanced in both development and legislation: the European Union with its AI Act; Canada with its Artificial Intelligence and Data Act; and California as a pioneering subnational actor.
- Nations advanced in development but not in legislation: the United States at the federal level; China and Russia, with frameworks focused more on censorship and political control than on civic ethics.
- Nations lagging in both development and legislation: the majority of the Global South, including Mexico, where the legal vacuum reflects technological dependence. This constitutes what can be called digital colonialism: producer countries export AI without clear rules, while consumer countries remain subordinated to external dynamics.
The Trump Enigma and the Tech Magnates
The return of Donald Trump represents an unknown factor: his disregard for science and expert advisors foretells clashes between regulatory states (California, New York) and a federal government inclined toward deregulation.
Adding to this scenario is the pressure from technology magnates, who strongly resist regulation. Through political financing and lobbying, they seek to maintain a field of action free of controls. A potential alliance between Trumpism and the “barons of Silicon Valley” would be particularly dangerous: it would consolidate a space without checks and balances where political and corporate power reinforce each other.
The Micropolitical Lens
Micropolitics (MP), an emerging discipline that studies the invisible workings of power in organizations and societies, offers keys to understanding this situation.
- Sources of power: tech giants wield economic and seductive power, downplaying risks while exalting innovation.
- Principle of least interest: the less interested the AI-owning companies appear in regulation, the more power they accumulate over states that depend on their investments.
- Timing: legislators who act too late—when the damage is already multiplying—arrive with medicine after the disease has spread, as Machiavelli and other thinkers warned.
- Digital colonialism: from a micropolitical perspective, it is not only technological dependency but also a mechanism of symbolic and organizational domination, where consumer countries internalize the role of subordinates.
MP invites us to observe these processes not only from the vantage point of grand politics but also from the everyday struggles of power: who decides which algorithms are adopted in a school, which company provides surveillance systems to a city, or which social networks filter—or fail to filter—hate speech.
Conclusion: Toward a Global Policy
The AI challenge is global. California’s SB 53 sets an important precedent, but it is not enough. The absence of a federal framework in the U.S. and the passivity of much of the Global South deepen inequality and consolidate digital colonialism.
Regulating AI is not optional: it is the only way to prevent this tool from becoming an instrument of unchecked power. As MP warns, the real danger is not what is visible but what operates in the shadows. And in AI, those shadows are vast.
The challenge for humanity, then, is to build ethical, transparent, and dynamic rules capable of ensuring that artificial intelligence expands human freedoms rather than reduces them.
Dr. Alfredo Cuéllar is a specialist in Micropolitics, an international consultant, and a retired professor at California State University, Fresno. He has worked in dozens of universities, including Harvard. His articles focus on education, migration, politics, sociology, culture, and current affairs.
For inquiries and comments: alfredocuellar@me.com
Suggested Bibliography
- Cuéllar, M. F. (2025). Remarks on AI Regulation and Public Trust. Carnegie Endowment for International Peace.
- European Union (2024). Artificial Intelligence Act. Brussels: European Commission.
- Office of the Governor of California (2025). Press Release: Governor Newsom Signs SB 53, the Transparency in Frontier Artificial Intelligence Act. Sacramento.
- Li, F. F., & Chayes, J. T. (2025). Human-Centered AI and Governance. Stanford HAI Reports.
- Mozur, P., & Satariano, A. (2024). The Unequal Geography of AI Data Centers. The New York Times.
- Ortega, M. (2015). Dimensiones del comportamiento y la cultura organizacionales. Mexico: Trillas.
- Ortega, M. (2016). La cultura organizacional. Mexico: Trillas.
- Wiener, S. (2025). Senate Bill 53: Transparency in Frontier Artificial Intelligence Act. California State Legislature.