This article is part of the series — Raisina Files 2025
Advanced Artificial Intelligence (AI) is fundamentally strange. We can think of this “strangeness” in terms of the classic iceberg analogy. At the tip of the iceberg, we do not always know why models behave the way they do: while we can see the inputs and outputs of black box AI, the internal reasoning of these models is opaque.[1] One level below the surface, AI is strange because of the culture that surrounds it. There is often an air of inevitability when proponents of AI are asked whether AI development needs to decelerate—a narrative now buttressed by geopolitical rationales: if “we” don’t create this powerful Artificial General Intelligence (AGI), “someone else” will.[2] Finally, at the base of the iceberg is the fact that broad proclamations about AI’s benefits to humanity ignore one crucial reality: building and deploying models at scale requires capital, infrastructure, and manpower at a level that only highly centralised entities—tech giants, or well-resourced governments—can marshal.
Enter: sovereign AI, an idea defined as “a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.”[3] At a time when more governments are embracing sovereign AI, this essay examines the varied models of sovereign AI that may emerge based on the type of governance and government, industry role, and institutional capacity.
AI, particularly General Purpose AI, requires massive investments in data collection, compute (mainly GPUs), related energy infrastructure, and workflow management.[4] As things stand, and based on the realities of the technology as it exists, only highly centralised and well-resourced entities, i.e., Big Tech companies or big governments, are able to build such models at scale. Sovereign AI is not a passing trend, but a recognition that relying on the goodwill of a handful of powerful AI companies, chipmakers, and cloud service providers, among others, is contrary to national interest. All the “weirdness” of AI—bias, hallucination, lack of guardrails, lack of accountability, the concentration of capital—has driven governments toward sovereign AI.[5] A few key factors will then shape the trajectory of sovereign AI in a given country: the relationship between the AI industry and government, the strength of regulation, and institutional capacity within governments. Based on these factors, this article proposes four scenarios: AI Technostates, Hybrid Systems, Neo-Feudal Systems, and Neo-colonies.
AI Technostates
Well-resourced states, with abundant capital, alignment with industry, and ample institutional regulatory capability may emerge as AI Technostates.[a]
Both the United States (US) and China are home to some of the most prolific institutions in terms of AI publications, including the Chinese Academy of Science, Tsinghua University and Zhejiang University in China, and the Massachusetts Institute of Technology and Stanford University in the US.[6] In the same vein, the US leads the world on the number of notable AI models, and China on the number of AI patents.
The US government has sought to entrench the country’s AI leadership by restricting access to key technologies and fostering AI use cases to supercharge its government and agencies. In the first bracket is a series of chip controls, the latest of which has divided the world into three groups for AI chips sales, ranging from least to most restricted.[7] In the second bracket, the Biden Administration’s Executive Order on Trustworthy AI (now rescinded) had sought to harmonise the government’s approach to AI.[8] Among the earliest of Trump’s presidential actions upon assuming his second term is the creation of the Department of Government Efficiency (DOGE), a temporary, quasi-governmental agency tasked with modernising US government software, network infrastructure, and IT systems.[9]
The 2024 Federal Agency AI Use Case inventory documents 1,757 AI use cases, varying from the Office of Personnel Management using AI to improve job recommendations on the USAJOBS portal to the Department of Homeland Security’s procurement of an AI tool for social media surveillance to supplement traveler screening.[10] The US’s AI infrastructure projects have seen investment from both the US private sector, which in January announced the Stargate Project, a US$500-billion AI infrastructure venture, and the US government, through Executive Order 14141 on allotting federal land for data centres (one of the few Biden EOs the new administration has not yet revoked).[11]
China is ahead of the curve in the developing world. As of early February 2025, the only three non-US LLMs in the top 10 on Chatbot Arena’s leaderboard are Chinese: two built by DeepSeek, and the other by StepFun.[12] China benefited from the presence of US tech powerhouses, and from the return of high-tech talent from the US following a crackdown on Chinese nationals over concerns of economic espionage.[13] The growth in the country’s homegrown models was forced along in part by US chip export controls. Finally, the Chinese Communist Party, at both the national and city government level, has sponsored research, provided subsidies to spur AI development, and supported growth in firms in which the private sector would otherwise not have invested.[14] China is an example of a big state that has deployed both heavy-handed regulation and its vast institutions to build sovereign AI.
Figure 1: Global Leaders in AI
Source: Stanford Global AI Vibrancy Tool.[15] * International borders as they appear in the original.
Both the US and China have the potential to become AI Technostates, but while the US government’s vast institutional capacity is being deployed to supercharge the government’s own processes, the Chinese government is leading AI development in the country, injecting investment at scales the private sector is not. Beijing has also released some of the world’s earliest and most comprehensive guidelines and regulations for AI services. Both governments are large: the US Federal government employs 2 million civilians, and the Chinese government has 8 million civil servants.[16] However, while the US government’s effectiveness and regulatory quality, based on World Bank indicators, are high, this does not translate into comprehensive controls.[17] It is worth noting, then, that this model of sovereign AI could take diametrically opposite forms, given that AI can be used equally to enhance choice and to limit freedoms: a state could become either a police state or, as one economist describes it, a “high-tech open society with an AI-fortified e-government.”[18]
Hybrid Systems
Hybrid systems will involve the co-development of AI infrastructure and use cases. On one level, we would see governments partnering with corporations, some homegrown, others foreign, to build their AI infrastructure and models. Hybrid systems may also see government-led AI development and applications in some critical sectors, but not all. Another shape such a system could take would be the Digital Public Infrastructure (DPI) model, with open-source datasets and models, widely available compute infrastructure, and AI Platform as a Service (AIPaaS).[b],[19]
Many countries pursuing sovereign AI would fall into this category. An example is the Government of India’s approach, which includes homegrown open-source models like Bhashini, open government datasets, and compute capacity, as well as partnerships with companies like Microsoft.[20] Layered on top of the government’s efforts are B2B partnerships, such as the ones inked between NVIDIA and a number of Indian conglomerates.[21]
Many members of the European Union (EU), should the bloc’s combined package of the Digital Markets Act and the EU AI Act succeed, would likely arc toward a Hybrid System or an AI Technostate model. The EU’s approach benefits from the fact that its collective regulatory heft and institutional capacity allow it to change markets and mobilise investments in ways individual members would not be able to, even as it lacks homegrown AI powerhouses at numbers or scales comparable to those of the US or China. Singapore is similarly hybrid, involving deep partnerships between Singaporean research institutions and government agencies, as well as international companies. Singapore’s National AI Strategy 2.0 outlines its goal: “Singapore aspires to be a pace-setter—a global leader in choice AI areas, that are economically impactful and serve the Public Good.”[22] The Singaporean government has also rolled out shared resources and services like AI Verify, a testing toolkit that companies can use to test their own AI systems.
Neo-Feudal Systems
Neo-Feudalism is a new form of feudalism where “entire realms of public law, public property, due process, and citizen rights revert to unaccountable control by private business.”[23]
Weak government and weak governance, combined with the strong influence of large AI companies built around closed models, could result in a form of AI neo-feudalism. Neo-feudal systems abound in science fiction: in the 1982 classic film, Blade Runner, there is no government, and the city, perhaps the world, is run by the Tyrell Corporation. The AGI research and core models of most companies currently working on such models—OpenAI, Google DeepMind, Anthropic—are closed-source. Some, like OpenAI’s Sam Altman, argue that closed-source is more secure, especially for technology as dangerous as AGI.[c],[24] However, the US Cybersecurity and Infrastructure Security Agency (CISA) has argued that the benefits of open-source AI far outweigh the risks.[25] The closed- vs. open-source debate has not been settled, and proprietary models are the current standard.
As the race toward AGI accelerates, AI oligopolies will become the norm, as the level of investment, data, and sheer compute required to make a breakthrough pushes smaller firms to merge with larger ones, unchecked by anti-trust regulations. A neo-feudal system may take the form of governments ceding services and functions to private sector entities. We may also see the creation of new virtual “company towns”: communities centred entirely around providers of AI services. This is still sovereign AI, but with the traditional sovereign subsumed by, or merged with, new corporate sovereigns.
Neo-colonies
Governments with low institutional capacity and weak or non-existent regulations, paired with negligible industry investment—homegrown or foreign—within their countries, will become consumers of models developed beyond their shores. Such “neo-colonies” will fall into the sphere of influence of one of the other three systems, as consumers and buyers, and as suppliers of data and other AI inputs. In view of the current geographic concentration of AI investment, research, and development, paired with low government AI readiness in some regions, many countries risk falling into the neo-colonial model.[26]
Figure 2: Concentration of Notable Machine Learning Models, by Country
Number of notable machine learning models by geographic area, 2003–23 (sum)
Source: Stanford AI Index Report.[27] * International borders as they appear in the original.
There are nascent efforts that would support sovereign AI in such geographies, such as the African Union’s Continental AI Strategy, which includes harmonised regulations and cooperative capacity building among its pillars, pooling resources and forming a de facto regulatory bloc.[28] Sustaining and growing such efforts will be foundational to avoiding the replication of colonial patterns in a new technological era.
The scenarios presented in this article can only paint an incomplete picture, as the real world is a whirlpool of factors that will influence the trajectories of sovereign AI. Energy constraints will be a crucial variable: whether, for instance, there are breakthroughs like cost-effective nuclear fusion, or whether competing demands for energy infrastructure foment social unrest. Trust is another element: Will societies continue to trust their governments, and view them as legitimate arbiters of their interests?
Across the wide sweep of rationales that different entities have outlined, each requiring varying levels of control and territoriality, analysts often hyperfocus on the legitimacy of sovereign AI projects. But sovereign AI is one of the most important trends of this decade, and the intent of laying out the four scenarios in this essay is to urge observers to assess sovereign AI projects not just through the lens of modes or tools, but through concrete outcomes.
Endnotes
[a] Note: capability need not equate to hands-on regulation.
[b] The Digital Public Goods Alliance’s discussion paper on AI is useful in this context and is included in the endnotes.
[c] Meta is a notable exception, but not the standard.
[1] Matthew Kosinski, “What is Black Box AI?,” IBM, October 29, 2024, https://www.ibm.com/think/topics/black-box-ai
[2] Sigal Samuel, “Silicon Valley’s Vision for AI? It’s Religion, Repackaged,” Vox, September 7, 2023, https://www.vox.com/the-highlight/23779413/silicon-valleys-ai-religion-transhumanism-longtermism-ea
[3] Angie Lee, “What is Sovereign AI,” Nvidia Blog, February 28, 2024, https://blogs.nvidia.com/blog/what-is-sovereign-ai/
[4] General Purpose AI and the AI Act, Future of Life Institute, May 2022, https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf
[5] Stanford Institute for Economic Policy Research, “Is Antitrust Policy Good for Innovation: A Conversation with Lina Khan, Chair, Federal Trade Commission,” YouTube, November 2, 2023, https://www.youtube.com/watch?v=i3hgKaInBzE
[6] “Chapter 1: Research and Development,” Artificial Intelligence Index Report 2023, Stanford University, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_1-1.pdf
[7] “Biden-Harris Administration Announces Regulatory Framework for the Responsible Diffusion of Advanced Artificial Intelligence Technology,” Bureau of Industry and Security, January 13, 2025, https://www.bis.gov/press-release/biden-harris-administration-announces-regulatory-framework-responsible-diffusion
[8] “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” White House, October 30, 2023, https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[9] “Establishing and Implementing the President’s ‘Department Of Government Efficiency’,” The White House, January 20, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/establishing-and-implementing-the-presidents-department-of-government-efficiency/
[10] “2024 Federal Agency AI Use Case Inventory,” GitHub, updated January 23, 2025, https://github.com/ombegov/2024-Federal-AI-Use-Case-Inventory
[11] “Announcing the Stargate Project,” OpenAI, January 21, 2025, https://openai.com/index/announcing-the-stargate-project/; Executive Office of the President, “Advancing United States Leadership in Artificial Intelligence Infrastructure,” Federal Register, January 14, 2025, https://www.federalregister.gov/documents/2025/01/17/2025-01395/advancing-united-states-leadership-in-artificial-intelligence-infrastructure
[12] Wei-Lin Chiang and Anastasios Angelopoulos, “Chatbot Arena,” LMArena, https://lmarena.ai/
[13] “The Global AI Talent Tracker 2.0,” Macro Polo, https://macropolo.org/interactive/digital-projects/the-global-ai-talent-tracker/
[14] Ngor Luong, Zachary Arnold, and Ben Murphy, “Understanding Chinese Government Guidance Funds: An Analysis of Chinese-Language Sources,” Center for Security and Emerging Technology, March 2021, https://cset.georgetown.edu/publication/understanding-chinese-government-guidance-funds/; Reuters, “Beijing City to Subsidise Domestic AI Chips, Targets Self-Reliance by 2027,” April 25, 2024, https://www.reuters.com/technology/beijing-city-subsidise-domestic-ai-chips-targets-self-reliance-by-2027-2024-04-26/; Hodan Omaar, “How Innovative is China in AI,” ITIF, August 26, 2024, https://itif.org/publications/2024/08/26/how-innovative-is-china-in-ai/
[15] “Which Countries are Leading in AI?,” Stanford University Institute for Human-Centered Artificial Intelligence, https://aiindex.stanford.edu/vibrancy/
[16] Ben Leubsdorf and Carol Wilson, “Current Federal Civilian Employment by State and Congressional District,” Congressional Research Service, December 20, 2024, https://crsreports.congress.gov/product/pdf/R/R47716; Laurie Chen, “Chinese Youth Flock to Civil Service, but Slow Economy Puts ‘Iron Rice Bowl’ Jobs at Risk,” Reuters, December 29, 2024, https://www.reuters.com/world/china/chinese-youth-flock-civil-service-slow-economy-puts-iron-rice-bowl-jobs-risk-2024-12-30/
[17] “Worldwide Governance Indicators,” World Bank, https://www.worldbank.org/en/publication/worldwide-governance-indicators/interactive-data-access
[18] Samuel Hammond, “AI and Leviathan: Part III,” Second Best, September 11, 2023, https://www.secondbest.ca/p/ai-and-leviathan-part-iii
[19] “Core Considerations for Exploring AI Systems as Digital Public Goods,” Digital Public Goods Alliance, https://www.digitalpublicgoods.net/AI-CoP-Discussion-Paper.pdf
[20] Ajinkya Kawale, “AI Compute Capabilities to be Available by June 2025: MeitY Additional Secy,” Business Standard, October 25, 2024, https://www.business-standard.com/companies/news/ai-compute-capabilities-to-be-available-by-june-2025-meity-additional-secy-124102400982_1.html; Ministry of Electronics & IT, https://pib.gov.in/PressReleasePage.aspx?PRID=2091170
[21] Lee, “What is Sovereign AI”.
[22] “National AI Strategy 2.0: AI for the Public Good, For Singapore and the World,” Government of the Republic of Singapore, December 4, 2023, https://file.go.gov.sg/nais2023.pdf
[23] Katherine V.W. Stone and Robert Kuttner, “The Rise of Neo-Feudalism,” The American Prospect, April 8, 2020, https://prospect.org/economy/rise-of-neo-feudalism/
[24] Sarah Jackson, “Sam Altman Explains OpenAI’s Shift to Closed AI Models,” Business Insider, November 2, 2024, https://www.businessinsider.com/sam-altman-why-openai-closed-source-ai-models-2024-11
[25] Jack Cable and Aeva Black, “With Open Source Artificial Intelligence, Don’t Forget the Lessons of Open Source Software,” CISA, July 29, 2024, https://www.cisa.gov/news-events/news/open-source-artificial-intelligence-dont-forget-lessons-open-source-software
[26] Government AI Readiness Index 2024, Oxford Insights, https://oxfordinsights.com/ai-readiness/ai-readiness-index/
[27] Nestor Maslej et al., The AI Index 2024 Annual Report, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024, p. 48, https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf; Yu Xie et al., “Caught in the Crossfire: Fears of Chinese-American Scientists,” PNAS 120, no. 27 (June 27, 2023): e2216248120, https://doi.org/10.1073/pnas.2216248120
[28] “Continental Artificial Intelligence Strategy,” African Union, August 9, 2024, https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy
The views expressed above belong to the author(s).
Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center. Her research interests lie in geopolitical and security trends in ...