Analysts often attribute the rapid development of AI technologies in countries like the United Arab Emirates and China to state support and cheap energy. But another important driver is their authoritarian governance model, which enables AI companies to train their models on vast amounts of personal data.
Last year, the United Arab Emirates made global headlines with the release of Falcon, its open-source large language model (LLM). Remarkably, Falcon matched or outperformed the LLMs of tech giants like Meta (Facebook) and Alphabet (Google) on several key benchmarks.
Since then, the UAE has positioned itself as a frontrunner in the global artificial-intelligence race by consistently releasing updates to its powerful model. These efforts have not gone unnoticed: in April, Microsoft acquired a $1.5 billion minority stake in G42, the UAE’s flagship AI company, underscoring the country’s growing influence.
Analysts often attribute the UAE’s emergence as an AI powerhouse to several factors, including robust state support, abundant capital, and low-cost electricity, all of which are necessary for training LLMs. But another important – and often overlooked – factor is the country’s authoritarian governance model, which enables the government to leverage state power to drive technological innovation.
The UAE is not alone. Authoritarian countries like China have a built-in competitive advantage when it comes to AI development, largely owing to their reliance on domestic surveillance, which fuels state demand for AI tools. Facial-recognition technologies, for example, are used by these regimes not just to enhance public safety but also as powerful instruments for monitoring their populations and suppressing dissent.
By contrast, facial recognition has become a source of enormous controversy in the West. The European Union’s AI Act, which entered into force on August 1, has effectively banned its use in public spaces, with only a few narrowly defined exceptions.
This provides AI firms in China and the UAE with a massive advantage over their Western counterparts. Research by David Yang and co-authors shows that Chinese AI firms with government contracts tend to be more innovative and commercially successful, owing to procurement practices that provide them with access to vast troves of public and private data for training and refining their models. Similarly, UAE firms have been allowed to train their models on anonymized health-care data from hospitals and state-backed industries.
AI firms seeking access to such data in Western countries would face numerous legal hurdles. While European and American companies grapple with strict compliance requirements and a surge in copyright-infringement lawsuits, firms in China and the UAE operate in a far more lenient regulatory environment.
This is not to suggest that authoritarian countries do not have laws protecting data privacy or intellectual property. But the national goal of promoting AI development often takes precedence, resulting in lax enforcement.
Meanwhile, consumers in authoritarian countries tend to be more supportive of AI. A 2022 Ipsos survey, for example, ranked China and Saudi Arabia – another authoritarian Gulf state with technological ambitions – as the world’s most AI-optimistic countries. These regimes’ widespread use of surveillance tools seems to have accelerated the commercial adoption of emerging technologies, possibly increasing public trust in the companies deploying them.
Moreover, authoritarian governments benefit from the ability to coordinate and direct resources toward innovation, especially through state-owned enterprises and sovereign wealth funds. Both the UAE and China have implemented top-down national strategies aimed at positioning themselves as global AI leaders. As I explained in a recent paper, the Chinese government is not just a policymaker but also a supplier, customer, and investor in this sector.
The UAE has adopted a similar approach. In 2017, it became the first country to appoint a Minister of State for AI, whose primary mission is to facilitate public-private partnerships and provide firms with convenient access to valuable training data. Notably, the Falcon AI model was developed by the Technology Innovation Institute, a state-funded research center. G42, which is backed by the UAE’s sovereign wealth fund and chaired by the government’s national security adviser, collaborates with various state agencies.
Recognizing the vital role of academic research in driving technological progress, the UAE also established the Mohamed bin Zayed University of Artificial Intelligence, the world’s first university dedicated exclusively to AI.
Despite the many similarities between the AI strategies of the UAE and China, one crucial difference stands out: whereas China’s progress in advanced technologies could be impeded by Western restrictions on chip and equipment exports, the UAE enjoys unrestricted access to these essential resources. In 2023, G42 signed a $100 million deal with the California-based startup Cerebras to build the world’s largest supercomputer for AI training. And earlier this year, the company reportedly engaged in talks with OpenAI CEO Sam Altman about a potential investment in an ambitious semiconductor venture that could challenge Nvidia’s dominance in the industry.
But the reasons for the UAE’s success are still widely misunderstood. Tellingly, Altman recently suggested that the country could “lead the discussion” on AI policy, acting as a “regulatory sandbox” for the rest of the world. In praising the UAE’s approach, Altman obscures a fundamental point: it cannot be replicated in a democratic environment.
Angela Huyue Zhang, Professor of Law at the University of Hong Kong, is the author of High Wire: How China Regulates Big Tech and Governs its Economy (Oxford University Press, 2024). She will soon join the faculty of the USC Gould School of Law.