
From Deepfakes to Arms Races, AI Politics Is Here

April 7, 2024
Source: Bloomberg
Written by: Niall Ferguson

Two weeks ago, I asked and attempted to answer six questions about the economic and financial consequences of the artificial intelligence revolution. Today, I have even more — eight — political and geopolitical questions. They are harder to answer.

1. Will AI have an adverse impact on the 2024 election?

It seems highly likely. Because US election campaigns have far larger budgets than their counterparts in other developed countries, each new communications technology is rapidly adopted by political entrepreneurs. Consider what Google gave to President Barack Obama’s campaign in 2012, or what Facebook ads gave to the Donald Trump campaign in 2016. In 2020, Silicon Valley aided Joe Biden with its so-called content moderation.

Large language models (LLMs) such as OpenAI’s GPT-4 have immense political potential. They can generate vast quantities of plausible content with little human oversight. This includes fake phone calls and video clips. We have already seen the first fake robocalls (mimicking Biden’s voice) in the New Hampshire primary.

It is hard to believe that the political use of AI will simply be prohibited.

But what would its large-scale deployment mean? Recent studies show that LLMs — even open-source ones less advanced than GPT-3 — can produce content that survey respondents rate as being just as credible as material on the same subject from the New York Times. One experimental study has shown that AI systems can overwhelm legislators or government agencies with fake constituent feedback.

Still other surveys show that even when voters are primed to be aware of deepfakes, they do not get better at identifying them; they do, however, lose trust in real videos.

All this probably means that the election will generate additional public pressure for regulation, especially if one campaign is seen to be using AI in a nefarious way.

2. Will AI be curbed by US regulation?

Last October, Biden issued an executive order detailing his administration’s priorities for regulating AI.

In one of its toothier security provisions, the order invokes the 1950 Defense Production Act to require companies developing advanced AI systems that could threaten national security to notify the federal government and report the results of safety tests, or “red teaming.” This requirement would apply only to very large systems (those with training runs exceeding 10^26 mathematical operations).
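To get a feel for the scale of that threshold, here is a minimal back-of-the-envelope sketch, assuming the common rule of thumb that transformer training compute is roughly six operations per parameter per training token; the threshold figure comes from the order, but the model sizes are hypothetical.

```python
# A rough, illustrative check against the executive order's reporting
# threshold. The 6 * parameters * tokens approximation is a common rule
# of thumb for transformer training compute; the model below is
# hypothetical, not any real system.

THRESHOLD_OPS = 1e26  # the order's reporting threshold

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 operations per parameter per token."""
    return 6 * n_params * n_tokens

# A hypothetical 1-trillion-parameter model trained on 15 trillion tokens:
ops = estimated_training_ops(1e12, 15e12)
print(f"{ops:.1e} operations; reportable: {ops >= THRESHOLD_OPS}")  # 9.0e+25, False
```

On that arithmetic, even a trillion-parameter training run falls just short of the reporting line, which is one reason the requirement is expected to touch only a handful of frontier systems.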

The executive order does not regulate liability or require licensing for companies building AI models, as proposed by, among others, OpenAI CEO Sam Altman. Testifying to Congress last May, Altman called for a new federal agency to license AI companies and oversee audits. That is unlikely.

The executive order’s implementation will therefore largely depend on federal agencies’ rulemaking and enforcement, as well as on judicial review and the willingness of tech companies to abide by the new strictures.

Federal Trade Commission Chair Lina Khan wrote in the New York Times last May that her agency already has jurisdiction over a range of AI-related issues, including competition and consumer protection. Her essay suggested three focal issues: antitrust; fraud and abuse; and labor discrimination. Last July, the FTC launched an investigation to determine whether OpenAI engages in “unfair or deceptive practices” relating to privacy and data security.

In Congress, the most coherent proposal to date is the Bipartisan Framework for US AI Act, sponsored by Senators Richard Blumenthal, Democrat of Connecticut, and Josh Hawley, Republican of Missouri.

Hawley-Blumenthal would clarify that Section 230 of the Communications Decency Act — a key piece of legislation in the history of the internet — “does not apply to AI,” meaning companies would be liable for disseminating harmful AI-generated content. The bill would also require AI developers and providers to disclose to users when they are interacting with an AI system; create rules to protect children; give consumers control over how their personal data is used in AI systems; require watermarking on AI-generated deepfakes; and limit the transfer of AI technology to China.

Will this legislation get passed? Probably not. Congress has a track record of regulating new technologies very slowly. The time between the invention of railroads and the first federal regulation of them was 62 years. For telephones it was 33 years; radio 15; the internet 13. Nuclear energy is the outlier: The lag was just four years.

3. Will Europe succeed in regulating AI?

The European Commission, true to form, hopes to lead the world in AI regulation. Its AI Act, which imposes data quality, oversight and disclosure requirements, will be formally adopted later this year and most of its provisions will take effect by 2026.

The AI Act divides AI systems into four tiers, depending on the threat they could pose to human health, safety and fundamental rights, and each tier faces different regulatory requirements. Unacceptable-risk applications, including some forms of biometric surveillance and “social scoring” systems, are banned outright. Systems used in critical infrastructure operation, education and vocational training, border checks and law enforcement are examples of those deemed “high risk.” “Transparency-risk” systems, which interact with users (chatbots, for example), are required to disclose that their content is machine-generated. Minimal-risk systems, such as spam filters, face voluntary codes of conduct only.
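To see the structure at a glance, here is a minimal illustrative sketch; the tier names and obligations merely paraphrase the description above and are not the Act’s legal text.

```python
# Illustrative only: the AI Act's four risk tiers as a simple mapping,
# paraphrasing the description above rather than the regulation's text.

AI_ACT_TIERS = {
    "unacceptable": "banned outright (e.g., social scoring, some biometric surveillance)",
    "high":         "data-quality, oversight and disclosure requirements (e.g., border checks)",
    "transparency": "must disclose machine-generated content (e.g., chatbots)",
    "minimal":      "voluntary codes of conduct only (e.g., spam filters)",
}

for tier, obligation in AI_ACT_TIERS.items():
    print(f"{tier:>12}: {obligation}")
```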

The Europeans are looking to repeat what they pulled off for online privacy with the General Data Protection Regulation (GDPR). Many non-EU countries adopted the stringent European regulatory norms in their home markets purely because their companies wanted to keep selling into EU markets. Since 2018, GDPR regulators have imposed €4.5 billion in fines, though US big tech firms have fought back with litigation.

However, I doubt the Europeans will be able to set the standards for AI regulation. In 2021, Washington and Brussels founded the Trade and Technology Council; within that framework, they have tried and failed to agree on a “voluntary code of conduct” for AI. Europe’s big problem is that it is home to hardly any major AI companies, with the notable exception of the French-founded Hugging Face.

4. Is there any prospect of a system of global governance?

Mustafa Suleyman, a co-founder of DeepMind (acquired by Google in 2014) and the new head of Microsoft AI, last year offered an ambitious blueprint for an international “technoprudential” regime to regulate the technology. He and Eurasia Group’s Ian Bremmer proposed as a model “the macroprudential role played by global financial institutions such as the Financial Stability Board, the Bank for International Settlements, and the International Monetary Fund.”

I remain unpersuaded that AI can be regulated like finance. However, the Bremmer-Suleyman model had two other elements. One was a body similar to the Intergovernmental Panel on Climate Change, to ensure that we have regular and rigorous assessments of AI’s impacts. The other was that “Washington and Beijing should aim to create areas of commonality and even guardrails proposed and policed by a third party. Here, the monitoring and verification approaches often found in arms control regimes might be applied.”

Analogies between AI and nuclear arms are obviously not perfect. As Suleyman and Bremmer themselves conceded: “AI systems are not only infinitely easier to develop, steal and copy than nuclear weapons; they are controlled by private companies, not governments.” And yet they — like almost everyone who tries to think systematically about how to cope with the threats posed by AI — were drawn back to comparisons with the Cold War arms race.

An ideal global governance system would structure coordination between states to stop nonstate actors and rogue states from developing or accessing cutting-edge AI models. Enforcement would work through a global export-control regime for graphics processing units, or GPUs (the most sophisticated semiconductors, mostly designed by Nvidia and manufactured by Taiwan Semiconductor Manufacturing Co., or TSMC), and a global know-your-customer protocol for cloud compute.
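What might such a know-your-customer gate for cloud compute look like in practice? A minimal sketch follows; the denial list, the threshold and the reporting rule are hypothetical illustrations, not any actual export-control regime or cloud provider’s API.

```python
# Hypothetical sketch of a know-your-customer gate for cloud compute.
# The denial list and the review threshold are invented for illustration;
# no real export-control system is being modeled here.

RESTRICTED_PARTIES = {"SanctionedLab Ltd"}    # hypothetical denial list
REVIEW_THRESHOLD_GPU_HOURS = 100_000          # hypothetical review trigger

def approve_compute_request(customer: str, gpu_hours: int) -> bool:
    """Deny restricted parties outright; flag unusually large requests."""
    if customer in RESTRICTED_PARTIES:
        return False
    if gpu_hours > REVIEW_THRESHOLD_GPU_HOURS:
        print(f"Flagged for review: {customer}, {gpu_hours:,} GPU-hours")
    return True

print(approve_compute_request("Acme Research", 250_000))  # flagged, approved
print(approve_compute_request("SanctionedLab Ltd", 10))   # denied
```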

Such a system is already being built. For geopolitical reasons, however, it is targeted at China, and since China is the only other AI superpower, a regime that excludes it can hardly amount to global governance. In the last year of his life, Henry Kissinger attempted to establish a meaningful AI arms control dialogue between the US and China. It is doubtful this initiative will long outlive him.

History and recent events thus suggest that a global AI governance regime is very unlikely in the short or even medium term. We are a long way from the idea of Artificial Intelligence Limitation Talks. The arms race will therefore continue at the current breakneck pace.

5. Can China catch up?

Back in 2021, a committee chaired by former Google CEO Eric Schmidt released a report predicting that “China could surpass the United States as the world’s AI superpower.” That does not seem to be happening. The biggest Chinese LLMs remain inferior to the leading American models.

Why is China lagging? The simple answer is that it cannot manufacture the most sophisticated semiconductors itself and the US is able to restrict its access to those produced by TSMC as well as to the complex chip-making machines produced by the Dutch firm ASML. According to my colleague Chris Miller’s book Chip War, “as many as 95% of GPUs in Chinese servers running artificial intelligence workloads are produced by Nvidia.” China can produce for itself the less fancy chips — for example, the ones that run electric vehicles. But not the AI chips.

Does that mean the “tech war” — which began when President Donald Trump’s administration went after Huawei and ZTE and culminated in the Commerce Department restrictions imposed on all Chinese firms in October 2022 — has been won? Not so fast. True, China is behind the US in AI spending and in AI company formation. But it is ahead in robots. And it is striving mightily to find ways to circumvent the US restrictions. Nor is Nvidia indifferent to China’s insatiable appetite for its chips. In recent years, revenues from China have amounted to between a fifth and a quarter of the company’s total. The Economist is not alone in wondering if contraband Nvidia chips are being smuggled to China via Singapore.

True, the US continues to be the dominant market for AI talent. But the latest edition of the MacroPolo study of the careers of top AI researchers — those who had papers accepted at the December 2022 Neural Information Processing Systems (NeurIPS) conference — suggests that Beijing is gaining ground. China is where a very large share of top AI researchers began their academic careers: 47% in 2022, up from 29% in 2019. And US dominance of AI employment has eroded since 2019, falling to 57% from 65%. Remember: The Soviet Union began the nuclear arms race far behind the US. It took two decades to catch up, but it did so.

6. Is AI really the new Manhattan Project?

My Tech Lord friends Vinod Khosla and Marc Andreessen had an interesting exchange about AI last month. Both are renowned venture capitalists. But Khosla is a backer of OpenAI and a fan of Altman’s ideas for regulation; Andreessen prefers to see open-source models flourish. “Would you open source the manhattan project?” asked Khosla in an exchange on X (formerly Twitter). “This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans.”

As I said, the analogy between AI and nuclear fission is far from perfect. But one thing is very striking to me. Today there are approximately 12,500 nuclear warheads in the world, and the number is rising as China adds rapidly to its nuclear arsenal. By contrast, there are just 436 nuclear reactors in operation. The share of total world electricity production that is nuclear has declined from 15.5% in 1996 to 8.6% in 2022, partly as a result of political overreactions to a small number of nuclear accidents that had trivial impacts on human health and the environment. Indeed, in absolute terms nuclear electricity generation peaked in 2006.

In thinking about the likely uses of AI, we should remember that as a species we have a track record. Yes, there are all kinds of wonderful uses to which AI can be put. The medical-scientific possibilities are especially mind-blowing. But the history of nuclear fission suggests we shall devote at least as much effort to developing AI’s destructive potential.

7. How much energy is an AI world going to need?

One big difference between nuclear fission and AI is that, whereas fission generates energy, AI only consumes it. But how much? Some alarmist commentators have projected that AI could end up needing close to a quarter of global electricity generation by 2030. However, the most thorough analysis I have yet read (by Dylan Patel, Daniel Nishball and Jeremie Eliahou Ontiveros for SemiAnalysis) concludes that “AI will propel datacenters to use 4.5% of global energy generation by 2030.”

That is still a lot. And because so much AI activity is concentrated in the US, data centers’ “critical” IT capacity will need to triple between 2023 and 2027, taking their share of US power generation from 4.5% to 14.6%.
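The arithmetic is easy to check; here is the back-of-the-envelope version, using only the figures cited above.

```python
# Back-of-the-envelope check of the figures above.

# Tripling US data-center "critical" IT capacity between 2023 and 2027
# implies this compound annual growth rate:
cagr = 3 ** (1 / 4) - 1
print(f"Implied annual growth: {cagr:.1%}")   # ~31.6% per year

# The projected share of US power generation rises from 4.5% to 14.6%,
# roughly a 3.2x increase in data centers' slice of the grid:
print(f"Share multiple: {14.6 / 4.5:.2f}x")   # ~3.24x
```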

In recent weeks, mainstream media in the US have been waking up to what this implies, not least for the dream of reducing the share of electricity generated by natural gas and increasing the share generated from “renewable” sources.

That dream is dead.

8. What will AI mean for the future of war?

The most questionable assertion in Andreessen’s AI essay “Why AI Will Save the World” was his claim that “AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically,” because AI will help statesmen and commanders “make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.”

I strongly suspect the opposite will be the case. In the coming AI wars, mortality rates in armed forces will be very, very high precisely because AI will make the missiles and other weapons so much more accurate. This is already apparent in Ukraine, where drone warfare is gradually transitioning from remote human-piloted systems to AI-powered autonomous systems.

The central problem of our time should be obvious. Without quite thinking it through, the US in effect outsourced manufacturing of the most advanced semiconductors to an island claimed by China. The main reason TSMC, which produces over 90% of the world’s leading-edge GPUs, has a price-earnings ratio of just 14.3 is its vulnerable location.

To quote the Wall Street Journal, “A U.S.-China war over Taiwan would almost certainly result in the destruction of TSMC’s fabs. This would set back the global chip supply chain by five to ten years, derailing the AI boom in the process.” To quote Chris Miller again, “If Taiwan’s fabs were knocked offline, we’d produce 37% less computing power during the following year.” And even if China “only” blockaded Taiwan, “TSMC’s chip production would halt as the government rationed energy.”

It would be very, very nice if the US could build its own version of TSMC on US soil. That is the dream that helped to inspire the subsidy-packed CHIPS Act.

Let’s just say I am not holding my breath.
