
We Need AI Arms Control to Keep the New Cold War From Turning Hot

August 27, 2023
Source: Bloomberg
Written by: Niall Ferguson

Who will be the Robert Oppenheimer of the artificial intelligence revolution? That was the question I kept asking myself as I read Mustafa Suleyman’s dazzling new book, The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma. Perhaps it will be Suleyman himself.

While Suleyman’s ideas about how to contend with the challenges posed by AI demand our respect and attention, they call to mind the agonizing dilemmas of the nuclear age — not to mention the toxic politics that coincided with the early Cold War.

You have by now read a great deal of both hype and doom-mongering on the subject. But Suleyman’s is the book you cannot afford not to read. As one of the co-founders of DeepMind, along with Demis Hassabis and Shane Legg, Suleyman has been a key figure in the explosive advance of AI. He was in the room when DeepMind had its first big breakthrough with Deep Q Network, which learned to excel at the computer game Breakout. He was “present at the creation” when DeepMind’s AlphaGo defeated Lee Sedol at the ancient Asian board game of Go in Seoul in 2016 — an event watched live by 280 million people.

More recently, he helped Google to build LaMDA (short for Language Model for Dialogue Applications), one of the revolutionary new large language models (LLMs). LaMDA is so plausible in conversation that it convinced Google engineer Blake Lemoine that it was sentient to the point of being a person. That surely is the essence of passing the “Turing Test,” named after the English computing pioneer Alan Turing. Last year Suleyman set up Inflection AI with Reid Hoffman, whose book Impromptu — co-authored with GPT-4 — I discussed in my last column on this subject.

What makes Suleyman unusual in the field is that he did not start out as a data scientist. Born in Islington, in north London, the son of a Syrian-born taxi driver and an English nurse, he dropped out of Oxford to start the Muslim Youth Helpline, a telephone counseling service, and went on to work for the socialist mayor of London, Ken Livingstone (“Red Ken”). His experiences with both municipal government and the United Nations inform his insights into the likely political response to the challenges posed by AI.

Suleyman’s starting point is the familiar, hyperbolic one that AI is going to revolutionize just about everything. “With AI, we could unlock the secrets of the universe, cure diseases that have long eluded us, and create new forms of art and culture that stretch the bounds of imagination. … The coming wave is a supercluster, an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”

As I read his opening chapters, I was reminded of Marc Andreessen’s Why AI Will Save the World, the recent essay in which the titan of venture capital argues against the Cassandras and Luddites that AI won’t “kill us all, ruin our society, take all our jobs, or lead to crippling inequality.” It will just “make it easier for bad people to do bad things” — like pretty much any new technology.

But Suleyman is a great deal less sanguine. “With AI,” he warns, “we could create systems that are beyond our control, and find ourselves at the mercy of algorithms that we don’t understand.” He foresees “an existential threat to nation states — risks so profound they may disrupt or even overturn the current geopolitical order.” He fears “immense AI-empowered cyber-attacks, automated wars that could devastate countries [and] engineered pandemics” — not to mention “a flood of misinformation, disappearing jobs and the prospect of catastrophic accidents.”

This sounds a lot closer to another ex-Google AI maven, Geoffrey Hinton, who recently told Wired: “There are occasions when I believe that probably we’re not going to be able to contain it [AI], and we’re just a passing phase in the evolution of intelligence.” Hinton’s latest suggestion to slow down the AI revolution is to require it to be based on analog computers.

On my more optimistic days, I find myself hoping that LLMs so pollute the Internet with their “hallucinations” — truthy-sounding made-up stuff — that we all lose confidence in whatever it is we find online. As Deepak Seth has argued, LLMs have already begun scraping and learning from the vast amounts of AI-generated content they themselves are spewing out, which must surely have garbage-in, garbage-out consequences and will tend to amplify the hallucinations. Earlier this month, the Wall Street Journal reported that GPT-4 is getting worse at math. The technical term for this is “drift,” which gives a new meaning to the question: “Do you get my drift?”

The less we can trust the plausible verbiage GPT-4 gives us, the more we’ll be driven back to good, old-fashioned libraries, where the knowledge is a great deal more reliable — and sorted rationally rather than to maximize eyeball-engagement. This is why my biggest investment of the past five years has been in a large, “Name of the Rose”-style library to house printed-on-paper books.

The most immediate short-term danger posed by AI is to the democratic political process. Earlier this summer, Archon Fung and Lawrence Lessig published a chilling essay in Scientific American, in which they imagined an AI called “Clogger” deciding the outcome of the 2024 presidential election:

First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally … Second, Clogger would use a technique called reinforcement learning to generate messages that become increasingly more likely to change your vote. … Last, over the course of a campaign, Clogger’s messages could evolve to take into account your responses to prior dispatches and what it has learned about changing others’ minds.

Another clear and present danger is that more and more military decisions get delegated to AI, as is already true in the case of Israel’s Iron Dome missile defense system and seems increasingly a feature of the drone war in Ukraine. The most questionable assertion in Andreessen’s essay was his claim that “AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically,” because AI will help statesmen and commanders “make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.”

I strongly suspect the opposite will be the case. In the coming AI wars, mortality rates in armed forces will be very, very high precisely because AI will make the missiles and other weapons so much more accurate. Any half-decent AI that has read Clausewitz will want to achieve the annihilation of the enemy as soon as possible. AI-enabled commanders may also be more willing to sacrifice their own men to secure victory, in the same way that AI chess programs sacrifice their own pieces more ruthlessly than human grandmasters.

In sum, I agree with Suleyman’s analysis. AI — especially when combined with genetic engineering, robotics, quantum computers, fusion reactors, and nanotechnology — implies a proliferation of new technologies that are asymmetric, hyper-evolutionary, “omni-use,” and autonomous. Not all the consequences will be benign.

The problem is that such a tsunami of technological change is almost impossible to contain, much less to halt. As Suleyman argues, our political institutions lack the capacity to regulate AI. On one side, criminal actors will soon be able to deploy unstoppable malware (far worse than WannaCry), robot or drone assassins, and deepfake misinformation engines. On the other, lawful power is increasingly concentrated in the hands of the leaders of a few tech companies — the new East India Companies. Meanwhile, AI is poised to cause massive disruption to the labor market, shredding the modern social contract, whereby the 20th-century liberal nation-state offered its citizens both security and a high rate of employment. Suleyman fears that a rising share of humanity may soon face a choice between failing states such as Lebanon (succumbing to “Hezbollahization”) and Chinese-style dictatorships with AI-powered surveillance.

Is there anything we can do to avoid this dystopian outcome? In a new piece co-authored with Ian Bremmer in Foreign Affairs, Suleyman offers an ambitious blueprint for an international “technoprudential” regime to regulate AI. The analogy is partly with financial regulation, as he and Bremmer make clear by proposing as a potential model “the macroprudential role played by global financial institutions such as the Financial Stability Board, the Bank for International Settlements, and the International Monetary Fund.” Specifically, they call for the creation of a Geotechnology Stability Board, similar to the Financial Stability Board created in April 2009, in the depths of the global financial crisis. However, they envision the big tech companies being involved as “parties to international summits and signatories to any agreements on AI,” implying an even bigger say than the big banks have in financial regulation.

Like me, you may be inclined to despair at the thought of regulating AI as badly as we regulate finance. But note the two other elements of the Bremmer-Suleyman model. One is a body similar to the Intergovernmental Panel on Climate Change, to ensure that we have regular and rigorous assessments of AI’s impacts. The other is more compelling, to my mind:

Washington and Beijing should aim to create areas of commonality and even guardrails proposed and policed by a third party. Here, the monitoring and verification approaches often found in arms control regimes might be applied. … there may be room for Beijing and Washington to cooperate on global antiproliferation efforts.

This came as a surprise to me, as I had inferred from The Coming Wave that Suleyman had little time for analogies between AI and nuclear arms. He and Bremmer even say: “AI systems are not only infinitely easier to develop, steal, and copy than nuclear weapons; they are controlled by private companies, not governments.” And yet they — like almost everyone who tries to think systematically about how to cope with the threats posed by AI — inevitably come back to the Cold War arms race.

Of course, it’s an imperfect analogy. (Just imagine if the atomic bomb had emerged from a private-sector contest between, say, General Electric and IBM. And AI has many more uses and users than nuclear fission.) Still, it is not entirely a coincidence that innovation in AI has accelerated more or less simultaneously with the transition of the US-China relationship from economic symbiosis — “Chimerica” — to Cold War II. Eric Schmidt, the former CEO of Google, was skeptical back in 2018 when I first argued that we were in a new cold war. But the 2021 final report of the National Security Commission on Artificial Intelligence, which he chaired, essentially agrees that we are:

The US military has enjoyed military-technical superiority over all potential adversaries since the end of the Cold War. Now, its technical prowess is being challenged, especially by China and Russia. … if current trend lines are not altered, the US military will lose its military-technical superiority in the coming years. … AI is a key aspect of this challenge, as both of our great power competitors believe they will be able to offset our military advantage using AI-enabled systems and AI-enabled autonomy. In the coming decades, the United States will win against technically sophisticated adversaries only if it accelerates adoption of AI-enabled sensors and systems for command and control, weapons, and logistics.

Marc Andreessen’s clinching argument for pursuing AI “with maximum force and speed” is that “the single greatest risk of AI is that China wins global AI dominance and we — the United States and the West — do not.”

That implies, as Andreessen acknowledges, an arms race as unbridled as the one that followed the Soviets’ acquisition (through espionage more than their own excellence in physics) of the atomic bomb and then the hydrogen bomb. True, the United States today is ahead in one key respect: We have access to the most sophisticated microchips and, thanks to various US sanctions, the Chinese do not. But doesn’t this just put Xi Jinping in the position of Stalin when the US first had the Bomb?

Is there an alternative to an all-out AI arms race? Revealingly, the best examples Suleyman himself gives of successful regimes of technological containment (a word made famous by George Kennan, of course) are both taken from Cold War I: the nuclear non-proliferation regime and the ban on chemical and biological weapons. Arms control was not an unmitigated success, of course. But it didn’t achieve nothing (see last year’s excellent paper by Paul Scharre and Megan Lamberth). And that is why Suleyman is right to argue for it.

Which brings us back to “Oppie.” In a recent article here, Hal Brands argued that Oppenheimer was wrong to oppose the building of the hydrogen bomb — the “super,” as it was known to the physicists. Brands’s argument seems to be that the nuclear arms race was fine because the good guys ultimately won it. This surely understates how risky that race was, not least in 1962, when the superpowers came within an inch of World War III over Cuba. We ultimately got ourselves into the lunatic situation where we devoted far more effort to building nuclear missiles than to building nuclear power plants — as if the latter were more dangerous! Is that really how we want the AI race to play out?

Bremmer and Suleyman are right: The US and China urgently need to begin arms control negotiations, not only to limit the weaponization of AI, but also to make sure more resources go to its benign applications. Right now, practically no restraint is in place other than the economic restrictions the US has imposed on China. Meanwhile, it is highly probable that China is forging ahead with research on biological weapons. As Schmidt and others have pointed out, the risk of AI being used for that purpose is “a very near-term concern.” Nothing we are currently doing prevents that; the Biden administration’s current approach to China may even be encouraging such activity.

The Biden national security team believes it can rebrand economic de-coupling as “de-risking” and then line up some high-level meetings in Beijing. But that is not the path to a meaningful détente. The US and China need to talk about substantive issues, and arms control — not just of AI, but also of nuclear, biological and other weapons of mass destruction — is the right place to start. Indeed, I would not be surprised if the Chinese were the ones to suggest it. The initiative for arms control in a cold war tends to come from the side that fears it may lose the arms race.

As for the brilliant Mr. Suleyman, he must take care. He is right to warn of the perils of an unchecked AI race. He is right to call for AI arms control. But his argument for global institutions recalls Oppenheimer’s in 1946 for an Atomic Development Authority that would limit national sovereignty with respect to nuclear technology. Like Oppenheimer, Suleyman has a left-wing political past. And I worry that, like Oppenheimer, he may one day have that held against him, as the new cold war hots up.
