The now-surging development of artificial intelligence will produce medical breakthroughs that save and enhance billions of lives. It will become the most powerful engine for prosperity in history. It will give untold numbers of people, including generations not yet born, powerful tools their ancestors never imagined. But the risks and challenges AI will pose are becoming clear too, and now is the time to understand and address them. Here are the biggest.
The health of democracy and free markets depends on access to accurate and verifiable information. In recent years, social media has made it tougher to tell fact from fiction, but advances in AI will unleash legions of bots that seem far more human than those we’ve encountered to date. Much more sophisticated audio and video deepfakes will undermine our (already diminished) confidence in those who serve in government and those who report the news. In China, and later in its client states, AI will push facial recognition and other state-surveillance tools to exponentially higher levels of sophistication.
This problem extends beyond our institutions, because the production of “generative AI,” artificial intelligence that generates sophisticated written, visual, and other content in response to prompts from users, isn’t limited to big tech companies. Anyone with a laptop and basic programming skills already has access to AI models far more powerful than those that existed even a few months ago and can produce unprecedented volumes of content. This proliferation challenge is about to grow exponentially as millions of people come to have their own GPT running on real-time data drawn from the internet. The AI revolution will empower criminals, terrorists, and other bad actors to code malware, create bioweapons, manipulate financial markets, and distort public opinion with startling ease.
Artificial intelligence can also exacerbate inequality: within societies, by concentrating gains among small groups with wealth, access, or special skills, and across borders, by widening the gap between wealthier and poorer nations.
AI will create upheaval in the workforce. Yes, technological leaps of the past have mainly created more jobs than they’ve killed, and they’ve increased general productivity and prosperity, but there are crucial caveats. Jobs created by big workplace tech changes demand different skillsets than those they’ve destroyed, and the transition is never easy. Workers must be retrained. Those who can’t be retrained must be protected by a social safety net that varies in strength from place to place. Both these problems are expensive, and it will never be easy for governments and private companies to agree on how to share this burden.
More fundamentally, the displacement created by AI will happen more broadly and much more quickly than transitions of the past. The turmoil of transition will generate economic, and therefore political, upheaval all over the world.
Finally, the AI revolution will also impose an emotional and spiritual cost. Human beings are social animals. We thrive on interaction with others and wither in isolation. For many people, bots will too often replace humans as companions, and by the time scientists and doctors understand the long-term impact of this trend, our deepening reliance on artificial intelligence, even for companionship, may be irreversible. This may be the most important AI challenge of all.
Challenges like these will demand a global response. Today, artificial intelligence is regulated not by government officials but by technology companies. The reason is simple: You can’t make rules for a game you don’t understand. But relying on tech firms to regulate their products isn’t a sustainable plan. They exist mainly to make a profit, not to protect consumers, nations, or the planet. It’s a bit like letting energy companies lead the way on strategies to fight climate change, except that warming and its dangers are already understood in ways that AI risks are not. That gap leaves us without the pressure groups that can help force the adoption of smart and healthy policies.
So, where are the solutions? We’ll need national action, global cooperation, and some commonsense cooperation from the U.S. and Chinese governments in particular.
It will always be easier to get well-coordinated policy within national governments than at the international level, but political leaders have their own priorities. In Washington, policymakers have focused mainly on winning a race with China to develop the tech products that will best support 21st century security and prosperity, and that has encouraged them to give tech companies that serve the national interest something close to free rein. Chinese policymakers, fearful that AI tools might undermine their political authority, have regulated much more aggressively. European rule-makers have focused less on security or profits than on the social impact of AI advances.
But all will have to make rules in the coming years that limit the ability of AI bots to undermine political institutions, financial markets, and national security. That means identifying and tracking bad actors, as well as helping individuals separate real from fake information. Unfortunately, these are big, expensive, and complicated steps that policymakers aren’t likely to take until they’re faced with AI-generated (but real) crises. And none of that can happen until discussion and debate on these issues begin.
Unlike on climate change, the world’s governments haven’t yet agreed that the AI revolution poses an existential cross-border challenge. Here, the United Nations has a role to play as the only institution with the convening power to develop a global consensus. A UN-led approach to AI will never be the most efficient response, but by building consensus on the nature of the problem and pooling international resources, it will help.
By forging agreement on which risks are most likely, most impactful, and emerging most quickly, an AI-focused equivalent of the Intergovernmental Panel on Climate Change can regularize gatherings and the production of “State of AI” agreements that drill ever closer to the heart of AI-related threats. As with climate change, this process will also have to include the active participation of public policy officials, scientists, technologists, private-sector delegates, and individual activists representing most member states, creating a COP (conference of the parties) process to address threats to biosecurity, freedom of information, the health of the labor force, and more. There could also be an artificial intelligence agency modeled on the International Atomic Energy Agency to help police AI proliferation.
That said, there’s no way to address the fast-metastasizing risks created by the AI revolution without an infusion of much-needed common sense into relations between the U.S. and China. After all, it’s the tech competition between the two countries and their leading tech companies that creates the greatest risk of war, particularly as AI plays an ever-growing role in military weapons and planning.
Beijing and Washington must develop and sustain conversations at the highest levels about emerging threats to both countries (and the world) and how best to contain them. And they can’t wait for an AI version of the Cuban Missile Crisis to force them toward genuine transparency in managing their competition. To create an “AI arms control agreement” with mutual monitoring and verification, each government must listen not only to the other but also to technologists on both sides who understand the risks that must be contained.
Far-fetched? Absolutely. The timing is terrible, because these breakthroughs arrive at a time of intense competition between two powerful countries that really don’t trust one another.
But if Americans and Soviets could build a working arms control infrastructure in the 1970s and 80s, the U.S. and China can build an equivalent for the 21st century. Let’s hope they realize they have no choice before a catastrophe makes it unavoidably obvious.