Ian Bremmer is president of Eurasia Group and GZERO Media and author of “The Power of Crisis.”
Throughout history, technological breakthroughs have created new opportunities for invention, adaptation and progress while inflicting irreversible damage on many lives and livelihoods. They have tested the remarkable ability of human beings and societies to adapt to the turmoil of transition and to survive what economists call “creative destruction.”
The world must now prepare for a technological breakthrough whose implications will be vast and are already beginning to unfold at a speed that has frightened even the men and women who have spent their working lives laying the groundwork for this upheaval.
Artificial intelligence will transform our lives — for better and for worse — so thoroughly and so quickly that we have no choice but to prepare ourselves and one another for the fallout.
Without question, there will be medical and scientific breakthroughs that transform the labor of decades into the work of days. Those with access to the most powerful AI tools will have an opportunity to live longer, healthier and more prosperous lives than human beings have ever experienced.
But there are also risks that we must think through and prepare for.
Among the most consequential is disinformation. There can be no democracy or free-market capitalism unless citizens, consumers and investors have continuous access to accurate, verifiable information.
The advent of social media and the tidal waves of distorted information it generates have already poisoned public attitudes toward institutions of all descriptions. The mainstreaming of AI will add a vast chorus of preprogrammed nonhuman voices to the conversations that shape political life in every country in the world.
The ease with which malicious political actors, criminals and terrorists can create video illusions that can fool even the most sophisticated viewer will make it far harder for political leaders and those who report the news to build and sustain credibility.
China, Russia and other authoritarian states will develop more effective forms of digital propaganda that undermine freedom in profound and unprecedented ways, and they will sell these technologies to any government willing to pay for them.
But disinformation is only one of many malignant applications of AI.
In recent years, the technology problem that has most preoccupied political debate within many democratic states has been the collection of data from citizens’ online activity and its impact on privacy.
But artificial intelligence is a democratized technology. The powerful tech companies that have come to dominate our online lives can set rules and guidelines for the use of the products they create. To some extent, they can enforce those rules.
However, AI models that are nearly as advanced as those companies’ proprietary systems, and more powerful than the algorithms in general use even a few months ago, are already available to anyone with modest programming skills and a laptop computer. A number of people I know are already running their own large language models on publicly available information to produce large amounts of text.
In a field with an open-source culture and very few barriers to entry, these tools will spread quickly and widely. Millions of people will soon have their own generative pretrained transformers like ChatGPT running on real-time data drawn from the internet.
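To illustrate how low that barrier already is, here is a minimal Python sketch. It assumes the freely downloadable Hugging Face "transformers" library and the small, openly published "gpt2" model; the essay names no specific tools, so these choices are purely illustrative.

    # Minimal sketch: generate text with an openly published model on an ordinary laptop.
    # Assumes "pip install transformers torch" has been run; the model choice is illustrative.
    from transformers import pipeline

    # Downloads a small open model and builds a text-generation pipeline.
    generator = pipeline("text-generation", model="gpt2")

    # Produce a short continuation of a prompt; looping this yields large volumes of text.
    result = generator("Artificial intelligence will", max_new_tokens=40)
    print(result[0]["generated_text"])

A handful of lines like these, run on a consumer laptop, are enough to produce machine-written text at scale.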
These systems will be powerful tools that individuals can use to break new scientific and artistic ground. They will also be weapons that rogue political actors, criminals and terrorists can use to code malware, create bioweapons, manipulate markets and poison public opinion.
It is true that the authorities will be able to deploy AI to create more effective tools to police these crimes, but governments have never faced a threat so diffuse.
Mass displacement is a third risk that must be taken into account.
We know the explosion of artificial intelligence will displace untold numbers of workers as machines replace people, even in knowledge sectors, on a scale that most of us until recently thought impossible.
It is true that we have seen such upheavals before. Most recently, surging global trade has, by shifting factory production to developing countries, eliminated millions of manufacturing jobs in countries where workers had earned relatively high wages. Automation has also displaced manufacturing jobs more broadly.
In both cases, these tech disruptions yielded much higher productivity and wealth globally and eventually created more jobs than they destroyed. But it takes time and resources to retrain workers and to establish sustainable social safety net protections for those who cannot adapt.
The displacement triggered by the expansion of AI will hit more workers in more places much more quickly than any workplace disruption the world has seen before. This workplace revolution will create economic and political turmoil on a scale that national governments and multinational institutions are not prepared to manage.
Finally, there is the most personal aspect of the AI revolution. Humans will soon become far more accustomed to communicating directly with machines than with other people. Instead of turning to simple bots for weather reports, we will rely on sophisticated AI-driven systems for complex interaction and even companionship.
We already know that excessive social media use can produce anxiety, depression and even self-harm among teenagers and isolated adults. This problem is about to become much larger as more people with antisocial tendencies build relationships with increasingly sophisticated machines. This will be AI’s most profound challenge, and it is the one that policymakers are least prepared to meet.
Nothing separates us from these risks except easily solvable technical obstacles and time. Each of these dangers will have to be addressed within families and communities, among public- and private-sector decision-makers and across borders. But the AI revolution has already begun.