Yesterday, 1,188 people — including researchers, tech critics, a few of my closest friends, and for some reason Elon Musk (I'll get to that in a minute) — released an open letter to “Pause Giant AI Experiments”. Specifically, the letter calls for a 6-month pause on training any AI system more powerful than GPT-4, the groundbreaking model that powers OpenAI’s ChatGPT Plus.
While I believe many (but not all) of the people behind this letter signed it with good intentions, the letter stokes more fear than it offers solutions, and its approach to guiding AI into the future is fundamentally flawed.
Their concern, in summary, is that AI development is starting to get out of control and that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” To be clear, they are not calling for an end to AI development, but for a “public and verifiable” pause by all key actors in order to build AI that is “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
1,188 people have signed the open letter so far.
Signatories include Elon Musk, Andrew Yang (2020 presidential candidate), Steve Wozniak, the CEO of Stability AI, and co-founders of Pinterest and Ripple, along with a number of research scientists at companies like DeepMind (owned by Google).
While I believe the letter and its authors are well-intentioned, the approach they propose isn’t a plausible strategy for guiding safe and ethical AI; it reads more like an adversarial provocation designed to capture media attention. Take even a cursory look under the surface and the problems become apparent. Most of those who signed haven’t thought through the logistics or ramifications of their proposal, and the letter stokes more fear than it offers solutions, which may well be the point. (It is certainly generating PR.) Fear always captures attention, even when that fear is somewhat manufactured.
We’ve been through this before with CRISPR, the incredible gene-editing technology revolutionizing biology and genetics. Once CRISPR was invented, scientists found ways to use it to pursue cures for cancer and other diseases. And while a few abused the technology, researchers and world governments came together and found ways to regulate it without calling for a complete ban on its development.
I agree that we need to build more ethical guardrails into AI. I have written extensively about building a potential AI Code of Ethics here in my newsletter and spoken with some of the world’s top ethicists (many of whom have not signed this open letter to date). But calling for an immediate pause on AI development, aimed primarily at one company, isn’t a plausible or effective way to do it.
Instead, we need to convene leaders from AI, business, academia, government, and the arts to have an unfiltered discussion about the best path forward. This is something I am actively working on now with key leaders in AI, business, and Congress, and I know other AI, policy, and academic leaders are doing the same.
AI has the power to transform our world, both for the better and for the worse. But technology has always, on a long enough time scale, improved the human condition. Instead of proposing an impossible solution and stoking fear, we should guide AI’s development through collaboration, transparency, practical solutions, and most of all, hope.
More to come.
~ Ben