Tech Leaders Can Do More to Avoid Unintended Consequences

Thought Leader: Rachel Botsman
May 24, 2022

TEN YEARS AGO, in a small hotel room in Helsinki, Finland, a young tech entrepreneur sat down with a pen and paper and calculated that one of his inventions was responsible for wasting the equivalent of more than a million human lifetimes every day. The realization made him feel sick. That entrepreneur’s name is Aza Raskin, and he’s the inventor of the “infinite scroll,” the feature on our phones that keeps us endlessly scrolling through content with the simple swipe of a finger.

Back in 2006, Raskin was trying to solve the clunky experience of the next-page button that internet users continually had to click. Ironically, his goal was to stop disruptions to a user’s train of thought. “My intention was to create something that could focus our attention and control our tempo when on websites and apps,” Raskin explained to me in a recent interview for my podcast, Rethink Moments. The infinite scroll fixed the problem by making new content load automatically, no click required.
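To make the mechanism concrete, here is a minimal TypeScript sketch of how an infinite scroll might be wired up in a browser today; the /api/feed endpoint, the fetchNextPage and renderItems helpers, and the sentinel element are hypothetical placeholders, not Raskin’s original code.

```typescript
// Hypothetical stand-ins for a real feed API and renderer.
async function fetchNextPage(): Promise<string[]> {
  const res = await fetch("/api/feed?cursor=next"); // assumed endpoint
  return res.json();
}

function renderItems(items: string[]): void {
  const feed = document.querySelector("#feed")!;
  for (const text of items) {
    const item = document.createElement("div");
    item.textContent = text;
    feed.appendChild(item);
  }
}

// When the invisible sentinel element at the bottom of the feed
// scrolls into view, fetch and append the next batch of content.
// No click, and no natural stopping point.
const sentinel = document.querySelector("#scroll-sentinel")!;
new IntersectionObserver(async (entries) => {
  if (entries[0].isIntersecting) {
    renderItems(await fetchNextPage());
  }
}).observe(sentinel);
```

The design choice worth noticing is that the loop has no end condition: as long as the server keeps returning items, the page keeps growing, which is exactly the property that later made the pattern so easy to exploit.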

Raskin didn’t foresee how tech giants would exploit his design principle, creating apps to automatically serve more and more content without your asking for it—or necessarily being able to opt out. Finish watching a video on YouTube, the next one loads instantly. Go on Instagram to look at a couple of pictures and you’re still mindlessly swiping half an hour later.

“I think when I look back, the thing I regret most is not packaging the inventions with the philosophy or paradigm in which they’re supposed to be used,” says Raskin. “There was a kind of naive optimism about thinking that my inventions would live in a vacuum, and not be controlled by market forces.” He deeply regrets the unintended consequence of his invention: hours, even lifetimes, of mindless surfing and scrolling.

Raskin is far from alone. Over the years, when I’ve advised successful entrepreneurs, I’ve often heard that they couldn’t imagine the negative effects their ideas would have at scale. The Airbnb founders, for example, didn’t foresee the negative impacts of short-term rentals on local communities. When Justin Rosenstein invented the Like button, he didn’t imagine the effect that receiving hearts and likes, or not receiving them, would have on young teens’ self-esteem. I’m not a fan of Facebook (sorry, Meta), but Mark Zuckerberg arguably didn’t start the social media giant as a tool for political interference. Yet we’ve seen how a platform intended to “give people the power to share and make the world more open and connected,” to quote Zuckerberg, has ended up having devastating unintended consequences, such as the storming of the US Capitol on January 6, 2021. Creators and entrepreneurs want to build products that will “change the world.” And often they do, but not in the way they imagined.

The failure to predict the unintended consequences of technology is deeply problematic and raises thorny questions. Should entrepreneurs be held responsible for the harmful consequences of their innovations? And is there a way to prevent these unintended consequences?

THE UNINTENDED CONSEQUENCES of innovations have been accelerated by new technology, but they are not a 21st-century problem. The microwave oven was built for convenience, but its inventor didn’t think about the impact on family eating habits if everyone just zaps their own meal. When Karl Benz developed the first petrol-powered automobile to help people move faster and have more freedom, he didn’t think about the problems of traffic congestion or air pollution. When plastic was invented over 110 years ago as a strong and flexible material, it was hard to imagine the environmental damage we’re dealing with now because of mass packaging and petroleum extraction.

In 1936, social scientist Robert Merton proposed a framework for understanding different types of unanticipated consequences—perverse results, unexpected drawbacks, and unforeseen benefits. Merton’s choice of words (“unanticipated” rather than “unintended”) was by no means random. But the terms have, over time, become conflated.

“Unanticipated” gets at our inability or unwillingness to predict future harmful consequences. “Unintended” suggests consequences we simply can’t imagine, no matter how hard we try. The difference is more than semantics—the latter distances entrepreneurs and investors from responsibility for harmful consequences they did not intend. I like the term “unconsidered consequences,” because it puts the responsibility for negative outcomes squarely in the hands of investors and entrepreneurs.

Merton outlined five key factors that get in the way of people predicting, or even considering, longer-term consequences: ignorance, short-termism, values, fear, and error, by which he meant assuming that habits that worked in the past will apply to the current situation. I’d add a sixth: speed.

Speed is the enemy of trust. To make informed decisions about which products, services, people, and information deserve our trust, we need a bit of friction to slow us down—basically, the opposite of infinite, easy swiping and scrolling. And speed is a two-pronged problem.

According to Our World in Data, it took more than 50 years for the radio to be adopted by more than 99 percent of US households, in their homes and cars. It took 38 years for color TV to reach similar mainstream adoption. By comparison, it took Instagram just three months to reach a million users after it launched in 2010. TikTok landed its billionth user in 2021, just four years after its global launch: half the time it took Facebook, YouTube, or Instagram to achieve the same milestone, and three years faster than WhatsApp. When the time frame of consumer adoption is compressed from decades to months, it’s easy for entrepreneurs to ignore the deeper and often subtle behavioral changes their innovations are introducing at an accelerated rate.

Entrepreneurs will often tell themselves the story that they’re still in the “novelty” or “sandbox” phase when, in reality, millions of people are using their product. It’s reflected in the way big tech companies’ original mottos and mission statements, such as “Don’t be evil” (Google) or “Give people the power to build community and bring the world closer together” (Facebook), are used well beyond their expiration date, sometimes even years after the founders have been forced to acknowledge not only the severe shortcomings of their innovations but the serious consequences of those shortcomings.

At the same time, most entrepreneurs are largely focused on accelerating the speed of their growth. Only once have I ever seen a “slow growth” strategy in a pitch deck. “The old mantra of ‘Move fast and break things’ is an engineering design principle … it’s not a society design principle,” writes Hemant Taneja, a managing partner at the venture firm General Catalyst, in his book Intended Consequences. Taneja argues that VCs need to screen for “minimum virtuous products” instead of just “minimum viable products.” A powerful question for determining the virtues of a product over time is this: If you were born in a different era or a different country, how would you feel about this idea?

Where will this idea lead? How will it change as it grows? The answer is, sometimes we just don’t know. The twists and turns of human behavior and technological progress can make it hard to see what lies ahead. Even while writing this article, I found that many entrepreneurs and investors were reluctant to talk about the impact of technologies at scale. “You can’t imagine impact at scale” is a common pushback. But as Raskin points out, “an inability to envision the impact at scale is actually a really good argument as to why one shouldn’t be able to deploy tech at scale. If you can’t determine the impacts of the technology you’re about to unleash, it’s a sign you shouldn’t do it.”

Imagine if a pharmaceutical company said it couldn’t possibly imagine or predict the negative impacts or potentially life-threatening side effects of a drug because human bodies are all different and complex, but pushed it onto the market anyway. That’s inconceivable in our present context, because pharmaceuticals must go through rigorous testing protocols and meet efficacy and safety standards set by agencies made up of experts. Of course, this system isn’t perfect; there are gaps and loopholes. But it’s time to have more protective standards for tech products, which are arguably far more ubiquitous than most medications.

Unintended consequences can’t be eliminated, but we can get better at considering and mitigating them.

THE RESPONSIBILITY FOR unconsidered consequences is a complex problem. Take social media. Right now, the original inventors of platforms (Zuckerberg, Jack Dorsey of Twitter, Chad Hurley of YouTube) can’t be held responsible for the content that users choose to post. But they should be liable for any content that the algorithms they write and deploy spread and promote. Regulation can’t force people to use a product or service in a responsible way, but entrepreneurs should be held responsible for the structural and design decisions they make that either protect or violate the best interests of users, and of society overall. Tim Berners-Lee, the inventor of the World Wide Web, published a letter on its 30th anniversary in which he pointed to the “unintended negative consequences” of its design, including “perverse incentives” from the ad-based business models that many tech giants like Google and Facebook use, which reward “clickbait and the viral spread of misinformation.” As unanticipated consequences become apparent, it’s up to entrepreneurs to implement, upgrade, or completely rethink the business models and structural mechanisms they have in place to reduce the negative impacts.

An unconsidered consequence is different from an undesired outcome. A train or car crash that kills people is an undesired outcome; it is not the same as an impact generated by a deliberate policy or purposeful action, such as an ad-based business model, that sets in motion a series of harmful behaviors and negative events in the future.

AS RASKIN’S STORY of the infinite scroll shows, it’s very easy for creators to lose control of the things they make when those inventions are manipulated by the free market. A feature he intended to help people focus has been exploited by others as a tool for mass distraction, to the benefit of tech giants’ bottom lines. But over the past decade, as cofounder, along with Tristan Harris, of the Center for Humane Technology, Raskin has been doing a lot of thinking about how to embed a design philosophy into an invention or product itself. He explained three ideas he’s been developing:

Firstly, he would like to see a new open source license introduced that comes with a Hippocratic oath. It would contain a “bill of rights and a bill of wrongs” outlining specific situations or usages of the tech that would cause the license to be revoked. The idea would help prevent a creator’s technology from being misused with impunity.

Raskin’s second practical solution is to tie the scale of an entrepreneur’s liability to the scale of their power. “If your product or service is being used by less than 10,000 people you should be bound by different regulations than if your user base is bigger than a nation state,” says Raskin. He’s talking about an idea I call a “permission at scale” license. Every time an invention hits an adoption milestone (100,000 users, a million users, a billion users, and so on), an entrepreneur would need to reapply for their license based on the positive and negative impacts of their invention. Again, there are best-practice frameworks that can be adopted from the pharmaceutical industry. When drug companies are working on treatments for diseases with very few cases, many restrictions get lifted because they’re too onerous given the context. But when a drug is rolled out at scale, there are very different provisions. Raskin explains: “A progressive scale of liability would mean you have lots of innovation at the small scale, but as soon as it has the surface area to create harm, you have the responsibility that pairs with it.”
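As a thought experiment, the threshold logic of such a license might look something like the following TypeScript sketch; the tier names, milestones, and obligations below are illustrative assumptions, not part of Raskin’s proposal or of any real regulatory framework.

```typescript
// Hypothetical "permission at scale" ladder: each adoption milestone
// carries a heavier set of obligations and triggers a license review.
type Tier = { maxUsers: number; name: string; obligations: string };

const TIERS: Tier[] = [
  { maxUsers: 10_000, name: "sandbox", obligations: "light-touch rules" },
  { maxUsers: 100_000, name: "registered", obligations: "impact reporting" },
  { maxUsers: 1_000_000, name: "audited", obligations: "independent audits" },
  { maxUsers: Infinity, name: "licensed", obligations: "full reapplication" },
];

// Return the tier a product falls into at its current scale.
function requiredTier(activeUsers: number): Tier {
  return TIERS.find((t) => activeUsers < t.maxUsers) ?? TIERS[TIERS.length - 1];
}

console.log(requiredTier(9_500).name);     // "sandbox"
console.log(requiredTier(2_000_000).name); // "licensed"
```

The point of the progressive structure is the one Raskin names: experimentation stays cheap at small scale, while crossing each milestone forces a fresh accounting of the product’s impacts before it is allowed to keep growing.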

Lastly, he recommends building your own “red team,” independent of the board or investors, whose role is to name all the ways the tech could be used, for good and for ill. “It would create a ‘we know, you know’ shame around using the tech for nefarious purposes,” he says.

Raskin has set up his own “Doubt Club,” a forum for a group of entrepreneurs working on noncompeting ideas to share doubts about their products, company missions, or metrics. They have a pact that whatever is shared at the Doubt Club won’t leave the room. The goal is to reduce ignorance and to encourage what Raskin calls “epistemic humility”: a willingness to say those three magic words, “I don’t know.”

Renowned theoretical physicist Richard Feynman once wrote in his book The Pleasure of Finding Things Out: “It is our capability to doubt that will determine the future of civilization.” The principle applies urgently to tech innovations. Entrepreneurs and investors need to be responsible for asking “What happens when …” questions:

  • What happens when people are left behind by my invention?
  • What happens when my system becomes susceptible to bias?
  • What happens when the interests of my business model don’t align with the best interests of customers?

We believe too deeply in the clarity of our own interpretations. Identifying and reducing unintended consequences calls for greater humility and an acceptance of doubt; it requires us to take the time to explore what we don’t know and to actively seek alternative possibilities. For entrepreneurs and investors, it means believing in yourself and your ideas while doubting your current knowledge.
