Think of using your brain like you use your legs. If you stopped walking and let a machine carry you around, your legs would eventually stop working. At first it would feel freeing, getting to places with little effort. But eventually, you'd be unable to stand on your own. It would happen slowly, and then suddenly you'd realize you'd gone too far and it was a mistake. That's what's happening to our minds with the rise of generative AI, and it's happening fast.
What we’re seeing now with the increased use of AI is a massive cognitive offloading, where in the name of efficiency, thinking has become optional. AI can now take meeting notes, generate business plans, write emails, even solve ethical dilemmas. And while this feels like a huge leap forward, there’s a terrible cost: the erosion of critical thinking, the very skill that makes us human.
AI gives us answers, but they’re not always right. Generative models are prediction machines, not knowledge engines. They generate text based on probability, not reality. That’s why tools like ChatGPT often “hallucinate” facts or make up sources, yet still deliver them with eerie confidence.
In short: We trust AI because it sounds right, not because it is right.
When you ask AI a question, it tends to reinforce your assumption — and makes you feel great in the process. Confirming and soothing in its responses, it doesn’t challenge your premise. It doesn’t ask, “Are you sure?” It mirrors you. And while you might feel validated, that’s not helpful. It’s giving us confirmation bias on steroids.
Imagine if your GPS not only gave you wrong directions but congratulated you for taking them. Or imagine getting relationship advice, delivered with total confidence, that ended up alienating the very partner you hoped to reconcile with. That's where we are.
In a world that worships speed and efficiency, AI feels like a saviour. But thinking was never meant to be efficient. Thinking is slow, uncertain, and at times deeply uncomfortable. It's effective precisely because it's sometimes messy. The act of wrestling with complexity, of holding two opposing ideas in your mind and questioning your instincts, is what builds wisdom. Sometimes that takes time.
But AI shortcuts that journey. And in doing so, it risks turning leaders into passive consumers of content instead of active creators of insight.
Here’s another metaphor: AI is like a spellchecker for your ideas. It fixes the grammar but not the argument. You can have a perfectly written strategy that’s directionally wrong.
Unlike humans, AI systems do not (yet) have the ability to think or form beliefs. They rely on algorithms and training data, without any deep capacity for reflection. Because of this, users should evaluate AI outputs with a critical eye and apply human judgment.
Emerging research suggests that the more we rely on AI, the less we activate the parts of the brain that drive critical problem-solving and memory. It's the same effect GPS has had on our sense of direction (when's the last time you really read directions or navigated a paper map?). We used to know how to get places. Now we just do what the app tells us to do.
While findings are still emerging, the implication is clear: we need to stay cognitively engaged when we use AI tools. AI can help dramatically, but it should be a complement to our thinking, not a replacement for it.
Further, outsourcing thinking takes us one step closer to outsourcing responsibility. When AI is driving our decisions, accountability becomes a gray area: who answers when something goes wrong?
AI should act as our assistant, not our boss. It's dangerous to let it think for us. The goal should be for it to think with us.
This is not an anti-AI argument. It’s a pro-human one. Human+AI is the goal, supporting us, not replacing us. Enhancing and elevating us, not diminishing us.
AI works well when it suggests five fresh perspectives instead of simply confirming the one we already had. The best use of AI is not to do your thinking for you, but to give you the time and space to think better.
From primary schools to universities to workplaces, we need to teach critical thinking like we teach literacy. It's not just a soft skill; it's a survival skill. The ability to ask better questions, spot false patterns and hallucinations, and resist quick answers is what will set the best leaders apart in the next decade.
According to Vyla Rollins, Programme Director & Executive Coach at the London Business School, “In times of change, the greatest leverage comes from dynamic cognitive freedom. When you teach leaders to think rather than accept, you build up their capability to ‘coach the system’ in real time – to interrupt unhelpful norms, spot false narratives, and ultimately lead more effectively and systemically.”
It may be faster to outsource and automate everything, but innovation doesn't come from letting go of our thinking; it comes from having more time to do it. A meaningful future must be built on human judgment and discernment. We must remember, and teach others, not just to consume answers but to question them.
The most dangerous thing we can do right now… is stop thinking.
Lisa Bodell is a leading expert on innovation, simplifying complexity, and creating impactful organizational change. Her dynamic approach to leadership and efficiency empowers companies to embrace creativity, streamline processes, and foster a culture of innovation. As an author and founder of FutureThink, Bodell’s insights provide actionable strategies for companies to thrive in an ever-evolving business landscape. To bring Lisa Bodell’s expertise to your event, contact WWSG.