The Great AI Myth
Just like COVID triggered mass panic, AI is triggering mass surrender. Let’s reclaim our story.
The Great Myth
AI is smarter and inevitable. It will replace humans, efficiently.
This is the core narrative being shaped today, subtle in some circles, overt in others. It echoes in boardrooms, VC pitch decks, policy memos, and news headlines. The message is simple and repeated: “AI is coming. It’s smarter than you. And it gets the job done much more efficiently.”
At Davos, leaders openly acknowledged the stakes: "Everybody agrees that this is transformational, with a lot of promise, but also risks associated with it. We have a new study that shows that 40% of the global workforce is exposed to AI."
The tone is clear: Unstoppable. Inevitable. Efficient.
Apple’s “Illusion of Thinking” paper cracked open a moment of reflection. It reignited the never-ending debate: “Does AI really think?” “Or is it just simulation?”
These are important questions, but they are behind the moment. Because while academics and ethicists argue over definitions, millions of people are already living with AI as a daily companion. They’re using AI to learn, to write, to organize their thoughts, to plan, to create, and to cope with loneliness and anxiety.
Not in the future. Now.
So maybe the real question isn’t “Can AI think?” That ship has sailed. The real question is what happens now that so many of us behave as if it can.
Because perception shapes behavior. And behavior reshapes reality. And we’re already living inside that myth. We’re automating workflows. Rewriting emails in someone else’s voice. Writing and re-writing code. Trying to sound smarter, faster, more relevant because the myth says we must. Efficiency at all costs.
Over the past few months, I’ve been tracking this narrative. Quietly. In conversations with founders, academics, investors, and friends who feel disoriented, but can’t quite name why.
And what I’ve come to believe is this:
We are not just living through a technological revolution. We are living inside a myth-making campaign.
A campaign that’s not only inaccurate, but dangerous. Not because AI can’t create value. But because the myth distorts how we build, what we value, and what kind of agency we believe we have left.
It’s not just replacing work. It’s replacing the story of what it means to be human.
And that myth, and not the tech itself, is what most urgently needs disrupting.
So let’s pull it apart. Let’s look at how the myth was built. How it came in waves. How each wave shaped our perception and nudged us into resignation. And then, most importantly, let’s ask:
“What would it take to shape this technology on our terms?”
Not through panic. Not through paralysis. But with purpose, with clarity, and with a vision that places humanity at the center of AI’s evolution.
1. How the Myth Was Built

Phase 1 – Hype: The Arrival of a New Intelligence (Nov 2022 – Jul 2023)
Core Message: AI is magical, superhuman, and coming for everything.
Nov 30, 2022 – ChatGPT Launches
100M users in 2 months. The fastest-growing consumer app in history.
Media declares a revolution. TikToks, podcast demos, breathless blog posts.
Narrative: “Enchantment. Speed. Power.”
Jan 2023 – ChatGPT hits 123M monthly users
Hype is now: “Everyone’s using it. You should too.”
Mar 1 – ChatGPT API launch
Any app can become AI-powered.
Most headlines read “AI is infrastructure now.”
Mar 14 – GPT-4 arrives
Multimodal. Passes elite exams.
The sentiment: “It doesn’t complete sentences, it completes my thoughts.”
Spring – Italy bans ChatGPT
We see the first pushback.
Framing: “AI vs. humanity?” grows stronger.
People seem to be waking up to the hype curve.
Google Bard, Bing AI, Claude, and LLaMA all launch
Speed replaces safety.
“Now the race is on.”
Every company scrambles to show they’re in the game.
July – Sarah Silverman sues OpenAI for copyright infringement
“AI steals art, voice, identity.”
Still, the culture adapts. The myth holds: “It’s a powerful source of efficiency. We’ll figure out how to adapt everything else later.”
Purpose of the Hype Phase:
Capture attention. Saturate media. Accelerate adoption.
Activate early users = future evangelists.
Fear and awe = free marketing.
Phase 2 – Control: Fear and the Gatekeepers (Aug – Dec 2023)
Core Message: AI is dangerous. But only we can build it safely.
Sep 13 – Senate AI Insight Forum
Musk, Altman, Zuckerberg, and Pichai meet behind closed doors.
“AI could destroy us. Also, please don’t over-regulate.”
Fear becomes a feature, not a flaw, in the narrative driving mass adoption.
It grants Big Tech legitimacy.
Oct 30 – Biden signs AI Executive Order
Mandatory safety testing.
“We’ll keep you safe.”
The government steps in, but keeps the builders building.
Nov 1–2 – UK AI Safety Summit (Bletchley Park)
“We need treaties. We need guardrails.”
Diplomacy legitimizes urgency.
Scarcity narrative emerges: “Only a few can build this responsibly.”
Nov – Altman ousted and reinstated at OpenAI
A power struggle. But the machine doesn’t stop.
OpenAI grows faster than ever.
Purpose of the Control Phase:
Reframe chaos as need for control.
Use fear to justify inevitability.
Turn public panic into institutional trust.
Phase 3 – Mass Adoption: From Myth to Market (2024 – 2025)
Core Message: AI is useful and already integrated. Adapt or fall behind.
Early 2024 – Enterprise adoption explodes
“Use it or lose out.”
Framing: Not having AI is now a liability. Companies rush to implement their own AI solutions and push employees to adopt them to boost productivity.
Feb 15 – OpenAI Sora: Text-to-video launch
AI makes photorealistic video.
“Creativity isn’t sacred anymore.”
AI, the story goes, can now do anything humans can.
Mar – Claude 3: Metacognition
AI understands it’s being tested.
“It thinks about thinking,” just like we do.
June – Apple Intelligence
AI embedded in iOS, iPadOS, macOS.
On-device. Private. Personal.
“AI belongs next to your memories.”
Mid-2024 – EU AI Act enters into force
The first comprehensive AI regulatory framework.
Fines, risk categories, compliance standards.
“We’ve tamed the beast.”
July – Market soars
LLM market forecast: $140B by 2033
OpenAI projected to hit $12.7 billion in revenue in 2025.
“This is the new oil.”
Sept – OpenAI o1 reasoning model arrives
Solves PhD-level physics problems.
“AI can reason like us. Or better.”
Dec – Fortune 500 companies list AI as material risk
Risk and reward co-exist.
Myth completes its arc: “It will save us. It might kill us. Either way, you have to use it.”
2025 – AI is inevitable
50% of digital work projected to be automated; we’ve passed the point of no return.
AI safety institutes proliferate
International cooperation intensifies
OpenAI becomes a global utility
Purpose of the Mass Adoption Phase:
Normalize dependence.
Showcase investments and revenue to cement market dominance.
Collapse resistance into inevitability.
2. Why It Is Attractive
The myth of AI being better than humans wasn’t forced on us. It was absorbed, recycled, and eventually sold back to us as strategy. And it worked because in uncertain times, we crave clarity. In the post-COVID disarray, the AI myth offered a neat directive: upskill or be left behind. It echoes the collective crisis response we saw during the panic of COVID. But this time, the story didn’t start in a corporate boardroom.
It began in cycles.
I don’t believe there was a villain behind a desk plotting this from the beginning. But something cracked open, an atmosphere of fear, change, and opportunity. Into that space, the main players stepped in with a story. One that helped them sell software, secure funding, and position themselves as architects of the future.
The myth didn’t come from nowhere. It borrowed legitimacy from decades of academic framing. Since the 1956 Dartmouth Conference, AI has been benchmarked against the human mind. Even earlier, the Turing Test asked whether a machine could imitate a human well enough to fool one of us. This anthropomorphic lens gave early AI research its ambition and its trap.
Useful for science, but more importantly, sticky for culture.
Today, that framing is no longer just a thought experiment. It’s a sales pitch. A myth where AI doesn’t just assist humans, it replaces them for efficiency. This shift turned a research paradigm into a marketing strategy.
For individuals, it sells the promise of personal transformation and influence. For companies, it justifies layoffs, simplifies complexity, and suggests dominance. And to be honest, it’s not entirely wrong. AI is a powerful tool that can have a big impact on humanity. But it’s also overhyped, wrapped in a story designed to accelerate adoption.
And adoption, of course, means profit.
OpenAI, Anthropic, and others have clear incentives to position AI as workforce-replacing. It helps sell their products to enterprises. It makes the tech sound like a shortcut to productivity. But the truth is far more nuanced.
3. What It Obscures
The myth masks the current reality: AI is proving extremely useful, but it is no substitute for human judgment, and it is not as “efficient” as we may think. Salesforce’s CRMArena-Pro benchmark, one of the most comprehensive assessments of business-ready LLMs, found that top models succeed at only 58% of basic tasks, falling to 35% on multi-step workflows. They struggle with nuance, context, and sustained reasoning.
Academic studies across cognitive science echo the same theme: LLMs perform best when they amplify human effort, not when they attempt to replace it. They’re powerful, but incomplete without human talent.
Yet the myth endures. Why?
Because speed is now the strategy. The faster adoption moves, the more data models absorb. The more they absorb, the better they perform. Even insiders who understand the limitations keep pushing the inevitability narrative because it accelerates uptake.
But acceleration without intention has a cost. And we aren’t talking nearly enough about that cost.
The damage isn’t in the models, it’s in how we’re choosing to deploy them. Inside companies, the myth is distorting how we organize work, define roles, and measure human value and contribution.
The question I want answered is: What are we replacing? And at what cost?
A designer told me she’s now expected to generate marketing copy using AI, despite having no background in writing. “I don’t even know what good copy looks like,” she admitted. The tool churns out options that seem impressive at first glance, especially to someone outside marketing. But when that someone doesn’t have the skills, the interest, or the judgment to do it well, it doesn’t just lead to bad output, it leads to quiet burnout.
And this is felt across the board. 77% of workers say AI tools have increased their workload and hurt productivity.
Tech founders feel that deeply. One shared that, after HR layoffs at his company, he’s now running recruiting alone, using AI to write scripts and assess candidates. “I don’t know what I’m even evaluating anymore,” he admitted. “I feel like nobody is truly engaging with the questions.” The interviews feel hollow. But this isn’t laziness. It’s burnout. Everyone’s exhausted, and no one has the energy to fake otherwise.
These aren’t edge cases. They’re early warning signs of a system that is collapsing:
64% of U.S. workers said they reduce effort when using AI
58% don’t verify its output
57% have made mistakes because of it
We’re seeing not only inefficiency, but emotional erosion at an unprecedented level. When people don’t know why tools are being deployed or how they’re meant to help, they use them blindly. Anxiously. Resentfully.
The result? Confusion. Cynicism. Collapse in trust.
Low morale alone is costing U.S. companies $350B a year. This is the productivity paradox all over again. “You can see the computer age everywhere but in the productivity statistics,” as Robert Solow once said.
AI is exposing a broken system that was already in place. Companies were bloated. The recent wave of mass layoffs (21% of staff at Business Insider), the over 40% of graduates facing underemployment, the recruitment processes that now demand proof you’re irreplaceable: none of this was caused by AI suddenly thinking better. These are symptoms of overextended systems expected to perform under old tax structures and corporate incentives.
But this time, the cost isn’t just economic. It’s psychological, social, and cultural.
The myth rewards velocity, even when it harms profits. In the process, it dehumanizes our effort and erodes our craft. It flattens the very parts of work that make us whole: discernment, collaboration, meaning. That’s why everyone’s exhausted. Not just from the workload, but from chasing yet another internal memo titled “AI Enablement and Tooling,” like it’s a revolution when it’s really just more busywork.
4. What Kind of Story We Could Tell Instead
So maybe the real story isn’t “AI is smarter and inevitable. It will replace humans, efficiently.” Because it’s not doing that: not well, and not yet. That myth is convenient, but false. It flattens a far messier truth: AI is powerful, yes, but its true impact will be shaped by how we choose to use it.
Real transformation is rarely efficient. It’s messy. Slow. Human. But that’s not a flaw, it’s a feature. Because some of the ways we’ve worked weren’t human to begin with; they were built for scale, compliance, repetition. And AI has exposed that.
They made us efficient, but not always fulfilled. This is where AI holds real promise. Not as a force of replacement, but of release. To offload the mechanical, the mindless, the bureaucratic. To unburden us from workflows that should’ve been done by machines all along. When used with intention, when guided by clarity of outcome, AI can expand our thinking, not shrink it. It can create space for reflection, depth, and originality. Not just do things faster, but help us do the right things, better.
That’s where it gets extraordinary.
We need to move beyond the myth, whether it was crafted or just absorbed. But that shift demands more than faster tools. It demands a new lens for value. What if the best use of AI isn’t acceleration, but insight? And here is the question I particularly want to press on: what if the goal isn’t to match human thinking, but to expand it?
This reframes the benchmark: from machine intelligence to human consequence. Because AI is already shaping how we think, whether it is really thinking or just simulating. There are already powerful use cases across disciplines: tools that prompt harder questions, surface the patterns behind our decisions, challenge the biases in our thinking, and push for depth, if you choose to let them.
Now imagine tools that don’t just generate content, but reveal where your thinking drifts or sharpens. That don’t just summarize, but ask what we’re trying to understand. That don’t just fill roles, but help align your skills with meaning.
But let’s be honest: not everyone wants insight. Not everyone wants transformation. Many just want faster answers. That’s fair. Flourishing isn’t the default. It’s a design choice. A cultural choice. One that many people don’t have time or space for yet.
But what if AI could help change that?
What if AI could create a more accessible road to insight, not just speed? What if the real promise of AI is the expansion of our cognitive range, of our capacity to ask deeper questions?
Yes, some replacement of tasks will happen. Some of it might even be necessary. Dangerous, mindless, or exploitative jobs should absolutely be automated. There’s utility in efficiency when it serves purpose. The danger isn’t automation per se. It’s unconscious automation. That’s why we need nuance: not just “keep the humans,” but “let’s elevate our craft.”
And here’s another challenge: much of this conversation centers elite users. Knowledge workers. Strategists. Substack writers. But what does flourishing look like outside of tech? What about the care worker, the warehouse picker, the immigrant delivery rider?
Human-centered AI can’t just mean more mental space for strategists. It has to include embodied, emotional, and communal intelligence. Tools that support dignity, fairness, connection. That’s where this becomes a cultural movement, not just a product roadmap.
And it’s not just hypothetical. It’s already happening.
Educators are using AI to help students reflect on how they learn, not just what they know. Therapists are using it to help patients process complex emotions. Early product prototypes are tracking cognitive shifts over time, not just outputs. Some startups I spoke to are developing “thinking partners” that surface patterns in your reasoning, highlight inconsistencies, and reflect your voice back with clarity and care.
These are small seeds, but they matter. The real promise of AI isn’t speed, it’s space. Space to think more deeply, work more meaningfully, and reconnect with what makes us human. Hell, it might even push us back toward our humanity. The old myth distracts us from a deeper truth: AI won’t transform the future of work unless we first transform what we value in human work.
This moment isn’t just about what machines can do. It’s about what we choose to protect, reimagine, and build for. It won’t be easy. It requires awakening. The real choice is whether we design this future consciously, or drift into a myth that was not ours to begin with.
Author’s Note
I’ve spent 15 years working in tech, building narratives that shape how new products are adopted and understood. This piece reflects ongoing conversations with founders, VCs, engineers, and researchers who are wrestling with what this next chapter of AI really means for work, identity, and meaning. I’m now bringing together a group of people who want to build something better, an intentional future of work that puts humans at the center of AI.
If this resonates, DM me or reply to this post.
Let’s start something.