The Quiet Erosion: What Happens to Your Thinking When AI Does It For You
The research on cognitive offloading is harder to read than the AI industry would like.
In 2025, researchers at SBS Swiss Business School surveyed 666 people about their AI usage and their critical thinking abilities.
The correlation between heavy AI use and critical thinking scores was -0.68, a strong negative relationship: the more someone relied on AI, the weaker their independent reasoning appeared to be.
The mechanism is something called cognitive offloading. You delegate a cognitive task to an external tool - a calculator, a GPS, a chatbot - and your brain, efficiently, stops practising that task. We've known since the early 2000s that heavy GPS use erodes spatial memory. The "Google Effect" describes how we stop remembering facts and start remembering where to find them instead.
AI takes this further, because it can reason, not just retrieve. You can outsource not just the lookup but the analysis, the working-through of a problem. Which means the cognitive capacity you're potentially offloading is much more fundamental.
A 2026 BCG study tracked 244 consultants through five thousand AI interactions and found three distinct patterns. Sixty percent engaged in iterative dialogue and developed new skills. Fourteen percent used AI selectively while staying in control - and these were actually the highest performers. Twenty-seven percent delegated entire workflows and became, in the researchers' words, "passive conduits." They developed neither AI skills nor domain expertise. The paper called it "inadvertently hollowing out the very expertise that creates competitive advantage."
Twenty-seven percent of highly trained, highly educated professionals. That's worth sitting with.
I think about this a lot when it comes to Continio. Memory and continuity tools could, in theory, make this problem worse. If the system remembers everything and surfaces it automatically, at what point does the user stop engaging with their own thinking at all?
This is why Continio is designed the way it is. The memory is visible and correctable - you can see what it holds about you, and you can change it. The recall is offered, not inserted. And the tool doesn't make decisions for you. It holds the material. You do the thinking.
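None of this is Continio's actual code, and the names below are mine, not the product's. But the three principles - visible, correctable, offered rather than inserted - reduce to a small interface, and seeing it that way makes the design choice concrete. A hypothetical sketch in Python:

```python
class MemoryStore:
    """Hypothetical sketch of 'mirror, not replacement' memory.

    Three properties from the design above:
    - visible: the user can see everything the system holds
    - correctable: the user can change or delete any entry
    - offered: recall returns candidates; it never acts on them
    """

    def __init__(self):
        self._items = {}

    def remember(self, key, value):
        # Store a fact under a user-visible key.
        self._items[key] = value

    def inspect(self):
        # Visible: return everything held, nothing hidden.
        return dict(self._items)

    def correct(self, key, value=None):
        # Correctable: overwrite an entry, or delete it with value=None.
        if value is None:
            self._items.pop(key, None)
        else:
            self._items[key] = value

    def offer_recall(self, topic):
        # Offered, not inserted: return matching material to the user.
        # No summarising, no deciding - the caller does the thinking.
        return [v for k, v in self._items.items() if topic in k]
```

The point of the sketch is what's absent: there is no method that acts on a memory without the user asking, and no method that draws a conclusion from one. The tool holds the material; the thinking stays on the other side of the interface.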
The distinction I keep coming back to is between a cognitive mirror and a cognitive replacement. A mirror reflects. It shows you what's there so you can engage with it. A replacement does the work instead. The mirror makes you sharper over time. The replacement makes you dependent.
The research on healthy AI use points somewhere specific. Tools that scaffold thinking - that offer prompts and hints rather than answers, that require engagement rather than passive receipt - seem to preserve and even strengthen cognitive ability. Tools that do the work for you don't. The BCG study called the optimal approach the "centaur model": strategic division of labour where humans stay in control of the tasks that build judgment.
That's the model I'm trying to build toward. Not AI that thinks for you. AI that helps you think better - and doesn't let you forget how.