Every Time You Switch AI Tools, You Start From Scratch
Plenty of serious users now pay for both ChatGPT and Claude. Here's the part nobody talks about.
I keep seeing people in forums saying they pay for both ChatGPT and Claude.
$40 a month. Two tabs open. Switching between them depending on what they need.
And I get it. I do the same thing.
Claude for certain kinds of thinking. ChatGPT for other things. They're different tools. It makes sense.
That part isn't what's been bothering me.
It took me a while to work out what was.
At first I thought it was the cost, because that's what everyone talks about. But it's not really that.
It's what happens every time you switch.
Because every time you switch, you're back at the beginning.
Not completely. You still know what you're doing, but the tool doesn't.
So you start doing this thing where you bring it up to speed.
You summarise what you've already done. You explain the constraints. You try to recreate, in a new place, the shape of the thinking you just had.
And the strange part is, you get quite good at it.
You learn how to compress context. You learn what matters. You learn what you can leave out without breaking the answer.
You almost build a little system in your head for "how to explain this quickly."
Which sounds efficient.
But it's also slightly mad.
Because you're basically re-explaining your own thoughts to something that had access to them ten minutes ago, just in a different tab.
I noticed this properly a few weeks ago.
I'd been working through something for a while. Going back and forth, refining it, ruling things out. It felt like I was actually getting somewhere.
There's a point in that kind of thinking where things start to click. Not solved, exactly. But clearer.
I'd got to that point.
Then I switched tools.
And within a few minutes I was repeating myself. Same context. Same decisions. Same explanation.
And I had this moment of thinking: I've definitely said all of this before. Not just once. Loads of times.
At that point it stops being about time.
It's not just "this is a bit inefficient."
It's more like you can feel your thinking resetting.
You lose the thread slightly. You revisit things you'd already worked through. You start questioning decisions you'd already made, not because they were wrong, but because the context that supported them isn't there anymore.
So you rebuild it. And then you do it again later somewhere else.
And after a while, you're not really moving forward in a straight line anymore. You're looping.
The tool doesn't know you're looping, because as far as it's concerned, this is the first time you've said any of this. So it just helps you do it again.
And this is the part that feels off, but is quite hard to explain properly.
You stop thinking about the actual problem.
You start thinking about how to manage the tool.
What do I need to include? What matters here? What am I going to forget that will change the answer?
You're basically feeding it your own thinking so it can give it back to you in a slightly cleaner version.
Which is useful, to a point. But it's not the same as thinking something through. And if you're not paying attention, it quietly replaces it.
This isn't a capability problem.
That's the strange thing. The capability is impressive. Genuinely. These tools can write, reason, generate, explain, build. That's not the issue.
The issue is that none of it compounds.
You can have a really good session. Work something out clearly. Make decisions that feel solid. And then you close it. And the next time you come back, you're back at the beginning. Not because you forgot. Because the system did.
Good thinking doesn't work like that. It builds.
You carry things forward. You refine them. You discard parts. You move in a direction, even if it's not a straight line. It's not neat, but it has continuity.
And without that continuity, something breaks. Not dramatically. Quietly.
You start repeating yourself. Rechecking things you already knew. Rebuilding context you've already built. You spend more time getting back to where you were than moving beyond it.
Most of the conversation around AI is still about which model is better. Which one's smarter. Which one writes better. Which one you should be using for what.
And those are fine questions. They're just not the one that keeps coming back for me.
The one that keeps coming back is: why do none of them remember you?
Because until they do, you don't actually get to build on your own thinking. You just keep restarting it.
And once you notice that, it's very hard to ignore.