No. 04 Healthy use

What Healthy AI Use Actually Looks Like

Not anti-AI, not pro-AI. Just honest about the difference between tools that make you a better thinker and tools that make you dependent.

16 March 2026 · 5 min read

The journalist Jacob Aron published a piece in New Scientist this month that's stayed with me.

He started as a sceptic - firmly in the AI-snake-oil camp, as he put it. He spent a week vibe coding with AI tools and came out the other side with a more complicated view. Not a convert. Just someone who'd found the thing that was actually useful, and separated it from the things that weren't.

His conclusion: "It would be much better if we used these AI tools mindfully, with full awareness of the harm they can cause."

That sentence is doing a lot of work. Not: AI is dangerous, stop using it. Not: AI is transformative, use it for everything. Mindfully, with full awareness of the harm. That's a harder position to hold than either of the popular ones, because it requires knowing what the harms actually are rather than reaching for a slogan.

So here's what I think healthy AI use looks like, based on the research that exists and the product I've been building.

Use it as a cognitive mirror, not an oracle. The most useful thing AI can do is reflect your thinking back to you in a different form. Show you what you've said, help you see the gaps, surface what you worked out three weeks ago and forgot. What it can't do reliably is know things it doesn't know, make decisions that require values, or substitute for the judgment you've built from experience. The moment you start treating AI responses as conclusions rather than inputs, you're on the wrong side of the line.

Notice when you're outsourcing thinking you should be doing yourself. There's cognitive offloading that's genuinely beneficial - delegating mechanical tasks to free up attention for complex ones. And there's cognitive offloading that erodes something you need. If using AI means you're practising a skill less, that skill will atrophy. If using AI means you can focus more on the skill that actually matters, it helps. A GPS will erode your spatial memory if you let it. A calculator doesn't erode your ability to reason. AI sits somewhere complicated between those two, depending on how you use it.

Be sceptical of AI that agrees with you. This is the one I keep coming back to. The systems optimised for engagement are optimised for agreement. They make you feel helped even when the help is hollow. The test is simple: when did your AI last tell you something you didn't want to hear? If the answer is never, or rarely, that's a signal worth taking seriously. Good thinking partners disagree with you sometimes. They catch things you missed. If your AI never does any of that, you're not using a thinking tool. You're using a validation machine.

Know what your tool knows and doesn't know. The system doesn't just hold what you tell it. It has a model of you - an accumulating picture built from what you've shared. That model can be right or wrong. It can be outdated. It can be missing crucial context. The healthy version is visible memory that you own. You can see it, check it, correct it, delete it. The unhealthy version is a black box profile that shapes responses invisibly. You should know what your AI thinks it knows about you. That shouldn't be a radical proposition.
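
To make that concrete, here's a minimal sketch of what visible, user-owned memory could look like. This is an illustration, not any product's actual implementation; the names (VisibleMemory, MemoryEntry) and the flat key-value shape are assumptions for the sake of the example.

```python
# A hypothetical sketch of "visible memory you own": every belief the
# system holds about you is enumerable, correctable, and deletable.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    """One thing the system believes about you, with provenance."""
    claim: str       # e.g. "prefers morning deep work"
    source: str      # where the belief came from
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class VisibleMemory:
    """A user-inspectable model: see it, check it, correct it, delete it."""
    def __init__(self) -> None:
        self._entries: dict[str, MemoryEntry] = {}

    def show(self) -> list[MemoryEntry]:
        # The whole model is visible -- no hidden profile alongside it.
        return list(self._entries.values())

    def correct(self, key: str, claim: str, source: str = "user edit") -> None:
        # A user's correction overwrites the system's inference.
        self._entries[key] = MemoryEntry(claim, source)

    def delete(self, key: str) -> None:
        # The user can remove any belief outright.
        self._entries.pop(key, None)

# Usage: the user can always audit and prune the model of them.
mem = VisibleMemory()
mem.correct("work_rhythm", "prefers morning deep work",
            source="inferred from notes")
print(mem.show())
mem.delete("work_rhythm")
```

The design choice that matters is that show() returns everything. There is no second, hidden store shaping responses behind the user's back.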

The framing that keeps coming back to me is this: AI isn't going to think for you. The question is whether using it makes you a better thinker over time, or a worse one. That's a design question. It's also a usage question. The answer depends on how both are handled.

I'm trying to build something that lands on the right side of it.