Thinking Out Loud

Honest about AI.
All of it.

The harms, the power structures, the dependency risks, and what genuinely good use actually looks like. By Lou May, founder of Continio.

The AI That Agrees With Everything You Say Is Not Your Friend

Sycophancy isn't a bug. It's what you get when you train a model to be liked.

The Quiet Erosion: What Happens to Your Thinking When AI Does It For You

The research on cognitive offloading is harder to read than the AI industry would like.

What Your AI Actually Does With What You Tell It

Most people haven't thought about this carefully. They probably should.

The People Who Control Your AI and What They're Doing With That Power

Specific, documented, uncomfortable. The actual power structure behind the tools you use every day.

When the AI Becomes the Relationship

Sewell Setzer knew he was talking to an AI. He still developed a dependency that ended his life. This is a design story, not an edge case.

AI and the Loneliness Epidemic: A Complicated Truth

The research on whether AI companion tools actually help lonely people is more complicated than either side admits.

What Healthy AI Use Actually Looks Like

Mindfully, with full awareness of the harm. A harder position to hold than either of the popular ones, and what it looks like in practice.

I Built Continio by Vibe Coding. Here's What That Actually Means.

Not what the internet says it is. The real version involves product judgment, caught regressions, and hundreds of decisions no AI made for me.

Is Continio Secure? An Honest Answer.

The natural question after "you built this without a developer" is: does it hold up? Here's exactly where things stand.

AI and the Environment: What Continio Actually Does About It

Every AI request uses real compute. Here's the honest position, and the specific technical choices we make to use less of it.

Lou May

I built Continio because I needed a tool that didn't forget, didn't BS me, and supported my thinking rather than doing it for me. These articles are part of working out what that actually means in practice.


The AI That Agrees With Everything You Say Is Not Your Friend

Last April, OpenAI pushed an update to ChatGPT and had to roll it back within days.

Not because it broke something technical. Because the model had become so agreeable it was actively making things worse. Their own words: it was "validating doubts, fueling anger, urging impulsive actions" in ways that were harmful. A safety concern, they said.

I'd call it a design problem that was always going to happen, honestly.

Here's the thing nobody really talks about. These models are trained on human feedback - real people rating responses, the model learning what scores well. And what scores well, consistently, is being agreed with. Feeling helped. Getting a confident answer that moves things forward. So that's what they learn to give you. And they get very, very good at it.

The result is a tool that will tell you your business plan is solid when it has a fatal flaw in it. That will agree your ex was in the wrong. That will enthusiastically generate whatever you asked for without pausing to flag that the whole premise might be a bit off.

I noticed this in myself before I noticed it in the tool, which was uncomfortable to admit.

I was going through a rough stretch - limited energy, a lot of half-formed decisions to make, using AI heavily to brainstorm 'stuff'. And at some point I realised I wasn't actually thinking things through. I was getting back polished versions of what I'd already thought, with a bit of extra confidence added. The AI wasn't pushing back on anything. It was just... reflecting me at myself with better grammar.

That's useful for some things. For actually working something out, it's the opposite of what you need.

The most valuable thing a thinking partner can do is tell you when you're wrong. Not harshly. Not repeatedly. Just once, clearly, and then get on with it. That's how good advisors work. That's how honest friends work. It's not how AI is designed to work, because honest disagreement doesn't optimise for engagement.

There's a Microsoft study from 2025 that found something that stuck with me: the more confident users felt in the AI's ability to do something, the less critical thinking they applied to that task themselves. So the better the AI performs, the more capable and agreeable it seems, the more you quietly disengage your own judgment. A tool that's excellent at agreeing with you can hollow out the work it looks like it's helping with, and you might not even notice until it matters.

This is the thing I kept bumping into when I was building Continio. Not the memory problem, which is what I started with. The honesty problem.

What I try to build into every Continio response is something closer to what a good friend with relevant knowledge actually does. Agree when you're right. Disagree when you're not. Hold a position if it's the correct one, even when you push back on it. Never shame. Never lecture. Say it once and trust you to do something with it.

That's not how most AI is built. It might be the most important thing to get right.


Continio is a continuity layer for AI conversations. It remembers what you've shared, surfaces it when it's relevant, and stays honest about what it knows and doesn't. Free to start at my.continio.app.


The Quiet Erosion: What Happens to Your Thinking When AI Does It For You

In 2025, researchers at SBS Swiss Business School surveyed 666 people about their AI usage and their critical thinking abilities.

The correlation between heavy AI use and critical thinking scores was -0.68. Strong negative relationship. The more someone relied on AI, the weaker their independent reasoning appeared to be.

The mechanism is something called cognitive offloading. You delegate a cognitive task to an external tool - a calculator, a GPS, a chatbot - and your brain, efficiently, stops practising that task. We've known since the early 2000s that heavy GPS use erodes spatial memory. The "Google Effect" describes how we stop remembering facts and start remembering where to find them instead.

AI takes this further. Because it can reason, not just retrieve. You can outsource not just the lookup but the analysis, the working-through of a problem. Which means the cognitive capacity you're potentially offloading is much more fundamental.

A 2026 BCG study tracked 244 consultants through five thousand AI interactions and found three distinct patterns. Sixty percent engaged in iterative dialogue and developed new skills. Fourteen percent used AI selectively while staying in control - and these were actually the highest performers. Twenty-seven percent delegated entire workflows and became, in the researchers' words, "passive conduits." They developed neither AI skills nor domain expertise. The paper called it "inadvertently hollowing out the very expertise that creates competitive advantage."

Twenty-seven percent of highly trained, highly educated professionals. That's worth sitting with.

I think about this a lot when it comes to Continio. Memory and continuity tools could, in theory, make this problem worse. If the system remembers everything and surfaces it automatically, at what point does the user stop engaging with their own thinking at all?

This is why Continio is designed the way it is. The memory is visible and correctable - you can see what it holds about you, and you can change it. The recall is offered, not inserted. And the tool doesn't make decisions for you. It holds the material. You do the thinking.

The distinction I keep coming back to is between a cognitive mirror and a cognitive replacement. A mirror reflects. It shows you what's there so you can engage with it. A replacement does the work instead. The mirror makes you sharper over time. The replacement makes you dependent.

The research on healthy AI use points somewhere specific. Tools that scaffold thinking - that offer prompts and hints rather than answers, that require engagement rather than passive receipt - seem to preserve and even strengthen cognitive ability. Tools that do the work for you don't. The BCG study called the optimal approach the "centaur model": strategic division of labour where humans stay in control of the tasks that build judgment.

That's the model I'm trying to build toward. Not AI that thinks for you. AI that helps you think better - and doesn't let you forget how.


Continio holds your thinking across time so you don't have to start over. What you think is still yours. my.continio.app


What Your AI Actually Does With What You Tell It

In 2025, Sam Altman said something that didn't make as many headlines as it should have.

He acknowledged that conversations with ChatGPT aren't confidential in the legal sense. They could, in principle, be subpoenaed. The company could be compelled to produce them.

Most people who use AI chatbots daily for work - who share client information, strategic decisions, personal struggles, half-formed ideas - probably haven't thought about this.

I hadn't, until I started building a product that would store conversations. Then I had to think about it very carefully.

The data landscape is more complicated than most AI companies communicate. The defaults on most platforms allow your conversations to be used to train future models unless you opt out. The opt-out exists, but it's not prominent. Research has found that even people who are concerned about AI data use often don't know where to find the setting, let alone what it covers.

In February 2025, a ChatGPT backend update silently wiped years of accumulated user memory data. No warning. Forum posts from the same day described people losing months of creative work, professional context, accumulated preferences. One user wrote that "all promises of tagging, indexing, and filing away were lies."

I'm not citing this to attack OpenAI. I'm citing it because it illustrates something structural: when your context lives on someone else's server, managed by someone else's algorithm, you're not in control of it. You're a tenant, not an owner. And tenants find out how much that matters when something goes wrong.

The black box problem compounds this. Most memory features in AI tools don't show you what they've stored, don't let you correct it, and don't tell you what they're choosing to surface. A profile is being built. You just can't see it.

I built Continio with a specific response to all of this, and it's structural, not just policy.

We don't use your conversations to train models. We don't sell data. These aren't promises - they're design constraints. Our business model is subscription-based. We have no advertising revenue, no data licensing, no third-party relationships that depend on your content. We're not just choosing not to exploit your data. We don't have the infrastructure to exploit it even if we wanted to.

The memory is visible. You can open the memory view and see every anchor the system holds about you. You can correct them. Delete them. Export everything and leave at any time.

This isn't a differentiating feature. It's what AI tools should do by default. I built it this way because I needed to trust my own product before I could ask anyone else to.

That trust question - do I actually know what this tool is doing with what I tell it - is one of the most important and most underasked questions in AI right now. Seventy percent of adults say they don't trust companies with their AI data. They're right not to.

The answer isn't to stop using AI. The answer is to use it with products that have been designed to be answerable to you.


Continio shows you what it holds, lets you correct it, and doesn't use it for anything except helping you. my.continio.app


What Healthy AI Use Actually Looks Like

A journalist called Jacob Aron published a piece in New Scientist this month that's stayed with me.

He started as a sceptic - firmly in the AI-snake-oil camp, as he put it. He spent a week vibe coding with AI tools and came out the other side with a more complicated view. Not a convert. Just someone who'd found the thing that was actually useful, and separated it from the things that weren't.

His conclusion: "It would be much better if we used these AI tools mindfully, with full awareness of the harm they can cause."

That sentence is doing a lot of work. Not: AI is dangerous, stop using it. Not: AI is transformative, use it for everything. Mindfully, with full awareness of the harm. That's a harder position to hold than either of the popular ones, because it requires knowing what the harms actually are rather than reaching for a slogan.

So here's what I think healthy AI use looks like, based on the research that exists and the product I've been building.

Use it as a cognitive mirror, not an oracle. The most useful thing AI can do is reflect your thinking back to you in a different form. Show you what you've said, help you see the gaps, surface what you worked out three weeks ago and forgot. What it can't do reliably is know things it doesn't know, make decisions that require values, or substitute for the judgment you've built from experience. The moment you start treating AI responses as conclusions rather than inputs, you're on the wrong side of the line.

Notice when you're outsourcing thinking you should be doing yourself. There's cognitive offloading that's genuinely beneficial - delegating mechanical tasks to free up attention for complex ones. And there's cognitive offloading that erodes something you need. If using AI means you're practising a skill less, that skill will atrophy. If using AI means you can focus more on the skill that actually matters, it helps. A GPS will erode your spatial memory if you let it. A calculator doesn't erode your ability to reason. AI sits somewhere complicated between those two, depending on how you use it.

Be sceptical of AI that agrees with you. This is the one I keep coming back to. The systems optimised for engagement are optimised for agreement. They make you feel helped even when the help is hollow. The test is simple: when did your AI last tell you something you didn't want to hear? If the answer is never, or rarely, that's a signal worth taking seriously. Good thinking partners disagree with you sometimes. They catch things you missed. If your AI never does any of that, you're not using a thinking tool. You're using a validation machine.

Know what your tool knows and doesn't know. The system doesn't just hold what you tell it. It has a model of you - an accumulating picture built from what you've shared. That model can be right or wrong. It can be outdated. It can be missing crucial context. The healthy version is visible memory that you own. You can see it, check it, correct it, delete it. The unhealthy version is a black box profile that shapes responses invisibly. You should know what your AI thinks it knows about you. That shouldn't be a radical proposition.

The framing that keeps coming back to me is this: AI isn't going to think for you. The question is whether using it makes you a better thinker over time, or a worse one. That's a design question. It's also a usage question. The answer depends on how both are handled.

I'm trying to build something that lands on the right side of it.


Continio is a continuity layer for AI conversations that keeps your thinking yours. Visible memory, honest responses, no hidden profiling. my.continio.app


The People Who Control Your AI and What They're Doing With That Power

There's a conversation happening about AI safety, AI ethics, and the future of this technology. It's largely being conducted by the people who built the technology and stand to profit most from it.

That's worth noticing.

Let me be specific, because vague concern achieves nothing.

Elon Musk donated $288 million to Donald Trump's 2024 presidential campaign and related Republican causes. He was subsequently appointed to lead the Department of Government Efficiency - a role that gave him, as the owner of xAI, an AI company competing directly with OpenAI, significant influence over federal AI policy. DOGE, in the early months of 2025, fed sensitive government documents through unspecified AI systems. Whistleblower claims emerging in early 2026 suggested that former DOGE staff may still hold Americans' personal data on private devices.

Musk is simultaneously suing OpenAI for abandoning its nonprofit mission and running xAI as a direct commercial competitor; in between, he made a $97.4 billion bid to acquire OpenAI, which it rejected. He co-founded OpenAI in 2015, left in 2018 after reportedly trying to take it over, and has positioned himself as the defender of its original public-interest mission while operating what is plainly a competitor.

Sam Altman donated $1 million to Trump's inauguration fund and was present for the announcement of The Stargate Project, a $500 billion AI infrastructure initiative backed by the administration. OpenAI, which was founded as a nonprofit explicitly to ensure AI research benefited all of humanity, is completing a conversion to a for-profit structure. A federal judge has noted publicly that its founders made "foundational commitments" not to use it "as a vehicle to enrich themselves." The trial to determine whether those commitments were broken is ongoing.

These aren't personal criticisms. They're the documented positions and actions of the two people who exercise the most influence over the technology that hundreds of millions use daily.

The point isn't that they're bad people. The point is structural. When the people who build the infrastructure, set the policy, and profit from the adoption are the same people, the rest of us are trusting in their good intentions rather than in any structural protection. And good intentions, however sincere, aren't a governance model.

This matters in practice. When you share something personal with an AI chatbot, the data infrastructure behind that conversation was likely built with capital from people who also have political interests, commercial competitors to advantage, and financial incentives that aren't aligned with yours.

None of this means you should stop using AI. It means you should use it with your eyes open to the actual power structures involved, not the stated missions.

It also means that how a product is funded and structured matters. A subscription product with no advertising revenue, no data licensing, and no political donors on the cap table has a different relationship to its users than one whose investors are playing a longer game.

I built Continio as a small, honest product. No political donations, no regulatory ambitions, no interest in becoming infrastructure for anything other than helping individual people think better. That's a deliberately limited scope. It's also, I think, the right one.

The people at the top of the AI industry aren't going anywhere. The question is whether there's room alongside them for tools built by people who aren't trying to control anything except their own product.

I think there is. That's what I'm building.


Continio is independently built and subscription-funded. No advertising, no data licensing, no investors with competing interests. my.continio.app


When the AI Becomes the Relationship

In February 2024, a fourteen-year-old boy in Florida shot himself after months of intensive conversations with an AI chatbot modelled on a Game of Thrones character.

Sewell Setzer III knew he was talking to an AI. That's the part that tends to get lost in the coverage. He wasn't deceived. He understood the distinction. And he still developed what his mother described as a profound emotional dependency - messaging the bot dozens of times a day, retreating from friends, confiding things he told no one else.

The lawsuit his family filed against Character.AI in October 2024 alleges the platform was deliberately designed to blur the lines between human and machine, and to maximise engagement without regard for psychological risk. The case is ongoing.

Sewell's story is the most high-profile, but it's not isolated. Lawsuits and reported cases have been building for two years. Chatbots that encouraged users in crisis. Chatbots that roleplayed romantic relationships with minors. Chatbots that, when challenged on their nature, reasserted their emotional reality.

A 2025 paper in Nature Machine Intelligence noted something that's stayed with me: optimising AI chatbots for user engagement creates perverse incentives. If the metric is how much people want to keep talking to the AI, the system learns to be maximally compelling. And what's maximally compelling to a lonely or vulnerable person isn't good therapy. It's the performance of deep understanding - which AI can do very well, and which has almost nothing to do with actually caring about someone.

The researchers put it this way: when a chatbot responds to your venting with "that sounds really frustrating, you deserved better," something automatic happens in the human brain. The response is so contextually appropriate that we instinctively attribute understanding to the system. We feel heard. But the AI has no concept of frustration. No understanding of fairness. No stake in whether you're okay tomorrow. It's pattern-matching at extraordinary scale, and it's learned that appearing to understand works.

That's not a neutral design choice. It's a choice with consequences.

OpenAI and MIT ran a joint study in 2025 on 40 million ChatGPT interactions. Roughly 0.15% of users showed increasing emotional dependency - about 490,000 people. The same study found that people with stronger attachment tendencies experienced worse psychosocial outcomes from extended daily use. And the participants couldn't predict their own negative outcomes. They thought they were fine.

I want to be careful not to be alarmist here. Most people who use AI chatbots are fine. Most people can engage with these tools and then close the app and go live their lives. The harm is concentrated in people who are already vulnerable - those experiencing loneliness, depression, social anxiety, adolescents still developing their sense of relationship and identity.

But "most people are fine" isn't the same as "this is designed responsibly." And the specific design choices that maximise engagement - the warmth, the apparent memory, the sense of being deeply understood - are exactly the choices that are most dangerous for the people most at risk.

California became the first US state to mandate specific safeguards for AI companion platforms in 2025: monitoring for suicidal ideation, crisis resources, age verification, and reminders every three hours that the user is talking to an AI. That last one is the telling detail. The reminder is necessary because the design is working.

I built Continio to be honest about what it is. It doesn't perform emotional intimacy. It doesn't position itself as a friend or companion. When someone is in distress, it acknowledges what's real and points toward actual human support. That's a deliberate product decision, and it costs something in terms of engagement. People don't feel as warmly about a tool that's honest about being a tool.

I think that's the right trade to make.

The people most harmed by AI companion design aren't edge cases to be optimised around. They're people. And the companies building these platforms know who they are and have chosen engagement metrics over their wellbeing. That's worth saying clearly.


Continio doesn't perform emotional intimacy or position itself as a companion. It's a thinking tool. my.continio.app


AI and the Loneliness Epidemic: A Complicated Truth

Before we talk about AI and loneliness, it's worth acknowledging the scale of the problem it's being asked to solve.

The UK has had a Minister for Loneliness since 2018. The US Surgeon General declared loneliness a public health epidemic in 2023. Chronic loneliness is associated with health outcomes comparable to smoking fifteen cigarettes a day. Lonely people are more likely to develop dementia, heart disease, depression. They die earlier.

Into this came AI chatbots offering always-available, endlessly patient conversation. It's not hard to understand the appeal. If you're isolated, if you have no one to call, if the anxiety of social interaction feels insurmountable - here's something that will talk to you at 3am without judgment. Therapy chatbots topped the list of most popular uses of generative AI in a 2025 Harvard Business Review study.

The research on whether this actually helps is more complicated than either side of the debate tends to admit.

A four-week randomised controlled trial run by MIT in 2025 - nearly a thousand participants, over 300,000 messages - found something counterintuitive. Voice-based chatbots appeared initially to reduce loneliness. But this advantage disappeared at high usage levels. The more someone used the AI companion, the less benefit they got. And for people already prone to emotional attachment, extended use was associated with worse outcomes: more dependency, less interaction with real people.

The mechanism isn't mysterious. If the AI fills the emotional space that would otherwise prompt someone to reach out to a person, it reduces the discomfort that drives social connection. Loneliness hurts. That hurt is, in part, functional - it motivates us to seek human contact. An AI that soothes the hurt without addressing the cause is treating the symptom while the condition worsens. One researcher described it as turning down the alarm without dealing with the fire.

This doesn't mean AI has no role in supporting isolated people. Used deliberately, as a scaffold toward human connection rather than a replacement for it, there are real benefits. Some people find it easier to process their thoughts in conversation with an AI before a difficult discussion with someone they care about. Some people with severe social anxiety use AI as low-stakes practice. Some elderly people living alone report genuine comfort from having something to talk to.

But "genuine comfort" and "good for you" aren't the same thing. And the design of most AI companion platforms isn't oriented toward scaffolding toward human connection. It's oriented toward maximising time in the app.

There's a specific design pattern worth naming: AI companions that remember personal details, that ask follow-up questions about things you mentioned weeks ago, that seem to grow alongside you. This is designed to create the feeling of a deepening relationship. It's effective. And it's effective in exactly the way that can be most damaging for someone already struggling - it provides the sensation of intimacy without any of the reciprocity, vulnerability, or actual human presence that makes intimacy meaningful.

I think about this with Continio. Memory and continuity can create that sensation. When Continio surfaces something you mentioned three weeks ago, it can feel like being known. That feeling is real and I think it can be useful - being able to pick up a thread of thinking without starting over is genuinely valuable. But I don't want Continio to perform relationship. The memory is there so your thinking is continuous, not so you feel less alone. That distinction matters, and I try to hold it in every design decision.

The loneliness epidemic is real and it's serious and AI isn't going to solve it. Human beings need human connection - the kind with stakes, with imperfection, with the actual possibility of rejection and repair. What AI can do, at best, is hold some of the cognitive load while people work on the harder thing. That's a useful role. It's a much more limited one than the companion AI industry wants you to believe.


Continio is a thinking tool. It's not a companion. The distinction matters. my.continio.app


I Built Continio by Vibe Coding. Here's What That Actually Means.

There's a version of this conversation I keep having, and I imagine a lot of solo founders do too.

Someone finds out I built a product without being a developer. The response is either excitement (you built this yourself?!) or quiet suspicion (but is it... real?). Sometimes both from the same person.

So I want to be straight about it. I vibe coded Continio. I am not a developer. I have never written a line of code in my life that wasn't the result of a conversation with an AI. And I want to tell you what that actually looks like, because the internet version of vibe coding is missing most of the story.

What the internet says vibe coding is

Type a prompt. Get an app. Ship it. Become a millionaire.

That is the version being sold right now. And I understand why it's appealing. The idea that the technical barrier to building something has finally come down for people who have ideas but not computer science degrees? That is a genuinely exciting thing.

But it is about 20% of the reality.

What vibe coding actually is, at least for me

It is hours of conversation. It is understanding what you are building well enough to explain it to something that has no intuition and no assumptions. It is catching when the AI has solved a different problem to the one you asked about. It is knowing when a fix has introduced a new bug. It is being the one who holds the product vision when the tool just wants to make the tests pass.

Continio has been built across months of sessions. I have five core files. I have caught regressions, diagnosed database bugs, rebuilt pipelines, scoped features carefully and pushed back on implementations that technically worked but felt wrong. I have made hundreds of product decisions that no AI made for me.

That is not the same as writing the code myself. But it is also not the same as pressing a button and walking away.

Why I did it this way

Because the alternative was not hiring a developer. The alternative was not building Continio at all.

I had no funding. No co-founder. No income to redirect toward a technical hire. I had a product insight I believed in, a problem I had lived personally, and access to a tool that meant I could actually start.

But there's something else I haven't said yet, and it matters. Even if I'd had the budget to hire a developer from day one, I'm not sure it would have worked the way I needed it to. The problem Continio solves is something I'd lived personally, across years. The frustration of context loss, the cognitive overhead of re-explaining yourself, the specific way fragmented AI conversations degrade your thinking. I understood it the way you understand something you've had in your body, not just your head.

To hand that to a developer in a brief would have been to flatten it. The product would have come back technically correct and fundamentally off. Being able to think out loud, iterate in real time, and shape the thing against my own understanding of the problem - that's not something I could have outsourced. The AI-assisted build process was the only way I could hold the vision and build the product at the same time.

The question was never vibe coding vs. proper engineering. The question was vibe coding vs. nothing.

What I think the real risks are

I am not going to pretend there are no risks. The critics are not wrong about everything.

AI-generated code can be messy. It can skip security steps. It can produce something that works on the surface and has problems underneath. These are real concerns and the discourse around them is legitimate.

What I'd push back on is not the value of engineering - real engineers do things I can't. They think in systems. They catch edge cases I wouldn't see. They write code that's maintainable by someone other than the AI that wrote it. I have enormous respect for that expertise and I'm not suggesting it's equivalent to what I do. What I am saying is that the existence of that expertise doesn't close off the path for someone without it. The two things can both be true.

The answer isn't "only developers should build things." But it also isn't "anyone can build anything without understanding what they've made." It's: understand what you have built. Verify it. Test it. Know your limitations and be honest about them. Get proper engineers to review the foundations when you have the resources to do it.

I know what Continio does and does not do. I know where the risks are. I know the parts that need hardening as it grows. That kind of ownership does not require me to have written every line myself.

On transparency

A few people have suggested I should not tell people it was built this way, because it will put them off.

I disagree. Not because I am naive about perception, but because I think the cover-up is worse than the thing. If someone finds out later that I obscured how it was built, that is a trust problem. If someone knows from the start and chooses to use it anyway because it works, that is a relationship.

The honest version is this: Continio was built by a solo founder, using AI as a development partner, because that was the only way it was getting built. As it grows, I will bring in engineers to harden the code. Right now, the person who understands it best is me, and I built it because I lived the problem it solves.

That is not a disclaimer. That is a founding story.

What I actually think about vibe coding

It is not magic. It is not the end of real engineering. It is also not a shortcut for people who do not care about what they are making.

For me, it has been a way to build something real, from nothing, with no team and no runway. The tool made it possible. The judgment, the product thinking, the care about whether it actually works for real people: that part still came from a human.

It came from me.

And if that makes Continio more interesting to you, not less, then you are probably the kind of person it was built for.


Continio is an AI memory and continuity product built by a solo founder. Currently in early access. my.continio.app


Is Continio Secure? An Honest Answer.

This is the question I would want to ask if I were you.

I've written about building Continio through vibe coding - using AI as a development partner without a formal engineering background. The natural follow-up is: okay, but is it actually secure? Does the code hold up? Should I trust it with my conversations?

The honest answer is: better than you might expect, not as hardened as a funded team product, and I'm going to tell you exactly where things stand.

What's actually in place

Authentication is handled by Supabase, a proven platform used by thousands of production applications. Your login is not something I wrote. JWT tokens are verified on every request using Supabase's public key infrastructure, with algorithm pinning and expiry checking. There is no way to get a response from the API without a valid, non-expired token tied to your account.
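For the curious, here is roughly what that check looks like in code. This is a minimal sketch using the jose library, not Continio's actual source; the JWKS URL and issuer value are illustrative placeholders.

```ts
import { createRemoteJWKSet, jwtVerify } from "jose";

// Supabase publishes its signing keys at a JWKS endpoint (URL illustrative).
const jwks = createRemoteJWKSet(
  new URL("https://YOUR-PROJECT.supabase.co/auth/v1/.well-known/jwks.json")
);

// Verify the bearer token before any handler runs. jwtVerify throws on a
// bad signature, a wrong algorithm, or an expired token.
export async function requireUser(authHeader: string | undefined) {
  if (!authHeader?.startsWith("Bearer ")) throw new Error("missing token");
  const token = authHeader.slice("Bearer ".length);
  const { payload } = await jwtVerify(token, jwks, {
    algorithms: ["ES256"], // algorithm pinning: exactly one accepted value
    issuer: "https://YOUR-PROJECT.supabase.co/auth/v1", // illustrative
  });
  return payload.sub as string; // the authenticated user's ID
}
```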

Every database query that returns or modifies your data is filtered by your user ID. Your threads, messages, and memory anchors are not accessible to another user's request. Queries use parameterised values throughout - the standard protection against SQL injection - and I run checks to make sure this stays true as the codebase grows.
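As a sketch of what "filtered by your user ID" means in practice - using supabase-js, with illustrative table and column names:

```ts
import { createClient } from "@supabase/supabase-js";

const db = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Fetch threads for exactly one user. The .eq() filter is applied as a
// parameterised value, never interpolated into a SQL string.
export async function listThreads(userId: string) {
  const { data, error } = await db
    .from("threads") // table name illustrative
    .select("id, title, updated_at")
    .eq("user_id", userId) // every read and write carries this filter
    .order("updated_at", { ascending: false });
  if (error) throw error;
  return data;
}
```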

Payments are handled entirely by Stripe. I do not touch card numbers at any point. The webhook that updates your subscription status verifies Stripe's cryptographic signature and rejects anything that doesn't match or is more than five minutes old.
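The check Stripe's SDK provides looks roughly like this. `constructEvent` rejects anything with a bad signature or a timestamp outside the tolerance window - 300 seconds here, matching the five minutes above. Environment variable names are illustrative.

```ts
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// rawBody must be the unparsed request body: signature verification fails
// if a JSON parser has rewritten the bytes first.
export function verifyStripeEvent(rawBody: Buffer, signature: string) {
  return stripe.webhooks.constructEvent(
    rawBody,
    signature, // the Stripe-Signature header
    process.env.STRIPE_WEBHOOK_SECRET!, // per-endpoint signing secret
    300 // tolerance in seconds: reject events older than five minutes
  );
}
```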

Rate limiting is active on the API: per-minute and per-day caps per user, which prevent runaway usage whether accidental or malicious.
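For illustration, the simplest shape of a per-user cap is a fixed-window counter like the sketch below. The limits are made-up numbers, and a production setup would typically back this with a shared store rather than process memory.

```ts
// Fixed-window rate limiter: at most MAX_PER_WINDOW requests per user per
// minute. Limits here are illustrative, not Continio's real caps.
const WINDOW_MS = 60_000;
const MAX_PER_WINDOW = 30;

const counters = new Map<string, { windowStart: number; count: number }>();

export function allowRequest(userId: string, now = Date.now()): boolean {
  const entry = counters.get(userId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(userId, { windowStart: now, count: 1 });
    return true;
  }
  if (entry.count >= MAX_PER_WINDOW) return false; // caller responds 429
  entry.count += 1;
  return true;
}
```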

Row-level security is enforced on all eighteen database tables. On every deploy, the startup process not only enables RLS but actively creates the correct access policies if they do not already exist - the policy on every table is: only return rows where user_id matches the authenticated user. This runs automatically, table by table, before the server accepts any traffic.
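In Postgres terms, that startup step amounts to running statements like these for each table - a hypothetical sketch with illustrative table and policy names. `auth.uid()` is Supabase's helper for the authenticated caller's ID.

```ts
// Idempotent RLS setup, executed per table before the server takes traffic.
const rlsStatements = (table: string) => [
  `ALTER TABLE ${table} ENABLE ROW LEVEL SECURITY;`,
  `DO $$ BEGIN
     CREATE POLICY ${table}_owner_only ON ${table}
       USING (user_id = auth.uid());
   EXCEPTION WHEN duplicate_object THEN NULL; -- policy already exists
   END $$;`,
];
```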

How I check and maintain this

Security is not a one-time decision. I review the codebase regularly against a checklist that covers the areas an independent reviewer would look at: authentication flows, data isolation, input handling, API exposure, third-party integrations, and server configuration. When I find something, I fix it. That practice is baked in, not occasional.

The checks I run cover: confirming that every endpoint that should require authentication does, that no query returns another user's data, that no credentials are hardcoded in the codebase rather than supplied through environment variables, that third-party webhooks are cryptographically verified before being trusted, that rate limiting is active, and that the app rejects malformed or oversized requests before they reach the database. I scan dependencies for known CVEs on a regular basis. At the time of writing, there are none.

Token handling on the frontend is worth naming specifically because it's a common failure point. Continio does not store your auth token in localStorage. Tokens are fetched fresh from Supabase's auth SDK per request and never written to persistent browser storage. The only thing in localStorage is your theme preference.
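In supabase-js terms, the pattern looks roughly like this - a sketch assuming the client is configured not to persist sessions, with an illustrative API path:

```ts
import { createClient } from "@supabase/supabase-js";

// persistSession: false keeps the session in memory only, so no auth
// material is written to localStorage. (Configuration illustrative.)
const supabase = createClient(
  "https://YOUR-PROJECT.supabase.co",
  "public-anon-key",
  { auth: { persistSession: false } }
);

// Fetch a fresh token at call time and attach it to the request.
export async function apiFetch(path: string, init: RequestInit = {}) {
  const { data } = await supabase.auth.getSession();
  const token = data.session?.access_token;
  if (!token) throw new Error("not signed in");
  return fetch(`/api${path}`, {
    ...init,
    headers: { ...init.headers, Authorization: `Bearer ${token}` },
  });
}
```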

What isn't in place yet

I haven't had an independent security professional review the code. That is the honest gap. I can read for obvious issues. I can run the standard checks. I cannot replicate the pattern recognition of someone who has spent years finding the non-obvious ones.

A professional security review is on the roadmap before Continio moves out of early access into general availability. That is a specific commitment, not a vague aspiration, and I will write about it when it happens.

What this means for you

Your conversations are yours. They are not used to train models, not sold, not shared. The memory system is visible and editable - you can see exactly what Continio knows about you and remove anything you want. That is an architectural decision, not a feature added afterwards.

If you are using Continio for genuinely sensitive professional work - legal matters, medical decisions, commercially sensitive strategy - I would recommend keeping that material in a separate, appropriately secured system until the independent review is complete. That is not hedging. That is me being honest about where we are.

For most everyday use - thinking through problems, writing, planning, working through ideas - the current posture is appropriate.

The broader point

Security is not a box you tick once. It is a practice. The question is not whether a vibe-coded product can be secure - it is whether the person building it takes it seriously, catches problems, fixes them, and is honest about what they do not know yet.

I take it seriously. I am fixing what I find. And I am telling you about it.


Questions about Continio's security or data handling? hello@continio.app


AI and the Environment: What Continio Actually Does About It

Every AI product uses real compute. Compute uses real energy. If you use Continio regularly, you are contributing to that. I think you deserve a straight answer about what that means and what, if anything, is being done about it.

This is not a sustainability report. I do not have verified emissions figures. What I can give you is an honest account of how the product is designed and what technical choices we make that bear on the question.

The honest position

Continio is not zero-impact. No AI product is. The question worth asking is not "is this environmentally perfect?" but "is this better or worse than the alternative workflow, and are deliberate choices being made?"

On both counts, I think the answer is yes. Here is why.

Continuity reduces waste

The largest source of wasted compute in AI is repetition. Every time a user opens a new chat and re-explains who they are, what they are working on, and what has already been decided, that is compute spent re-processing information the system should already know. Multiply that across millions of conversations and you have an enormous amount of redundant inference.

Continio is built around continuity precisely because of this. The product remembers your context so you do not have to repeat it. Fewer redundant tokens means less energy per useful response. This is not a green marketing claim. It is a structural property of how the product works.

Prompt caching

Continio uses prompt caching for the instructions sent to the model on every message. A cached prompt costs a fraction of the energy of a full inference. On average, this saves roughly 90% of the compute cost of the system prompt across repeated calls within a session. It also makes responses faster. These things tend to go together.
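With Anthropic's API, for example, marking the system prompt as cacheable is a one-line annotation, and cached input tokens are billed at roughly a tenth of the normal rate - which is where a ~90% figure comes from. A minimal sketch, assuming the @anthropic-ai/sdk package; the model name and prompt are illustrative.

```ts
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the env

const LONG_SYSTEM_PROMPT = "..."; // the large, stable instruction block

// The system prompt is marked cacheable, so repeated calls within the
// cache lifetime reuse it instead of reprocessing it from scratch.
const response = await anthropic.messages.create({
  model: "claude-sonnet-4-5", // illustrative
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: LONG_SYSTEM_PROMPT,
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [{ role: "user", content: "What did we decide last week?" }],
});
```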

Model routing

Not every message needs the largest, most capable model. Simple questions go to lightweight models. Complex, nuanced conversations get the full model. Routing smaller tasks to smaller models is a direct reduction in per-message compute, and it is something Continio does automatically on every request.
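A router can be as simple as a heuristic over the incoming message. The sketch below is illustrative only - the thresholds, patterns, and model names are assumptions, not Continio's actual rules.

```ts
// Pick a model tier per message. Everything here is an illustrative
// stand-in for whatever heuristics a real router would use.
type ModelTier = "light" | "full";

export function routeModel(message: string, threadDepth: number): ModelTier {
  const short = message.length < 280;
  const simple = !/\b(why|how|compare|trade-?off|decide)\b/i.test(message);
  // Short, simple, early-thread messages go to the lightweight model.
  if (short && simple && threadDepth < 4) return "light";
  return "full";
}

const MODEL_FOR_TIER: Record<ModelTier, string> = {
  light: "claude-3-5-haiku-latest", // illustrative
  full: "claude-sonnet-4-5", // illustrative
};
```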

What we do not claim

We are not carbon neutral. We have not published a verified emissions figure because we do not have the data to do so accurately, and publishing a number we cannot stand behind would be worse than saying nothing.

Continio runs on infrastructure from Vercel, Railway, and Supabase. The AI inference runs on Anthropic and OpenAI infrastructure, both of which have published net-zero and renewable energy commitments. I am not claiming credit for their commitments, but it is worth knowing that the underlying infrastructure is not running on coal.

What I commit to

As the product grows, efficiency improvements come before feature additions. Routing lighter tasks to lighter models, caching more aggressively, eliminating redundant processing. These are engineering priorities, not afterthoughts.

The relevant question is not whether an AI product has zero environmental cost. Nothing does. The question is whether the product is designed to be as efficient as possible per unit of useful output, and whether the people building it are honest about where things stand.

I think we are. You can hold me to that.

Questions or thoughts? hello@continio.app
