What Your AI Actually Does With What You Tell It
Most people haven't thought about this carefully. They probably should.
In 2025, Sam Altman said something that didn't make as many headlines as it should have.
He acknowledged that conversations with ChatGPT aren't confidential in the legal sense. They could, in principle, be subpoenaed. The company could be compelled to produce them.
Most people who use AI chatbots daily for work - who share client information, strategic decisions, personal struggles, half-formed ideas - probably haven't thought about this.
I hadn't, until I started building a product that would store conversations. Then I had to think about it very carefully.
The data landscape is more complicated than most AI companies communicate. On most platforms, the default setting allows your conversations to be used to train future models. An opt-out exists, but it's not prominent. Research has found that even people who are concerned about AI data use often don't know where to find the opt-out, let alone what it covers.
In February 2025, a ChatGPT backend update silently wiped years of accumulated user memory data. No warning. Forum posts from the same day described people losing months of creative work, professional context, accumulated preferences. One user wrote that "all promises of tagging, indexing, and filing away were lies."
I'm not citing this to attack OpenAI. I'm citing it because it illustrates something structural: when your context lives on someone else's server, managed by someone else's algorithm, you're not in control of it. You're a tenant, not an owner. And tenants find out how much that matters when something goes wrong.
The black box problem compounds this. Most memory features in AI tools don't show you what they've stored, don't let you correct it, and don't tell you what they're choosing to surface. A profile is being built. You just can't see it.
I built Continio as a specific response to all of this, and the response is structural, not just policy.
We don't use your conversations to train models. We don't sell data. These aren't promises - they're design constraints. Our business model is subscription-based. We have no advertising revenue, no data licensing, no third-party relationships that depend on your content. We're not just choosing not to exploit your data. We don't have the infrastructure to exploit it even if we wanted to.
The memory is visible. You can open the memory view and see every anchor the system holds about you. You can correct them. Delete them. Export everything and leave at any time.
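To make the idea concrete, here is a minimal sketch of what a user-owned memory store could look like. This is purely illustrative: the names (`Anchor`, `MemoryStore`, `view`, `correct`, `delete`, `export`) are hypothetical and not Continio's actual API. The point is that every operation the paragraph describes, inspect, fix, remove, take everything with you, is an ordinary, first-class operation.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Anchor:
    """One visible memory entry; the user can read, correct, or delete it."""
    key: str
    value: str

class MemoryStore:
    """Hypothetical sketch of a user-owned memory store: every anchor is
    inspectable, correctable, deletable, and exportable. Not Continio's
    real implementation."""

    def __init__(self) -> None:
        self._anchors: dict[str, Anchor] = {}

    def view(self) -> list[Anchor]:
        # Nothing hidden: the full set of anchors is always readable.
        return list(self._anchors.values())

    def correct(self, key: str, new_value: str) -> None:
        # Setting a key overwrites whatever the system previously held.
        self._anchors[key] = Anchor(key, new_value)

    def delete(self, key: str) -> None:
        self._anchors.pop(key, None)

    def export(self) -> str:
        # Full export as JSON, so the user can leave with everything.
        return json.dumps([asdict(a) for a in self._anchors.values()], indent=2)

store = MemoryStore()
store.correct("role", "freelance designer")
store.correct("timezone", "CET")
store.delete("timezone")
print(store.export())
```

The design choice worth noticing is that export and delete are just as cheap as read: there is no privileged, invisible copy of the profile that survives them.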
This isn't a differentiating feature. It's what AI tools should do by default. I built it this way because I needed to trust my own product before I could ask anyone else to.
That trust question - do I actually know what this tool is doing with what I tell it - is one of the most important and most underasked questions in AI right now. Seventy percent of adults say they don't trust companies with their AI data. They're right not to.
The answer isn't to stop using AI. The answer is to use it with products that have been designed to be answerable to you.