No. 05 Power & accountability

The People Who Control Your AI and What They're Doing With That Power

Specific, documented, uncomfortable. The actual power structure behind the tools you use every day.

5 March 2026 · 5 min read

There's a conversation happening about AI safety, AI ethics, and the future of this technology. It's largely being conducted by the people who built the technology and stand to profit most from it.

That's worth noticing.

Let me be specific, because vague concern achieves nothing.

Elon Musk donated $288 million to Donald Trump's 2024 presidential campaign and related Republican causes. He was subsequently appointed to lead the Department of Government Efficiency, a role that gave the owner of xAI, a direct commercial competitor of OpenAI, significant influence over federal AI policy. In the early months of 2025, DOGE fed sensitive government documents through unspecified AI systems. Whistleblower claims emerging in early 2026 suggested that former DOGE staff may still hold Americans' personal data on private devices.

Musk is simultaneously suing OpenAI for abandoning its nonprofit mission, made a $97.4 billion offer to acquire it (which OpenAI rejected), and runs xAI as a direct commercial competitor. He co-founded OpenAI in 2015, left in 2018 after reportedly trying to take it over, and now positions himself as the defender of its original public-interest mission while operating what is plainly a rival.

Sam Altman donated $1 million to Trump's inauguration fund and was present for the announcement of The Stargate Project, a $500 billion AI infrastructure initiative backed by the administration. OpenAI, which was founded as a nonprofit explicitly to ensure AI research benefited all of humanity, is completing a conversion to a for-profit structure. A federal judge has noted publicly that its founders made "foundational commitments" not to use it "as a vehicle to enrich themselves." The trial to determine whether those commitments were broken is ongoing.

These aren't personal criticisms. They're the documented positions and actions of the two people who exercise the most influence over the technology that hundreds of millions use daily.

The point isn't that they're bad people. The point is structural. When the people who build the infrastructure, set the policy, and profit from the adoption are the same people, the rest of us are trusting in their good intentions rather than in any structural protection. And good intentions, however sincere, aren't a governance model.

This matters in practice. When you share something personal with an AI chatbot, the data infrastructure behind that conversation was likely built with capital from people who also have political interests, commercial competitors to advantage, and financial incentives that aren't aligned with yours.

None of this means you should stop using AI. It means you should use it with your eyes open to the actual power structures involved, not the stated missions.

It also means that how a product is funded and structured matters. A subscription product with no advertising revenue, no data licensing, and no political donors on the cap table has a fundamentally different relationship with its users than one whose investors are playing a longer game.

I built Continio as a small, honest product. No political donations, no regulatory ambitions, no interest in becoming infrastructure for anything other than helping individual people think better. That's a deliberately limited scope. It's also, I think, the right one.

The people at the top of the AI industry aren't going anywhere. The question is whether there's room alongside them for tools built by people who aren't trying to control anything except their own product.

I think there is. That's what I'm building.