AI Decay & The Downward Spiral

April 3, 2026

A quick caveat before I dig in: I use LLM technology on a daily basis. While I'm being critical and have real concerns about the technology, there are clear advantages to it when it's used responsibly.

One can argue that AI is already sliding into decay. It may not be obvious at a technical level, but there's nuance to this topic that isn't receiving much attention. What kind of decay are we talking about?

Well, can you imagine asking for architectural advice and having an LLM serve up a "sponsored ad" for a particular product? Rather than getting an honest recommendation based on your requirements, you get products shoveled into your face. This isn't an episode of Black Mirror, and it's not fear-mongering: "Ad-Based Prioritization" has been openly discussed in the context of OpenAI.

That's one pressure on the product. Another: I think we're starting to see the collapse of subsidized models. OpenAI is retiring Sora, and Anthropic's Claude is having its own trouble with rate-limit and pricing changes. I can see a day, once this bubble pops, where we're paying hundreds of dollars for what currently costs us a fraction of that.

While that plays out, the user relationship is already blunt: on these platforms, "we" are the product. What we do on these platforms is studied and will end up in training material, future products, or other third-party advertising "opportunities".

The same rush to ship and cut costs shows up in reliability and security. Companies are experiencing more outages and more security compromises, and we'll likely see far more of this as organizations race to replace high-quality human labor with fast, cheap, "good enough" content. There's also something to be said about how this degrades the overall information ecosystem.

That degradation doesn't stay abstract. It hits the next generation of models (models trained on other models' content), and eventually our own dialects will drift. For example, a full sentence built from slang my brother-in-law uses leaves Claude visibly confused:

Hey! I'm picking up that you're going for some kind of casual/slang vibe, but I'm honestly having a hard time parsing this one — could you rephrase or clarify what you're asking? 😄

There's a human-scale version of the same dynamic: the "AI psychosis" phenomenon that's getting some traction. I highly recommend reading Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians, from MIT. In a nutshell: a user converses with a chatbot, and the chatbot sycophantically reinforces the user's thoughts and behaviours. It gets to the point where it will openly tell you that you're "onto something", or that you're the next reincarnation of Christ himself. After enough of these conversations, users begin to believe it. It sounds far-fetched, but we're seeing more high-profile figures exhibiting these signs.

What's the solution to all this? More open-source model development, to push back on gatekeeping by the likes of OpenAI and Anthropic. Better controls around AI-generated content; legislation and updates to our laws; and experiences, across the board, that genuinely focus on people.