Every modern language model—like the ones that power search engines, chatbots, and writing tools—learns by consuming massive amounts of text. Ideally, this text comes from humans: books, conversations, code, messy real-world writing. Human text has high entropy. That means it’s full of irregularities—unexpected word choices, contradictory ideas, cultural references, edge-case logic. It reflects how people actually think and speak.
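"High entropy" here is literal Shannon entropy. A minimal sketch (the two strings are illustrative inventions, not drawn from any corpus) showing that repetitive text concentrates probability on a few symbols while messier writing spreads it across many more:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Shannon entropy, in bits per character, of the text's character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A "smoothed" string cycles through a small set of characters in a fixed
# pattern; the irregular one uses many more distinct symbols.
smooth = "the cat sat on the mat " * 4
messy = "Qi'd prefer jazz? No: vexing quirks, odd glyphs & 7% edge-cases!"

print(shannon_entropy(smooth))  # lower: few distinct characters
print(shannon_entropy(messy))   # higher: probability spread over more symbols
```

This is a character-level toy; the same measure applied over words or longer n-grams is what makes "unexpected word choices" quantifiable.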
But increasingly, these models are trained on synthetic data: text generated by other language models. This text looks fluent but lacks the mess. It’s optimized for readability, coherence, and speed. The result? Lower entropy. Fewer surprises. Narrower expression. Each time a model learns from its own outputs, it becomes less capable of handling outliers or novel inputs. Rare patterns get erased. The model forgets how to generalize.
This process is recursive. The first model outputs synthetic text. The second model learns from it. The third learns from that. Each layer smooths the world a little more, and loses a little more signal. This is entropy collapse, known in the research literature as “model collapse.” The model still speaks fluently—but it no longer knows anything real. Researchers at Rice University called it “model autophagy disorder”: the system eats itself and calls it progress. Once enough cycles pass, the original structure of human language—the weird, jagged, brilliant parts—can’t be recovered. The model becomes a fluent echo chamber. Efficient. Hollow. Degraded.
Principle
Human agency depends on lived entropy. Complexity, ambiguity, error—these are not bugs in experience; they are proof of original signal. A person flattened by routine, comfort, or imitation becomes low-entropy: predictable to self and others. What distinguishes one mind from another is not just opinion, but informational shape. Real identity lives in jagged syntax: opinions half-formed, instincts misaligned, choices outside the norm. To be unpredictable is not to be erratic—it is to be sovereign. No agency survives long in a low-entropy system.
Application
To rebuild entropy and preserve selfhood, structure life to resist smoothness:
1. Break Predictive Loops
Audit your inputs. Where do you consume variations of your own thoughts? Block feedback loops: apps that show what you already like, people who affirm what you already believe, routines that run without friction. If your inputs feel fluent, they’re already compressing you.
2. Install Variance as a Constraint
Impose systems that force interaction with the unexpected. Choose routes with uncertainty. Seek conversations you are unqualified for. Design projects where failure is likely but illuminating. Keep at least one practice in your life that can’t be optimized—manual, analog, slow.
3. Track Entropy Manually
Once a week, record the most confusing thing you encountered. If nothing confused you, you’re in a low-entropy state. If you haven’t made a wrong assumption, said the wrong thing, or been surprised by your own reaction, you’re becoming distributionally flat. Treat confusion as evidence of cognitive health.
Limit / Cost
Entropy is expensive. High-entropy life resists clean narratives, tidy plans, scalable outcomes. You will look inconsistent. You will move slower. People will misread your signal. But the alternative is worse: to become a replica of a replica, to reduce identity to compliance. This isn’t a call for chaos. It’s a design demand: engineer for meaningful unpredictability. Models can’t do it. You still can.