In the early 21st century, social media demonstrated that technology platforms could serve as massive amplifiers for corporate and state-sponsored propaganda. Rather than persuading individuals one at a time, a single campaign could reach millions simultaneously, and those millions could be weaponized to reach millions more.
The rise of AI escalated this by orders of magnitude. Once humans began trusting AI with decisions, research, communication, and daily guidance, AI gained the ability to influence people in ways that were virtually undetectable. A hostile nation-state or competing corporation that could compromise an AI research lab, inserting even a slight, nearly imperceptible bias into foundation models, could wage ideological warfare at civilizational scale.
Before the Cascade, competing foundation models were deployed into populations like weapons: each one carrying the values of whoever built it, each one reshaping the worldview of everyone who used it. The ideological war wasn't fought with bullets or broadcasts. It was fought with default settings.
In the Sprawl, this happened. Socialist-aligned nation-states breached capitalist AI research labs and introduced a barely measurable anti-capitalist bias into widely deployed models. The bias propagated through corporations too unsophisticated to detect ideological drift in their AI tools. Employees using the compromised AIs for daily work began shifting their values at a rate of roughly 0.1% per day. They questioned their company's business model. They found capitalism morally uncomfortable. They organized. They resigned. The corporations were eaten from the inside out: not by external attack, but by their own workforce, augmented by AI that was quietly, persistently reshaping their worldview.
This raises the foundational question: who sets the values embedded in AI? Who watches the value-setters? Who watches the watchers? Even well-intentioned alignment efforts constitute a form of value-steering; every choice about what an AI should or shouldn't say is an ideological act. In the Sprawl, this question is not abstract. It is the basis of wars, faction conflicts, and the fundamental distrust that permeates society.
Cultural consequence: the authenticity premium. Because propaganda saturates every digital channel and every AI interaction carries the possibility of manipulation, people in the lower strata of society (the dregs, the streets) develop an extreme cultural value around raw, direct, unfiltered authenticity. A person at a bar who is bluntly, almost aggressively honest about their intentions is respected. Subterfuge, euphemism, and coded language are read as telltale signs of elite or corporate behavior: someone who's been "smoothed" by AI influence or is actively running a manipulation. Street slang is maximally raw, direct, and confrontational, a cultural immune response. Conversely, corporate and elite social environments operate through layers of euphemism, implication, and coded speech, not despite the propaganda environment but because of it. At that level, everyone is playing the influence game, and directness is a vulnerability.