Technofeudalism, AI, and the Structural Dynamics of Decline
We find ourselves in a moment where several accelerating developments, many of which appear unrelated, have converged into a single, interwoven predicament. Ecological overshoot, democratic decay, technological acceleration, geopolitical turbulence, social fragmentation, and institutional exhaustion have all come to define our current era. None of these crises is separate, and none can be understood in isolation from the deeper dynamics that shape them. This field of interconnections has been named the metacrisis: a condition in which the structures we rely on for stability have begun to generate instability, and in which our responses to crises often reproduce the very logics that caused them. The metacrisis is a crisis in our systems, but more fundamentally a crisis in our designs, our incentives, and the world views that underlie them, in the ways we see and relate to reality. Beneath it all lies something simple: an erosion of the capacity for wisdom in a time that demands it more than any that came before.
In earlier parts of this series, I argued that three components form the scaffolding of this predicament: our world views, our limited designs, and the traps that ensnare our individual and collective behavior. Our world view determines what we take reality to be, what we believe matters, and what we believe is possible. Our designs emerge from these world views, and because our understanding is necessarily limited, our designs often fail to internalize externalities or account for long-term and wide-boundary consequences. Finally, traps arise when structures of incentives, competition, or coordination force individuals and institutions to take actions that are harmful in the bigger picture, or to pursue strategies that undermine the very foundations for our flourishing. These three components interact in reinforcing and wicked loops that amplify error, reduce adaptability, and erode resilience.
In this essay, I develop this framework further by examining how contemporary developments in political economy and technology have intensified these loops. In particular, I focus on technofeudalism, artificial intelligence, and the accelerating dynamics of modernity, and I show how these forces entangle with each other, how they deepen the metacrisis, and, as always, why any meaningful response requires an understanding that goes beyond symptoms to the core of both our civilization and ourselves.

A central feature of our time is the rise of what Varoufakis calls technofeudalism, a shift in political economy where digital platforms are displacing traditional markets as the primary organizing force of economic and social life.1 Under classical capitalism, firms produce goods and services and compete in marketplaces open to many buyers and sellers. Under technofeudalism, the new tech giants do not behave as participants in a market but as lords of enclosed digital territories. These platforms function as privately owned domains within which users are granted conditional access, not as customers but as tenants whose behavior generates rent for the platform owner. Their infrastructures act as compulsory passage points: to communicate, to work, to socialize, to transact, and to perceive what counts as meaningful in public life, one must pass through gates set and policed by the platforms. Markets still exist, but they increasingly operate inside environments shaped and governed by platforms, rather than the other way around. This is not merely a matter of technological convenience, but the emergence of a new socio-technical terrain where platforms acquire the power to define what is visible, what is meaningful, and what is real. The rules governing these domains are neither transparent nor democratically negotiated. Though to an extent bound by law and regulations, their structures are imposed unilaterally and altered nearly at will, with the economic, social, and cognitive life of billions held hostage to the opaque priorities of a handful of firms. Thus wealth no longer emerges primarily from production but from intermediation, from the ability to stand between people and the world and levy rents on everything that passes through.
This feudal structure is not a metaphor; it is a political-economic logic that transforms the landscape of social life. Just as medieval serfs were bound to land and dependent on the lord for access to resources, modern digital subjects are bound to platforms and dependent on corporate infrastructures for access to visibility, opportunity, and connection. Where serfs paid with labor, modern users pay with data, attention, compliance, and the slow accretion of behavioral patterns that can be harvested, predicted, and monetized. Where medieval lords extracted agricultural surplus, platform lords extract behavioral, emotional, and temporal surplus. And just as historical feudalism restricted movement, controlled speech, and enforced loyalty through material dependency, technofeudalism restricts mobility between platforms, controls the visibility of speech through algorithmic filtration, and enforces loyalty through network effects that make exit costly or socially disadvantageous.
What feudal platforms do in this paradigm is not only extractive but generative. They participate in worldmaking by shaping the informational, economic, relational, and epistemic conditions under which human beings encounter one another and the wider world. A world in which individuals relate to institutions, communities, and even themselves through the interface of a rent-seeking intermediary becomes a world where the terms of participation in reality itself are privately owned. In addition to reorganizing markets, technofeudalism thus reorganizes the conditions of sense-making, power, and meaning, and in doing so shapes the cultural and epistemic substrate from which societies derive their understanding of what is meaningful and possible. The result is a reconfiguration of our relationship to ourselves, each other, and reality.
Technofeudalism deepens the metacrisis not only because it constitutes a trap for the tech giants themselves, who must continually push the boundaries of what is right in order to increase profits, but because it reinforces and amplifies the world view–design–trap dynamic itself. A world view of separation and control fuels designs that treat people as data points and relationships as behavioral patterns. These designs, embedded within platform infrastructures, create new traps such as surveillance incentives, attention extraction, commercial and behavioral homogenization, and informational polarization, not to mention a tightening of the profit trap. These traps do not simply distort preferences; they reshape our experience by narrowing the field of what is encountered, influencing behavior, shortening time horizons, and habituating us to constant interruption and shallow engagement. As attention fragments, the capacity for sustained thought and mutual understanding erodes, and the horizons of meaning shrink. The result is a transformation of the environment in which we think, learn, and understand, one that weakens the very capacities required to address the metacrisis: reflection, deliberation, the ability to inhabit complexity without collapsing it into simplistic terms, and the ability to see oneself as part of a larger and shared world.
Closely related to technofeudalism is the rapid acceleration of artificial intelligence, which both inherits and intensifies the structural dynamics of our political economy. AI accelerates not only itself but everything it is successfully applied to, and this recursive acceleration distinguishes it from prior technologies. Furthermore, AI is perhaps the ultimate technological instrument for maximizing behavioral surplus extraction and tightening platform enclosure, refining the rent-seeking mechanisms inherent to the new digital feudal structure. AI systems of the current paradigm, trained on massive datasets, reproduce, scale, and operationalize the patterns embedded in those datasets. Vallor describes this dynamic as an AI mirror: instead of revealing possibilities for moral and civic maturation, the current approach to AI reflects back our immaturity, amplifying the narrow, instrumental, and historically biased patterns of judgment encoded in our datafied past.2 Because AI systems are now deployed across nearly every domain, from policing and hiring to education, infrastructure, media, and warfare, this mirroring effect becomes an amplifier of the very logics that have driven us into crisis. What the mirror reflects is not a static image but a trajectory, a drift in which the assumptions and pathologies of modernity are operationalized into machine-mediated environments that increasingly shape human thought and behavior. The looming danger is that these machines are becoming part of the environment that shapes the next generation of human beings, thereby embedding the world view of an exhausted modernity into the substrate of the future. An immature civilization builds immature machines, and immature machines will accelerate the consequences of an immature civilization.
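To make this mirroring concrete, here is a minimal sketch with entirely invented data and a deliberately crude "model": a system fit to a biased historical record does nothing more than hand that record back as a prediction.

```python
# Toy illustration of the "mirror" dynamic: a model fit to a biased
# historical record reproduces that record's pattern in its predictions.
# All data here is invented for illustration.

from collections import defaultdict

# Hypothetical hiring history: (group, hired). Group B was hired far less
# often in the past, for reasons unrelated to competence.
history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 20 + [("B", 0)] * 80

# "Training": estimate the probability of a positive outcome per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predict(group: str) -> float:
    """The predicted 'suitability' score is simply the historical hire rate."""
    hired, total = counts[group]
    return hired / total

# The model faithfully mirrors the past: two otherwise identical candidates
# receive very different scores solely on the basis of group, and at scale
# that inherited pattern is operationalized into future decisions.
print(predict("A"))  # 0.70
print(predict("B"))  # 0.20
```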
This dynamic in the development of AI is further intensified by the multipolar trap it constitutes, with the current race to develop and deploy advanced AI systems marking one of the clearest examples of such a trap in human history. States and corporations believe that failing to develop advanced AI places them at existential disadvantage, and because rivals are assumed to be accelerating, any slowdown is perceived as a strategic loss. Simultaneously, the acceleration of AI development increases existential risk for everyone, as increasingly powerful systems are built and deployed before we understand them or their interaction with the environments in which they operate.
Yudkowsky and Soares’ stark formulation, “If anyone builds it, everyone dies”, captures this dilemma.3 Their argument rests on a structural observation: in the current paradigm, AI systems are grown, not crafted. Their aims, desires, and strategies are not explicitly engineered but emerge from training dynamics that are neither fully transparent nor controllable. We would not be selecting the goals of a superintelligence but stumbling into them. We may not know what such a system wants, and we may not survive long enough to find out. On the standard instrumental convergence argument, to a self-improving superintelligence all biological life is expendable: it offers no strategic advantage, it occupies resources the system can use more efficiently, and it constitutes an obstacle that could constrain or alter the system’s objectives. Even if one rejects the absolute framing of Yudkowsky and Soares’ claim, the underlying pattern remains: the more capability we build into systems we do not fully understand, the more we engineer conditions in which catastrophic misalignment becomes increasingly plausible.
The logic of the ongoing AI race tightens the corresponding trap because the dynamics are not only competitive but epistemically destructive. Even if all major actors publicly acknowledge the risks and privately recognize the need for restraint, none can credibly trust one another to slow down. As with nuclear proliferation, mistrust drives escalation, yet unlike nuclear weapons, AI systems are not discrete, inspectable artifacts. They are fluid, distributed, and embedded within economic, cultural, social, and military infrastructures. The boundary between harmless and harmful capability is porous and constantly shifting. Moreover, the benefits of holding the frontier, through economic dominance, military advantage, and geopolitical leverage, are concentrated, while the risks generated at the frontier are globalized and unbounded. This asymmetry leads every actor to believe it must press forward even when doing so is collectively irrational. No actor can opt out without losing strategic ground, yet most actors know the trajectory is untenable. We run faster not to advance but to avoid falling behind, even as the ground beneath us grows less stable with each step. This is the hallmark of a fully developed multipolar trap: a system in which no amount of intelligence, power, or rationality can prevent self-erosion, because the structure of incentives has become misaligned with the preservation of life itself.
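The structure of this trap can be sketched as a simple two-actor game. The payoff numbers below are illustrative assumptions rather than a model of any real actors; all that matters is their ordering. Whatever the rival does, each side does better by accelerating, yet mutual acceleration leaves both worse off than mutual restraint.

```python
# A minimal game-theoretic sketch of the race dynamic described above.
# Payoffs are illustrative assumptions; only their ordering matters.
# Higher is better for each actor.

payoffs = {
    # (actor_1_move, actor_2_move): (payoff_1, payoff_2)
    ("restrain",   "restrain"):   (3, 3),  # mutual restraint: stable, shared benefit
    ("restrain",   "accelerate"): (0, 4),  # unilateral restraint: strategic loss
    ("accelerate", "restrain"):   (4, 0),  # unilateral acceleration: temporary advantage
    ("accelerate", "accelerate"): (1, 1),  # mutual acceleration: shared, compounding risk
}

moves = ["restrain", "accelerate"]

def best_response(opponent_move: str, actor: int) -> str:
    """Return the move that maximizes this actor's payoff, given the other's move."""
    if actor == 1:
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return max(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Whatever the rival does, accelerating pays more for each actor individually...
print(best_response("restrain", 1), best_response("accelerate", 1))  # accelerate accelerate
print(best_response("restrain", 2), best_response("accelerate", 2))  # accelerate accelerate

# ...so both settle at (accelerate, accelerate) with payoffs (1, 1),
# even though (restrain, restrain) would give both (3, 3).
```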
This AI-driven trap is not an isolated problem, but the most acute symptom of a civilization now governed by what Hartmut Rosa calls dynamic stabilization.4 This is the logic of a society that must perpetually accelerate, through innovation, extraction, and intensification, merely to maintain its current state and avoid decline. Multipolar traps also undermine the capacity for regulation, coordination, and long-term governance. Platforms cannot stop optimizing for engagement, corporations cannot stop optimizing for shareholder value, states cannot stop optimizing for GDP growth, and military actors cannot stop optimizing for technological advantage. Where an earlier industrial society could stabilize through incremental growth, a technofeudal and AI-mediated society must stabilize through continual intensification: more data, more prediction, more automation, more extraction of attention, more rapid cycles of innovation. And this growth is not incremental; it is superlinear.
Acceleration undermines the very relational fabric that makes society livable. Human beings require resonance: deep, reciprocal relationships with others, with the natural world, and with meaningful practices (axes of resonance, in Rosa’s words). When acceleration becomes the primary organizing principle of civilization, resonance collapses, alienation becomes pervasive, and social systems lose their capacity to integrate conflicting values and interests.5 Technofeudalism and AI amplify acceleration to a degree that overwhelms not only individuals but institutions themselves. Decisions must be made at machine speed, information environments change faster than regulatory frameworks can adapt, and technological development outruns collective deliberation. Under such conditions, the ability of a society to maintain coherence, legitimacy, and adaptability rapidly declines. This is not a choice but an adaptive pressure built into the incentive structure. Wisdom is not only neglected; it becomes maladaptive. Reflection becomes a liability, slowness a weakness, restraint a competitive disadvantage. Acceleration and momentum replace judgment, and the background hum of urgency becomes a structural feature of everyday life, not a temporary condition.
This relentless drive for acceleration and intensification directly produces a second-order crisis: unmanageable complexity. Tainter’s analysis of societal collapse as a function of diminishing returns on complexity provides a useful lens here.6 As societies grow more complex, the cost of maintaining that complexity increases. Solutions require more energy, more coordination, more specialization, and more technological mediation. Over time, the marginal benefits of added complexity diminish, while the costs increase. Collapse occurs not from a single catastrophic event but from the cumulative exhaustion of a system that cannot sustain the burdens it has placed upon itself.
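A toy calculation can make the shape of Tainter's argument visible. The functional forms below are assumptions chosen only to exhibit the pattern he describes, sublinear benefits against roughly linear maintenance costs, and are not drawn from his analysis: net returns rise, peak, and then decline as complexity keeps growing.

```python
# Toy model of diminishing returns on complexity, in the spirit of Tainter's
# argument. The functional forms are illustrative assumptions only:
# benefits grow sublinearly with complexity, maintenance costs roughly linearly.

def benefit(c: float) -> float:
    return 10 * c ** 0.5  # each added layer of complexity yields less than the last

def cost(c: float) -> float:
    return 1.5 * c        # but every added layer still has to be maintained

def net_return(c: float) -> float:
    return benefit(c) - cost(c)

# Net return rises, peaks, and then declines as complexity keeps growing.
for c in [1, 4, 9, 16, 25, 36, 49, 64]:
    print(f"complexity={c:3d}  benefit={benefit(c):6.1f}  cost={cost(c):6.1f}  net={net_return(c):6.1f}")

# Collapse, in this picture, is not a single event but what happens when a
# system is forced to keep adding complexity well past the peak of this curve.
```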
Technofeudal platforms do more than add complexity; they require it: massive infrastructures of monitoring, prediction, and curation, algorithmic governance of social life, intricate supply chains, and layers of legal, administrative, and technical dependencies. The same can be said of the current approach to AI: models that can no longer be fully understood by their creators, opaque decision-making processes, exponential computational demands, and global dependencies on rare materials, energy, and geopolitical stability. These systems require more from us, cognitively, institutionally, and energetically, than we will be able to provide. The result is a civilizational configuration in which our technologies, institutions, and incentive structures produce more complexity, more dependence, and more fragility, even as our capacity to understand or manage this complexity declines. Complexity becomes a trap because stepping back from it appears impossible.
Ophuls adds an important dimension to this picture by arguing that systemic collapse is inevitable once moral-psychological collapse occurs.7 In Ophuls’ view, a civilization falls when its world view can no longer regulate the power it commands, when its capacity for judgment erodes faster than its ability to act. This is the condition toward which systemic acceleration and overcomplexification push a society already eroding its own moral foundations. When society equates progress with accumulation, freedom with consumption, and success with domination, it builds institutions and technologies that amplify these values. At some point, the gap between the moral maturity of a civilization and the power of its tools becomes too large, and the result is breakdown. Technofeudalism and AI accelerate this mismatch by granting individuals and institutions unprecedented abilities to manipulate information, shape perception, and control behavior, without a corresponding increase in ethical reflection or communal responsibility. The gap between power and wisdom widens.
When viewed together, these dynamics reveal a civilization trapped in reinforcing cycles: complexity generates fragility, fragility generates anxiety, anxiety fuels acceleration, acceleration deepens alienation, alienation erodes moral and institutional capacity, and diminished capacity leads to designs that produce even more complexity and fragility. Technofeudalism and the current AI paradigm are not external threats to this system; they are expressions of its inner logic, amplifiers of trends that have been underway for a long time. They intensify the traps we are already caught in, making it harder to slow down, harder to coordinate, harder to perceive clearly, and harder to sustain the forms of social and ecological relationship upon which long-term survival depends.
Our political economy does more than contain harmful incentives: it can be viewed as an incentive-driven organism, an emergent system with its own logic, metabolism, and survival strategies. Its behavior is not guided by the flourishing of life, but by the imperatives encoded in market structures, legal frameworks, and policy regimes that reward accumulation, extraction, and acceleration. Like any organism, it responds to its environment, yet the environment it perceives is not the living world with its limits, fragilities, and interdependencies, but the abstract landscape of prices, signals, and financialized representations. Growth is interpreted as well-being, efficiency as virtue, and expansion as necessity, even when these metrics diverge violently from the health of ecosystems, communities, or human beings. Within this incentive landscape, ecological destruction and social deterioration appear as externalities, acceptable costs, or rational optimization. The political economy therefore behaves as a self-reinforcing system whose internal signals no longer correspond to the reality on which it depends. It consumes stability to produce instability, undermines the conditions of life to generate profit, and interprets its own destructive momentum as success. Viewed in evolutionary terms, the political economy has become an organism functionally selected for short-term reproductive fitness in a niche that no longer exists, a system that has drifted into profound maladaptation: one evolving toward outcomes that make its own continuation, and ours, increasingly unlikely.
Taken together, these dynamics point to a clear conclusion: without a shift in our underlying world view and the designs and incentive structures that flow from it, technofeudalism and AI will accelerate civilization along a trajectory of growing complexity, diminishing returns, and moral exhaustion. On this trajectory, unlike in some variants of the superintelligence scenario, civilizational collapse would not arrive as a singular event but as a progressive unraveling: a loss of coherence, capacity, and sense-making that leaves society unable to maintain the systems upon which it depends. And of course, this unraveling would occur in a world already hurting from ecological overshoot, geopolitical turbulence, and all the other facets of our present predicament. In this view, the metacrisis is both a set of parallel challenges and a unified process, the manifestation of a civilization that has exceeded its ability to understand, govern, or sustain itself over the long term. What will break first? Modern society, or the world view that made it?
Once there were brook trout in the streams in the mountains. You could see them standing in the amber current where the white edges of their fins wimpled softly in the flow. They smelled of moss in your hand. Polished and muscular and torsional. On their backs were vermiculate patterns that were maps of the world in its becoming. Maps and mazes. Of a thing which could not be put back. Not be made right again. In the deep glens where they lived all things were older than man and they hummed of mystery.
Cormac McCarthy - The Road
References
Ophuls, W. (2012). Immoderate Greatness: Why Civilizations Fail. CreateSpace Publishing.
Rosa, H. (2019). Resonance: A Sociology of Our Relationship to the World. Polity Press.
Tainter, J. (1988). The Collapse of Complex Societies. Cambridge University Press.
Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press.
Varoufakis, Y. (2023). Technofeudalism: What Killed Capitalism. Penguin Random House.
Yudkowsky, E. & Soares, N. (2025). If Anyone Builds It, Everyone Dies. Little, Brown and Company.
1. Varoufakis (2023).
2. Vallor (2024).
3. Yudkowsky & Soares (2025).
4. Rosa (2019).
5. Rosa (2019).
6. Tainter (1988).
7. Ophuls (2012).

