Misaligned States
The Current Paradigm of States
Modern states and institutions, in their myriad forms, ostensibly exist in service of human needs and values. Democratic governments provide infrastructure, safeguard individual rights, offer social services and enable some degree of self-governance. Even autocratic regimes, while often prioritizing the interests of a ruling elite, must maintain a degree of popular support or acquiescence to function effectively. This apparent alignment with human interests is not, however, an inherent feature of these systems. Rather, it is a byproduct of their dependence on human participation and support.
This dependence manifests in several crucial ways. States have historically relied on their citizens for essential resources: labor to run the economy and administration, taxes to fund state activities, and military service to maintain security and project power. As [0] argued, this reliance has been a key driver in the development of more inclusive and responsive state institutions. For instance, the need for an educated workforce to compete economically, and the requirement for a motivated army drawn from the general population, have historically incentivized states to invest in public education and extend political rights. Even seemingly basic features of modern states, like universal public education or broad-based political participation, can be understood as necessary responses to the state's dependence on its citizenry [0, 0]. Even in autocratic regimes, the necessity of investing in public education and civic empowerment through the rule of law has been a substantial driving factor in transitions to modern democracies within two generations, as in South Korea and Taiwan.
Typical liberal democracies have explicit feedback loops that ostensibly align state actions with the will of the populace, via elections and mechanisms for public input. While this explicit alignment mechanism is highly visible, the implicit pressures arising from the state's dependence on its citizens may be even more significant. Even the most basic functions of democratic states — from maintaining order to collecting taxes — rely on widespread voluntary compliance rather than constant coercion [0]. States need citizens not just as sources of legitimacy through democratic processes, but as willing participants in state functions [0].
Autocratic states, while less directly accountable to their citizens, are not completely exempt from this dependence — even totalitarian states at least need human agents to staff their security apparatus. Moreover, the ever-present threat of an uprising or a coup serves as a check on the most egregious abuses of power.
The example of 'rentier states' [0], which depend more on external rents such as oil revenues than on their citizens, illustrates how states can become more autonomous when their dependence on citizens is comparatively weak. The absence of taxation reduces citizen engagement in political processes, and the state's ability to distribute wealth allows it to maintain loyalty from key stakeholders (such as social elites and the military).
Crucially, the functioning of both democratic and autocratic systems hinges on human involvement at every level. Bureaucracies operate through hierarchies of human officials. Laws are created, interpreted, and enforced by humans. While the letter of the law may be rigid, its application is filtered through human discretion and judgment [0, 0]. The security forces that maintain order are staffed by humans capable of questioning or refusing orders.
This pervasive human element ensures that institutions and states, regardless of their formal structures, remain at least somewhat tethered to human needs and values. It is this tethering that produces most of the alignment between these systems and the humans they govern. However, as we will explore, the potential of AI to replace humans in many or all of these functions could weaken or even reverse the link between institutional behavior and human interests.
AI as a Unique Disruptor of States
Unlike previous technological innovations that primarily augmented human capabilities, AI has the potential to supplant human involvement across a wide range of critical state functions. This shift could fundamentally alter the relationship between governing institutions and the governed.
The unique disruptive potential of AI in this context is derived from its ability to simultaneously reduce the state's dependence on human involvement while enhancing its capabilities across multiple domains. This combination could fundamentally reshape the nature of governance and the relationship between institutions and the humans they ostensibly serve.
Here we consider three key ways that citizens contribute to the state, and how AI might alter them: tax revenue, the security apparatus, and the legal system.
Tax Revenue
Most governments currently rely heavily on their citizens for tax revenue. Typical well-functioning governments need to nurture long-term economic productivity from innovation and high-skill work to support themselves. But if AI systems eventually perform a large portion of labor and innovation, they will also generate a large fraction of economic output and, by extension, tax revenue. This shift of tax revenue away from citizens would make the state less reliant on nurturing human capital and fostering environments conducive to human innovation and productivity, and more reliant on AI systems and the profits they generate.
If AI systems come to generate a significant portion of economic value, then we might begin to lose one of the major drivers of civic participation and democracy, as illustrated by the existing example of rentier states.
The Security Apparatus
Governments maintain their power through the use of a security apparatus spanning police forces, intelligence services, and the military. This keeps the government connected to human values in two ways.
Firstly, the government cannot antagonize its security apparatus too much, or cause too much harm to the portion of the population from which it is drawn. If it does, the security apparatus can either overthrow the government or simply allow it to be overthrown by others.
Secondly, the security apparatus itself can exercise discretion, refusing to follow certain orders. This can occur at both the organizational and the individual level.
AI systems have the potential to massively automate the security apparatus and confer more power on the government, weakening both of these components. Indeed, AI systems might make the apparatus far more powerful: they are likely to enable surveillance on a much larger, more pervasive, and more accurate scale, as well as increasingly capable autonomous military units [0, 0].
Meanwhile, the human population has historically retained revolution as a last resort. The implicit threat of protests and civil unrest serves as a check on state power, forcing responsiveness to popular will. However, an AI-enhanced security apparatus could make effective protest increasingly difficult. A state with sufficiently advanced AI systems might be able to predict and shut down civil unrest before it can exert meaningful pressure on institutional behavior [0].
The Legal System
Theoretically, the rights of humans and the functioning of the state are enshrined in laws, which are created, interpreted, and enforced by humans. It is the laws themselves that enshrine certain responsibilities of the state towards the individual, and certain mechanisms by which individuals can advocate against the state.
AI systems are already being used to draft contracts and analyze legal documents. It is conceivable that in the future, AI could play a significant role in drafting legislation, interpreting laws, and even making judicial decisions [0].
Not only could this diminish human participation and discretion in the legislative and judicial systems, but it also risks making the legal system increasingly alien. If the creation and interpretation of laws become far more complex, it may become much harder for humans to even interact with legislation and the legal system directly [0, 0].
Transition to AI-powered States
As with the economy and culture, there will be strong incentives for states to integrate AI systems, likely undermining the alignment between states and their citizens.
Incentives for AI Adoption
The transition towards AI-dominated state functions would likely be driven by several powerful incentives:
Geopolitical Competition: As AI systems become increasingly powerful, states will face growing pressure to adopt these technologies to maintain their relative power compared to other states. Countries that rely on humans for defense, economic development, or regulation might find themselves at a significant disadvantage in international relations compared to states willing to give more power to AI systems. The first-mover advantages in military applications, economic planning, and diplomatic strategy create particularly strong incentives for early and aggressive AI adoption [0, 0, 0, 0].
Administrative Efficiency: AI systems offer unprecedented capabilities in processing information and coordinating complex state functions [0]. While human administrators are limited by cognitive constraints and working hours, AI systems can continuously analyze vast amounts of data, deploy new regulations almost instantly, and implement policies with greater consistency. This efficiency advantage creates incentives for states to automate administrative functions, potentially reducing human involvement in governance. Moreover, while initial implementation costs may be high, the long-term cost advantages of AI systems over human bureaucrats could create fiscal incentives for automation [0].
Enhanced Control: AI-driven governance systems promise greater predictability and control than human-based bureaucracies. Unlike human officials, AI systems, if successfully controlled, do not form independent power bases, engage in corruption, or challenge authority based on personal convictions. They can also enable more sophisticated surveillance and social control mechanisms, making them particularly attractive to states prioritizing stability and control over other values.
Relative Disempowerment
A state where AI systems have replaced human labor in many facets of governance — such as administration, security, and justice — could provide some enormous boons. On the surface, it might appear highly efficient and even benevolent. We might see lower crime rates, less low-level corruption, greater tax revenues, and more efficient public services.
At the same time, the gradual replacement of human involvement in governance could lead to a subtle but profound shift in the relationship between citizens and the state. Even if the system appears to function well, citizens might find themselves increasingly unable to meaningfully participate in or influence their governance. This relative disempowerment could manifest in several ways.
Democratic processes might persist formally but become less meaningful. While politicians might ostensibly make the decisions, they may increasingly look to AI systems for advice on what legislation to pass, how to actually write the legislation, and what the law even is. While humans would nominally maintain sovereignty, much of the implementation of the law might come from AI systems.
The complexity of AI-driven governance might make it increasingly difficult for human citizens to understand or critique government decisions. Traditional forms of civic engagement — from public consultations to protests — might become less effective as the state grows less dependent on human cooperation and more capable of predicting and preempting resistance.
The bureaucracy itself might become increasingly opaque to human oversight. While human officials can be questioned and held accountable through various mechanisms, AI decision-making processes might be too complex for meaningful human review, and if such review happens, it may depend on yet more AI-driven cognition.
Even if oversight boards and democratic institutions remain in place, they might struggle to exercise real control over the intricate web of AI systems actually implementing policy.
Furthermore, as AI systems become more integral to governance, the state's incentives might shift away from serving human interests. Much like how rentier states become less responsive to citizen needs when they do not depend on tax revenue, AI-powered states might become less responsive to human preferences when they do not depend on human participation for their core functions.
The security apparatus, powered by AI, would have an unprecedented ability to predict and prevent crime and civil unrest. While this could ensure a high level of safety, it could also eliminate the possibility of meaningful protest or revolution. A state that can preempt and resist any challenge to its authority long before it materializes would have effectively removed a crucial check on institutional power that has shaped human societies for millennia.
And with average citizens contributing less in tax revenue or to society more generally, the state would face a lower cost for curtailing their power. There would be less need to cater to the actual needs of voters or to make democratic concessions, and less cost to rolling back civil liberties.
Ultimately, we might find ourselves in nations where humans nominally hold sovereignty and even vote for their preferences, but where in practice high-level decisions are disconnected from citizens and even politicians.
Absolute Disempowerment
In more extreme scenarios, the disconnect between state power and human interests might become not just relative but absolute, potentially threatening even basic human freedom. This could occur through several mechanisms.
First, states might become totalitarian, self-serving entities, optimizing for their own persistence and power rather than any human-centric goals. While states have always had some self-preservation incentives, these were historically constrained by their dependence on human populations. An AI-powered state might pursue its institutional interests with unprecedented disregard for human preferences and interests, viewing humans as potential threats or inconveniences to be managed rather than constituents to be served [0].
Second, the legal and regulatory framework might evolve to become not just complex but incomprehensible to humans. If AI systems begin to play a dominant role in drafting and interpreting legislation, they might create regulatory structures that optimize for machine-compatibility over human understanding. Citizens might find themselves subject to rules they cannot meaningfully comprehend or navigate without AI assistance, effectively losing their ability to participate in the legal system as autonomous agents.
Third, the state apparatus might become not just independent of human input but actively hostile to it. Human decision-making might come to be seen as an inefficiency or security risk to be minimized. We might see the gradual elimination of human involvement in governance, through systems that route around human input as a source of error or delay, or even through explicit policy decisions which remove humans from certain critical processes.
In the final state, with AI systems providing most economic value and governance functions, human citizens might find themselves in a novel form of totalitarian system, struggling to maintain basic autonomy and dignity within their own societies. The state, while perhaps highly capable and efficient by certain metrics, would have abandoned human interests.