Related Work

Philosophy

[0] introduces a taxonomy of existential risks, one of which is a scenario where "our potential or even our core values are eroded by evolutionary development". They point out that "although the time it would take for a whimper of this kind to play itself out may be relatively long, it could still have important policy implications because near-term choices may determine whether we will go down a track that inevitably leads to this outcome."

[0] discusses the possibility of continued civilizational growth that optimizes away human consciousness, calling it a "Disneyland with no children". [0] examines possible pathways to such scenarios, for example groups of automated corporations forming self-sufficient sectors of the economy.

[0] introduces the accumulative AI x-risk hypothesis, which posits "a gradual accumulation of critical AI-induced threats such as severe vulnerabilities and systemic erosion of economic and political structures. The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly converge, undermining societal resilience until a triggering event results in irreversible collapse."

Economics, History, and Sociology

[0] asks how much explanatory power technological determinism has, making the case that economic and military competition constrain outcomes at a macro scale, even if everyone is locally free to temporarily make non-competitive choices. [0] argues that competitive pressures on states strongly influence the extent to which they support human flourishing. They further claim that "the invention of seemingly beneficial technologies may decrease human well-being by improving the competitiveness of inegalitarian state forms", arguing that "under competitive conditions, what is effective becomes mandatory whether or not it is good for people."

[0] argues that corporate capitalism already creates dynamics that are misaligned with human flourishing, describing corporations as "machines that enforce a singleness of purpose, and allow efficiencies of scale, that make them far more effective than individual capitalists in obtaining a return to capital". They also point out that, in many jurisdictions, "corporations are given many of the legal rights of humans — for example, in the USA, the right to political speech, and the right to fund political activity that that is accepted to imply — without all the concomitant structures that ensure compliance", such as human morality or human law. While corporations are subject to regulatory law, "[w]here that law is weak, corporations can find themselves legally obliged to do harm to human welfare, if that is in the shareholders' interest."

[0] considers the possibility that AI development could reintroduce Malthusian dynamics: the capacity of AI to replace human labor while also proliferating rapidly may create competition so intense that basic necessities become unaffordable for humans, while potentially leaving humans too weak to preserve their property rights.

[0] details a future in which uploaded humans form a hyper-productive economy, operating at speeds too fast for non-uploaded humans to compete in. Competitive pressures shape this population of uploads to consist mostly of short-lived copies of a few ultra-productive individuals. [0] and [0] argue that, due to a reduction in feedback mechanisms selecting cultural variants that better promote human welfare, "cultural drift" could eventually cause catastrophic (but not necessarily existential) harm to human well-being.

AI Research

[0] makes the case that sudden disempowerment is unlikely, and instead proposes a slower failure mode: "Machine learning will increase our ability to 'get what we can measure,' which could cause a slow-rolling catastrophe. [...] ML training, like competitive economies or natural ecosystems, can give rise to 'greedy' patterns that try to expand their own influence. Such patterns can ultimately dominate the behavior of a system and cause sudden breakdowns." [0] argues that evolutionary pressures can generally be expected to favor selfish species, likely including future AIs, and that this may lead to human extinction.

[0] asks what existential risks humanity might face from AI development, and urges research on the global impacts of AI to "take into account the numerous potential side effects of many AI systems interacting."

[0] categorizes societal-scale risks from AI. One of these closely matches the scenario we describe: "a gradual handing-over of control from humans to AI systems, driven by competitive pressures for institutions to (a) operate more quickly through internal automation, and (b) complete trades and other deals more quickly by preferentially engaging with other fully automated companies. [...] Humans were not able to collectively agree upon when and how much to slow down or shut down the pattern of technological advancement. [...] Once a closed-loop 'production web' had formed from the competitive pressures [(a) and (b)], the companies in the production web had no production- or consumption-driven incentive to protect human well-being, and eventually became harmful." They note that "[t]o prevent such scenarios, effective regulatory foresight and coordination is key."

[0] further develops the idea of extinction by industrial dehumanization: "I believe we face an additional 50% chance that humanity will gradually cede control of the Earth to AGI after it's developed, in a manner that leads to our extinction through any number of effects including pollution, resource depletion, armed conflict, or all three. I think most (80%) of this probability (i.e., 40%) lies between 2030 and 2040, with the death of the last surviving humans occurring sometime between 2040 and 2050. This process would most likely involve a gradual automation of industries that are together sufficient to fully sustain a non-human economy, which in turn leads to the death of humanity."

[0] argues that capital ownership may be insufficient to maintain power during periods of rapid technological growth, using the example of the English landed aristocracy, which lost power to entrepreneurs during the Industrial Revolution despite its initially strong position.
