Posthumanism and the Political Problem of Risk – Robert N. Spicer

This article is about technological utopias and dystopias. It is about technology as both savior and existential threat. It is about the conflict between enormous inequality and social equilibrium in a potential future world produced by new technologies. It is about theorizing the depoliticizing impact of technology and the need to re-inject the political process into debates around technology. It is about Marx’s idea that “Men make their own history, but they do not make it … under circumstances chosen by themselves.” As Marx goes on to say, the “tradition of all the dead generations weighs like a nightmare on the brain of the living.”

This series of conflicting positions makes up the atmosphere of posthuman political problems and the central concern of how that atmosphere plays with the concept of risk. Marx’s oft-quoted argument serves as a reminder that the consequences of political decisions today echo through years, decades, even centuries. In other words, as we implement policies, build technologies, and incorporate them all into our lives, we take risks with lives that do not yet exist. This article explores how risk is discussed in posthuman discourse in three ways: (1) through the body, (2) through economies, and (3) through existential threat.

What is posthumanism?

So what does it mean to create a “posthuman” world? One answer, which can at times sound like science fiction, lies in the blurred lines between human bodies and technologies. This can include something like the extended mind thesis, which characterizes the human-to-technology connection as “an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right” (Clark and Chalmers, 1998, p. 8). It also extends in perhaps more fantastical ways with Ray Kurzweil’s (2005) idea of the singularity, where the line between human and machine is basically meaningless. The risks coming out of these altered bodies, as Shatzer (2012) says, may “sound far-fetched to contemporary ears” (p. 81), but they signify a potential future reality. This is what some, like Shatzer, see as a risk to the very concepts of humanity and human dignity, a risk created as the line between human and technology is blurred. Shatzer points to our immersion in virtual worlds and the use of robots as health care tools and companions as two key examples.

Posthumanism can also be thought about in terms of the impacts of new technologies on economic processes and relationships. The evolving presence of technology in our lives, especially ubiquitous digital technology, poses a threat to the economic prospects of many. This is probably the posthuman risk that is most tangible and most easily grasped by the contemporary mind: the economic and social risk of employment lost to automation, described in a recent Oxford University study on the subject. In that study Frey and Osborne (2013) found that “47 percent of total US employment is in the high risk category, meaning that associated occupations are potentially automatable over some unspecified number of years, perhaps a decade or two” (p. 38).

There is risk in the creation of automation and artificial intelligence, technologies that take jobs away from workers. There is the potential for economic destabilization not only for a particular sector of society but also for the economy in general. This reminds us that technologies are not neutral; a design is a theory about how the world works, or at least how it should work; the functionality of a technology betrays the social and political assumptions of its designer. Neil Postman (1993), in his book Technopoly, refers to Harold Innis’s idea of “knowledge monopolies,” in which groups that control a new technology “accumulate power and inevitably form a kind of conspiracy” against groups that do not have access to the knowledge and skills necessary for success in the environment created by the new technology (p. 9). In the contemporary context, ubiquitous digital technology and libertarian economic and social thought combine to create a dangerous political environment.

The Risk Society and Roko’s Basilisk

This is where Ulrich Beck’s concept of the risk society presents an especially useful lens through which to view technology and social change. In the risk society, Beck (1998) argues, “Society has become a laboratory where there is absolutely nobody in charge” (p. 9). In other words, private industries take risks, and the public, which had no input into the decisions behind those risks, is forced to live with the consequences, while the government must act to legitimize those decisions (p. 15). The problem, Beck argues, is that “Sociologically, there is a big difference between those who take risks and those who are victimized by the risks others take” (p. 10).

The third risk of the posthuman political problem is the most frightening of the three. It also may seem like the most far-fetched but some people take it very seriously. This is the existential threat of technology. Elon Musk, the CEO of Tesla Motors and one of the co-founders of PayPal, argued in 2014 that the creation of artificial intelligence is “our biggest existential threat” and described it as “summoning the demon” (David, 2014, para. 1-2). Stephen Hawking has expressed similar fears arguing, “The development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014, para. 2).

While someone as serious as Hawking can see a risk in the development of AI, there are other, slightly less serious manifestations of such fears. “Less serious” may be an unfair characterization, because to some there is a very real fear behind the existential threat of Roko’s Basilisk. This is a thought experiment that originated on the LessWrong web forum. It proposes this possibility, summarized by David Auerbach (2014) writing for Slate:

What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer? (para. 4)

Part of the assumption of this thought experiment is that you may be inside the AI’s simulation of the world. You are actually a simulation of yourself that the AI is using to test whether you would be willing to do its bidding.

Roko’s proposal of this idea on the LessWrong forum provoked a strong, negative reaction from its community, especially its founder, Eliezer Yudkowsky. Yudkowsky argued that Roko “just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING” (Hallquist, 2014, para. 44). In other words, by merely proposing the possibility of a malevolent AI that would blackmail humanity into helping it come into being, Roko had given the future AI a good reason to contemplate doing just that.

There are reports, from Auerbach’s Slate article and from Yudkowsky, that members of the LessWrong forum suffered nightmares and emotional trauma caused by this thought experiment. While it is easy to mock the response, as Auerbach does in Slate, it is also important, even if you see it as the ramblings of science-fiction-obsessed techies, to recognize it as useful for thinking about the political risks of control over technology; to remember, as Beck (1998) argues, that “dangers are being produced by industry, externalized by economics, individualized by the legal system, legitimized by the sciences and made to appear harmless by politics” (p. 16).


Posthumanism is not just the stuff of science fiction. Automation is already having an impact on the economy. We are already altering and augmenting our bodies with technologies. If nothing else, Roko’s Basilisk reminds us of the risks of the deification of technology in the cultural mode of Technopoly described by Postman. Postman argues,

“Technopoly is a state of culture. It is also a state of mind. It consists in the deification of technology, which means that the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology” (p. 71).

In her analysis of posthumanism, Rosi Braidotti (2013) argues that discussion of the subject suffers from a problematic division of labor. On one side is the science and technology studies perspective, characterized by a “panhumanity” or “global sense of interconnection among all humans,” which fails to adequately address the concept of subjectivity (p. 40). On the other is a depth of analysis of subjectivity from theorists such as Lazzarato, Virno, and Hardt and Negri, who “tend to avoid science and technology” and fail to address them with “the depth of sophistication that they devote to the analysis of subjectivity” (p. 43).

Posthumanism requires a new Technontology, a reassessment of the ways in which we understand our reality through technology and the reality of technology. Roko’s Basilisk forces us to think about how our actions today could be used by some future AI, which in turn forces us to think through how the political ideology of today could present itself as motivation for the malevolence of future human political actors. The present free market ideology, the belief in the virtue of those who succeed in the market, even the very concept of meritocracy create a potential future problem. Combine social Darwinism with technology that augments the human mind and body. Imagine the social and economic inequality of today amplified by inequality of access to such technologies. Auerbach says it best in his criticism of the field of AI and its proponents, arguing that “the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology” (para. 22).


Auerbach, D. (2014). The most terrifying thought experiment of all time. Slate. Retrieved from

Beck, U. (1998). Politics of risk society. In J. Franklin (Ed.), The politics of risk society. Malden, MA: Polity Press.

Braidotti, R. (2013). The posthuman. Malden, MA: Polity Press.

Cellan-Jones, R. (2014). Stephen Hawking warns artificial intelligence could end mankind. BBC. Retrieved from

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.

David, J. (2014, October 25). Rise of the machines! Musk warns of ‘summoning the demon’ with AI. CNBC. Retrieved from

Frey, C., & Osborne, M. (2013). The future of employment: How susceptible are jobs to computerisation? Oxford Martin School. Retrieved from

Hallquist, C. (2014). Roko’s Basilisk illustrates the problems with the LessWrong community. Patheos. Retrieved from

Kurzweil, R. (2005). The singularity is near. New York: Penguin Books.

Postman, N. (1993). Technopoly. New York: Vintage Books.

Shatzer, J. (2012). Are we forming ourselves for a posthuman future? Ethics & Medicine: An International Journal of Bioethics, 28(2), 81-87.

Thacker, E. (2003). Data made flesh: Biotechnology and the discourse of posthumanism. Cultural Critique, 53, 72-97.

Robert N. Spicer is Assistant Professor of Digital Journalism at Millersville University. His primary area of research is in media and political culture. He also does work examining emerging media and immaterial labor. His dissertation, which he defended in June of 2014 at Rutgers University School of Communication & Information, is titled “The Discourses and Practices of Political Deception: From Campaigns to Cable to the Courts”. His most recent publication is “Long-Distance Caring Labor: Fatherhood, Smiles, and Affect in the Marketing of the iPhone 4 and FaceTime” in the journal Techne: Research in Philosophy and Technology. He sometimes tweets at @rspicer.