The above is a very tentative title for a chapter I have started working on that might go to press in 2017 (the book is still awaiting approval). The book will bring up issues regarding our attitudes toward risk when planning for future generations. Of course, I am going to imagine that ‘future generations’ need not necessarily fit standard humanist visions of the term, and I will instead promote a more posthumanist perspective.
The potential book’s editor, Steve Fuller of Warwick University, recently engaged in a debate with Rupert Read of the University of East Anglia. Audio of that debate and the ensuing dialogue with audience members is here.
Below is a quick abstract I have worked on for the chapter. I find a number of the implications of the proactionary v. precautionary debate fascinating, and Fuller has forced me to question a number of ideas I did not realize I unreflectively supported, not least of which is the notion that evolution has some normative standing. I seem to privilege a normative conception of nature, where the ‘natural’ is somehow intrinsically good. “I had not considered things in that way” is a phrase I usually find myself repeating when reading, hearing, or conversing with Fuller.
In any case, my draft abstract:
Posthumanism collapses boundaries, particularly between humans and technologies, with the concept of technogenesis: technology is simply part of what makes up the human. Conflating humans and technologies removes one border, and in doing so it might enable a different perspective on the precautionary v. proactionary debate. When considering ‘our’ future, the stakeholders must first be identified, and the speculative goals assessed based on their interests. At first glance, Posthumanists seem closer to the precautionary side of the debate, with Transhumanists on the proactionary side. I will examine proactionary and precautionary principles from a Posthuman perspective—contrasting it with a broad Transhuman perspective—and look at one or two specific examples: automated (driverless) vehicle adoption in the U.S. and/or the use of CRISPR-Cas9 to modify human embryos. I side with the Posthuman perspective in many situations, but can that stance be maintained while promoting widespread adoption of automated vehicles now and limiting the use of CRISPR-Cas9 for the time being? How should posthumanists deal with the risks associated with automated vehicles and gene manipulation? Does the collapse of the border between humans and technologies provide any normative guidance in considering our adoption or rejection of these technologies?