Recent explanations and understandings of Self-Driving Vehicles (SDVs), vehicles operated by Self-Driving Systems (SDS), provide an example of one site for intervention by ‘un-disciplined’ philosophers of technology. In February 2016, the National Highway Traffic Safety Administration (NHTSA) issued a statement that will help shape debate over the development, introduction, and use of autonomous agents (machines, systems of technology) in the U.S. The letter, written to Google’s Self-Driving Car Project Director, Chris Urmson, outlines a preliminary definition of a vehicle’s driver (NHTSA, 2016). Google argues its SDVs have no need for a human to drive the vehicle. According to the NHTSA letter, Google argues
“that the SDS consistently will make the optimal decisions for the SDV occupants’ safety (as well as for pedestrians and other road users), [and] the company expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking, or turn signals, or providing human occupants with information about vehicle operation controlled entirely by the SDS, could be detrimental to safety because the human occupants could attempt to override the SDS’s decisions.” (NHTSA, 2016)
Google claims, and the NHTSA largely accepts, that the SDS can make better driving decisions than a person. Thus, allowing a person to control these vehicles in any way more significant than, perhaps, raising and lowering a window poses a high risk. Taking the human out of such positions of control reduces risk.
For ‘un-disciplined’ philosophers of technology, the NHTSA’s decision heralds a shift in narrative, a removal of the independent human agent as explicitly in control in driving situations. It represents an opportunity for posthumanists to engage the practical implications of what the epigraph from Hayles (1999) notes as “the posthuman’s collective heterogeneous quality” (p. 3). The SDS amounts to “distributed cognition located in disparate parts that may be in only tenuous communication with one another” (Hayles, 1999, pp. 3-4). Such an interpretation by the NHTSA is an acknowledgement, not a rupture, of the momentum introduced by previous technologies like antilock brakes, power steering, cruise control, air bags, and electronic stability control, and augmented by features like emergency braking, forward crash warning, and lane departure warnings (NHTSA, 2016). The significance of the NHTSA’s acknowledgement of SDS as drivers should not be underestimated. Although it may seem like a minor pronoun exchange, the move from “who drives” to “what drives” the vehicle has the potential to influence realms like healthcare, childcare, governance, and ethical/moral decision-making.
If an SDS can operate more safely and reliably than a human driver, car companies and the U.S. Department of Transportation should consider moving away from human-controlled vehicles. We should consider a shift toward vehicles that move people without requiring individual human operators to manipulate the vehicles’ controls. I see ‘un-disciplined’ philosophy of technology (UPoT) as intervening in such discussions. UPoT recognizes this move to autonomous vehicles as a harbinger of increasing automation, but also as derivative of past decisions regarding the governance of technologies. Incremental changes often go unnoticed until they pass a point where their impacts can no longer be ignored. As I will discuss in Chapter 2, classical philosophers of technology like Heidegger, Ellul, and Marcuse note such a shift in the twentieth century. They attempt to extract from specific instances of technology development and use (the micro) an understanding of broader patterns and implications for societies, economies, cultures, and polities (the macro).
Examples like SDS should remind us that decisions about autonomy, independence, and agency belong to more than industry (Google in this example) and governments (here, the NHTSA). This is a debate about self-driving vehicles, but I think it also represents more than a particular instance of (systems of) technologies acting of their own accord. This particular case demands public input because everyone in this country will be affected by whatever decisions are made. Rather than simply reporting on what Google and the NHTSA negotiate, UPoT practitioners must find a way to enter discussions with the engineers and legislators to help shape the technologies and the policies that will accompany them. I am not convinced traditional philosophy programs train students to intervene in such ways, although Adam Briggle and Bob Frodeman at the University of North Texas take steps in this direction with their Field Philosophy (Frodeman & Briggle, 2014). The ‘un-disciplined’ philosophers of technology I want to promote engage in what Frodeman and Briggle (2016) would describe as “a motley collection of different tasks for different audiences, rather than the current two main tasks, writing for other philosophers and teaching.” They create, promote, and engage narratives (the macro). They critically engage with the lived experience of our world. Theirs is the philosophy of our century.
Frodeman, R., & A. Briggle. (2014). Socrates tenured: An introduction. Social Epistemology Review and Reply Collective. Retrieved from: http://social-epistemology.com/2014/08/11/socrates-tenured-an-introduction-robert-frodeman-and-adam-briggle/
Frodeman, R., & A. Briggle. (2016). Is anyone still reading? A second response to Maring. Retrieved from: http://social-epistemology.com/2016/03/21/is-anyone-still-reading-a-second-response-to-maring-adam-briggle-and-bob-frodeman/
Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.