Object-Oriented Ontology and Philosophy of Technology

Referring to the “Copernican Revolution of Kant,” Graham Harman (2005) notes that “Like all events of shattering genius, the Kantian Revolution is so victorious that it is now taken for granted” and any attempt to subvert it, from within the academy at least, is futile (p. 75). This seems similar to the dominant paradigm described by Kuhn (1996), where even trying to think of a different question, one not supported and promoted as a puzzle to solve, becomes difficult. So, too, with the ideas of the posthuman where the human as the measure of all things, from which and to which all things must refer, dominates.

Katherine Hayles (1999) seeks an embodied posthuman rather than the disembodied posthuman imagined in cybernetic work. Hans Moravec’s notion of “downloading” one’s mind or consciousness into a computer implies that one’s “self” has nothing overly important to do with one’s body. Interestingly, the work of those who believe that the mind or consciousness could be “downloaded” into, or placed in, any other container could be taken as both a rejection and an affirmation of Cartesian dualism. Claiming that the mind/consciousness consists of information that can be recorded and translated into the 1s and 0s of computation points distinctly to an ontological physicalism. It reduces everything in the universe to the same kind of “stuff” and means that anything and everything can be manipulated with sufficient knowledge, skill, and resources.

In other words, the mind must be made of the same “stuff” as the body for such a translation to work. Or, we might even consider Moravec’s idea, as well as much work in artificial intelligence (AI), as somehow granting the dualism of Descartes but with a computational twist: the body exists, but the body does not confine me or define me. Whatever shape I wear, “I” wear it, and that “me” in there can exist in whatever shape or form I may like as long as the cognitively functioning “me” remains. Of course, more than a mere waft of physicalism pervades such ideas because that cognitively functioning “me” must be transported in some fashion or another.

For Hayles (1999), her

nightmare is a culture inhabited by posthumans who regard their bodies as fashion accessories rather than the ground of being, [her] dream is a version of the posthuman that embraces the possibilities of information technologies without being seduced by fantasies of unlimited power and disembodied immortality, that recognizes and celebrates finitude as a condition of human being, and that understands human life is embedded in a material world of great complexity, one on which we depend for our continued survival. (p. 5)


Rather than accuse Hayles of fetishizing the body, a charge that might hold up if it mattered (which it does not), another way to consider her evocative nightmare/dream scenario involves questioning why we privilege the human at all. I grant that “life is embedded in a material world of great complexity,” but I side with object-oriented ontologists like Graham Harman and Ian Bogost in holding that the material world/universe involves many kinds of bodies/objects and many kinds of consciousness. Hayles (1999), as well as a host of feminist, postmodern and deconstructionist thinkers, offers a critique of the liberal humanist subject as the dominant paradigm, arguing that it violates humanity as a whole by singling out certain genders and races as supreme.


Ranging from feminist thinkers contending that the humanist subject has been imagined as a white European male, whose universalization serves “to suppress and disenfranchise women’s voices,” to postmodernist theorists like Gilles Deleuze and Felix Guattari who link the humanist subject to capitalism (p. 4), the thrust of such critiques has been the egalitarian move that makes human “being” something to which no race, gender or sociopolitical group can claim exclusive access. For OOO, such a move raises an important question: why do cognitive rights and abilities stop at the human species, or extend at most to animals? Assuming life exists in the universe beyond our planet, such a definition of a thinking thing might exclude those beings, even if they were capable of making a journey to our world. Further, what if such aliens had no “bodies” in the sense in which we conceive the term? Our fascination with embodiment would seem rather arbitrary to such “beings” (indeed, we might even lack a proper vocabulary to describe creatures that need no body to survive/exist).


According to Harman (2005),


experience shows that it is often a mental image of what constitutes intellectual progress, rather than any inherently weighty arguments, that explains why the antiessentialist, antisubstance, philosophy-of-access viewpoint enjoys such apparently unshakable prestige in continental philosophy today. (p. 81, emphasis in original)


Pointing to the primacy of embodiment as the locus of cognition seems strangely flawed as yet another continuation of the humanist project. Random access memory works as a metaphor for the thinking human mind partly because it evokes an image, a perspective in which humans can understand themselves. To claim such an image as the only possible one does serve traditional humanist ends in that it holds humans as superior to all other beings/objects. The human remains the measure of all things, and as no other animal or nonhuman has sent in any manuscripts to academic journals or global media claiming otherwise, we humans feel safe in just such an assumption. The speculative turn that Harman (2005) attempts to reclaim from Whitehead permits Harman, and other object-oriented ontologists/speculative realists like Ian Bogost (2012), to make forays into wondering what it is like to be a “thing.” I find the mental image these speculative realists invoke as compelling as that of the embodied camp. As my advisor frequently reminds me, we all choose what we want to privilege, what we wish to hold inviolable. To a certain extent, we cannot move beyond such dogma. As Harman (2005) compellingly claims:

Beginners in any field generally lack such paradigms, which is why they often strike us as lost or confused, and also why they are often more difficult opponents in debate than trained experts, since experience provides us with a rapid but predictable organizing mechanism for what we learn. . . . Hollow dogma can be found in any party at any time, and is equally paralyzing no matter where it occurs. (p. 80)

Genetically modified, transgenic

How do labels like ‘genetically modified organism’ and/or ‘transgenic organism’ affect the way biologists understand these organisms? How do such labels impact how the rest of us understand them? I do not mean that there are only two interpretations–scientists’ and non-scientists’. I do mean that there are multiple ways to understand such organisms.


Recent work in Scientific American has caused me to question my own understanding of these terms. Monique Brouillette’s article, about pigs whose genes have been modified with CRISPR/Cas9, makes the point that “You can edit a pig, but it will still be a pig.” The article reminds me of the thought experiments centered on Theseus’s ship. In brief, the question has to do with identity: can you change the genes of a pig and still call it a pig (can you replace all the wood planks–goes one version of the ship dilemma–and still call it Theseus’s ship)? How much modification of a pig would be required before it is not a pig anymore? Of course, similar questions relate to humans, cyborgs, etc., but I’ll prattle on about that in some other post. If you are interested, a related post about some philosophers being too consumed with demarcating and defining showed up recently in The New York Times’ blog The Stone.


We could question our definition of pig. We could attempt to refine it to such an extent that only organisms with X, Y, and Z genes get to be called pigs. We could broaden our understanding of pigs such that significant gene changes could occur before we needed to re-evaluate our definition of the pig. We could recognize that language is inherently fuzzy and there is not much we can do to make it less so–spin in your graves, logical positivists, spin! We could use scare quotes and call them “pigs.” None of these seems sufficient to the task of delineating between a genetically modified pig and other kinds.


One reason for the confusion might be the difficulty of explaining what another term/phrase means: genetically modified. Are not all forms of selective breeding kinds of genetic modification? If so, then are they not as natural as anything else? I think part of the issue with genetic modification, perhaps especially transgenic modification, is its perceived lack of naturalness. I am not sure biologists see it as unnatural. General publics, on the other hand, might see it as unnatural. They also might go a step further: if we modify plants (more on that in a future post) and other animals, what stops people from modifying human embryos?


One reading of Brouillette’s article–brief as it is–would be that she is aiming to influence the readership of Scientific American to think of the gene-editing tool CRISPR as a lot like traditional breeding programs, only much, much, much faster (an informal poll of a couple of biologists at my uni confirmed that this last idea conforms roughly to how they see it). If so, and readers of the magazine take that version and promote it in their circles of colleagues, acquaintances, etc., then that narrative gets a boost. Fast forward a few years, and any self-respecting scientist, or science literature person, might accept that reading as well. This pattern, you might argue, conforms to how emerging perspectives gain traction and eventually dominate.


So, what would your position be? Is it still a pig? Should we come up with another name for the animal–implying, then, that it is new or no longer a pig? I am more interested in how people, communities, cultures, etc. define this animal than in how a dictionary might define it. I feel that way because I think the aforementioned groups eventually influence dictionary definitions–in the USA at least (hello, ‘literally.’ So glad you also can mean ‘virtually,’ which seems to make you mean just about nothing).

How I Begin

Below is how I start my dissertation (snipped from Ch. 1). I have sent out what I hope are the last edits. My committee will either accept them or send me back to revise. I hope for the former.


Chapter 1: Introduction

Our Technological Selves

“The posthuman subject is an amalgam, a collection of heterogeneous components, a material-information entity whose boundaries undergo continuous construction and reconstruction. . . . the presumption that there is an agency, desire, or will belonging to the self and clearly distinguished from the “wills of others” is undercut in the posthuman, for the posthuman’s collective heterogeneous quality implies distributed cognition located in disparate parts that may be in only tenuous communication with one another. . . . my dream is a vision of the posthuman that embraces the possibilities of information technologies without being seduced by fantasies of unlimited power and disembodied immortality . . . that understands human life as embedded in a material world of great complexity, one on which we depend for our continued survival.” (Hayles, 1999, pp. 3-5)


Despite the spate of technological transformations and permutations that we in the West encounter each passing year; despite hyperbolic exclamations about technologies to revolutionize our lives, our relationships, and our world, even the state-of-the-art soon becomes quotidian. Perhaps humans adapt too well to change, to original and remarkable situations and devices. Because people adapt[1] so quickly, and with seeming ease and aplomb—we might even perceive societal pressure to do so as new technologies become embedded in, for example, our professions, like electronic mail[2]–emerging technologies do not appear to herald much more than a need to purchase them, or incorporate them into daily life. The most recent handheld computers (née cellular/mobile phones), packed with innovative features, become obsolete within a matter of years—if not months.

Our technologies teach us to expect such novelty from them, and they do not often disappoint in that regard. We learn from them to embrace modifications. More, we learn to seek out change lest we succumb to the boredom and monotony that results from engagement with the same old technologies, the same relationships we have already experienced. Somewhat counterintuitively, however, we often think that the technologies themselves will transform us and that we need only participate by, for instance, buying the product. The epigraph from Katherine Hayles (1999) reminds us that conceptions of the human should evoke ideas of heterogeneous entities, hybrid entities that depend on each other. To understand the human is to understand technologies: changes to the latter often require alterations to our own bodies, perceptions, and perspectives. The posthuman is embedded in a world of technologies, among other things. Discussions of agency or cognition, for instance, must account for these other things as co-constituting each other.

Philosophers of technology, then, have a particular responsibility. Just as “there is a place for specialization in philosophy”—like philosophy of technology—there is a need for persistent reflection on technological artifacts and processes themselves with an “eye on the whole” (Sellars, 1963, p. 3). One purpose of philosophy of technology is to connect the specifics (the micro)[3] with the broader social, economic, political and cultural tendencies and habits of our time (the macro).[4] Thus, in this dissertation, I explore what a philosophy of technology can, and should, account for in the creation, mediation and transfer of values to an epistemic community. In particular, I argue that our technologies, and the relationships we have with them, should compel us to reject essentialist visions of humans. We are hybrids, mixtures of many things. We should not axiomatically privilege humans over any “other,” whether nonhuman animals/life, the environment, or technology. That perspective of dominance masks our responsibility and co-dependence, and promotes an instrumental view of technologies that leads us away from discussing the technologies as producers, conveyors, and sites of value-formation.

How do, and should, we engage with our technologies, and how do technologies affect our relationships with other humans, animals, environments, and societies? Such broad and far-reaching questions occupied philosophers of technology like Jacques Ellul (1964), Martin Heidegger (1979), and Herbert Marcuse (1994); further, they remain as relevant today as they were in the last century. Our technologies have altered/enabled humans, relationships, environments, and just about every aspect/product of our existence; that seems a likely constant for the near future. Just as our devices need updates, so do our perspectives.

Philosophers of technology have an opportunity to help guide conversations and worldviews, and doing so will require engagement with broad publics, engineers, and scientists regarding the values we wish to promote for the future.[5] In this dissertation, I review works from a variety of philosophers of technology and investigate how they propose we act with, and in relation to, our technologies. Further, I also engage thinkers/philosophers who imagine the prospect of humans merging with technologies, like Ray Kurzweil (2005), to form some new creature/being. For my part, I will side with those for whom the future entails an acknowledgment of the mergers/amalgamations that have already taken place, particularly over the past century (Hayles, 1999, 2011). The latter two positions represent a variety of speculative philosophy of technology, what I will term ‘un-disciplined’ philosophy of technology (UPoT), and both offer—at times conflicting—paths and standpoints for how we should approach human-technology relations.

[1] Paul Ceruzzi (2005) makes an analogous point regarding technologies in our lives: we adapt to them. Humans do not simply control and manipulate technologies according to our needs. We begin to conceptualize our problems based on the technologies at our disposal, and this affects what we see as solutions.


[2] Even writing that phrase out, as opposed to ‘email,’ is jarring.


[3] Peter-Paul Verbeek (2005), for example, performs empirical research into particular technologies while attempting to maintain focus on macro conditions and situations. He examines the role technology plays in human existence and in the relation between humans and reality. He does so by analysing particular technologies. Classical philosophers of technology (see Chapter 2) overgeneralized technology and based their theories of human-technology relations upon a false determinism where technologies drove societies and humans. Contemporary philosophers of technology (see Chapter 3), on the other hand, do not imagine technology as a single “thing” because that makes invisible the different pieces that make up the whole—like the rubber, metal and wood of the early bicycles (Bijker, 1993, p. 118). Bijker (1993) argues for a blurring of social and technical divisions in part because it allows him to show the related aspects of each, as well as the inherently contingent character of technological development.

Through demonstrating the interpretative flexibility of a technical artifact, it is shown that an artifact can be understood as being constituted by social processes, rather than by purely technical ones. This seems to leave more latitude for alternatives in technical change. (p. 121)

[4] Nicholas Rescher (2006) offers further explication regarding metaphilosophy, including first principles—akin to maxims in moral philosophy of the type “always keep your promises” (p. 2). For Rescher, these principles have functional efficacy for philosophy. Philosophy’s mission is “to enable us to orient ourselves in thought and action, enabling us to get a clearer understanding of the big issues of our place and our prospects in a complex world that is not of our own making” (p. 2). Philosophers of technology, as specialist philosophers, have a part to play in such engagement, and it extends beyond analysing and describing the particulars of technologies. After separating out the particulars of the technologies themselves, we must re-form and re-mould the specifics to show how they connect back to larger phenomena and practices.

[5] No stranger to such public engagement, Martin Heidegger sought it out explicitly. His essay, “The Question Concerning Technology” (1979), developed out of a series of lectures he gave to wealthy Bremen businessmen in 1949 (Heidegger, 2012; Merwin, 2014). Although I do not advocate philosophers of technology exclusively targeting businesspeople, or even technologists, as the essential audiences for their work, philosophers of technology must account for them and their products as they both represent important actors effecting change for our present and future.

Self-Driving Systems as Opportunities for Engagement

Recent explanations and understandings of Self-Driving Vehicles (SDVs), instances of Self-Driving Systems (SDS), provide an example of one site for intervention by ‘un-disciplined’ philosophers of technology. In February 2016, the National Highway Traffic Safety Administration (NHTSA) issued a statement that will help shape debate over the development, introduction, and use of autonomous agents (machines, systems of technology) in the U.S. The letter, written to Google’s Self-Driving Car Project Director, Chris Urmson, outlines a preliminary definition of a vehicle’s driver (NHTSA, 2016). Google argues its SDVs have no need for a human to drive the vehicle. According to the NHTSA letter, Google argues

“that the SDS consistently will make the optimal decisions for the SDV occupants’ safety (as well as for pedestrians and other road users), [and] the company expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking, or turn signals, or providing human occupants with information about vehicle operation controlled entirely by the SDS, could be detrimental to safety because the human occupants could attempt to override the SDS’s decisions.”  (NHTSA, 2016)

Google claims, and the NHTSA largely accepts, that the SDS can make better driving decisions than a person. Thus, allowing a person to control these vehicles, in ways more significant than raising and lowering a window, perhaps, poses a high risk. Taking the human out of such positions of control reduces risk.

For ‘un-disciplined’ philosophers of technology, the NHTSA’s decision heralds a shift in narrative, a removal of the independent human agent as explicitly in control in driving situations. It represents an opportunity for posthumanists to engage the practical implications of what the epigraph from Hayles (1999) notes as “the posthuman’s collective heterogeneous quality” (p. 3). The SDS amounts to “distributed cognition located in disparate parts that may be in only tenuous communication with one another” (Hayles, 1999, pp. 3-4). Such an interpretation by the NHTSA is an acknowledgement, not a rupture, of the momentum introduced by previous technologies like antilock brakes, power steering, cruise control, air bags, and electronic stability control, and augmented by features like emergency braking, forward crash warning, and lane departure warnings (NHTSA, 2016). The significance of the NHTSA’s acknowledgement of SDS as drivers should not be underestimated. Although it may seem like a minor pronoun exchange, the move from “who drives” to “what drives” the vehicle has the potential to influence realms like healthcare, childcare, governance, and ethical/moral decision-making.

If an SDS can operate more safely and reliably than a human driver, car companies, and the U.S. Department of Transportation, should consider moving away from human-controlled vehicles. We should consider a shift toward vehicles that move people without requiring individual human operators to manipulate the vehicles’ controls. I see ‘un-disciplined’ philosophy of technology (UPoT) as intervening in such discussions. UPoT recognizes this move to autonomous vehicles as a harbinger of increasing automation, but also as derivative of past decisions regarding the governance of technologies. Incremental changes often go unnoticed until they pass a point where their impacts can no longer be ignored. As I will discuss in Chapter 2, classical philosophers of technology like Heidegger, Ellul, and Marcuse note such a shift in the twentieth century. They attempt to extract from specific instances of technology development and use (the micro) an understanding of broader patterns and implications for societies, economies, cultures and polities (the macro).

Examples like SDS should remind us that decisions about autonomy, independence, and agency belong to more than industry (Google in this example) and governments (here, the NHTSA). This is a debate about self-driving vehicles, but I think it also represents more than a particular instance of (systems of) technologies acting of their own accord. This particular case should demand public input because everyone in this country will be impacted by whatever decisions are made. Rather than simply reporting on what Google and the NHTSA negotiate, UPoT practitioners must find a way to enter discussions with the engineers and legislators to help shape the technologies and the policies that will accompany them. I am not convinced traditional philosophy programs train students to intervene in such ways, although Adam Briggle and Bob Frodeman at the University of North Texas do take steps in this direction with their Field Philosophy (Frodeman and Briggle, 2014). The “un-disciplined” philosophers of technology I want to promote engage in what Frodeman and Briggle (2016) would describe as “a motley collection of different tasks for different audiences, rather than the current two main tasks, writing for other philosophers and teaching.” They create, promote, and engage narratives (the macro). They critically engage with the lived experience of our world. Theirs is the philosophy of our century.


Frodeman, R., & Briggle, A. (2014). Socrates tenured: An introduction. Social Epistemology Review and Reply Collective. Retrieved from: http://social-epistemology.com/2014/08/11/socrates-tenured-an-introduction-robert-frodeman-and-adam-briggle/

Frodeman, R., & Briggle, A. (2016). Is anyone still reading? A second response to Maring. Retrieved from: http://social-epistemology.com/2016/03/21/is-anyone-still-reading-a-second-response-to-maring-adam-briggle-and-bob-frodeman/

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.