One of these things is not like the others

Intuitively, anthropocentrism and dualism make perfect sense: we distinguish the human as the subject, the knower, and we then locate everything else, all non-humans, as objects of inquiry, observation and study. Humans do this sorting, but rather than recognize the significance of the act of compiling things into categories each time we create such categories, we simply get on with the act of categorizing. We are so used to sorting and labeling that we forget that sorting and labeling—distinguishing this from that—solely serves a human purpose. From our earliest lessons, we are taught to distinguish unlike things and place them in different locations—if not physically, at least cognitively. Think of the children’s song in the television program Sesame Street: “one of these things is not like the others; one of these things just doesn’t belong.” The viewer has no physical access to the objects on the screen, but she can imagine them as separated until the child on the screen does the physical sorting herself. We train ourselves, from an early age, to make distinctions between physical objects and between concepts: the corporeality of the things matters little. The process of distinguishing becomes the hallmark of learning and thus of thinking.

Updating Philosophy

Our world of modern technologies requires updates to ontology, ethical theories and policies. Technologies undergo modifications—software updates, hardware redesigns—to improve their performance and to fit better with other technologies in a particular system, whether technological, economic, social or political. Based, in part, on the phenomenological experiences of users (which could be humans or other technologies) with a given technology, these updates reflect the need for technologies to be repeatedly tested and redesigned. At first glance, this analogy between ontology, ethics and policies, on the one hand, and technologies, on the other, might seem farfetched. However, there are five reasons why the analogy works well, and attention to shifts in human-technology relations will require philosophy of technology—with its focus on those relations—to reach wider audiences and make normative assessments of emerging technologies.

First, although ethical norms and policies—created by humans—appear distinct from technologies like smartphone apps or computer operating systems, both are purportedly designed for the betterment of societies and individuals. Technologies, like policies and moral codes, serve people and help direct their actions. They facilitate communication and interaction among individuals and groups, as well as with the world around us. They serve as reminders of our obligations to ourselves, our communities and our world. As a field of inquiry, bioethics reminds us that our technological capacities have, and will continue to have, transformative impacts on the human. Manipulating genes, cognitive abilities and biological functions constitutes an important area of research, but not only because of its potential benefits to humans. As George Annas (2009, “The Man on the Moon”) points out, since discovering the structure of DNA, scientists and technologists have sought ever-wider applications for their findings (p. 232). Many of these technological applications involve changes to the human that necessitate a re-imagining of what it means to be human. Altering DNA—what Annas argues has been mislabeled “gene therapy” and should instead be called “gene transfer experiments” (p. 238)—and other forms of human augmentation that move beyond “treatment” and into the realm of “extension” demand public scrutiny and debate because at stake is what it means to be human now and in the future.

Whether we wish to pursue or limit research on potentially species-altering technologies should not be left to scientists or technologists alone, as the impacts of these shifts are global concerns. Such technologies are not simply new devices, like the latest iteration of a computer or smartphone; they represent fundamental—even if incremental—changes to the human and should be carefully scrutinized before adoption. As Annas (2009) argues,

This is not to say that changing the nature of humanity is always criminal, only that no individual scientist (or corporation or country) has the social or moral warrant to endanger humanity, including altering humans in a way that might endanger the species. Performing species-endangering experiments in the absence of social warrant, democratically developed, can properly be considered a terrorist act. . . . Altering the human species in a way that predictably endangers it should require a worldwide discussion and debate, followed by a vote in an institution representative of the world’s population. . . . It should also require a deep and wide-ranging discussion of our future and what kind of people we want to be, what kind of world we want to live in, and how we can protect universal human rights based on human dignity and democratic principles. (pp. 238-9)

When we as humans and societies ignore the broad scope of scientific and technological developments, we do more than exercise the freedom to focus on ourselves, our communities and our quotidian lives. We remain ignorant of the implications and applications of technologies that might fundamentally alter our humanity. Speculative ethics of emerging technologies (Michelfelder, 2011; Roache, 2008) may look like academics taking science fiction seriously, but it also seeks goals akin to those outlined by Annas. Attending to the ethical, legal and social implications and applications of emerging technologies amounts to considering what kind of future we want to live in and what kind of people we want to populate that future with. These arguments are no more merely academic than discussions of the ethics we practice in our daily lives and of how humans should consider and treat each other. Our scientific and technological capacities have enabled us to make world-changing artifacts like nuclear bombs, genetically modified foods and antibacterial medicines. We have no problem seeing such creations as spectacularly transformative, but perhaps that is because they do not require us to take serious steps in re-imagining ourselves. The potentials of cognitive enhancement and gene alteration, though still in early stages of research, should concern us in similar ways because, though they appear focused on individuals, their effects will reach far beyond single humans or cultures. With these latter technologies, we are not simply updating ourselves as we would our computer or phone’s operating system—though of course that is a seductive metaphor because it turns complexities into banalities, permitting us all to ignore the import of such transformations.[1] Instead, we might be creating something new that requires a wholesale shift in our conceptions of humanity, and such changes should not be undertaken without circumspect analysis.

As Annas reminds us, “novelty is not progress, and technique cannot substitute for life’s meaning and purpose” (2009, p. 234). This statement runs counter to technological determinist views and promotes a constructivist view of technological change and impact, one that empowers lay publics, as well as experts, to contemplate the meanings we ascribe to ideas like humanism, personhood, and progress. Philosophers of technology attending to the “good life” in our current age[2] must find ways to make their ideas accessible—in the senses of understandable as well as available outside of expensive academic journals—timely, and synthetic—speaking to multiple audiences and drawing from a variety of disciplines—if they wish to be more than a sub-discipline of academic philosophy that reaches limited epistemic communities. Taking the democratic development of technologies seriously requires, just as in a political democracy, an informed public that has the means to engage in deliberations over which technologies we ought to develop and why. An un-disciplined philosophy of technology, un-limited by traditional academic philosophy’s style and content, must evaluate trends, developments and concrete cases as they occur—and as they may occur. This work entails moving from the ‘ideal situations’ of traditional philosophy into messy, often unclear scenarios whose best outcomes are equally murky. In short, practitioners of philosophy of technology cannot wait to make ‘end of the day’ pronouncements as more traditional philosophers may.

Classical philosophers of technology, like Martin Heidegger, Jacques Ellul, and Herbert Marcuse, produced texts that assessed the trends they envisioned in human-technology relations. They transcended disciplines by taking technology as a theme that had implications and applications for philosophy, politics, economics and cultures. I argue that their work engaged ideas and themes that we would now label speculative ethics (Michelfelder, 2011), and that they did so in ways that required their readers to examine the human-technology relations in their lives and communities, cultures and countries, and that prompted them to make normative evaluations of such relations.[3] The push within philosophy of technology, like philosophy of science before it, to specialize has left a gap that un-disciplined philosophers of technology must fill: what kind of future will our technological developments entail, and how can we direct our research and development to help usher in the kind of future we wish to live in? Bioethics, Machine Ethics and Transhumanism, three areas of research that push beyond traditional academic disciplinary boundaries, take seriously the scientific and technological developments that have the potential to transform the human and the natural world. Un-disciplined philosophers of technology, like Ray Kurzweil, Eric Drexler, Kevin Kelly, and Jaron Lanier, take up the kinds of themes Heidegger, Ellul, and Marcuse considered in the twentieth century, and they utilize current and speculative technological developments as guides for their views. Like those classical philosophers of technology, current un-disciplined philosophers of technology also take normative stands on what the ‘good life’ of our present and future might entail. They present their claims in ways that invite evaluation by lay and expert publics alike, and so provide the background information necessary for engagement by a wide variety of publics.

Un-disciplined philosophy of technology is a meta-philosophy that invites audiences to consider current moral philosophy, ontologies, epistemologies, and metaphysics—found in our sciences and technologies—and ask whether or not these systems should hold for the future. It also asks audiences to re-imagine public participation in such projects. Rather than wait for some future Aristotle, Immanuel Kant, Jeremy Bentham or John Stuart Mill to unify values into a moral system, perhaps we need epistemic communities to do the work that might once have been left to individual thinkers. If we wish to move beyond the Enlightenment ontology, epistemology and metaphysics of individual, a-contextual, self-sufficient knowers, we need a meta-philosophy that reflects the advances made in communication technologies that permit global participation and analysis. Our media remind us that we live in a “globalized world” or “global village”; our philosophy should reflect that present and future as well. Feminist ethics and ethics of care emphasize just such a connectedness, and un-disciplined philosophy of technology can apply these themes to human-technology relations. Through her work in Machine Ethics, Susan Leigh Anderson raises an ontological issue that STS scholar and philosopher of technology Bruno Latour has long emphasized: we must re-imagine our notion of actor to include the non-human. Indeed, if Anderson is right, we might need to imagine intelligent machines as agents or patients, or simply move past such labels and create some other term (as Donna Haraway did with her 1985 “A Cyborg Manifesto”).

Though perhaps too modest a goal, un-disciplined philosophy of technology must engage audiences that span continents and cultures, creating new kinds of epistemic communities comprised of lay and expert publics. Returning to the strained metaphor of updated ethics as similar to technology updates: moral theory, ontology and epistemology need to be refreshed and reimagined. Constructivism reminds us of the inter-connectedness of humans, economies, polities and technologies; our moral theories, epistemology and ontology demand no less. Having a plan for the kind of future we want would enable us to direct our scientific and technological research toward that future, but in ways that acknowledge that the voices we might once have ignored or suppressed would also have to inhabit that future. The image of a lone figure offering such direction is no longer tenable (if it ever were); we are, and have the potential to be, far too connected to imagine that one, or even a few, individuals should provide sole direction. In this sense, recent work by the Social Epistemology Review and Reply Collective makes an important first step in connecting practitioners from various backgrounds, cultures and disciplines. Though still an ‘expert’ community in that it consists of graduate students and academically-minded professionals, many of its members seek to make impacts outside of academia. Cabrera, Davis and Orozco (2015), for instance, discuss the potential of making visioneering assessment practical for more than just academic audiences. As N. Katherine Hayles (2012) reminds us, however, we also need to reimagine how we promulgate such work: traditional printed manuscripts may reach academic audiences, but they are far from making inroads into non-academic circles (pp. 3-4).[4]

STS, with its attention to under-represented groups, ideas and practices, has always had the goal of critiquing boundaries and limitations on perspectives that might have served in the past but will not do for future investigation. The next step, as Fuller’s Social Epistemology proclaims, involves providing normative directions for organizing the pursuit of knowledge. In that sense, the un-disciplined philosophy of technology I propose here draws from traditional academic disciplines and infuses them with inter- and trans-disciplinary perspectives in order to invigorate forms of investigation that have explicit normative goals. Karl Popper’s Open Society, with critical rationalism at its base, provides a methodology for un-disciplined philosophy of technology. For instance, rather than seeking confirmation of a given moral theory based on whether or not it historically ‘worked,’ un-disciplined philosophers of technology should be free to speculate as to what kind of ontology, epistemology and moral theory humans will require in the future, because the human-technology relationships in that future may diverge significantly from anything that preceded them. Perhaps, then, we need to imagine not what is humanly possible, but what our technologies will allow. For instance, what kind of person, government, science, economy or society would present and emerging technologies enable, and would they be preferable to what we have now? How does our increasing hybridity—in terms of how our technologies change how we think and act (Clark, 2008; Carr, 2011; Hayles, 2012)—alter our cognitive capacities, our relationships with others (human and non-human), and our understanding of risks? STS scholars routinely perform such inquiry; now STS needs more explicit philosophical underpinnings to ground its work even further. Un-disciplined philosophy of technology provides just such an opportunity.

Bioethics, Machine Ethics, and Post- and Transhumanism present ontological, epistemological, metaphysical and moral questions, and they also provide tentative answers. For all their insights and speculations, however, they remain fringe fields (Bioethics, perhaps, less so). Un-disciplined philosophers of technology like Kelly, Kurzweil and Drexler, similar to science fiction writers, engage in expansive thought experiments about the kinds of futures that technological advances would permit and even promote. Kurzweil’s Singularity and Drexler’s atomically precise machines, thought far-fetched by many, radically imagine future humans and societies, worlds and environments. At the base of their visions, however, are ontologies, epistemologies, metaphysics and moral theories that demand attention if we continue on our incrementally advancing technological paths. Just as critics question the need for speculative ethics of emerging technologies (Nordmann, 2007; Nordmann and Rip, 2009), so, too, could we argue that such speculation might be fine for science fiction but not for philosophy, as the futures Drexler and Kurzweil imagine still seem some way off. Though we certainly need ethical evaluation of current and developing technologies, as Nordmann and Nordmann and Rip argue, speculation about possible futures provides us with forward-looking theories that imagine humanity as more connected with our technologies than we are at present. Peter-Paul Verbeek’s theory of technological mediation (2005, 2008, 2011) and Philip Brey’s anticipatory ethics for emerging technology (2012) address the need for a philosophy of technology to account for the potential scenarios that developing technologies present us with, but they do not go far enough because they limit their theories to technologies currently in design phases. Further, they do not analyze the future scenarios that might develop if the ideas of Kurzweil and Drexler are adopted and implemented. In short, though Brey and Verbeek develop future-oriented theories, they do not go far enough in imagining the widespread ethical changes that would be needed if such radical visions came to fruition.

Second, moral codes and policies, in democratic states, are malleable and fallible. Moral codes and legal policies depend on circumstances, and as those circumstances change, the codes and policies can adapt. There is an assumption that, though these codes and policies are in place (this is not always true of moral theories), they are not permanent and remain open to critical revision or outright revocation. Technologies share these characteristics as well: technologies are often updated yearly—phones, cars, GPS devices, computers, snowboards—in terms of hardware, or more often in terms of software. Technologies are usually assumed to be imperfect—hence all the updates.

Third, once implemented, both require prodigious effort to remove from use. Like technologies, ethical theories and legal policies develop momentum. Through adoption and use, both gain support among citizens/users to the point that revoking them requires substantial effort (cognitive, political, economic, engineering). The Enlightenment brought change to theories of humanity—from moral and biological standpoints—requiring a reimagining of identity and worth, yet it took quite a while for these new ideals and views to permeate societies, in some cases hundreds of years, as with gender and race. Similarly, some technologies can be, though with great difficulty, removed from use. If your technology requires an operating system that is no longer supported by newer iterations of the ‘same’ device, that early device becomes something else (my old iPhone is now an iPod Touch). Chemical pesticides, like DDT, though effective for their designed use, had/have disastrous implications for environments, individuals and communities. Such technologies can be removed from use, but the longer they are used, and continue to be efficacious for specific purposes, the more difficult their removal becomes.

Fourth, they must be compatible with each other. In some sense, we might say that they develop together. Because values are embedded in technological design—like the values of dependability, repeatability and efficiency—technologies, to be adopted at least, must match the values of the communities they purport to serve. Humans value communication, and our phones, messaging apps and even GPS devices serve to connect us to other people and the world around us. They facilitate efficient, if at times thin, communication (a quick text or phone call may pale in comparison to some face-to-face verbal and non-verbal communication). As societies accept this ‘thinning’ of communication (to avoid fetishizing face-to-face communication too strongly, I concede that much face-to-face communication is/was also thin), our communication technologies adapt to our supposed preferences. As a substitute for depth and breadth, our communication technologies provide us with increasing speed and quantity—a social media post can reach thousands all over the world with just a few keystrokes, whereas analog (face-to-face) communication cannot. Albert Borgmann’s (1984, Technology and the Character of Contemporary Life) device paradigm—that effective technologies are ubiquitous, safe, instantaneous and easy—illustrates this tendency of technologies to instantiate values that humans prize. Because humans value autonomy, our technologies permit us to perform more functions without the aid of others. I can communicate my ideas on a blog, with the potential to be read by anyone with an internet connection, instead of sending my writing, video or sound recording to some publishing house, TV/video broadcaster or radio station. Policy develops to facilitate such forms of communication, and our ethical theories are interpreted to explain how these technologies augment the values we prize. Susan Leigh Anderson makes this point in her work on Machine Ethics and metaethics.

Fifth, both are amenable to automation. As Susan Leigh Anderson argues in relation to Machine Ethics, programming a machine to act on ethical theories like consequentialism or deontology is possible and, in some cases, necessary (driverless cars, for example, need rules for dealing with other automated cars, human drivers, pedestrians, etc.). Anderson claims that using machines as ethical advisors, without granting them moral agent or even moral patient status, could help humans respond in ‘preferred’ ways to a variety of situations with important ethical impacts.
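To make the automation claim concrete, consider a minimal sketch, in Python, of what a consequentialist ‘ethical advisor’ might look like. The scenario, the utility weights and the names (Outcome, expected_utility, advise) are hypothetical illustrations, not Anderson’s model; the sketch only shows that a consequentialist rule (recommend the action with the highest probability-weighted welfare) can be written as an explicit procedure.

```python
# A minimal, illustrative sketch of a consequentialist "ethical advisor."
# All actions, outcomes and utility weights are hypothetical; this is one
# way the idea could be encoded, not Anderson's own model.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    probability: float  # estimated likelihood of this outcome occurring
    utility: float      # signed welfare score (harms are negative)

def expected_utility(outcomes):
    """The consequentialist core: sum of probability-weighted utilities."""
    return sum(o.probability * o.utility for o in outcomes)

def advise(actions):
    """Recommend the action whose predicted consequences score highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# A toy driving dilemma: brake hard or swerve?
actions = {
    "brake": [
        Outcome("stops short of pedestrian", 0.9, 10.0),
        Outcome("rear-end collision", 0.1, -5.0),
    ],
    "swerve": [
        Outcome("avoids pedestrian cleanly", 0.7, 10.0),
        Outcome("strikes roadside barrier", 0.3, -8.0),
    ],
}

print(advise(actions))  # -> "brake" (expected utility 8.5 vs. 4.6)
```

A deontological advisor would differ in structure rather than in kind: instead of maximizing expected utility, it would check candidate actions against an ordered set of duties or prohibitions. That both theories can be rendered as explicit procedures is part of what makes the advisor proposal plausible.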

[1] The title of Steve Fuller’s (2011) Humanity 2.0 evokes a similar metaphor of humans simply “upgrading” as a computer’s operating system might. As bait to gain an audience or pique their interest, I see its rhetorical significance. However, whether human enhancement involves making minor adjustments to a known quantity (in technological parlance, updated iterations receive labels like 1.4, 2.5.2, etc.) or represents a more fundamental shift in our way of thinking deserves greater attention. Incremental changes have the tendency to lull us into thinking that nothing truly transformative is occurring. For instance, some species of animals are currently dying off, but there are so many more in the world that losing a few here and there seems to be of no consequence. When we look back after 25, 50 or 100 years, however, we will have a much different perspective.

[2] Cf. Technology and the Good Life? (2000), E. Higgs, A. Light and D. Strong (eds).

[3] As a kind of hold-over from classical philosophy of technology that has resisted strong specialization, Albert Borgmann serves as a bridge between classical and contemporary philosophy of technology. His (1984) Technology and the Character of Contemporary Life extrapolates from specific technologies to broader technological, social and ethical trends.

[4] Hayles emphasizes that printed works in the humanities largely go un-cited, perhaps even unread, yet printed work (monographs and journal articles) remains the standard if one wants university tenure. She proposes a different sort of rating system, one that acknowledges audience, outreach and influence. If such a system were put in place, “one might make a strong argument for taking into account well-written, well-researched blogs that have audiences in the thousands or hundreds of thousands, in contrast to print books and articles that have audiences in the dozens or low hundreds—if that” (2012, p. 4). Wittkower, Selinger and Rush (2014) make a similar argument for how to increase the importance and readership of philosophy of technology.