A Philosophy of STS

A philosophy of STS needs to:
1. Set normative standards that define both the research and the appropriate methods.
2. Provide goals (temporally relative?) for scholarship and interaction.
3. Teach its practitioners how to engage and what to engage with; the purpose is to be critical and constructive rather than merely critical.

Because STS scholars represent a variety of disciplines (history, philosophy, policy and sociology, among others), the discipline needs standards for what research should look like and how it should be performed. A philosophy of STS would seek answers to such questions as: What is the goal of micro case studies? How should these studies impact other research (and if the answer is that the studies have no relevance outside the sample group, they should not be pursued)? How does STS research differ from social-scientific and scientific research? Regarding the methods that should be pursued, a philosophy of STS will evaluate dominant methodologies currently practiced in the field, such as Actor-Network Theory (ANT), and either propose alternatives or improve upon current practice (the assumption is that ANT has advantageous aspects but could be improved). What is the role of empirical research in STS, and how should its results be evaluated? The philosophy of STS is not a method, although it will provide criteria for what a method in STS should include.
Second, a philosophy of STS will examine past and current scholarship in the field to determine what its goals were and are. These goals will be evaluated on such criteria as reception, efficacy and adaptability. In terms of reception, how have scholars, within STS and within the disciplines it investigates, reacted to or incorporated the findings of STS? In terms of efficacy, what has resulted from STS research, publication and intervention (these three areas might serve to define “STS scholarship”)? Do the results match the goals? If not, why not? Finally, how adaptable are the goals of STS? Do they lose significance if abstracted beyond their initial context: the case study, the policy, the historical moment? An assumption here is that STS scholarship should not be so narrow as to serve only the idiosyncratic interests of its practitioners. Instead, STS scholarship should have broad applications and impact on society, just as its objects of study, science and technology, do.
Finally, because STS takes place in multiple forums, its practitioners need a varied skill set that will allow them to participate in multiple discourses, including public debate, academic writing and various governing activities (from local to national and international levels). Furthermore, STS should be part of multiple curricula in the academy, including the natural and social sciences, engineering and the humanities. This means that university students in these areas of study should take courses in STS as part of their required classes. The skills they learn will help them communicate with others in their area and with the broader public (this last term needs much more explanation; I am inclined to use Philip Wander and Dennis Jaehne’s definitions on p. 219 of their article “Prospects for a Rhetoric of Science”).
A philosophy of STS, therefore, needs history, political science, rhetoric and sociology in order to achieve its purposes. STS scholarship does not aim at objectivity; it should not be “scientized.” However, the products of STS scholarship should have broad reach, and this means its arguments must be crafted and publicized with clear intentions. As D. McCloskey (1990) has emphasized regarding economists, STS practitioners need training in metaphor and storytelling. If STS scholarship proposes policy reform, then its arguments need more than fact and logic; the arguments must appeal to multiple audiences on multiple levels. STS practitioners are experts, but their expertise is broad rather than narrow.

The Continuing Allure of Technological Determinism

In many STS (Science and Technology Studies, Science and Technology in Society) circles, the notion of technological determinism is often gently dismissed as an incomplete understanding of human and world development. With a little more education, explanation, and experience, anyone can see the flaws in technological determinism and steer away from that line of thinking. Why, then, do I read a physics associate professor, Rhett Allain, espousing an unabashedly deterministic view of technological and human progress in early January of 2015? Has Wired magazine somehow corralled all the technological determinists (the faithful) into a bin and forced them to write articles about the future of our world as decided by current and emerging technologies? If not, what else is going on that would allow such highly educated individuals to feel drawn to a theory of human and world development that so brazenly flouts the work of STS scholars over the last thirty years or more? As Sally Wyatt (2008) muses: “Technological determinism is dead; long live technological determinism.” Technological determinism persists, Wyatt claims, because it provides an explanation of human-technology relations that makes sense to many people and offers predictive power (I am struggling to refrain from making connections to religions here, but they, too, make superficial sense at least). Langdon Winner (2004) might call writers of such deterministic work “technological somnambulists”: they are sleepwalking through life, not fully aware of the processes and artifacts that constitute their surroundings. Whatever explanation we want to give for why technological determinism persists, the point is that it continues as a mantra (for instance, see Ray Kurzweil, Nick Bostrom and other transhumanists) for coping with our technologically infused present and hopeful future.

Technological determinism is the idea that technologies determine the social, political, economic, environmental, psychological(?), artistic/creative(?), humanistic(?) directions and valuations of life on this planet and beyond. That is a rough, and poor, definition, but hopefully the main idea shines through: technologies determine how humans live, die and interact (with each other, with animals and, generally, with the world around us). Technologies make some futures possible and unmake others. Increasingly, technologies determine what it means to be human; posthumanists and transhumanists are currently wading through such territory. For those whose specific areas of study are the sciences and technologies, technological determinism is not merely an incorrect view: its adherents should be rooted out and their propositions refuted and disparaged. The facile claims and explanations of teleological progress espoused by technological determinists have detrimental and flattening effects on the audiences they reach. Audiences will come away feeling both awe and exasperation. Though you likely hear the hyperbole in these typed words, let me state it flatly: most STS scholars likely do not have such strong feelings about technological determinists. I, however, think they should.

Technological determinism offers (often) simple predictions based on extrapolations of gathered data. This is the process of induction, and that same process is, despite Sir Karl Popper’s exhortations to the contrary (well, maybe he would accept that it is the process but insist that it should not be), the skeleton of modern science. Science would not lead humanity astray. Science, and scientists, only want what is best for humans and humanity, the world, and our environments. Strangely, Allain sees a flaw here: what if corporations control the development and distribution of technologies? His example is Taco Bell creating the first fully automated restaurant and the commodification of everything that would ensue (that he uses a conditional here is amusing: corporations do develop and control technologies, and these technologies do have strong impacts on economies, individuals, cultures, etc.). I find his hypothetical situation strange because, though he paints it as a potentially dystopian future, he also does not see much that could stop it. And that might be the thing that strikes me as so odd about determinists (social or technological): they stand on the shore and see the massive wave miles out to sea. They describe the wave and the forces that must have pushed it, compelled it to advance on their position. They marvel at its majesty and complexity, its capacity for destruction (and creation, the creation that will come after the destruction). From that destruction they imagine a new world emerging (though they do not state it, they must also imagine the death that would be needed to sustain a world where half the population would be unemployable because robots had taken over the manufacturing, service, transportation, and even creative sectors), a world that dwarfs our present in terms of efficiency, profit and happiness. Their optimism is so infectious that it is hard not to stand with them on that shore, aware of yet unconcerned about the power that approaches.
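To make the inductive move concrete, here is a minimal sketch, in Python, of the kind of naive trend extrapolation that deterministic forecasts lean on. The numbers and the fit_line helper are my own illustrative assumptions, not anything drawn from Allain's article or from real shipment data.

```python
# A minimal sketch of the naive induction behind deterministic forecasts:
# fit a trend to past observations and project it forward, as if nothing
# (politics, markets, accidents, choices) could bend the curve.
# All data points below are invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

years = [2010, 2011, 2012, 2013, 2014]   # observed past
robots = [1.2, 1.5, 1.9, 2.4, 3.0]       # hypothetical "millions of units shipped"

a, b = fit_line(years, robots)
for future_year in (2020, 2030, 2050):
    print(future_year, round(a + b * future_year, 1))

# The arithmetic is sound; the leap from "the trend so far" to "the future
# must follow" is exactly the inductive move Popper warned about.
```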

Perhaps the determinists are right. The tide is already higher than we think.

Allain, R. (2015). The robotification of society is coming. Wired. http://www.wired.com/2015/01/robotification-society-coming/
Winner, L. (2004). Technology as Forms of Life. In D. Kaplan (Ed.), Readings in the Philosophy of Technology (pp. 104-113). Oxford: Rowman & Littlefield.
Wyatt, S. (2008). Technological determinism is dead; long live technological determinism. In E. Hackett (Ed.), The Handbook of Science and Technology Studies (pp. 165-180). Cambridge, MA: The MIT Press.

Which spaces, which robots?

James Temple, at recode.net, published an article at the end of 2014 profiling Cynthia Breazeal, the influential developer of robotics for home and personal use (as opposed to, say, industrial bots or space explorers like those found on Mars). Breazeal, working at the MIT Media Lab’s Personal Robots Group (though now on leave as she works on Jibo, a bot that might be consumer-ready by the end of 2015), seeks to create robots that engage directly with humans in everyday capacities. To do so, she first had to examine how humans interact with each other: her idea is that rather than humans adapting to machines (learning to use a computer mouse or touchscreen, for instance), machines should learn the cues that allow humans to get along successfully with each other and emulate those cues to foster improved human-robot relationships.

As Temple’s article describes, the rising number of ill or elderly people in the U.S. (not to ignore the rest of the world, but to focus on the U.S. for now) will necessitate increasing numbers of caregivers in the coming years. Assuming there is not a parallel increase in the number of nurses and other elderly care providers, there is an opportunity for sophisticated robots to help provide assistance to millions of people. As Breazeal’s work emphasizes, however, simply placing a robot in a home or hospital will not necessarily help anyone. People must want to interact with the robot. They must trust the robot. My own father, now in his early 70s and somewhat of a novice with most things computer, might not exactly welcome a robot into his home, much less trust it. But Breazeal, and other researchers like her, are gambling that he would if the robot could understand social cues, converse with him in everyday language, and provide him with useful services.

In many ways, the above description makes it sound as if Breazeal is proposing a sophisticated, robotic personal assistant. Initially, I think of Apple’s Siri, Google Now, and Microsoft’s Cortana when I imagine such an assistant. What Breazeal and others are creating, however, are corporeal manifestations of the digital personal assistant found in many phone operating systems. The experience of asking my phone a question is halting. Perhaps part of the problem is how we think of phones and handheld computers as objects: we do not talk to them; we talk to other people through them. As robots gain the ability to react to human social cues (tone of voice, body movements, facial gestures) and to emulate more and more of those cues (Breazeal’s earlier project, Kismet, would lower its eyes if spoken to in harsh tones), the likelihood of humans welcoming them into personal and private spaces like our homes and hospitals increases. Of course, a key component will be trust: do we trust these machines, and how far should we trust them?
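To make that cue-and-response loop concrete, here is a purely hypothetical sketch of mapping a detected vocal tone to a legible social behavior, of the sort Kismet's lowered eyes exemplify. The feature names, thresholds, and action labels are my own placeholders, not Breazeal's or Kismet's actual architecture.

```python
# Illustrative only: classify the tone of an utterance, then choose a
# readable social response. A real system would use learned models over
# audio and vision, not these toy rules.

from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    pitch_variance: float   # stand-ins for real prosodic features
    loudness: float

def classify_tone(u: Utterance) -> str:
    """Toy rule-based affect classifier."""
    if u.loudness > 0.8 and u.pitch_variance < 0.3:
        return "harsh"
    if u.pitch_variance > 0.7:
        return "excited"
    return "neutral"

def respond(tone: str) -> str:
    """Map detected affect to a social cue the human can read at a glance."""
    responses = {
        "harsh": "lower_gaze",      # the Kismet-like averted-eyes response
        "excited": "perk_up",
        "neutral": "maintain_gaze",
    }
    return responses.get(tone, "maintain_gaze")

print(respond(classify_tone(Utterance("Stop that!", 0.2, 0.9))))  # -> lower_gaze
```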

Robots that lack appendages seem somewhat benign, but if they have microphones and video cameras, these machines can be sites and sources of surveillance. Most smartphones have the same capabilities, with the addition of GPS localization, as the Brookings Institution’s Benjamin Wittes and Jane Chong point out in their 2014 report “Our Cyborg Future: Law and Policy Implications.” Yet it seems very few of us are concerned about the loss of privacy when the potential benefits seem so high: we can search for restaurants near us, upload pictures and videos instantly to social media sites, and keep enormous amounts of personal information (contacts, credit and banking information, online search history) at our fingertips. Why are the ethical implications of such devices not a higher priority for people in the U.S.?

One reason might be that smartphones lack the anthropomorphic characteristics of the bots Breazeal and others are developing. Though the fears associated with such sophisticated machines as HAL-9000 in Arthur C. Clarke’s 2001 seem not to attach to similarly sophisticated devices like smartphones, many of us likely still approach the robots of Isaac Asimov’s I, Robot and James Cameron’s The Terminator with palpable trepidation. My computer’s webcam stares back at me as I type, and my phone’s GPS pings in my pocket, but I assume these are either not “on” or can do little to harm me. I have no very good reason to assume these things, but I also have no very good reasons (aside from those gleaned from science fiction stories and films) to assume that a robot would harm me either. Still, there seems to be more potential for harm in anthropomorphic machines: their “eyes” might watch and follow me, blink at me, and show evidence that the machines are somehow “aware” of my presence in ways that the webcam and phone GPS do not. Should I be more wary of such robots and machines than I am of laptop computers and smartphones? Though they do not speak directly to this point, Wittes and Chong seem to imply that I should be just as wary of my “smart” devices as I would be of anthropomorphic bots. From privacy and surveillance standpoints, the potential harms are quite similar.

Strangely, to me at least, one area where machines are becoming increasingly pervasive and present is medicine. Beyond the myriad scanning and diagnostic tools in use in the U.S., a growing number of hospitals are incorporating robot-assisted surgical tools that replace the human hand with mechanical devices. NYU Langone offers over 50 such procedures at its medical centers. Laparoscopic surgery, in which small incisions allow cameras and other tools to be inserted into the body, as opposed to surgery that requires larger incisions and more “opening up” of the patient, has become increasingly common in recent decades, and the use of robots to perform more tasks appears an obvious application of increasingly precise machines. Surgical robots appear to have greater capabilities than humans in a number of areas, from making tiny incisions to executing finely dexterous movements that are difficult for the human hand. Of course, this does not imply that the best surgeons are inferior to their robot counterparts; indeed, human surgeons still remotely control the mechanical appendages of the machines. Instead, there is a fundamental assumption that most people do not have access to the best surgeons: there are simply not enough sufficiently trained and skilled human surgeons to go around. The lack of skilled practitioners creates an opening for such robotic applications. We begin to trust machines as much as, perhaps even more than, humans when it comes to medical procedures because the machines have finer motor skills than their human counterparts (yes, I use the term “motor skills” with purpose here: the motor is a machine of recent advent, within the past few hundred years, yet it is telling that we use that language to describe the functioning of human appendages).

If it is the case that humans already place great trust in machines (using them to operate on us, and generally to maintain and safeguard human wellbeing, seems an impressive example of such trust), then bringing Breazeal’s bots into homes and hospitals no longer sounds like such a far-fetched idea. Given the relative paucity of caregivers compared to the aging population of the U.S., it would even seem wise to promote the use of such bots. Dystopian futures, like those imagined in The Terminator movies, need not be the only possible futures for human-robot interactions. Instead, as Breazeal points out herself, another science fiction future is also possible: that of the robots in the Star Wars films. R2-D2 and C-3PO are in many ways slaves (another topic for another day), but they also seem autonomous and, most importantly, friendly and cooperative with humans. The goal, I imagine, would be to build bots that do not menace humans but work with us and alongside us. Of course, that same set of films has a very different lesson when it comes to some cyborgs, e.g., Darth Vader.