James Temple, at recode.net, put out an article at the end of 2014 profiling Cynthia Breazeal, the influential developer of robotics for home/personal use (as opposed to, say, industrial bots or space explorers like those found on Mars). Breazeal, working at MIT Media Lab's Personal Robots Group (though now on leave as she works on her Jibo, a bot that might be consumer-ready by the end of 2015), seeks to create robots that engage directly with humans in everyday capacities. To do so, she first had to examine how humans interact with each other: her idea is that rather than humans adapting to machines (learning to use a computer mouse or touchscreen, for instance), machines should learn the cues that allow humans to get along successfully with each other and emulate those cues to foster improved human-robot relationships.
As Temple's article describes, the rising number of ill or elderly in the U.S. (not to ignore the rest of the world, but to focus on the U.S. for now) will necessitate increasing numbers of caregivers in the coming years. Assuming there is not a parallel increase in the number of nurses and other elderly care providers, there is an opportunity for sophisticated robots to help provide assistance for millions of people. As Breazeal's work emphasizes, however, simply placing a robot in a home or hospital will not necessarily help anyone. People must want to interact with the robot. They must trust the robot. My own father, now in his early 70s and something of a novice with most things computer, might not exactly welcome a robot into his home, much less trust it. But Breazeal, and other researchers like her, are gambling that he would if the robot could understand social cues, converse with him in everyday language, and provide him with useful services.
In many ways, the above description of Breazeal's work makes it sound as if she is proposing a sophisticated, robotic personal assistant. Initially, I think of Apple's Siri, Google Now, and Microsoft's Cortana when I imagine such an assistant. What Breazeal and others are creating, however, are corporeal manifestations of the digital personal assistants found in many mobile operating systems. The experience of asking my phone a question is halting. Perhaps part of the problem is how we think of phones/handheld computers as objects: we do not talk to them; we talk to other people through them. As robots gain the ability to react to human social cues (tone of voice, body movements, facial gestures) and to emulate more and more of those cues (Breazeal's earlier project, Kismet, would lower its eyes if spoken to in harsh tones), the likelihood of humans welcoming them into personal/private spaces like our homes and hospitals increases. Of course, a key component will be trust: do we trust these machines, and how far should we trust them?
Robots that lack appendages seem somewhat benign, but if they have microphones and video cameras, these machines can be sites and sources of surveillance. Most smartphones have the same capabilities, with the addition of GPS localization, as the Brookings Institution's Benjamin Wittes and Jane Chong point out in their 2014 report "Our Cyborg Future: Law and Policy Implications," yet it seems very few of us are concerned about the loss of privacy when the potential benefits seem so high: we can search for restaurants near us, upload pictures and videos instantly to social media sites, and keep enormous amounts of personal information like contacts, credit and banking information, and online search history at our fingertips. Why are the ethical implications of such devices not a higher priority for people in the U.S.?
One reason might be that smartphones lack the anthropomorphic characteristics of the bots Breazeal and others are developing. Though the fears associated with such sophisticated machines as HAL-9000 in Arthur C. Clarke's 2001 appear not to attach to similarly sophisticated devices like smartphones, many of us likely still approach the robots of Isaac Asimov's I, Robot and James Cameron's The Terminator with palpable trepidation. My computer's webcam stares back at me as I type, and my phone's GPS pings in my pocket, but I assume these are either not "on" or can do little to harm me. I have no very good reason to assume these things, but I also have no very good reason (aside from those gleaned from science fiction stories and films) to assume that a robot would harm me either. Still, there seems to be more potential for harm in anthropomorphic machines: their "eyes" might watch and follow me, blink at me, and show evidence that the machines are somehow "aware" of my presence in ways that the webcam and phone GPS do not. Should I be more wary of such robots/machines than I am of laptop computers and smartphones? Though they do not speak directly to this point, Wittes and Chong seem to imply that I should be just as wary of my "smart" devices as I would be of anthropomorphic bots. From privacy and surveillance standpoints, the potential harms are quite similar.
Strangely, to me at least, one area where machines are becoming increasingly pervasive is medicine. Beyond the myriad scanning and diagnostic tools in use in the U.S., a growing number of hospitals are incorporating robot-assisted surgical tools that replace the human hand with mechanical devices. NYU-Langone offers over 50 such procedures at their medical centers. Laparoscopic surgery, in which small incisions allow the insertion of cameras and other tools into the body (as opposed to surgery that requires larger incisions and more "opening up" of the body), has become increasingly common in recent decades, and using robots to perform more such tasks appears an obvious application of increasingly precise machines. Surgical robots appear to surpass humans in a number of areas, from making tiny incisions to performing finely dexterous movements that are difficult for the human hand. Of course, this does not imply that the best surgeons are inferior to their robot counterparts; indeed, human surgeons still remotely control the mechanical appendages of the machines. Instead, there is a fundamental assumption that most people do not have access to the best surgeons: there are simply not enough sufficiently trained and skilled human surgeons to go around. The lack of skilled practitioners creates an opening for such robotic applications. We begin to trust machines as much as, perhaps even more than, humans when it comes to medical procedures because the machines have finer motor skills than their human counterparts (yes, I use the term "motor skills" with purpose here: the motor is a machine of recent advent, within the past few hundred years, yet it is telling that we use that language to describe the functioning of human appendages).
If it is the case that humans already place great trust in machines (using them to operate on us, and generally to maintain and safeguard human wellbeing, seems an impressive example of such trust), then bringing Breazeal's bots into homes and hospitals no longer sounds that far-fetched an idea. Given the relative paucity of caregivers compared to the rapidly aging population of the U.S., it would even seem wise to promote the use of such bots. Dystopian futures, like those imagined in The Terminator movies, need not be the only possible futures for human-robot interactions. Instead, as Breazeal herself points out, another science fiction future is also possible: that of the robots in the Star Wars films. R2-D2 and C-3PO are in many ways slaves (another topic for another day), but they also seem autonomous and, most importantly, friendly and cooperative with humans. The goal, I imagine, would be to build bots that do not menace humans but work with us and alongside us. Of course, that same set of films has a very different lesson when it comes to some cyborgs, e.g., Darth Vader.