Humans as Technology

Philosopher Joseph Pitt considers technology. That sounds strange at first, but I think it's a decent description of his work. He has a book titled Thinking about Technology, so one might find it rather obvious that he considers technology. The meaning I take from his considerations depends in part on his sometimes maligned definition of technology: humanity at work (see Thinking about Technology for further explication).

His former student and current colleague, Ashley Shew, quibbled with his definition in her dissertation. She explored nonhuman animal uses of technology–a fascinating topic in its own right. From them both I have learned the usefulness and limitations of broad definitions. From their perspectives, I have begun crafting a definition of technology that also includes elements of posthumanism: technology is life at work.

If you were among those counting “humanity at work” as too vague, broad, inclusive, or unhelpful, thinking of technology as life at work will, at best, cause you to roll your eyes. Pitt and Shew might be right there with you. Even “humanity at work,” however, takes a step in the direction I propose: definitions of technology should be vague.

An initial move in traditional philosophy involves definition and demarcation (see Nicholas Rescher’s Philosophical Dialectics for a condensed introduction to metaphilosophy). Thus, we would need to define human and work for Pitt’s definition (and life for the one I propose, but more on that later). Setting aside work as the least problematic of the two (I could be wrong), humanity becomes the key term to define and demarcate. I find it difficult, however, to separate humans and technologies neatly (hence the title of this post, humans as technology). For that, and a plethora of other reasons, I am likely not a traditional philosopher (of technology). Because of that, my dissertation has focused on what I describe as “un-disciplined” philosophy of technology and what that label entails.

As I mentioned, posthumanism attracts my attention–particularly as Francesca Ferrando uses and explains the term (a wonderful paper by her here). In her work, she notes that humans and technologies emerge and develop together. Kevin Kelly’s What Technology Wants makes a similar claim. He describes the evolution of technology and argues that humans could not be what we are today without technologies. We humans rely on technologies. Many other animals do as well. Of course, such claims depend on how permissive you are regarding definitions of technology: language as a technology, for instance. Or, provocatively, as Ferrando writes, evolution as “a technology of existence” (2013, p. 17).

Saying that humans rely on technology is something of a truism. Thinking of humans as technology, on the other hand, requires much more explication. In doing so, I may even talk myself out of this phrase and into another one. I am happy about that. My ideas need refreshing/updating (two words I use deliberately, especially as they are imagined in relation to computers and software).

For now, I continue this dialogue with myself and an invented interlocutor: you (where you could be human or nonhuman–an artificial intelligence scanning/crawling this page and making some sense of the language I use). Of course, I also welcome readers who might wish to comment and/or debate these ideas, and I hope they reach out to me via this page or email: williamdavis@vt.edu. I’ll be back tomorrow.

Another test for artificial intelligence

An A.I. would have an answer to the question, “What do you want to do today?”

It seems a child (that can speak somewhat coherently) can answer this question. We might quibble about whether the answer makes sense, is feasible, etc. We might also object that an A.I. could be programmed to answer this question. To the latter objection, it seems relevant to ask how much metaphorical programming the child (or any human deemed ‘intelligent’) has received.

T.V. shows, movies, adverts, talk with playmates, etc., have an effect on the child’s expressed preference of what she wants to do that day. She might simply mimic her friend, sibling, or parent, a character in a story (book, t.v. show, etc.), or something she heard someone or something else declare. Would we consider the child intelligent no matter what she says? Would silence (no response) have a meaning as well?

The objection that the A.I. could be programmed to answer the question certainly seems relevant. It seems to imply, however, that programming makes the programmed less (or even not) intelligent. Judging machine/robotic intelligence based on human intelligence has at least two problems. First, the machine is not human, so why would it need to mimic human ability to be intelligent? Second, humans have varying levels of intelligence, so what is the minimum level of intelligence a machine would need to have? How could we explicitly set such a limit, given the amount of variation displayed by humans (not to mention other animals)?

I do not claim that this test is perfect, but I think it points to a perspective that does not require an A.I. to mimic human intelligence in all respects. Much remains to be fleshed out–and many objections remain to the test beyond those I have considered–but it serves as a thought experiment that challenges how we might usually evaluate artificial intelligence. I hope it sparks better questions, and better tests, than the one I mention.