Future Fundamentals of Philosophy of Technology; Or, Un-disciplined (Public) Philosophy of Technology

And yes, the following will sound like a manifesto.

Philosophy of technology needs speakers espousing a variety of normative positions, and these normative agendas should be fully elucidated in a manner comprehensible to more than academic audiences.


Once normative agendas have been explained and discussed, these same philosophers of technology must engage critics of their positions in order to further clarify them. The multiple positions on each topic should be presented by their own supporters, to ensure each receives its strongest reading and pronouncement (J. S. Mill, On Liberty, 1859, pp. 67-70). Martha Nussbaum also speaks to this idea.


A solely descriptive philosophy of technology is insufficient to help shape the direction of thought regarding human-technology relations.


Philosophers of technology must teach, and be taught, to engage audiences beyond other philosophers of technology, academics, and policy makers. Philosophers of technology have a social responsibility to broader publics that requires them to engage and to provide, at minimum, a normative agenda directing future thought and action regarding technologies, both developed and developing.


Academic and pedagogical curricula must be developed for teaching philosophy of technology to undergraduate and graduate students. Technological literacy?


Because technological development is neither deterministic nor teleological, all normative positions regarding human-technology relations must be held as tenable. They will therefore require future defense and explication when the social, economic, political, and philosophical criteria that serve as their base are altered by future conditions.


Suspension of judgment regarding human-technology relations shall be a last recourse, and any such suspension will have definite and explicit temporal limits.


From J. S. Mill (1859) On Liberty: pp. 96-97

I do not pretend that the most unlimited use of the freedom of enunciating all possible opinions would put an end to the evils of religious or philosophical sectarianism. Every truth which men of narrow capacity are in earnest about is sure to be asserted, inculcated, and in many ways even acted on, as if no other truth existed in the world, or at all events none that could limit or qualify the first. I acknowledge that the tendency of all opinions to become sectarian is not cured by the freest discussion, but is often heightened and exacerbated thereby; the truth which ought to have been, but was not, seen, being rejected all the more violently because proclaimed by persons regarded as opponents. But it is not on the impassioned partisan, it is on the calmer and more disinterested bystander that this collision of opinions works its salutary effect. Not the violent conflict between parts of the truth, but the quiet suppression of half of it, is the formidable evil; there is always hope when people are forced to listen to both sides; it is when they attend only to one that errors harden into prejudices, and truth itself ceases to have the effect of truth, by being exaggerated into falsehood. And since there are few mental attributes more rare than that judicial faculty which can sit in intelligent judgment between two sides of a question of which only one is represented by an advocate before it, truth has no chance but in proportion as every side of it, every opinion which embodies any fraction of the truth, not only finds advocates, but is so advocated as to be listened to. (pp. 96-7)

Philosophy of Science and Technology Studies

Steve Fuller’s 2006 Philosophy of Science and Technology Studies served an important function in my first year studying Science and Technology Studies (STS). First, it gave voice to some sentiments I had regarding the place of philosophy in STS. It also provided an extended, if somewhat intellectually daunting, overview of the history of STS into the early 2000s. I quickly became lost in Fuller’s references to positivist tendencies, 19th century sociologists, science wars debates, and a host of other thinkers and themes. In short, I was just starting in STS and had no real bearings on what had come before I arrived.

As I go back to the text five years later, I realize first that, though still somewhat daunting, Fuller’s review of intellectual thought relating to knowledge formation, practices, and theory makes much more sense now that I have had more contact with the epochs and writers he describes. I also see that many of the questions I began asking my first year still tug at me: Why is philosophy not as strong a component of STS as I wish it to be? Where, outside of activism (and I in no way wish to belittle that important and crucial function), do normative claims in STS arise?

Fuller (2006) argues that STS practitioners, and texts, provoke us “to engage in theory rather than philosophy. ‘Theory’ consists of several possible frameworks for doing STS research, whereas ‘philosophy’ constitutes a more basic inquiry that asks embarrassing questions about the relative merits of particular frameworks vis-a-vis the reasons we have for wanting to do STS research in the first place” (p. 5). Yet philosophers and philosophical questions are at the core of why STS emerged as a discipline (at least at Virginia Tech, my current institution) in the late 1970s and early 1980s. In 2009, at the 4S conference (Society for Social Studies of Science), I presented a talk, “Toward a Philosophy of Technology Studies,” claiming that STS needed such a philosophy. One commenter remarked that STS already has a philosophy: Actor-Network Theory. Though I do not mean to imply that everyone would agree with that commenter, the remark struck me as odd for two reasons: 1. no one in the audience disagreed with him; and 2. ANT is a theory, perhaps even a methodology, but not a philosophy.

In the years since that 4S conference, I have alternated between disillusionment with STS and its lack of direct philosophical orientation, and hope that there may be a way to bring normative discussions about the creation, mediation, and transfer of knowledge/values back into STS discussions–if you follow, as I do, the idea that such discussions are not already part of STS. Fuller’s Social Epistemology (SE), not Alvin Goldman’s Analytic Social Epistemology (ASE), opened up a way for me to bring in normative discussions of STS issues. Unfortunately, as Fuller (2006, p. 8) notes, his philosophy of STS and SE do not explicitly deal with technology studies. Readings in philosophy of technology introduced me to STS in the first place, so I was filled with hope–that there was a topic for me in SE with few people working on it, and thus a place for my ideas–and hesitation–since, with few people working on it, where would I find basis and support for my ideas?

in progress…

‘particle physics’ is taped before a live studio audience

what do we learn when experiments, like those at cern, are taped to be broadcast (like the documentary ‘particle’)? scientists get even more nervous than they might performing in a lab in front of only their colleagues. they feel as if they are being observed too closely (they should talk to performers: theatre, music, sport, academics(?)). attention, cameras, and audience affect their science (again, should we say the same about synchronous and asynchronous online classes?). they want to do experiments once or twice, test runs, to practice before anyone watches them. understandable. but should that happen?

scientists are increasingly connected on scales transcending campus, country, culture, language, and paradigm. yet, many of these scientists will hear about first discoveries/results on twitter or facebook. and is that bad, given that otherwise they might have had to wait for in-depth blog posts or, much longer, journal articles? the ‘data’ coming in, so heralded, lauded and fawned over, comes in very fast. so fast that individuals and teams cannot understand it–nor would they even have a chance to ‘observe’ it by looking at the recorded data in anything like what we would now consider a ‘reasonable’ amount of time (years, perhaps, not weeks or months). and so algorithms to interpret the data are born. but the algorithms are a collection of current ideas. do the programs change themselves, adapting, without human intervention? human intervention would require reanalysis of the data–something no one likely wants when so much more information keeps coming in from new experiments.
and what is all the data about, actually? the scientists talk of nature being revealed: nature revealing itself. confirming, again, what i would consider a bane of much modern philosophy: the subject-object dichotomy. many scientists have a realism forged through faith: that there is a real explanation, if only we become subtle and attentive enough to listen. that there are laws governing, confining, reassuringly buttressing us, an edge to lean upon. belief. desire for a regularity yet hoping for more to discover and explain. test tube buccaneers. particle pirates. because they need rules/boundaries to push against. something to take from the unsuspecting. 
the scientists would have to become media savvy. many likely don’t want that. so what? does it make the more social and outgoing of their number into better scientists? because they are exposed to more scientists from different cultures and groups? we could go into the idea that pluriculture is not just preferable, but actually the reality. that multiple cultures do not just exist independently of each other. the pluriculture, where separate cultures interact with each other, weaving technologies, ideas, religions, economic schemes, values, etc. 
the cern experiment itself is fascinating in its social scope–countries, languages, cultures, values, paradigms, ethics. how much are they interacting? how much are they collaborating? how much should they be? is it only the data that are uniting them? without the experiment, would they not talk? 
i wonder if there are more technologies at work on the project than there are people. likely so. yet humans made ways to integrate them (sometimes at least), enabling them to perform seamlessly. from the size of a building to smaller than an eye can see, and then into a different plane–programming and computational–that is hidden (black-boxed and made opaque) from vision in a way that may, perhaps, mimic the microchip.
wild. sometimes things just align. just saw this article and line from Dr. Ben Goldacre:
“The world of public science is changing fast. Anyone can engage with the public, and this presents new challenges, but huge opportunities. There is now a vast army of nerds who are popularising science online and in pubs, theatres, cafes and more. They are often able to do pop science better than the big names of mainstream media. I look forward to working with the BSA to give this nerd army the respect, support, and love it deserves.”

sleeping and waking

“Snoozers are, in fact, losers”– Maria Konnikova, The New Yorker 10 December 2013. http://www.newyorker.com/tech/elements/snoozers-are-in-fact-losers

The time zone is a technical artifact. We have socially mandated wake times that correspond to day/night shifts, but these often seem at odds with our biologically optimal wake times. What would it be like if we eschewed our now-standard time schemes and found a way to make our work correspond with when we are naturally “most” awake? In the early 19th century the U.S.A. had 184 separate time zones, often determined by when the sun reached its zenith at a particular place. We no longer use the sun’s location in the sky in the same way to determine time, but should, or even could, we?
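As a back-of-the-envelope illustration of sun-based timekeeping (my own sketch, not drawn from any of the sources above): mean solar time shifts by one hour for every 15 degrees of longitude, so a town’s “sun time” rarely lines up with its standard time zone. The longitude below is an assumption chosen purely for illustration.

```python
# Rough sketch of "sun time": the mean solar offset from UTC implied by
# longitude alone. Ignores the equation of time (the roughly +/- 15 minute
# seasonal wobble in when the sun actually crosses the meridian).

def solar_offset_hours(longitude_deg: float) -> float:
    """Hours ahead of (+) or behind (-) UTC at mean solar time.
    The Earth turns 360 degrees in 24 hours, so 15 degrees = 1 hour."""
    return longitude_deg / 15.0

# Example (assumed value): a town near longitude -80.4 keeps Eastern
# Standard Time (UTC-5), but its mean solar time runs about 5.36 hours
# behind UTC -- sun time and clock time disagree by over 20 minutes.
print(round(solar_offset_hours(-80.4), 2))
```

The point of the sketch is only that local solar time varies continuously with longitude, which is exactly why every town once kept its own time and why standardization flattened those differences.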


Standardization has certainly been the norm for centuries, for all sorts of measurements, but is standardization in our best interests? On a social, perhaps even global, level the answer seems obviously yes. On an individual level, however, the answer might be no. How can we resolve this issue? A move toward the local and away from standardizing models would certainly be difficult, but is there a future in which this tension would no longer arise?

Future Consequences

Sven Ove Hansson, 2011: “Coping with the unpredictable effects of future technologies,” Philosophy & Technology 24: 137-149.

mere possibility arguments.

Hansson offers a way out of ‘predicting the future’ while retaining the ability to make rational arguments about whether to plan for and adopt future technologies. He sets forth a claim for ‘mere possibility arguments’ that appear similar to Fuller’s ‘proactionary principle’ and might well fit with it, as Hansson gives a five-step process for evaluating new technologies and their potential benefits and harms for humans and our world.

1. Inventory: Finding symmetric arguments

2. Scientific Assessment: Specification; Refutation

3. Symmetry Tests: Test of opposite effects; Test of alternative causes

4. Evaluation: Novelty; Spatio-temporal unlimitedness; Interference with complex systems

5. Hypothetical Retrospection


In response to Mel’s 2nd point (in her email), that the idea of visioneering (Techno-enthusiast Visioneering or Societal Visioneering) might not be trapped by philosophical concerns, I would respond that the ideas proposed by TV writers (and SV writers as well) already begin from a philosophical position, or starting point. The techno-enthusiast has certain basic assumptions about the role technologies should play in societies, and there are values they espouse that are worth exploring. I think SE may have a place here (and I think I am following Fuller and Bob Frodeman on the point that SE is a way to do philosophy). Social epistemologists could provide perspectives regarding the values promoted by the adoption of certain technologies–like more AI, privatizing space flight, or prolonging human “life” (as an aside, I think we will need to expand, or at least refine, our understanding of “living” were humans to reach the point of Singularity that Kurzweil envisions). In that sense, maybe TV and SV should be, as one of the commenters on Laura’s paper seemed to imply, parts of a larger whole, as TV definitely has broad social implications were any of its suggestions adopted, and SV in large part relies on technological developments (desalination efforts comprising just one example).

Continuing on Mel’s 2nd point, how exactly might SE contribute to TV and SV? What can we as social epistemologists do in aid of these visioneering projects? I think one answer would be that through our own academic writing, we bring the ideas and concerns raised by TV/SV into broader conversations. For example, as artificial intelligence applications and programs spread deeper into our world (from GPS, to stock market trades, to our smartphone connections, to the algorithms that direct our entertainment: places to eat, music to listen to, movies to watch), the general publics might be best served by a richer understanding of the ethical and value impacts these technologies have on us as humans. How we see the world through our technologies changes the world for us in substantial ways. How much privacy are we willing to sacrifice for more efficiency? Things as simple as grocery store reward cards, which track our purchases and recommend coupons that might save us money, also make that same information about us available to private companies that want to better track how their products are sold, to whom they are sold, and whether they can target their advertising more efficiently to reach certain consumers. On the surface, it all sounds benign, even to our advantage in the short run. Broader questions arise, however, about how much choice we leave ourselves if all our past purchases and buying proclivities are things algorithms ‘learn’ about us and use to shape what they present to us as options. Saying that the internet provides seemingly unending choice is, to my mind, not exactly true: the choices are there if one knows how to look and how to get past the easy and most obvious choices presented first (which, coincidentally, are served up in internet searches by algorithms that are locked away from our view and that we cannot control–Amazon and Google being two major players here).
As a commenter on Laura’s paper observed, TV appears elitist because the drivers of these technologies require lots of funding. Public or private, the funding for their projects also steers them in certain directions. Do we want more public participation in the funding decisions that are made? Who is qualified to intervene? I think SE, as a form of philosophical inquiry, should be involved more closely in the design and implementation of broad projects like AI in cars that can drive themselves and large safety systems that follow algorithms to determine courses of action. In a way, I see algorithms as forms of control that can and should be influenced by more than a few elites. I do not think the TV elites mean to exert dominance or control, but the algorithm itself is a mechanism of control–a decision engine–that takes numerous inputs and provides an output without the need for human intervention beyond initial programming. And here I begin to sound a bit like Langdon Winner (The Whale and the Reactor, “Do Artifacts Have Politics?”) and other philosophers of technology like Ellul, Marcuse, and even Borgmann by arguing that our technologies are not neutral.

Haven’t you read these articles yet?

“Theories and Figures of Technical Mediation,” by Steven Dorrestijn



“Expanding Mediation Theory,” by Peter-Paul Verbeek (2012); Foundations of Science, November 2012, Volume 17, Issue 4, pp. 391-395


The Moral Status of Technical Artifacts, edited by Peter Kroes and Peter-Paul Verbeek, 2014


The Cognitive Turn in Sociology

Advances in Social Theory and Methodology: Toward an Integration of Micro- and Macrosociologies, edited by K. D. Knorr-Cetina and A. V. Cicourel
Reviewed by Steve Fuller
Michael Sacasas–interesting stuff from here, but here are two: http://thefrailestthing.com/2014/02/15/technology-that-word-you-keep-using-i-do-not-think-it-means/,
And since Sacasas keeps coming up, a brief paragraph (and my response) from “Where have all the public intellectuals gone?”

“One last thought: It may be that the craving for public intellectuals is a kind of nostalgic longing for a time when we could reasonably imagine that even though we ourselves couldn’t get an intellectual grip on the complexities of modern society, out there, somewhere, there were smart people at the controls. These mythical public intellectuals we long for were those whose cultural function was to reassure us with their calm, accessible, and smart talk that people who knew what they were doing were steering the ship.[1] I suspect the unnerving truth is that the trade-off for the benefits of an unfathomably complex technological society is the disquieting reality that understanding is now beyond the reach of any intellectual, public or otherwise.”


I think Sacasas makes a fine point here: the nostalgic longing is for a mythical past. There still are smart people at the controls, but diviners they are not. They, past and present, were and are beset by faults and foibles, predilections and priorities–and while they may have a set of general interests in mind when they make their proclamations, that set will always be shaped by national, cultural, linguistic, economic, and/or political interests. With that said, I am not sure the trade-off he mentions works: when was there ever a time when anyone understood the full complexity of a society–high technology or no–and its associated workings? Perhaps “public intellectuals” should remain plural–to the extent that there will be public intellectuals, teaming up with others might (partially) offset the weight of explaining it all. No one needs to have all the answers, but the right group might fare better.


Adam Thierer’s technology blog: http://techliberation.com/2014/04/29/defining-technology/


“Hacking Feenberg,” by Mark Coeckelbergh, symploke 20 (1-2)


