Sleeping and Waking

“Snoozers are, in fact, losers” – Maria Konnikova, The New Yorker, 10 December 2013. http://www.newyorker.com/tech/elements/snoozers-are-in-fact-losers

The time zone is a technical artifact. We have socially mandated wake times that correspond to day/night shifts, but these often seem at odds with our biologically optimal wake times. What would it be like if we eschewed our now-standard time schemes and found a way to make our work correspond with when we are naturally “most” awake? In the early 19th century the U.S.A. had 184 separate time zones, often determined by when the sun reached its zenith at a particular place. We no longer use the sun’s location in the sky in the same way to determine time, but should, or even could, we?

 

Standardization has certainly been the norm for centuries, for all sorts of measurements, but is standardization in our best interests? On a social, perhaps even global, level, the answer seems obviously yes. On an individual level, however, the answer might be no. How can we resolve this tension? A move away from standardizing models toward the local would certainly be difficult, but is there a future in which this kind of problem no longer arises?

Future Consequences

Sven Ove Hansson, 2011: “Coping with the Unpredictable Effects of Future Technologies,” Philosophy & Technology 24: 137-149.

Mere Possibility Arguments

Hansson offers a way out of ‘predicting the future’ while retaining the ability to make rational arguments about whether to plan for and adopt future technologies. He makes a case for ‘mere possibility arguments’ that appears similar to Fuller’s ‘proactionary principle’ and might well fit with it, as Hansson gives a five-step process for evaluating new technologies and their potential benefits and harms for humans and our world (sketched as a simple checklist after the list below):

1. Inventory: Finding symmetric arguments

2. Scientific Assessment: Specification; Refutation

3. Symmetry Tests: Test of opposite effects; Test of alternative causes

4. Evaluation: Novelty; Spatio-temporal unlimitedness; Interference with complex systems

5. Hypothetical Retrospection
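
For my own note-taking, here is one way to carry the five steps around as a simple checklist in code. The phrasing of the guiding prompts is mine, not Hansson’s; this is a rough paraphrase of the framework, not a formalization of his paper.

```python
# A rough checklist version of Hansson's five-step process.
# The prompt wording is my own paraphrase, not Hansson's text.

HANSSON_CHECKLIST = [
    ("Inventory", ["What symmetric arguments (possible benefits vs. "
                   "possible harms) can be made about this technology?"]),
    ("Scientific Assessment", ["Can the claimed effect be specified precisely?",
                               "Can it be refuted by current science?"]),
    ("Symmetry Tests", ["Could the same mechanism produce the opposite effect?",
                        "Could an alternative cause produce the same effect?"]),
    ("Evaluation", ["Is the technology genuinely novel?",
                    "Are its effects spatio-temporally unlimited?",
                    "Does it interfere with complex systems?"]),
    ("Hypothetical Retrospection", ["Looking back from each possible future, "
                                    "would this decision still seem defensible?"]),
]

# Walk a candidate technology through the steps in order.
for step, prompts in HANSSON_CHECKLIST:
    print(step)
    for prompt in prompts:
        print(f"  - {prompt}")
```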

Visioneering

In response to Mel’s 2nd point (in her email), that the idea of visioneering (Techno-enthusiast Visioneering or Societal Visioneering) might not be trapped by philosophical concerns, I would respond that the ideas proposed by TV writers (and SV writers as well) already begin from a philosophical position, or starting point. The techno-enthusiast has certain basic assumptions about the role technologies should play in societies, and there are values they espouse that are worth exploring. I think SE may have a place here (and here I think I am following Fuller and Bob Frodeman on the point that SE is a way to do philosophy). Social epistemologists could provide perspectives regarding the values promoted by the adoption of certain technologies–like more AI, privatizing space flight, prolonging human “life” (as an aside, I think we will need to expand, or at least refine, our understanding of “living” were humans to reach the point of the Singularity that Kurzweil envisions). In that sense, maybe TV and SV should be, as one of the commenters on Laura’s paper seemed to imply, part of a larger whole, as TV definitely has broad social implications were any of its suggestions adopted, and SV in large part relies on technological developments (desalination efforts being just one example).

 
Continuing on Mel’s 2nd point, how exactly might SE contribute to TV and SV? What can we as social epistemologists do in aid of these visioneering projects? One answer is that, through our own academic writing, we bring the ideas and concerns raised by TV/SV into broader conversations. For example, as artificial intelligence applications and programs spread deeper into our world (from GPS, to stock market trades, to our smartphone connections, to the algorithms that direct our entertainment: places to eat, music to listen to, movies to watch), the general public might be best served by a richer understanding of the ethical and value impacts these technologies have on us as humans. How we see the world through our technologies changes the world for us in substantial ways. How much privacy are we willing to sacrifice for more efficiency? Things as simple as grocery store reward cards, which track our purchases and recommend coupons that might save us money, also make that same information about us available to private companies that want to better track how their products are sold, to whom they are sold, and whether they can target their advertising more efficiently to reach certain consumers. On the surface, it all sounds benign, even to our advantage in the short run. Broader questions arise, however, about how much choice we leave ourselves if all our past purchases and buying proclivities are things algorithms ‘learn’ about us, shaping what they present to us as options. Saying that the internet provides seemingly unending choice is, to my mind, not exactly true: the choices are there if one knows how to look and how to get past the easy and most obvious choices presented first (which, not coincidentally, are surfaced in internet searches by algorithms that are locked away from our view and that we cannot control–Amazon and Google being two major players here).
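
To make the narrowing worry concrete, here is a minimal, hypothetical sketch. The catalog, the purchase history, and the `recommend` function are all invented for illustration; no real retailer’s system is this simple. The point it shows is structural: ranking by past purchases quietly buries unfamiliar options.

```python
# A toy recommender (all names and data invented for illustration):
# it only ever surfaces items similar to what it has seen us buy,
# so the option space shrinks toward our own past.

from collections import Counter

PURCHASE_HISTORY = ["jazz album", "jazz album", "blues album", "jazz album"]

CATALOG = {
    "jazz album": "jazz",
    "blues album": "blues",
    "opera album": "opera",
    "folk album": "folk",
}

def recommend(history, catalog, k=2):
    """Rank catalog items by how often we bought their genre before."""
    genre_counts = Counter(catalog[item] for item in history)
    # Items from never-purchased genres score 0 and sink to the bottom:
    # the algorithm 'learns' us, and the long tail quietly disappears.
    ranked = sorted(catalog, key=lambda item: genre_counts[catalog[item]],
                    reverse=True)
    return ranked[:k]

print(recommend(PURCHASE_HISTORY, CATALOG))  # ['jazz album', 'blues album']
```

Nothing in the sketch is malicious; the narrowing falls out of the ranking rule itself, which is precisely the point about the choices we never get shown.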
 
As a commenter on Laura’s paper observed, TV appears elitist because the drivers of these technologies require lots of funding. Public or private, the funding for their projects also steers them in certain directions. Do we want more public participation in the funding decisions that are made? Who is qualified to intervene? I think SE, as a form of philosophical inquiry, should be involved more closely in the design and implementation of broad projects like AI in cars that can drive themselves and large safety systems that follow algorithms to determine courses of action. In a way, I see algorithms as forms of control that can, and should, be influenced by more than a few elites. I do not think the TV elites mean to exert dominance or control, but the algorithm itself is a mechanism of control–a decision engine–that takes numerous inputs and provides an output without the need for human intervention beyond the initial programming. And here I begin to sound a bit like Langdon Winner (The Whale and the Reactor; “Do Artifacts Have Politics?”) and other philosophers of technology like Ellul, Marcuse, and even Borgmann in arguing that our technologies are not neutral.
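
To show what I mean by a decision engine, here is a deliberately toy sketch. Every input, threshold, and coefficient below is invented for illustration and drawn from no real vehicle system; what matters is the shape of the thing: once the rules are programmed, inputs become an action with no human in the loop.

```python
# A toy 'decision engine' (all thresholds and coefficients invented
# for illustration): numerous inputs in, one action out, and no human
# intervention after the initial programming.

def braking_decision(speed_mph: float, obstacle_distance_m: float,
                     road_is_wet: bool) -> str:
    """Return an action based solely on hard-coded rules."""
    stopping_margin = 2.0 if road_is_wet else 1.0  # a programmer's value choice
    if obstacle_distance_m < speed_mph * 0.1 * stopping_margin:
        return "brake hard"
    if obstacle_distance_m < speed_mph * 0.3 * stopping_margin:
        return "slow down"
    return "maintain speed"

# Whoever set these coefficients made the decisions in advance.
print(braking_decision(60.0, 5.0, road_is_wet=True))    # brake hard
print(braking_decision(30.0, 20.0, road_is_wet=False))  # maintain speed
```

The ‘politics’ Winner worries about live in those hard-coded coefficients, chosen once, at design time, by whoever wrote the program.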