Visioneering

In response to Mel’s second point (in her email), that the idea of visioneering (whether Techno-enthusiast Visioneering or Societal Visioneering) might not be trapped by philosophical concerns, I would respond that the ideas proposed by TV writers (and SV writers as well) already begin from a philosophical position, or starting point. The techno-enthusiast holds certain basic assumptions about the role technologies should play in societies, and espouses values that are worth exploring. I think SE may have a place here (and here I think I am following Fuller and Bob Frodeman on the point that SE is a way to do philosophy). Social epistemologists could provide perspectives on the values promoted by the adoption of certain technologies, such as more AI, privatized space flight, or prolonged human “life.” (As an aside, I think we will need to expand, or at least refine, our understanding of “living” were humans to reach the point of Singularity that Kurzweil envisions.) In that sense, maybe TV and SV should be, as one of the commenters on Laura’s paper seemed to imply, parts of a larger whole: TV has broad social implications were any of its suggestions adopted, and SV in large part relies on technological developments (desalination efforts being just one example).

 
Continuing on Mel’s second point, how exactly might SE contribute to TV and SV? What can we as social epistemologists do in aid of these visioneering projects? One answer would be that, through our own academic writing, we bring the ideas and concerns raised by TV/SV into broader conversations. For example, as artificial intelligence applications and programs spread deeper into our world (from GPS, to stock market trades, to our smartphone connections, to the algorithms that direct our entertainment: places to eat, music to listen to, movies to watch), the general public might be best served by a richer understanding of the ethical and value impacts these technologies have on us as humans. How we see the world through our technologies changes the world for us in substantial ways.

How much privacy are we willing to sacrifice for more efficiency? Things as simple as grocery store reward cards, which track our purchases and recommend coupons that might save us money, also make that same information available to private companies that want to better track how their products are sold, to whom they are sold, and whether they can target their advertising more efficiently to reach certain consumers. On the surface, it all sounds benign, even to our advantage in the short run. Broader questions arise, however, about how much choice we leave ourselves if our past purchases and buying proclivities are what algorithms ‘learn’ about us and what shapes the options they present. Saying that the internet provides seemingly unending choice is, to my mind, not exactly true. The choices are there if one knows how to look and how to get past the easiest and most obvious options first, options presented to us in internet searches by algorithms that are locked away from our view and that we cannot control (Amazon and Google being two major players here).
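To make that narrowing effect concrete, here is a deliberately toy sketch, my own illustration with invented items and tags, not how Amazon’s or Google’s actual systems work: when candidate items are ranked by similarity to past purchases and only the top few are shown, dissimilar options remain in the catalog but never surface.

```python
# Toy recommender: rank unseen items by tag overlap with past purchases,
# then show only the top k. Items unlike the history never appear, which
# is the narrowing of choice described above. All names/tags are made up.
from collections import Counter

CATALOG = {
    "thriller novel":  {"fiction", "suspense"},
    "sci-fi novel":    {"fiction", "scifi"},
    "mystery novel":   {"fiction", "suspense"},
    "cookbook":        {"food", "howto"},
    "gardening guide": {"howto", "outdoors"},
}

def recommend(history, k=2):
    """Score each unseen item by how many of its tags the user has bought before."""
    seen_tags = Counter(tag for item in history for tag in CATALOG[item])
    scores = {
        name: sum(seen_tags[tag] for tag in tags)
        for name, tags in CATALOG.items()
        if name not in history
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["thriller novel"]))
# ['mystery novel', 'sci-fi novel'] -- the cookbook and gardening guide
# exist in the catalog but never appear among the options shown.
```

The point is not that any one ranking is sinister; it is that the cutoff and the similarity measure silently decide which options we ever get to consider.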
 
As a commenter on Laura’s paper observed, TV appears elitist because the projects these technologists drive require substantial funding. Public or private, the funding for their projects also steers them in certain directions. Do we want more public participation in the funding decisions that are made? Who is qualified to intervene? I think SE, as a form of philosophical inquiry, should be involved more closely in the design and implementation of broad projects like self-driving cars and large safety systems that follow algorithms to determine courses of action. In a way, I see algorithms as forms of control that can and should be influenced by more than a few elites. I do not think the TV elites mean to exert dominance or control; rather, the algorithm itself is a mechanism of control, a decision engine that takes numerous inputs and provides an output without the need for human intervention beyond the initial programming. And here I begin to sound a bit like Langdon Winner (The Whale and the Reactor; “Do Artifacts Have Politics?”) and other philosophers of technology like Ellul, Marcuse, and even Borgmann in arguing that our technologies are not neutral.
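To illustrate the “decision engine” point, here is a minimal hypothetical sketch, my own toy example with invented thresholds rather than anything drawn from a real vehicle system: once the rule is programmed, every subsequent input is converted into an action with no human in the loop, and the designers’ value judgments live on silently in the chosen constants.

```python
# Toy sketch of an algorithm as a "decision engine": inputs go in, an action
# comes out, and no one is consulted after the initial programming. The
# thresholds are invented for illustration; they encode the designers'
# value choices about acceptable risk.

def brake_decision(obstacle_distance_m: float, speed_mps: float) -> str:
    """Return an action for a toy collision-avoidance system."""
    # Assumed hard-braking deceleration of 7 m/s^2 (a design-time value choice).
    stopping_distance = speed_mps ** 2 / (2 * 7.0)
    if obstacle_distance_m < stopping_distance:
        return "emergency brake"   # the machine acts; no one intervenes
    if obstacle_distance_m < 2 * stopping_distance:
        return "warn and slow"
    return "continue"

print(brake_decision(obstacle_distance_m=40.0, speed_mps=20.0))  # 'warn and slow'
```

Nothing in that fragment is malicious; the control it exerts is simply built into its constants, which is, I take it, close to Winner’s point about artifacts having politics.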