Minds, Others

Peter Godfrey-Smith, a philosopher of science, recently published a book on consciousness and intelligence entitled Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. The work stands out to me, initially, because it is accessible to non-philosophers. I will not claim it reaches broad audiences of all stripes. Its initial chapters chronicle human understandings of the origins of life, decked out in biological and geological vernacular finery. If, say, you read the book in spurts and forget how to say ‘cnidarians’ (despite a lovely parenthetical guide to its pronunciation a few pages earlier), you might get stuck for a few seconds, or minutes, as you attempt to sound it out, seeming rather odd to anyone else in the room as you trip over the consonant cluster that begins the word.

I have, clearly, just laid bare my own shortcomings when it comes to knowledge of such periods as the Ediacaran and Cambrian. Nevertheless, Godfrey-Smith’s text tempts me to continue reading because his subjects are so distinct from me and so fascinating: creatures of the sea like jellyfish, octopuses, and squid. Though I have no serious diving experience like Godfrey-Smith, I can easily imagine myself on the sea floor outside the hovel of an intrepid octopus as it (she, he?) extends a tentacle my way. The author adroitly ushers the reader under the waves with him. On these excursions, we encounter creatures with qualities at once foreign (tentacles with suckers) and familiar (curious as kittens).

Near the end of the second chapter, Godfrey-Smith casually announces: “From this point on, the mind evolved around other minds,” as if he were reminding a snorkeling novice that, once in the water, it’s wise to put on the mask to see. His claim does not refer to apes or other land-based creatures–the minds we might first imagine–for those arrive hundreds of millions of years after this point. Instead, he refers to animals of the Cambrian period that dwelt in the sea–where, we imagine, all life on this planet began. Up to this point in the text, our author has journeyed into the distant, distant past of our planet. He has hypothesized about the earliest ancestors of all life and, crucially, attempted to delineate the rough origins of predation. Predators matter because, until their emergence, peaceful coexistence likely dominated. After their emergence, life had to adapt to other life in unprecedented ways.

Imagining a time when minds began being affected by other minds caused me to jump to thoughts of our own era. All around us are machines and programs that adapt to humans and other animals. We have created artificial intelligences, and now we must learn to adapt to them. They, of course, are programmed (and, increasingly, are programming themselves) to adapt to us, but we ignore the other half of the equation at our own peril. Not noticing how bots influence stock trading, which videos appear in your feed, what stories are presented to you online (and, thus, what stories exist for you), and how you commute and travel might leave you unable to adapt to them properly. We are in a feedback loop with them. They ‘know’ it: they track us, entice us to click on images, videos, and stories, and generally seek to engage us. Do we know it? If so, what ought we do about it?

Importantly, we humans have no code for how to adapt properly to machines, bots, programs, and artificial intelligences. If Godfrey-Smith’s text goes as I think it might (I am only three chapters in as of today), I hope he comes back to a tantalizing line from chapter 1: octopuses might be the closest we ever come to alien intelligence. Because octopuses seem to exhibit the kind of intelligence a dog might have, they are unparalleled sources for imagining what an extra-terrestrial creature might be like (following Nagel’s notion that “there is something it is like to be an X”). If by alien Godfrey-Smith means exotic, or incongruous, then his thesis is both striking and timely.

We humans stand at a precipice (edges, geologically speaking, can last hundreds or thousands of years, so that hopefully deflates any hyperbolic tone in this sentence). On the other side is an existence thoroughly saturated with other minds, other intelligences. We ought to construct some malleable, tentative guidelines for how to engage these intelligences. If you have recently raised, or are currently raising, children in the U.S., you likely struggle with how to handle the content available to the kiddos, how to regulate their intake. Of course, they learn from watching us (so do our machines? octopuses?), so I hope we are also struggling with the exact same questions. No lasting rules will emerge anytime soon because the situation is fluid; nevertheless, without consciously considering how we want to be with these new intelligences, we will fall back on old habits and instincts that will not serve us on the other side of that precipice.

Ends of Undergraduate Research

To what ends ought undergraduate research at our college of health sciences aim? Our students should certainly learn research methods, research practices, styles of reasoning, research ethics, and means of publishing/communicating their findings. Though many students will not become research scientists themselves, they ought to have experience with the methods, practices, pitfalls, biases, publicity, and promotion of rigorous inquiry. In their future roles as professional health practitioners, they must parse the evidence, findings, and recommendations of researchers—in the sciences, social sciences, and humanities. Thus, experience as a quasi-independent researcher, however rudimentary, offers students an opportunity to understand the perspectives of professional researchers in significant ways.

Measurable outcomes of course-based undergraduate research experiences (CUREs) include laboratory reports, posters, oral presentations, audio-visual presentations, papers, and critical analyses of the explicit and implicit paradigms—including procedures—that provide the foundation for professional work in the particular fields of study that they investigate. CUREs provide students with formative education on, and practice with, investigative techniques and evaluative methods that foster the kinds of problem-solving and critical thinking skills that students will need as undergraduates and beyond.

Should students choose a profession in the health sciences, CUREs will expose them to the abductive reasoning strategies that health professionals use in their daily practice. From medical diagnoses to evaluation of treatment plans, health practitioners must reason from an incomplete set of observations and processes to the likeliest possible explanation(s). CUREs permit students to explore topics in greater depth than they would through typical introductory science courses and labs. They also require students to engage research questions more independently than they might in a typical undergraduate laboratory environment because students will have a role in designing the study. Working directly with faculty to learn research methods and theory will enable students to devise appropriate questions that their research will address. Further, they will collect the data, evaluate the information, and communicate the results through papers, posters, and interactive talks.

Inquiry into the historical, sociological, and philosophical underpinnings of the specific disciplines/fields that students will investigate will, additionally, serve manifold functions. First, such scrutiny will disabuse students of the notion that current scientific and technological practices and aims develop deterministically and teleologically: many such advancements occur despite the separate, even contradictory, goals of those practitioners actually doing the work.

When students are alive to the open-endedness of inquiry, they are unfettered by constraints that Whig historiography—the style often portrayed in scientific textbooks—engenders: scientific ‘progress’ is inevitable; science is self-correcting; there exists a uniform, cumulative path from previous theories to present perspectives. Instead, students ought to learn that unanticipated answers are not necessarily ‘wrong.’ They will, first-hand, appreciate that even work that fails to yield further funding, marketable technologies, novel approaches, and/or published results (acclaim) can be considered beneficial and useful. Descriptors like ‘correct’ and ‘incorrect’ are social constructions that accrue to investigations deemed worthy by specific actors—institutions, funding agencies, governments, societies (academic and lay), and politicians. Terms like ‘success’ and ‘failure’ have limited applications when one inquires to improve understanding, aptitude, and appreciation.

Second, students will appreciate that instrumental ends should not be considered the only valuable products of investigation. When students inquire to comprehend a topic/process in greater detail, they acquire a more robust conception of their object(s) of study. When, in the course of investigation, students hone their skills of observation and improve their dexterity in manipulating equipment/apparatuses, they prepare themselves for future work in evidence-based inquiry. Observation and experiment, hallmarks of most scientific methods, become habits that students will refine as they themselves develop. Students will not be evaluated on whether or not they produce the quality of work that professionals in the field might. Therefore, professors should be stoking the students’ creative and critical tendencies and aptitudes while introducing them to academic investigation, writing, and presentation styles.

Third, students will note that exemplary thinkers and practitioners critically examine, and revise, their core assumptions and perspectives. Through readings in the history and philosophy of science, students will learn the significance of contexts and background assumptions to scientific practice and technological development. How one approaches a problem/issue, including which techniques and methods are chosen, partly determines the outcome of experiments. Data, results, and findings do not ‘speak for themselves’—they form a piece of the scientist’s perspective that must be explained and defended. Through CUREs, students will demonstrate, for themselves, their peers, and future students, that even valid methods and results require critical interpretation and evaluation.

Provocation for a Series of Undergraduate Research-Based Courses

Advances in biology and chemistry now permit us to redefine our understanding of the human form. Soon, individuals will be able to construct, alter, and augment their bodies/minds in ways previously unimagined.

As with many technological developments of the last century, products and procedures that are initially seen as optional quickly become compulsory. Consider a mother-to-be eschewing an ultrasound. Imagine an education where you opt out of email, online learning platforms like Canvas, or even, gasp, internet access. Technologies that initially aim to make our lives easier, more efficient, and safer often completely remake our lives; we must adapt to them.

Now, as lines between therapy and augmentation blur—look no further than CRISPR gene editing, or cognitive enhancement drugs (Ritalin for the non-ADHD)—we have the opportunity to turn bodies and minds into canvases on which to tinker. The ‘human as social construct’ paradigm will replace the outmoded notion of the ‘human as an expression of biological imperatives.’ We will fit ourselves with bioengineered parts that enable us to express personal preferences and whims. We will ingest targeted drug delivery systems that perform upkeep on our insides. Each incremental jump in neuroscience enables an opportunity to manipulate our brains, even our emotions, in barely imaginable ways. Terms like cyborg will lose significance: we will call ourselves human though humans from the last century might not recognize us as such.

In the coming decades, the glacial pace of evolutionary change will tire us to the point that we dare to remake our environment—the organic and inorganic materials all around us—as thoroughly as we will transform ourselves. Rather than change our energy consumption habits, for instance, we will simply exploit our environment in ways that match our caprices: we will warm global temperatures until we must bootstrap a solution to change it—or us. Technology producers do not teach us to be critical and reflective; they teach us to express our wants in ever-easier, one-way communication, like petulant whelps starved of attention.

To deliver on promises of personalized medicine, we will adopt materialist perspectives that, pace René Descartes’s dualism, dissolve the mind-body divide. We will conceive of the mind as mere physical ‘stuff’ that awaits manipulation like playdough in plastic cups: we are creators; we are divine builders. Embracing diversity will manifest in meddling with our physical forms and cognitive abilities. Scientific investigations and engineering projects demonstrate that, from quarks to quasars, human imagination leads to creation and exploration.

You might consider the above a dystopian fantasy. Or, you might find promise in a world that offers more malleability than the current iteration. No matter your position, you will find an outlet in this course. You will explore the philosophical (ontological, epistemological, and moral) implications of developing technologies. Historically-minded students will have the opportunity to parse the ideas, practices, people, and institutions that have permitted us to view nature so cavalierly. The social and political ramifications of emerging technologies demand scrutiny; you will learn to offer such analyses. As health science practitioners, you will modify biological and artificial systems. In this class, you will learn how and why we have arrived at a time that allows you to do so.

As bioethicist Allen Buchanan (2011) notes, “Biomedical science is producing new knowledge at an astounding rate—knowledge that will enable us, if we choose, to transform ourselves. Biomedical enhancements can make us smarter, have better memories, be stronger and quicker, have more stamina, live much longer, be more resistant to disease and to the frailties of aging, and enjoy richer emotional lives” (p. 4). The questions, then, involve what we will do with ourselves, as well as how we will live, work, and play, once some of us undergo such transformations.

Real Illusion: Virtual Twins and Control

A student recently pointed me to a perhaps unintentionally provocative article from The Economist. The notion of ‘virtual twins’ used by GE reminded me of the kind of biological ideology identified by Richard Lewontin in Biology as Ideology: The Doctrine of DNA. My student sent me an excerpt from the article because he thought the topic would interest me. He was right.

The excerpt intrigued me enough to seek out the full article. I appreciate The Economist for its forthright purpose: advice on making and managing money, and assessment, from an economic standpoint (at times thinly veiled as politics, social science, etc.), of that same advice.

With that in mind, I see how the ‘virtual twin’ model can be pitched as a way to improve products (i.e., sell more of them). Though a discussion of ‘who made who’ can be reserved for another time, it is instructive to note how business drives social concerns, in this case health care, and political agendas, in this same case a kind of authoritarianism.

Were every person to have her own ‘virtual twin,’ the decisions we make (eat that carrot now, later, or not; visit/move to ____ city/country; run versus hike versus bike versus watch t.v.) could, conceivably, be ‘tested’ before we make them. The logic of biological ideology tells us that the genome determines much about the life of an organism. Thus, we could imagine a future time where many (‘all’ stretches the bounds of even this hypothetical) decisions are run through simulations that each person consults. Further, based on the kinds of decisions the person makes, she could be held accountable for her actions if the simulation predicted an outcome that would adversely affect her health (read: if she does something that will cost money for health practitioners to diagnose, cure, treat, etc.).
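
To make the thought experiment slightly more concrete, here is a deliberately toy sketch, in Python, of the ‘consult the twin before you act’ loop I am imagining. Everything in it (the VirtualTwin class, its risk_weights, the test_decision threshold) is invented for illustration; it is a sketch of the hypothetical, not a description of GE’s actual virtual-twin technology or of any real predictive model.

```python
from dataclasses import dataclass

@dataclass
class VirtualTwin:
    """Stand-in for a personalized simulation built from genomic and lifestyle data."""
    risk_weights: dict  # hypothetical per-person weights, e.g. {"watch t.v.": 0.7}

    def simulate(self, decision: str) -> float:
        """Return a predicted 'health cost' for a decision (higher = worse)."""
        return self.risk_weights.get(decision, 0.1)  # unknown choices get a default

def test_decision(twin: VirtualTwin, decision: str, threshold: float = 0.5) -> bool:
    """Consult the twin before acting; True means the simulation 'approves'."""
    return twin.simulate(decision) < threshold

# The accountability scenario: choices the model flags as costly become choices
# for which the person could, in this hypothetical, be held responsible.
twin = VirtualTwin(risk_weights={"skip the carrot": 0.2, "watch t.v.": 0.7})
for choice in ["skip the carrot", "watch t.v."]:
    verdict = "approved" if test_decision(twin, choice) else "flagged as costly"
    print(f"{choice}: {verdict}")
```

The sketch makes the ideological assumption visible: the ‘twin’ only knows what its weights encode, yet its verdicts are treated as if they described the person.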

Of course, most biologists are not themselves so overtly deterministic or sanguine about the information to be gained from gene sequencing, as Dr. Meyer’s lecture regarding gene editing techniques made clear. Prediction, lacking all necessary information on permutations and the ‘rules’ that govern interactions, is a fancy term for educated guessing. My above scenario is, clearly, a guess. What such guesses reveal, however, is what we want.

We want an understanding of which investment pays the most dividends (financial, salutary, etc.). We read The Economist and get our genomes sequenced, then, for similar reasons. We take the advice offered as it fits our own perspectives, ignoring what we will because, sometimes, we do not like the predicted outcome or it goes against other interests that we have. To say that a virtual windmill and an actual windmill are similar is a metaphor, just as the claim that the human body is like a machine is a metaphor.

Virtual twins, like the proverbial broken clock (bad metaphor), prove correct some of the time. We ignore the discrepancies between the model and reality, however, at our own peril. Moreover, because large companies, like health practitioners, are considered trusted sources of information in their relevant domains of expertise, their advice has the potential to affect many people who have no idea about the inner workings of the decision processes of the individuals involved. People desire explanation of things beyond their control (lightning, disease, sporting contests), and our models, perhaps, give us the illusion of control.

Virtual twins, to conclude this long-winded and digressive reply, provide the illusion of control. What, then, are the financial, social, ethical, and political costs of such models on lay persons, governments, health insurers, and businesses? I’m guessing there is a model for that question.

Security and Autonomous Systems

Users of autonomous systems, or just about anyone using a computer (desktop, laptop, tablet, handheld), can easily comprehend the importance of keeping their devices secure. What, exactly, that security will entail, of course, depends on the device and its ability and requirements to communicate with other machines and systems.

For makers and users of increasingly automated vehicles–like cars–keeping malicious programs and people away from the controls of the vehicle should be more important than any aesthetic choices and equal to environmental concerns. Users must be aware that people and programs could break into the operating systems of their vehicles and make them perform in unanticipated and negative ways, and makers of the vehicles/software must constantly work to keep such intrusions as limited as possible.

That the software of such vehicles is vulnerable to outside programs seems an unintended but unavoidable consequence of the technology itself. Just as markets, elections, and choices in general can be rigged (by marketing, for instance), so can technology. A drug like piracetam, for instance, has specific targets when prescribed by a physician. Since the drug can be purchased without a prescription, however, its ‘off-label’ uses are vast and hard to trace specifically. To me, piracetam and the autonomous vehicle have a few things in common, and one is the importance, for the consumer, of investigating what she is purchasing and the risks involved for herself and others.

For more on this topic, see the case of someone trying to raise awareness about this issue: http://venturebeat.com/2016/11/12/before-you-sign-up-for-a-self-driving-car-pay-attention-to-hacker-charlie-miller/

 

Power Belongs to Programmers

The following is inspired by a lovely article found here.

 

CRISPR-Cas9 gives choices and options to people. It allows for a sense of control. We want to imagine that we have control over our lives, our bodies, our habits, our proclivities, and our goals. But tools like CRISPR are made by powerful elites and only give the illusion of empowerment when, really, we remain dependent on the companies, the programmers, making the tools—the software and hardware. We fall under the spell of control, of supposed choice, seduced by our own wants and wishes, not by the tools themselves. These tools have their ethos, to be certain: use me to become better, to fulfill your hopes and dreams. Yet the dreams are pre-programmed: they, like the tools, are given to us like preset buttons on a radio—you may choose only from the limited options (AM/FM stations). Herbert Marcuse labeled this one-dimensionality. Jaron Lanier and Evgeny Morozov recognize the one-dimensionality masquerading as openness, freedom, independence. The problem, for Marcuse, Lanier, Morozov, and the philosophy of technology in general, is gaining the attention of the masses, encouraging them to self-reflect while digital, economic, and political environments continue to bombard them with so many demands that seem so necessary, so time-dependent. We should not be surprised that we go for the quick fix—CRISPR—and trust that the science will catch up and solve the unintended consequences our quick fixes usher along. The proactionary imperative glorifies the just-in-time mentality, a faith that is well-founded. After all, have technological advances not improved our lives? Have they not made food procurement simple, shelter ample, and luxury as close as our screens?

 

Advertisers and app designers are better schooled in the psychology that underpins our wants and motives than most of us are. They play on these right under our noses.

Counter-intuitively, the ‘right’ design or ethos will also be a bully. It will push people to see the world and themselves in what Heidegger might call the ‘right relation to technology.’ The right relation is worth seeking, but it will not be one-size-fits-all, which means we must all put effort into finding it. We must fiddle with our behaviors until we come to a posthuman view that promotes symbiosis. I do not claim this is the natural, true, or only perspective. It should be the preferred perspective, though.

 

How do we learn to pay attention? To see our technologies as extensions of ourselves, not solutions in themselves. We do not need a new philosophy of technology. We need a philosophy of technology that engages broader audiences, that promotes self-reflection, and that exposes the seducers. We must listen to the mantra of Marcuse. We must accept our dependence on each other, on our communities—and that includes our machines—as opposed to some supposed freedom that we are told lies just a click away, an edit away, a hack away.

 

In education, we make learning a game—an app to download—but unlike games, the penalty for failure impacts our future selves. We mortgage our future for quick fixes because it is easier than trying hard—I am not immune. The siren song of the technology companies and advertisers tells us when we’re happy because their employees study us more than we reflect on ourselves; we play their slot machines on our phones. We are, per Postman (1985), amusing ourselves to death. The game is tilted toward the producers (Tristan Harris), and our economy runs on the same operating system. The operating system becomes a metaphor for control. Whoever controls the message, the menu, the reward has the power. We are just players. And we are all in.

Comfort from not understanding?

I came across an anecdote including a quotation attributed to Alan Turing from 1949. Ruminating on the potentials of a computational machine, Turing purportedly stated, “I suppose, when it gets to that stage, we shan’t know how it does it.”

Let us assume Turing refers to the potential time when a machine begins to think–to orient, to imagine, to willfully choose, to cogitate–for itself. Clearly, the term ‘think,’ for which I have offered my own synonyms, needs explication. Whether it means something more than ‘following a pattern’–the capability of millions of current machines–is not certain. It is the possibility that the machine could get to this level–the level of deciding of its own accord/volition–that draws my attention. Once the machine achieves that which children and adults fall out of bed able to do, it will have passed some chimeric border that is, from what I can tell, not too different from the one separating my own thoughts and ideas from those of everyone else.

Impressively, the machine will have reached a level of complexity that appears tantamount to thought: something indecipherable and inexplicable. Descartes seemed to define the mind by that which it is not: physical extension. If it is not extended in space, it is mind. That seems the only consequence of his dualistic system.

My parsing of Descartes is poor, and I do not intend it to stand in for the wonderful, detailed, and rich exegeses you can find elsewhere. However, Turing’s move, quoted above, points to a similar negative definition: once a machine reaches some sort of consciousness (and I do not claim Turing meant just that), we will not be able to explain it. If we cannot explain it, we cannot well understand it. Has our understanding of how the human mind works progressed much further?

Should we interpret Turing’s (supposed) line above as a kind of pretext, a way to maneuver around the tricky issue of consciousness? I think of David Toomey’s book Weird Life, where he notes that biologists are fundamentally blinded to the possibility of life that does not resemble something found on Earth. It could be right there on planets and moons in our solar system, but how would we know? We have in our minds that life resembles ______ or has ________ characteristics. And if it does not, then it is not life. That seems a weak distinction based on our own cognitive limitations.

More than Human?

A few years ago, I put together a very basic sketch of a course proposal for a writing course. This fall, I get a chance to teach an interdisciplinary class of my own devising. I am going to revisit some of the themes I considered in 2014. Following are a few paragraphs outlining topics and perspectives.

 

Do the technologies we use determine who we are? Would integrating technologies into our bodies change what it means to be human? How ought we make decisions about our technologies and our bodies? A June 2014 Supreme Court ruling, Riley v. California, raises critical questions regarding human-technology relationships, and even the potential of cyborg law. In this course, we will explore the ethical, legal and social implications/applications of human enhancement. As we learn about recent technological developments that permit such augmentations, we will pose and investigate questions invoked by these opportunities and challenges. For instance, we will examine issues raised by technologies such as cognitive enhancement pharmaceuticals, and predictive technologies such as autotype. Ours is a time of incredible, yet often incremental, technological change.  We must carefully consider the creations that help shape us, and our world, if we wish to be more than passive recipients of technological change.

 

In this course, students will encounter academic writing from disciplines like history, philosophy, and political science, but texts will also range from science fiction to socio-technical commentary to film. Current controversies over specific emerging technologies will provide a basis for exploring the social, economic, political, and environmental roots and branches of these developments. These texts will draw students into taking, defending, and critiquing positions as they learn to imagine the implications of their chosen positions, analyzing them in relation to those of their peers. The semester-long writing project, of roughly 12 pages, will begin with a series of short pieces that result from research and comparative analysis of particular emerging technologies that students may choose.

 

Arguments serve as our course scaffolding: we will articulate our own; we will interpret and parse professionals’ work; and, we will learn to polish our prose through workshopping, revising and editing. Focusing on the gaps, inconsistencies, and complexities in the arguments we read, whether from professionals, peers, or our own texts, will strengthen our critical and analytical reading and writing skills. Students will practice communicating ideas effectively, coherently and concisely by engaging a variety of academic and non-academic audiences because the positions we argue for will have impacts that extend beyond a university campus. We will learn to sharpen our claims, enroll our supporters—ranging from people to positions—and enter debates. Students will create a blog to post weekly writings and use it as a space to give and receive feedback. This forum will also serve as a means to make our ideas public and, potentially, engage broader audiences.

Indeterminate and Inconclusive: Satisfied?

Does any profession, academic or otherwise, require its practitioners to give decisive, categorical answers to questions? Should it?

My background in the humanities and social sciences, and my own propensity for equivocating, waffling, and generally mincing words (what a wonderfully evocative phrase: words minced like meat), tell me the answer should be No! Not a precise, clear-cut No but more of a wishy-washy Nah.

If I had not written a similar phrase, and thought it innumerable times, the following might jar me more than it does: “In this chapter I will not deliver any definite answers” (not citing). I will not cite this line in deference to the actual author and to my inner author that so often thinks/wants to write/does write such a phrase. My dissertation committee had no problem objecting to my lack of commitment as they appraised my final work. They exhorted me to take a position and defend it. I failed to do so adequately then. I still fail to do so. You should find no blustering, crowing, or gloating in that admission. If I were more sure of my ideas, I hope I would defend them better. A significant problem, then, amounts to a lack of confidence. It is a symptom which many others, particularly in my branches of academia, appear to share. We hedge. Often. Wittingly.

An imaginative tale would recount a time when I gushed certainty and was rebuffed. Or when I lacked conviction only to later discover I had the right idea all along. Or recount someone I knew/know who experienced one or the other. I could go religious and try one of those stories. Or fairy-tale. Each appears ready to adjudicate the timid.

I have often been quick to pounce on texts in Science and Technology Studies as mere descriptions. Prescription should be the goal; description marks one as little more than an observer, a passenger. I did not want to go along for the ride. I wanted to drive. My infatuation with automated vehicles, and my exasperation with commuting by car (one I drive myself), remind me that such a metaphor was as foolish then as it is now.
