Why YOU Should Be an Academic Cyborg (and Maybe Already Are)

My PhD supervisor, Inger Mewburn (The Thesis Whisperer), and I recently had a journal article published about how technologies like smartphones allow an academic to be in multiple places and times at once. To use an example from our paper, an iPhone makes it possible for one’s physical body to attend a meeting in one time zone while simultaneously collaborating with a co-author on a research paper in a different time zone and, at that very same moment, virtually observing one’s son walking home from school using the iPhone’s ‘Find My Friends’ app. Technologies like these, we argued, allow academics to achieve more than ever before, yielding a range of both positive and (ahem!) negative consequences. The article was premised on my piloting part of my PhD methodology, ‘shadowing’, by following Inger around for a couple of days, looking like her much shorter but otherwise oddly similar shadow. I quickly came to the realisation that she literally does the work of several people, by inhabiting multiple spaces at once. She is currently documenting, on social media, an experiment aimed at combating this craziness, in which she will try to work only her 35 mandated hours per week. You should follow along at home. It’s going to be fun for the whole academic family.

Anyway, the other day I was listening to my favourite podcast (other than ours, of course!), Very Bad Wizards, where the hosts were discussing a philosophical concept called ‘the extended mind’. It got me thinking about how the academic mind could be said to extend into the technologies we co-opt and that co-opt us… but often, it doesn’t. The article I’d written with Inger was all about how we, as academics, can, and sometimes do, use technologies to control the world around us. But, if I’m honest, I know that Inger is something of an exceptional case. Her favourite saying, after all, is, “I welcome our robot overlords”.

At the other end of the spectrum, some of my other academic colleagues don’t even own a mobile phone. And although, like any self-respecting anthropologist, I’ve often been heard to sigh and groan that “technology hates me”, in this post I want to consider just what we might be missing out on if we choose to totally avoid extending our minds into cyber-infinity and beyond.

The brain, the mind and the Extended Mind

To understand the notion of the extended mind, you first have to accept a difference between the mind and the brain. For some people, in some disciplines, ‘the brain’ is the pinky-grey, squishy stuff, fired by electrical pulses, that serves as the hardware on which the software of ‘the mind’ runs. But, from a philosophical and anthropological perspective, ‘the mind’ is more than just the brain. The mind can reside elsewhere, in addition to residing within the brain.

Clark and Chalmers (1998), the authors of the Extended Mind article that the aforementioned Very Bad Wizards episode focused on, were concerned with where the mind could be said to stop. They asked thought-provoking questions about where cognition is located – if you add up 12+5 in your head, then that’s your brain doing the work, right? No question. But what if you add it up using your fingers to help? What if you use a pen and paper to help you do long division? Or an abacus?

In each of these instances, the paper points out, some of the cognitive processing occurs in the brain of the individual, while other operations have been outsourced to objects we have deemed better suited to the task. And, of course, we can play the argument out further, to calculators, and then computers. We extend beyond maths into calendars, into recalling complex facts versus reading them, into visual artefacts like maps, and, inevitably, we arrive at smartphones and apps and the year 2018.

Finding your (Omni)Focus©

In our article, we referred to Inger’s iPhone as being ‘much like a familiar, always ready to leap to do its mistress’s bidding’ (p. 9). Although it was always with her, essential to her daily operations, she still saw it as quite separate from her ‘self’. But this extended mind stuff has got me wondering whether my smartphone (and Inger’s, and yours perhaps?) is actually a part of my extended mind. For example, I use project management software called OmniFocus, which basically manages my entire life for me. Everything from ‘submit thesis chapter on 16th’ to ‘get clothes out of wash after 8pm’ gets recorded in OmniFocus, and in turn, Omni sends me reminders that help me prioritise and structure my time.

Before Omni and I paired up, I was trying to do all this myself, inside my head, and it… really wasn’t going well (you can read more about my attempts at adulting here). So now I move both the data and some of the cognitive processing of that data into the Omni app on my phone or computer (they sync), and because Omni does it better than I do, voilà! My life is more organised. BUT, of more interest for the purposes of this post, does this also mean that some of my mind has been moved to a new location? Actually, to infinite locations, since OmniFocus is cloud-based… is my head, literally, in The Cloud(s)?

Socially distributed cognition (or, the case for collaboration)

The other aspect of the extended mind that my Omni example doesn’t quite capture is interaction with other humans and technology. Psychiatrist Dan Siegel, in an article for the online magazine Quartz, describes the mind as a combination of the thoughts, feelings, attitudes and memories contained inside us and the interactions we have with others and with the environment around us. Most social scientists, anthropologists or otherwise, would agree.

Let’s take a Wikipedia entry as an example. Say I know stuff (because I’m an academic). I go to Wikipedia, and I write a short entry about that stuff that I know. Someone at Wikipedia headquarters reviews it (because it’s an ideal world) and approves it. Another clever academic (Academic B) also knows stuff about the topic, and edits my entry, adding some new stuff. The next time I look at my Wikipedia entry, I see Academic B’s additions, which remind me of some other stuff I know, but hadn’t remembered until now. I add that stuff to the entry. And so on.

Okay, so if you accepted my OmniFocus example and agreed that I had extended part of my mind – a) things I knew and b) cognitive processes – into my technology, then is that also the case in this example? I have transferred some things I know into the technology. Together with the Wiki platform, the internet and its infrastructures, and the Wiki editor, the group of us have collaborated to make that knowledge public. Making the knowledge public has allowed me to then collaborate with Academic B in making it a better entry – she adds something, which sparks something in me, which causes me to add something. Her collaboration with the technology creates a cognitive process that happens inside my brain. But the cognition is also happening elsewhere – in the mind of the other academic, and in the processes we have undertaken using the various tools. This is known as distributed cognition (and, somewhat ironically, here’s the Wikipedia entry for it).

So, if we are putting parts of ourselves into the technologies we use, isn’t that, in its very essence, the act of creating a human/technology hybrid? A cyborg, so to speak?

Cognition in the wild

Anthropologist Edwin Hutchins, I think, would say yes. In “Cognition in the Wild” (1996), Hutchins argues that cognition is not only socially distributed amongst humans, nor only extended into tools. Cognition is also a cultural phenomenon that is fundamentally part of, and constitutive of, a social system. And if cognition is cultural, he believes, culture is also cognitive:

“Culture… is a human cognitive process that takes place both inside and outside the minds of people. It is the process in which our everyday cultural practices are enacted. I am proposing an integrated view of human cognition in which a major component of culture is a cognitive process and cognition is a cultural process” (p. 354).

Therefore, I suppose my concern is this. Some academics, like Inger, are online. A lot. They have huge cyber-presences, have all their papers on Academia.edu and ResearchGate and LinkedIn, maybe they blog, maybe they contribute to the Ask Me Anything forums on Reddit, or they get into debates with fellow ‘thought leaders’ about their areas of expertise on Twitter. They have extended their cognition and, in concert with the rest of their fellow cyborgs, they’ve created an online society with its own culture.

Yet there are still other academics who are not involved in this culture at all; who, feeling that engagement with new technology is a betrayal of their academic identity, or a waste of their time, or perhaps just too much of a cognitive burden, have chosen to remain apart from that world – perhaps for fear of a social system that does not play by the rules they are used to. And I worry that they are not aware that this entirely new society is developing, with a culture they will struggle to understand if they don’t change their minds and ‘go over to the dark side’.

Cyborgs all let us rejoice

Have I given you ‘Academic FOMO’ yet? Well, for once, maybe that’s a good thing. As Donna Haraway argued in her seminal essay ‘A Cyborg Manifesto’, the time has long since come to rejoice in the confusion of boundaries between ‘man and machine’. And even for those who can’t easily rejoice, maybe it’s time to accept and embrace the inevitable: that the ‘life of the mind’ must now, at least sometimes, be lived online.

[Image by Jodie-Lee Trembath and Julia Brown]

4 thoughts on “Why YOU Should Be an Academic Cyborg (and Maybe Already Are)”

  1. When we use any sort of technology, aren’t we just co-opting the cognitive processes of other minds, i.e. the minds of the developers of that piece of software or hardware, not to mention the minds of all the developers of the technologies that came together to give you that particular robot?

    • Ooh meta! But are you arguing that it’s the humans, not the technologies, that are collaborating? Or just saying that it goes a step beyond what I’m suggesting?

      • Definitely it is the humans, not the technology, but maybe ‘assisting’ is a better word than ‘collaborating’. I think collaborating implies some sort of investment in the task, which is probably not there.

