Cyborgs, Second Brains, and Techno-Lobotomies: Metablog #2

Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!

I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. Here, I was interested in the idea that the more we remember, the more creative we may be, simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that many of us are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things – from trivia, schedules, and dates to important facts and things we want to learn? What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?

I was fascinated by the themes – maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.

Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (say, if you leave your smartphone at home), then you’re probably a cyborg.

A lot of people are interested in this concept. On an interesting blog called Cyborgology, they write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”

Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.

Indeed, when talking about the possibility of being a cyborg, the word “scary” was used by several people. I myself, almost automatically, have visions of Borg and Daleks (look it up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stem. The Wikipedia entry describes it best: “organic virtual reality game consoles known as “game pods” have replaced electronic ones. The pods are attached to “bio-ports”, outlets inserted at players’ spines, through umbilical cords.” Ok, yuck.

There was an article last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is technological augmentation brought up, but so is the notion of the crowdsourced self. This is, I believe, integral to what being a cyborg means at this particular moment in history. There is a great quote in the article, from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”

This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.

I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.

The Second Brain. This got a lot of play in our dialogue on the blog. This is the obvious one when it comes to memory and technology – we’re using technological devices as a second brain to store the memories we don’t want to devote our mental resources to.

But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful for us to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say, for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago because of the increased frequency with which I DON’T remember things in my own brain anymore. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.

Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked: “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”

I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.

But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, from the advent of human writing over 5,000 years ago? The time-scale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture? I believe there is something unique about the nature of how we interact with technology – it is multi-modal, attention-grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences, and seek to understand them, and not succumb to a doom-and-gloom forecast. A recent series of posts on the dailydoug does a beautiful job of this.

Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”

So, where does that leave us? Ty Maxey wonders if all this is just leading to a technolobotomy – provocative term! – but I wonder if instead we have an amazing opportunity: to use these technological advances as a testing ground for figuring out, as a society, what we value about the capacities that many of us think make us uniquely human.

So Long Ago I Can’t Remember: Memory, Technology, and Creativity

I recently read an interesting blog post at Scientific American by the writer Maria Konnikova. In it, she writes about how memorization may help us be more creative. This is a counterintuitive idea in some ways because memorizing information or learning something by rote seems the antithesis of creativity. In explanation, she quotes the writer Joshua Foer, winner of the U.S. memory championship, from his new book: “I think the notion is, more generally, that there is a relationship between having a furnished mind (which is obviously not the same thing as memorizing loads of trivia), and being able to generate new ideas. Creativity is, at some level, about linking disparate facts and ideas and drawing connections between notions that previously didn’t go together. For that to happen, a human mind has to have raw material to work with.”

This makes perfect sense. How can we create something new, put things together that have never before been put together, if we don’t really know things “by heart”? This makes me think of the great classical musicians. Great musicians know the music so well, so deeply, that they can both play it perfectly in terms of the composer’s intention AND add that ineffable creative flair. It’s only when you’ve totally mastered and memorized the music that you can put your own stamp on it and it becomes something special. Otherwise, it’s robotic.

These issues are incredibly relevant to how human memory is adapting to new information technologies. Research has recently shown that when we think we can look up information on the internet, we make less effort and are less likely to remember it. This idea is referred to as “transactive memory” – relying on other people or things to store information for us. I think of it as the External Second Brain phenomenon – using the internet and devices as our second brain so that we don’t have to hold all the things we need to know in our own brain. As a result, how much do we actually memorize anymore? I used to know phone numbers by heart – now, because they are all in my phone’s address book, I remember maybe five numbers and that’s it. How about little questions I’m wondering about, like: When was the first Alien movie released (okay, I saw Prometheus last week)? The process of getting the information is: 1. Look it up; 2. Say, “ah, right, interesting”; 3. Then, with a 75% probability in my case, forget it within a week. Information is like the things we buy at a dollar store – easily and cheaply obtained, and quickly disposed of.

A colleague in academia once told me about an exercise his department made their graduate students go through in which they presented their thesis projects – the information they should know the best, be masters of really – using an old-school flip board with paper and Sharpies. Without the help of their PowerPoint slides and notes, they could barely describe their projects. They had not internalized or memorized the material because they didn’t need to. It was already in the slides. If they didn’t know something about their topic, they could just look it up with lightning speed. Only superficial memorization required.

In addition, the process of relating to and transcribing information has changed. Today, if students need to learn something, they can just cut and paste information from the internet or from documents on their computers. They often don’t need to type it in their own words, or even type it at all, so they miss a key opportunity to review and understand what they are learning. We know that things are remembered better when they are effortfully entered into memory – through repetition, and through multiple modalities like writing them out and reading them. If we quickly and superficially read something, like we do all the time when we are on the internet or zooming from email to website to app, then we cannot put that information into memory as efficiently. For most of us, it just won’t stick.

On the other hand, shouldn’t the vast amounts of information we have at our fingertips aid us in our creative endeavors? Haven’t our world and the vision we have of what is possible expanded? Couldn’t this make us more creative? Perhaps, by delegating some information to our external second brains, we are simply freed up to focus our minds on what is important, or on what we want to create (credit to my student Lee Dunn for that point).

Also, I think many of us, me included, know that we NEED help negotiating the information glut that is our lives. We CAN’T keep everything we need to know memorized in our brains, so we need second brains like devices and the internet to help us. I don’t think we can or should always relate deeply to and memorize all the information we have to sift through. It is a critical skill to know what to focus on, what to skim, and what to let go of. This is perhaps the key ability of the digital age.

I also appreciate all the possibilities I have now that I would NEVER have had before were it not for the incredible breadth and speed of access to information. As a scientist, this has transformed my professional life for the good and the bad – along with opportunities comes the frequently discussed pressure to always work. But give up my devices? I’d rather give you my left arm (75% joking).

As a child developmentalist and psychologist, I feel that we have to acknowledge that these shifts in how we learn, remember, and create might start affecting us and our children – for good and bad – sooner than we think. This isn’t just the current generation saying “bah, these newfangled devices will ruin us (while shaking wrinkly fist)!!!” I think these changes are evolutionarily new, all-pervasive, and truly different. We as a society have to contend with these changes, our brains have to contend with these changes, and our children are growing up in a time in which memory as we think of it may be a thing of the past.

The Gamification of Learning

A recent Pew Report polled internet experts and users about the “gamification” of our daily lives, particularly in our networked communications. They write:

The word “gamification” has emerged in recent years as a way to describe interactive online design that plays on people’s competitive instincts and often incorporates the use of rewards to drive action – these include virtual rewards such as points, payments, badges, discounts, and “free” gifts; and status indicators such as friend counts, retweets, leader boards, achievement data, progress bars, and the ability to “level up.”

According to the survey, most believe that the effects of this gamification will be mostly positive, aiding education, health, business, and training. But some fear the potential for “insidious, invisible behavioral manipulation.”

Don’t pooh-pooh the behavioral manipulation point. Do you really want to have your online behavior shaped like one of Skinner’s rats by some faceless conglomerate? But that’s actually not what got me going. What got me wondering about where this is all heading is that it seems undeniable that gamification will shape how we learn – in particular, how kids learn.

Elements that make up this gamification – rewards, competition, status, friend counts – are particularly powerful incentives. Neuroscience has repeatedly documented that these incentives rapidly and intensely “hijack” the reward centers of our brain. So it begins to feel as if we’re addicted to getting that next retweet, a higher friend count, a higher score on Fruit Ninja, and so on. Even the sound our device makes when a message pops up gives us a rush, makes us tingle with anticipation. We eagerly wait for our next “hit” and are motivated to make it happen.

This gamification could have a powerful impact on how we go about learning. Psychological researchers distinguish between a fixed and a growth mindset – that is, people’s beliefs about intelligence and learning. When people have a fixed mindset, intelligence is viewed as a hard-wired, permanent trait. If intelligence is a fixed trait, then we shouldn’t have to work very hard to do well, and rewards should come easily. In contrast, in a growth mindset, intelligence is viewed as something that can grow and develop through hard work. In this way, a growth mindset promotes learning because mastering a new skill or learning something new is enjoyable for its own sake and is part of the process of intellectual growth. Intelligence is not fixed, because it is shaped by hard work and effort. For a nice summary of these distinctions, see a recent post on a wonderful blog called Raising Smarter Kids.

This is where gamification comes in. If children are inundated with incentives and rewards for even the simplest activity or learning goal, motivation for learning becomes increasingly focused on the potential for reward, rather than the process and joy of learning. In addition, when you’re doing things mainly for the reward, the motivation for hard work will peter out after a while. You just move on to the next, perhaps easier way of getting rewards rather than digging in and trying to master something. It also becomes more difficult to appreciate the value of setbacks – not getting a reward – as an opportunity to improve. In these subtle ways, gamification may undermine a child’s ability to develop a growth mindset. Instead, we might have a generation of children who are implicitly taught that everything we do should be immediately rewarded, and that getting external things, rather than the joy of learning, is why we do what we do.

Promoting a growth mindset is not only important for helping our children learn, but for helping them face frustrations and obstacles. Dona Matthews and Joanne Foster, in Raising Smarter Kids, highlight several rules to promote a growth mindset:

1. Learn at all times. This means think deeply and pay attention. When we use technology and social media, we can sometimes err on the side of doing things very quickly and superficially. So, this rule is important to emphasize with children today more than ever. We also have to remind our children (and ourselves) that it’s ok to make mistakes, even if we don’t get rewarded for our efforts.

2. Work hard. This is a skill that of course can be promoted by the presence of incentives – kids will work for hours at a game if they can beat their highest score. But what happens after they get the reward? Are they committed to continue learning? Will they continue struggling and practicing? Sustained hard work is an opportunity for personal growth that external motivation, like that from rewards, may not be able to sustain. Here, the enjoyment of learning and gaining mastery may be the most powerful motivator when it comes to helping children become dedicated learners for the long haul.

3. Confront deficiencies and setbacks. This is about persisting in the face of failure. The increasing role of gamification could both help and hinder this. Gamification will help in the sense that with so many rewards and game dynamics, opportunities for failure are around every corner and children will need to learn to persist. At the same time, what guarantees that a child will persist to obtain these rewards? Rewards are not equally motivating for all individuals. Will those not interested in rewards and games just be left feeling bored, and take part in fewer opportunities for learning?

I’m not saying that we should avoid all rewards – that would be too extreme, and impossible to boot. But we must maintain our awareness of how, with increasing gamification, the simplest act of using technology, logging onto our favorite website, or using social media might be subtly changing our motivation to learn.
