Our Avatars, Ourselves

Lara Croft

After a little hiatus from blogging, this article about how digital avatars influence our beliefs got me back on the wagon. In particular, it got me thinking about the amazing, good ol’ fashioned power of storytelling – the idea that the stories we tell shape our beliefs about who we are, what we can become, and what is possible or impossible. This idea is an old one, but its very familiarity lulls us into thinking that the power of stories is an abstraction, not a reality.

This article highlights the very real power of stories – in the form of digital avatars. The word avatar comes from the Sanskrit for “incarnation.” More commonly, we think of avatars as representations of ourselves in virtual environments. When we represent ourselves digitally, we are expressing some aspect of ourselves. That is, we are telling a self-story, real or imagined, that we want to explore. This psychological experience of an embodiment or “incarnation” of self goes a long way toward explaining the research findings described in the article.

The research shows that using a “sexy avatar” in a video game influences women – and not for the better. For example, women who played a game using sexualized avatars – especially avatars that looked like them – were more accepting of the rape myth (the belief that rape is a woman’s fault) and more likely to objectify themselves sexually in an essay. Other studies document the “Proteus effect,” in which embodying a character in a virtual environment such as a game influences behaviors in the real world, such as eating patterns, brand preference, and physiological arousal. This effect is strongest when people actively engage with an avatar rather than passively watch the character. While many of these studies have flaws (e.g., small sample sizes, which make it hard to generalize the findings to people in general), they also have strengths, such as strong experimental methods. So, these studies should be given serious consideration.

This article might lead some to demonize video games, but I think that is a mistake. We can bash video games all we want, but this black-and-white view misses the point that one can tell stories that sexualize women to the exclusion of individuality, intelligence, or competence in all sorts of media: books, movies, cosplay, the news we follow, and the conversations we have. It also misses the point that if avatars are so powerful, they can be used in positive ways.

So, is there something special about video games besides the fact that a single game can make billions of dollars in two weeks? Is actively engaging in a story rather than passively watching it the key to the effects that avatars can have on us? As a society, we need to have this conversation. But it will be crucial for science to weigh in and help interpret whether and how the stories we tell in virtual worlds transform what we do, believe, and become.

The Happiest iPhone on the Block: Why Managing Your Digital Life is Like Good Parenting

When I started blogging a little over a year ago, I was a true social media skeptic. I drew more inspiration from thinkers like Sherry Turkle than Anil Dash. But my experiences with social media have turned this on its head. I’m still a skeptic in the sense that, as a scientist, I believe we need to know a lot more about how social media affect our lives for better and for worse. But I don’t feel the kind of concern I used to feel. Perhaps I’ve been tempted by the siren song of technology, lulled by a false sense of security engendered by the all-consuming digital embrace… but I don’t think so. I actually feel more in control and less overwhelmed by social media and other digital forms of communication than ever before. I feel they are tools, which I can selectively choose among and harness. I believe that a sense of well-being and balance in social media use is possible if we use some simple practices. The best metaphor I can think of for these practices is that they are the types of things that an effective and sensitive parent does. Here are the top five “parenting strategies” I’ve used to manage my social media burden:

naughty child

  1. Establish rules and set limits. Children thrive when there are consistent limits and structure. In the same way, our technology use needs rules and limits. If I don’t set limits on when and how I use social media, I’m more likely to get sucked into the black hole of keeping up with every tweet/text/email/post/newsfeed. I’m more easily distracted by social media, less present with others, and more likely to waste time and be less efficient because of it. Like all good parents, I try to create structure that is firm but fair. Harsh discipline might work in the short term, but the child usually rebels. So, I try not to be unreasonable or unrealistic about the rules (e.g., “I can only check email once a day, and for no more than 10 minutes” doesn’t work). I’ve tried to find a set of guidelines that work with my life and make me happy.
  2. Monitor communication technology use. It’s 10 o’clock. Do you know how much social media you’ve used today? This is really about being mindful about how we’re using our technology. I prioritize my time – I only have so much time and attention in a day, and so I try to spend my mental and social capital wisely. I keep track and schedule times that I will use these tools, and know the times that they need to be put to bed.
  3. Reinforce good behavior. It’s not only about the amount of time we spend on social media or communication technology. It’s about how we use it and what it brings to our lives. I try to select digital communities that bring something positive to my life and that cultivate a positive peer network.
  4. Selectively ignore. In parenting, the idea here is that if a child is showing a troublesome behavior, as long as it’s not destructive, it can be “extinguished” by just ignoring it. If there is no reaction, and no reward, there ceases to be a reason for the child to act that way. And then the child stops being a nuisance. In a similar vein, when I start to feel that my communication technology use is becoming burdensome and bossy, when I feel the pressure to respond to every message or push notification is too much, I start ignoring it. Most of us like the feeling of being connected, and hope that the dings and rings on our devices will bring something good into our lives or that stressful things can be averted and dealt with quickly. So, we start to check obsessively and end up spending dinner time with our family on a device, or walking into traffic with our eyes glued to our iPhone. When I begin to move in this direction, I reverse course and start to consciously and selectively ignore my devices in order to break the cycle.
  5. Adapt technology use to fit my life. One key to being a good parent, I believe, is structuring your life so that it can accommodate children in support of their well-being and happiness. Some (in my opinion) not-so-great parents do the opposite: they expect not to change their lives at all and that children should just fit in. In contrast to my list of strategies thus far, when it comes to mobile technology and social media, I try to follow the inspiration of the questionable parent: I fit technology into my life so that I remain able to do what I want and need to do without being sidetracked. If my life is becoming more stressful and less organized because of the social media burden, then I’m probably doing the opposite.

So remember, when that naughty stream of Facebook status updates is just too much to handle, you’re a week behind on your Twitter feed, the pesky email inbox just won’t empty out, and those 10 texts – that are going to go unanswered for another few days – won’t stop bugging you, ask yourself: what would mom do?

Rebel Without a Status Update

I am fascinated by the psychology of Facebook status updates. There are many reasons to make a status update. One reason, of course, is obvious – let others know what you’re up to or share something that’s cool. For example, if I did frequent status updates, I might decide to post “Buying a fantastic ½ pound of Australian feta at Bedford Cheese Shop on Irving Place – should I up it to a pound?!” (and seriously, it is incredible). This may be an interesting snapshot of a day in my life, but these types of status updates are exactly the ones that tend to annoy me for some reason. Even the most benign version of this feels like TMI.

Why? For many, status updates are an instinctive way to reach out. A recent study even showed that increasing the number of status updates you post every week makes you feel more connected to others and less lonely. Seems like a good thing! Moreover, it’s consistent with what seems to be our new cultural comfort zone – being virtually seen and heard by a loosely connected group of people we know (or sort of know) as our “social network.” This virtual network is the social status quo for many of us, and certainly for many children growing up today.

I believe one consequence of this is that no one wants to be James Dean anymore. Put another way, maintaining privacy and being the strong, silent type, like Dean, are no longer alluring ideas to us. And when I thought of this, I realized why I don’t feel fully comfortable with the status update culture – I am a proponent of the James Dean School of Sharing Personal Information in Public, whose motto is: the less, the better. I like understatement, privacy, the choice to share with a few and retain privacy with most.


It’s no coincidence that, as a culture, we don’t fetishize James Dean anymore. Many of today’s icons (some of them “anti-icons” because we love to feel superior) are people who humiliate themselves, who will tweet that they’re on the toilet and what they’re doing there, who end up in compromised positions, and happen to have pictures and videos of those positions, which then promptly go viral (funny how that happens). James Dean would have disapproved.

James Dean himself would have been very bad at social media… or perhaps very, very good. Very bad, because he would have had little to say, and would have hated the constant spotlight and the social media culture of ubiquitous commentary and chit-chat. On the other hand, he might have been very good at it because he would have been the Zen master of the status update, expounding with haiku-like pithiness. An imaginary James Dean status update:

James Dean…

Old factory town

Full moon, snow shines on asphalt

#Porsche alive with speed

But seriously, while he probably wouldn’t have written haiku, perhaps he would have figured out how to share in a way that created a sense of privacy, because the sense of mystery would have remained intact.

Yes, the status update is a beautiful thing. We have an efficient and fun tool which allows us to reach out to others, curate our self-image, and think out loud to a community. But I wonder if we’re starting to lose the simple pleasures of privacy, of knowing less and wondering more.

Downton Abbey: Television for the Internet Age?

I love Downton Abbey. It hits a sweet spot of mindless pleasure for me. Yes, it’s really just a British-accented Days of our Lives, but it’s wonderfully acted, soothingly English, and full of nice, clever twists. In honor of the US premiere of the third season last night, I thought I’d bring my interest in things digital to bear on Downton Abbey. “How?” you might ask. It all starts with Shirley MacLaine.

Shirley MacLaine, who joined the cast for the third season (already aired in the UK but just now airing in the US), was recently interviewed by the New York Times about why she thinks the show has captured the devotion of so many. As most of you probably know, it’s been a huge, surprise international hit. If I have my stats right, it’s one of the biggest British shows ever.

She made a comment that caught my attention. From the interview (verbatim):

Q. What about the show hooked you in?

A. I realized that Julian [Fellowes, the “Downton Abbey” creator and producer] had either purposely or inadvertently stumbled on a formula for quality television in the Internet age. Which means there are, what, 15 or so lives and subplots, with which not too much time is spent so you don’t get bored, but enough time is spent so you are vitally interested.

Photo: Carnival Film

This comment alludes to an idea that we’re all familiar with – because we’re constantly multitasking and skimming huge amounts of information in a superficial way in order to wade through our daily information overload, we have developed a preference for short snippets of entertainment rather than more in-depth (read intelligent/complex) material. We just don’t have the patience or bandwidth anymore for anything longer or more involved.

I think linking up the popularity of Downton Abbey with this notion is an interesting idea. Of course, I have no basis upon which to say whether Ms. MacLaine is right or wrong, but my instinct is that she is not quite right. Although it’s hard to avoid the conclusion that much of our entertainment has evolved towards less depth and more superficiality over the past decades, this drift towards the superficial precedes the internet age. Soap operas have been popular for a long time. Reality television was a well-entrenched phenomenon before the dominance of mobile devices made multitasking a daily reality. And come on now – look at the TV shows from the 1950s and 1960s: not exactly sophisticated material across the board. How much have we actually drifted towards the superficial? Maybe we’ve just always been here.

So, for me, this explanation doesn’t hit the mark. However, another way to interpret Ms. MacLaine’s comment is that we love having “15 or so lives and subplots” (to avoid getting bored) simply because we enjoy multitasking. It’s not that we CAN’T focus our attention for a long period of time. We just don’t like to. Perhaps we prefer shifting our attention because it feels better/easier/more familiar to divide our attention among several things. Perhaps we just want to have it all.

As an illustration: yesterday, I showed my four-year-old son Kavi a picture (on my iPad) of some of his friends. He liked it a lot, but there was something about it he didn’t totally understand (it was a joke picture). Whatever the case, he thought “it was cool.” He and his dad, Vivek Tiwary, were having a little boys’ time watching Tintin, so I started to walk away with the picture. He completely balked at that, claiming he wanted to look at the picture and watch Tintin at the same time. I asked him how he’d do that, why he would want to do that, and so on. No coherent answers were forthcoming except his claim that “it’s better this way.” And indeed, he proceeded to watch the movie, glance down at the picture on my iPad, watch the movie, glance down… for the next several minutes. He seemed to be enjoying himself. He seemed to feel it was better this way.

For me, the take-home message here was that for my little guy, more was just better. Maybe that’s the secret of Downton Abbey as well: it’s just a whole lot of whatever it is that makes it special.

 

Islands in the Stream: A Meditation on How Time Passes on Facebook

Shortly after the terrible tragedy in Newtown, I received email notifications that my (designated) close friends on Facebook had made status updates. Scrolling through my news feed, I saw friends expressing the range of emotions that we all felt – horror, sadness, distress, anger, and confusion. Later that day, I popped onto Facebook again and was jarred and a little upset to read that friends who seemed to have just expressed horror and heartbreak were now posting about everyday, silly, and flippant things.

Now, why should I be jarred or upset? Hours had gone by. After three, or six, or ten hours, why wouldn’t we be in a different emotional state, and why wouldn’t it be OK to post about it? I started to think that it was not my friends’ posts that were at issue here. Rather, it was the nature of how I perceive the passage of time and the sequence of events on Facebook. A couple of aspects of this came to mind:

Facebook time is asynchronous with real time. Time is easily condensed on Facebook. Events and updates that might be spread out over the course of a day or several days can be read at a glance, and therefore seem to be happening almost simultaneously. So, our perception of time on Facebook is a combination of how frequently our friends post and how frequently we check in. For example, say I check in twice in two days – at 9am on day 1 and at 9pm on day 2. I know a good bit of time has passed (and the amount of time that has passed is clearly indicated next to friends’ updates), but I still read each status update in the context of the previous ones – especially if I pop onto a friend’s Timeline instead of my news feed.

With this type of infrequent checking, friends’ updates about their varying and changing emotions (which might be reasonably spread out over the course of a day or multiple days) appear to be an emotional roller coaster. If someone has several posts in a row about the same thing, even if they are spaced days apart, the person comes across as preoccupied with the topic. Somehow, I form a view of this individual that brings these little snippets together into one big amorphous NOW. If I were checking more frequently, however, perhaps I wouldn’t lump updates together in this way. I’d “feel” the passage of time and – more accurately – see that the ebb and flow of status updates are like islands in the stream of our lives rather than a direct sequence of events.


Related to this first point, it occurred to me that status updates are not meant to be interpreted in the context of preceding status updates. Our brains are pattern-recognition machines. So, if Facebook status updates follow one after the other, our brains may perceive a direct sequence of events. But each status update is a snapshot of a moment, a thought, or a feeling. Intuitively, they are supposed to be stand-alone, not readily interpreted in the context of a previous update, even if they occur close together in actual time. Think how different this is from our face-to-face interactions, in which the sequence of events matters. For example, imagine that you’re at work, and your co-worker tells you she is on pins and needles waiting to hear back about a medical test. When you see her a few hours later, she is joking and laughing. You assume she either (a) got some good news from the doctor, or (b) is trying to distract herself from the worry. You don’t think she’s just having a good time, out of context of what you learned about her earlier in the day. But this contextualization is not the way it works on Facebook. Linkages between updates are tenuous, connections malleable. We can lay out our stream of consciousness in a way that requires no consistency among updates. Maybe the temporal and logical requirements of the offline world are suspended on social networking sites like Facebook. Maybe our brains need to catch up.

This is Your Brain on Technology?

There is a lot of polarized dialogue about the role of communication technologies in our lives – particularly mobile devices and social media: technology is either ruining us or making our lives better than ever before. For the worried crowd, there is the notion that these technologies are doing something to our brains – something not so good, like making us stupid, numbing us, or weakening our social skills. It recalls the famous anti-drug campaign: This is your brain on drugs. In the original commercial, the slogan is accompanied by a shot of an egg sizzling on a skillet.

So, this is your brain on technology? Is technology frying our brain? Is this a good metaphor?

One fundamental problem with this metaphor is that these technologies are not doing anything to us; our brain is not “on” technology. Rather, these technologies are tools. When we use tools, we change the world and ourselves. So, in this sense, of course our brain is changed by technology. But our brain is also changed when we read a book or bake a pie. We should not accord something like a mobile device a privileged place beyond other tools.  Rather, we should try to remember that the effects of technology are a two-way street: we choose to use tools in a certain way, which in turn influences us.

We would also do well to remember that the brain is an amazing, seemingly alchemical combination of genetic predispositions, experiences, random events, and personal choices. That is, our brains are an almost incomprehensibly complex nature-nurture stew.  This brain of ours is also incredibly resilient and able to recover from massive physical insults. So, using a tool like a mobile device isn’t going to “fry” our brain. Repeated use of any tool will shape our brain, surely, but fry it? No.

So, “this is your brain on technology” doesn’t work for me.

The metaphor I like better is to compare our brains “on technology” to a muscle. This is a multi-faceted metaphor. On one hand, like a muscle, if you don’t use your brain to think and reason and remember, there is the chance that you’ll become less mentally agile and sharp. That is, if you start using technology at the expense of using these complex and well-honed skills, then those skills will wither and weaken. It’s “use it or lose it.”

On the other hand, we use tools all the time to extend our abilities and strength – whether it’s the equipment in a gym that allows us to repeatedly use muscles in order to strengthen them, or a tool that takes our muscle power and amplifies it (think of a lever). Similarly, by helping us do things better, technology may serve to strengthen rather than weaken us.

It is an open question whether one or both of these views are true – and for what people and under what conditions. But I believe that we need to leave behind notions of technology “doing” things to our brains, and instead think about the complex ways in which our brains work with technology – whether that technology is a book or a mobile device.

 

Cyborgs, Second Brains, and Techno-Lobotomies: Metablog #2

Last week, I had the pleasure of being Freshly Pressed on WordPress.com – that is, I was a featured blog on their home page. As a result, I got more traffic and more interesting comments from people in one day than I have since I began blogging. Thanks, WordPress editors!

I’ve been really excited and inspired by the exchanges I’ve had with others, including the ideas and themes we all started circling around. Most of the dialogue was about a post I made on technology, memory, and creativity. Here, I was interested in the idea that the more we remember, the more creative we may be, simply because we have a greater amount of “material” to work with. If this is the case, what does it mean that, for many of us, we are using extremely efficient and fast technologies to “outsource” our memory for all sorts of things – from trivia, schedules, and dates to important facts and things we want to learn? What does this mean in terms of our potential for creativity and learning, if anything? What are the pros and cons?

I was fascinated by the themes – maybe memes? – that emerged in my dialogue with other bloggers (or blog readers). I want to think through two of them here. I am TOTALLY sure that I wouldn’t have developed and thought through these issues to the same degree – that is, my creativity would have been reduced – without these digital exchanges. Thanks, All.

Picture taken from a blog post by Carolyn Keen on Donna Haraway’s Cyborg Manifesto

Are We Becoming Cyborgs? The consensus is that – according to most definitions – we already are. A cyborg (short for cybernetic organism) is a being that enhances its own abilities via technology. In fiction, cyborgs are portrayed as a synthesis of organic and synthetic parts. But this organic-synthetic integration is not necessary to meet the criteria for a cyborg. Anyone who uses technology to do something we humans already do, but in an enhanced way, is a cyborg. If you can’t function as well once your devices are gone (say, if you leave your smartphone at home), then you’re probably a cyborg.

A lot of people are interested in this concept. On an interesting blog called Cyborgology, the authors write: “Today, the reality is that both the digital and the material constantly augment one another to create a social landscape ripe for new ideas. As Cyborgologists, we consider both the promise and the perils of living in constant contact with technology.”

Yet, on the whole, comments on my post last week were not made in this spirit of excitement and promise – rather, there was concern and worry that by augmenting our abilities via technology, we will become dependent because we will “use it or lose it.” That is, if we stop using our brain to do certain things, these abilities will atrophy (along with our brain?). I think that’s the unspoken (and spoken) hypothesis and feeling.

Indeed, when talking about the possibility of being a cyborg, the word scary was used by several people. I myself, almost automatically, have visions of Borgs and Daleks (look them up, non-sci-fi geeks ;-)) and devices for data streaming implanted in our brains. Those of us partial to future dystopias might be picturing eXistenZ – a Cronenberg film about a world in which we’re all “jacked in” to virtual worlds via our brain stem. The Wikipedia entry describes it best: “organic virtual reality game consoles known as “game pods” have replaced electronic ones. The pods are attached to “bio-ports”, outlets inserted at players’ spines, through umbilical cords.” OK, yuck.

There was an article last month on memory and the notion of the cyborg (thank you for bringing it to my attention, Wendy Edsall-Kerwin). Here, not only is the notion of technological augmentation brought up, but the notion of the crowdsourced self is discussed. This is, I believe, integral to the notion of being a cyborg at this particular moment in history. There is a great quote in the article, from Joseph Tranquillo, discussing the ways in which social networking sites allow others to contribute to the construction of a sense of self: “This is the crowd-sourced self. As the viewer changes, so does the collective construction of ‘you.’”

This suggests that, not only are we augmenting ourselves all the time via technology, but we are defining ourselves in powerful ways through the massively social nature of online life. This must have costs and benefits that we are beginning to grasp only dimly.

I don’t have a problem with being a cyborg. I love it in many ways (as long as I don’t get a bio-port inserted into my spine). But I also think that whether a technological augmentation is analog or digital, we need to PAY ATTENTION and not just ease into our new social landscape like a warm bath, like technophiles in love with the next cool thing. We need to think about what being a cyborg means, for good and for bad. We need to make sure we are using technology as a tool, and not being a tool of the next gadget and ad campaign.

The Second Brain. This got a lot of play in our dialogue on the blog. This is the obvious one we think about when we think about memory and technology – we’re using technological devices as a second brain in which to store memories to which we don’t want to devote our mental resources.

But this is far from a straightforward idea. For example, how do we sift through what is helpful for us to remember and what is helpful for us to outsource to storage devices? Is it just the trivia that should be outsourced? Should important things be outsourced if I don’t need to know them often? Say, for example, I’m teaching a class and I find it hard to remember names. To actually remember these names, I have to make a real effort and use mnemonic devices. I’m probably worse at this now than I was 10 years ago because of how often I DON’T bother to remember things in my own brain now. So, given the effort it will take, and the fact that names can just be kept in a database, should I even BOTHER to remember my students’ names? Is it impersonal not to do so? Although these are relatively simple questions, they raise, for me, ethical issues about what being a professor means, what relating to students means, and how connected I am to them via my memory for something as simple as a name. Even this prosaic example illustrates how memory is far from morally neutral.

Another question raised was whether these changes could affect our evolution. Thomaslongmire.wordpress.com asked:  “If technology is changing the way that we think and store information, what do you think the potential could be? How could our minds and our memories work after the next evolutionary jump?”

I’ll take an idea from andylogan.wordpress.com as a starting point – he alluded to future training in which we learn how to allocate our memory, prioritize maybe. So, perhaps in the future, we’ll just become extremely efficient and focused “rememberers.” Perhaps we will also start to use our memory mainly for those types of things that can’t be easily encoded in digital format – things like emotionally-evocative memories. Facts are easy to outsource to digital devices, but the full, rich human experience is very difficult to encode in anything other than our marvelous human brains. So if we focus on these types of memories, maybe they will become incredibly sophisticated and important to us – even more so than now. Perhaps we’ll make a practice of remembering those special human moments with extreme detail and mindfulness, and we’ll become very, very good at it. Or, on the other hand, perhaps we would hire “Johnny Mnemonics” to do the remembering for us.

But a fundamental question here is whether there is something unique about this particular technological revolution. How is it different, say, from the advent of human writing over 5,000 years ago? The timescale we’re talking about is not even a blink in evolutionary time. Have we even seen the evolutionary implications of the shift from an oral to a written memory culture? I believe there is something unique about the nature of how we interact with technology – it is multi-modal, attention-grabbing, and biologically rewarding (yes, it is!) in a way that writing just isn’t. But we have to push ourselves to articulate these differences, and seek to understand them, and not succumb to a doom-and-gloom forecast. A recent series of posts on the dailydoug does a beautiful job of this.

Certainly, many can’t see a downside and instead emphasize the fantastic opportunities that a second brain affords us; or at least make the point, as robertsweetman.wordpress.com does, that “The internet/electronic memory ship is already sailing.”

So, where does that leave us? Ty Maxey wonders if all this is just leading to a technolobotomy – provocative term! – but I wonder if instead we have an amazing opportunity to use these technological advances as a testing ground for figuring out, as a society, what we value about the capacities that we think make us uniquely human.

So Long Ago I Can’t Remember: Memory, Technology, and Creativity

I recently read an interesting blog post at Scientific American by the writer Maria Konnikova. In it, she writes about how memorization may help us be more creative. This is a counterintuitive idea in some ways because memorizing information or learning something by rote seems the antithesis of creativity. In explanation, she quotes the writer Joshua Foer, winner of the U.S. memory championship, from his new book: “I think the notion is, more generally, that there is a relationship between having a furnished mind (which is obviously not the same thing as memorizing loads of trivia), and being able to generate new ideas. Creativity is, at some level, about linking disparate facts and ideas and drawing connections between notions that previously didn’t go together. For that to happen, a human mind has to have raw material to work with.”

This makes perfect sense. How can we create something new, put things together that have never before been put together, if we don’t really know things “by heart”? This makes me think of the great classical musicians. Great musicians know the music so well, so deeply, that they can both play it perfectly in terms of the composer’s intention AND add that ineffable creative flair. It’s only when you’ve totally mastered and memorized the music that you can put your own stamp on it and it becomes something special. Otherwise, it’s robotic.

These issues are incredibly relevant to how human memory is adapting to new information technologies. Research has recently shown that when we think we can look up information on the internet, we make less effort and are less likely to remember it. This idea is referred to as “transactive memory” – relying on other people or things to store information for us. I think of it as the External Second Brain phenomenon – using the internet and devices as our second brain so that we don’t have to hold all the things we need to know in our own brain. As a result, how much do we actually memorize anymore? I used to know phone numbers by heart – now, because they are all in my phone’s address book, I remember maybe five numbers and that’s it. How about little questions I’m wondering about, like: When was the first Alien movie released (okay, I saw Prometheus last week)? The process of getting the information is – 1. Look it up; 2. Say, “ah, right, interesting”; 3. Then with a 75% probability in my case forget it within a week. Information is like the things we buy at a dollar store – easily and cheaply obtained, and quickly disposed of.

A colleague in academia once told me about an exercise his department made their graduate students go through in which they presented their thesis projects – the information they should know the best, be masters of really – using an old-school flip board with paper and Sharpies. Without the help of their PowerPoint slides and notes, they could barely describe their projects. They had not internalized or memorized the material because they didn’t need to. It was already in the slides. If they didn’t know something about their topic, they could just look it up with lightning speed. Only superficial memorization required.

In addition, the process of relating to and transcribing information has changed. Today, if students need to learn something, they can just cut and paste information from the internet or from documents on their computers. They often don’t need to type it in their own words, or even type it at all. They miss a key opportunity to review and understand what they are learning. We know that things are remembered better when they are effortfully entered into memory – through repetition and through multiple modalities, like writing them out and reading them. If you quickly and superficially read something, as we do all the time when we are on the internet or zooming from email to website to app, then you cannot put that information into memory as efficiently. For most of us, it just won’t stick.

On the other hand, shouldn’t the vast amounts of information we have at our fingertips aid us in our creative endeavors? Haven’t our world and the vision we have of what is possible expanded? Couldn’t this make us more creative? Perhaps, by delegating some information to our external second brains, we are simply freed up to focus our minds on what is important, or on what we want to create (credit to my student Lee Dunn for that point).

Also, I think many of us, me included, know that we NEED help negotiating the information glut that is our lives. We CAN’T keep everything we need to know memorized in our brains, so we need second brains like devices and the internet to help us. I don’t think we can or should always relate deeply to and memorize all the information we have to sift through. It is a critical skill to know what to focus on, what to skim, and what to let go of. This is perhaps the key ability of the digital age.

I also appreciate all the possibilities I have now that I would NEVER have had before were it not for the incredible breadth and speed of access to information. As a scientist, this has transformed my professional life for the good and the bad – along with opportunities comes the frequently discussed pressure to always work. But give up my devices? I’d rather give you my left arm (75% joking).

As a child developmentalist and psychologist, I feel that we have to acknowledge that these shifts in how we learn, remember, and create might start affecting us and our children – for good and bad – sooner than we think. This isn’t just a case of the older generation saying “bah, these newfangled devices will ruin us (while shaking wrinkly fist)!!!” I think these changes are evolutionarily new, all-pervasive, and truly different. We as a society have to contend with these changes, our brains have to contend with these changes, and our children are growing up in a time in which memory as we think of it may be a thing of the past.

Same, Same, But Different: Similarities and Differences Between our Online and Offline Lives

I was having an online dialogue with my friend Mac Antigua about how being an active social media and technology user can change how we relate to the world, and can make us feel that we are always on stage. He directed me to an interesting post about digital classicism.

The whole exchange made me think a lot about how the line between our offline “real life” and our lives online is becoming more blurred. Is there even a need to make this distinction? Isn’t the way we conduct ourselves online just an extension of who we are offline? The answer to this is complex, but I think, nicely summed up in t-shirts that my husband Vivek and I saw all over Bangkok when we visited in 2003 – “Same, same, but different.” At the time, we were pretty puzzled by it but found ourselves constantly quoting it. Later, we found out it’s a common Thai-English phrase meaning just what it sounds like.

I feel like life online is just like this – same, same, but different. How we interact, how we create identity, how we feel special and understood online is the same, same but different from our offline life. Here are three examples of this:

1. What counts as clever. In the offline world, being clever usually involves being quick-witted: having the fast comeback, thinking on your feet, and so on. But online, you have oodles of time to compose, rewrite, think about, and edit every comment you make. Self-presentation becomes a long-term process rather than a series of quick, face-to-face exchanges that “disappear” as soon as they have happened. These disappearing impressions are what used to be the basis of our views about each other. Perhaps no more. That’s not to say that many of us don’t dash off the spontaneous tweet or post. It’s just that when we’re trying to be clever, we can take our time about it.

This is nice in some ways, because it has an equalizing effect and gives those of us who are shy or just not speedy thinkers time to express what we mean. This feels like a healthy slowing down. On the other hand, for young people growing up today, does this create less of a challenge to their conversational skills? – and conversational skills are definitely learned and need to be practiced. Are kids going to be less able to carry on conversations that occur in real time than their counterparts a decade ago?

At the same time, does the knowledge that everything you post will be documented (forever) create a whole new set of pressures? These pressures are making some young people “drop out” of digital communities like Facebook: Just too much work and scrutiny. It’s nerve-wracking, trying to be clever.

2. It’s OK to brag. I’m actually not sure that it is OK to brag in online communities, but I see a lot more of it online than offline – even though I live in what is perhaps the bragging capital of the world, New York City. For example, when I first started tweeting, I was surprised that people were spending so much time retweeting posts that others made about them, or tooting their own horn about something or other. In the offline world, if someone started saying things like – “Oh, so-and-so just mentioned what an awesome researcher I am!” – multiple times a day, I would think they were disturbingly self-involved and egocentric.

This seems to be an important difference between online and offline, because one of the purposes of the digital social network is to get yourself and your work “out there.”  So, perhaps this is exactly what people should be doing. Does this mean that social mores about bragging may be changing? The interesting thing to watch will be whether these tendencies trickle down into our offline lives.

3. Being cool. I’m no expert on cool, but it seems to me that how people are cool online is quite different from the traditional ways of being cool. Online, cool seems to be defined by the number of friends/followers/connections you have, as well as your sheer presence in terms of posts. It’s about how interesting a conduit of information and cutting-edge ideas you are. Cool is also something you have time to work at, since very little is spontaneous (see #1 above).

In contrast, few are being the strong, silent, aloof type, full of self-confidence and self-control (think James Dean). Instead, everyone seems to be shouting from the rooftops (or whatever the digital analogy would be) what they think and feel and see. It’s a very “look at me” world online, not a subtle world of understatement and innuendo. This is a world in which people live out loud, the louder the better.

Online heroes seem to act the same way as us regular folk in this regard – and maybe even worse because of what can be at times their oblivious self-importance. I once followed an actor on Twitter for all of 10 minutes before unfollowing him because the first tweet of his that I read was about the enormous bowel movement he just had. Seriously.

Of course, there is a lot of variability in how people behave online, but based on my observations, this non-James Dean way of being seems to be the norm. One reason for this shift in cool may be that online, tech-savvy geeks rule the world, so the definition of cool has altered to fit their goals and ways of being. Another may simply be a function of the technology. You can’t be strong and silent online because you would never post anything – and you therefore wouldn’t “exist.” One must be active and one must be taking a chance by putting oneself out there.

This breaking down of cool, in this sense, seems cool to me – when it’s not annoyingly self-involved.  And honestly, it is NEVER cool to tweet about your poo.