Blog

  • The problem with the gaming/cognitive functioning link

    As someone who spent countless hours in his youth playing Doom, Street Fighter II and other effective ways of making time speed up, I really want the link between computer gaming and enhanced cognitive functioning, which I’ve mentioned before, to be true. It would validate every hadoken, justify every gib. But although the evidence is promising – encouraging even – it’s not quite there yet. Walter Boot, Daniel Blakely and Daniel Simons published a review in 2011 pointing out the distance we have yet to go before we can be sure about StarCraft’s place in our cognitive training routine.

    Remember these guys?

    Firstly, we have the problem of demand characteristics in some of the non-experimental studies — the ones that take a group of gamers and compare them to non-gamers on various cognitive abilities. Gamers need to come out on top here to even consider video games as cognitive enhancers, of course, but even if they do, it doesn’t mean that games are causing the difference. Perhaps the gamers had these cognitive advantages to begin with, and that’s why they take so well to the games. Or perhaps they were more motivated to perform well during the testing.

    Many such studies specifically advertise for experienced gamers. Other research has shown that if you think you’re likely to perform well on a certain task, you’re sometimes more likely to do so. This problem is particularly relevant when you consider that many gamers will be aware of the news reports linking gaming to cognitive enhancement, and may have some idea that this is what the researcher is testing.

    The way around this is normally to do an experiment — take a group of people, preferably non-gamers, and give them a battery of cognitive tests. Then randomly split them into two groups, tell one group to play video games for a few weeks and the other group not to, then give the same tests again. You’ll then see if the video gamers have improved relative to the non-gaming group.

    But the same problems exist as with the non-experimental studies. The gamers know they have been gaming and might deduce that they are supposed to perform better on the cognitive tests in a follow-up. This is why placebo control groups are used — both groups play video games, but the placebo group plays one that is not expected to bring any cognitive benefits, usually a slower-paced game like Tetris. However, if the tests used more closely resemble the action video game than Tetris, you can still make the case that an expectancy effect is in play. Even though it might seem more intuitive that the video games are working, the design of the experiment is not sufficient to pry the two possibilities apart conclusively (you could, for example, ask participants whether they expected to improve, although even this has its own problems).
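    To make the confound concrete, here’s a minimal simulation in Python, with entirely made-up numbers (none of these figures come from the studies discussed), of a placebo-controlled gaming experiment in which the real training effect is zero but the action-game group expects to improve:

```python
import random

def gain_score(real_effect, expectancy_effect):
    """Change in cognitive test score (post minus pre) for one participant,
    in arbitrary units: real training effect + expectancy effect + noise."""
    return real_effect + expectancy_effect + random.gauss(0, 5)

def run_trial(n=50, real_effect=0.0, expectancy=5.0):
    # Both groups play a game; only the action-game group expects a benefit.
    action = [gain_score(real_effect, expectancy) for _ in range(n)]
    placebo = [gain_score(0.0, 0.0) for _ in range(n)]
    return sum(action) / n, sum(placebo) / n

random.seed(1)
action_gain, placebo_gain = run_trial()
# With zero real training effect, the action group still "improves" more,
# purely through the simulated expectancy effect.
print(action_gain > placebo_gain)
```

    The point isn’t the numbers, which are invented, but the shape of the problem: unless the placebo game generates the same expectations as the action game, a difference in gains can’t be attributed to the games themselves.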

    Further muddying the waters, some studies have failed to find a difference between gaming and non-gaming groups in both experimental and non-experimental tests.

    Where to go from here

    This might be disappointing, but there is some evidence of cognitive benefits caused by video games. We just don’t know why, or what conditions or individual differences are most amenable to such effects. Boot, Blakely and Simons propose that future studies should meet the following criteria (no study yet published has managed to meet them all):

    • Covert recruitment (participants aren’t told the nature of the study)
    • The paper should detail the recruitment method
    • Experimental studies should be conducted
    • Participants should be screened for familiarity with the idea that gaming brings cognitive benefits, and asked whether they expected the gaming they did in the study to enhance their test results
    • The placebo control games should offer equal expectancy effects on the performance of the cognitive tests
    • Neuroimaging should be used to help pry apart expectancy effects versus actual cognitive changes

    If gaming has any chance of non-domain-specific cognitive enhancement, the results could be used to help fight age-related cognitive decline, help people in their personal development (working memory may be more closely linked to academic success than IQ), and give teenagers the world over valid excuses not to get off the PlayStation. So it’s worth spending the time and money getting to the bottom of this.

    Now if you’ll excuse me I have to go play Call of Duty. For science.

     

  • How to improve social anxiety by training your attention

    In 2009 Brad Schmidt and colleagues published a clever treatment for social anxiety disorder. Before I describe it, a short “spoiler” alert…

    If, as I suspect, you are reading this looking for a self-help treatment for social anxiety, I recommend that you do not read this article, as knowing the nature of the experiment might negate its effects (or it may not; I don’t know, but knowing surely can’t help you, so let’s stay on the safe side).

    Instead, try to get hold of the computer program used in the study. The best lead I have is Richard McNally’s lab, which tested an iPhone, iPad and Android app of the program. There might be an ongoing study you can take part in, or you could try requesting a copy of the app for your own use.

    End of spoiler alert

    Hypersensitivity to threats is a feature of social anxiety disorder. Where one person sees a disgusted facial expression and ignores it to continue chit-chatting, the person with SAD will focus on this facial expression and take it as evidence that they are being poorly judged.

    They are negative evaluation detectives, scanning and interpreting social situations in a way that paints them negatively. For whatever reason, an adaptive behaviour — making sure we’re not pissing off our allies — has become maladaptive, leading to anxiety.

    Threat? Photo: massdistraction

    A potential treatment, then, would be to re-train the attention not to focus on negative facial expressions so much. This is what the program aims to do. Here’s how it works.

    Participants are presented with two pictures of people, one displaying a threatening facial expression, the other a neutral one. The pictures stay for a while and then disappear, and one picture leaves a letter in its place. Participants press a key to indicate which face left the letter behind. They are told to do this as fast as they can.

    The trick is that 80% of the time the letter appears behind the non-threatening face so that over time, participants are being trained to move their attention away from threatening faces. With less attention paid to them, there’s less opportunity to infer negative judgements. The fact that participants have to press the keys quickly is important here, like a “gamification” effect to increase engagement and attention.
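    The trial logic is simple enough to sketch. Here’s a hypothetical Python version; only the 80/20 contingency is taken from the study, while the trial structure and names are my own illustration:

```python
import random

def make_trial(bias=0.8):
    """One attention-training trial: two faces appear, then a probe letter
    replaces one of them. With probability `bias` the letter replaces the
    NEUTRAL face, nudging attention away from the threatening one."""
    positions = ["threatening", "neutral"]
    random.shuffle(positions)  # randomise which side each face appears on
    probe_behind = "neutral" if random.random() < bias else "threatening"
    return {"left": positions[0], "right": positions[1],
            "probe_behind": probe_behind}

random.seed(0)
trials = [make_trial() for _ in range(10_000)]
neutral_share = sum(t["probe_behind"] == "neutral" for t in trials) / len(trials)
print(round(neutral_share, 2))  # close to 0.8
```

    Over thousands of trials the contingency does the work: responding quickly to the probe is only possible if attention has already drifted towards the neutral face.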

    Participants completed eight 15-minute sessions on the program, two per week for 4 weeks. Could such a short, simple game really make a real-world difference in social anxiety disorder? Well, this is only one test and it needs to be repeated, but the results were impressive. After 4 weeks, 72% of participants no longer met the criteria to be diagnosed with social anxiety disorder, compared with 11% in the control group. The results held in a follow-up four months later.

    So, yes, so far it seems it can.

  • Psychic powers, the collective consciousness and publication bias (and Ben Goldacre)

    Ben Goldacre’s second TED talk was published this month, and it’s along similar lines to the first (you can find his previous one here). He’s on top form and the whole thing is great, but I want to mention one part of it.


    Magic magic performed here! See your money disappear!

    He starts off by discussing the infamous Bem study on precognition from a little while back. This study found that people’s emotional reactions to certain images (like a snake) sometimes occurred before they saw the picture, as though their spidey-sense was tingling.

    This caused quite a stir. Believers felt vindicated and skeptics started looking for methodological flaws, and predicting that future replications would fail. However, some believers in the paranormal predicted that a replication would be attempted and fail too — but not because there’s no such thing as psychic powers.

    Instead, the reason is that successful results are only possible when the collective consciousness of the people who are aware of them believes that precognition exists. Sort of like the character in Mystery Men who can only turn invisible when no one is watching, or, to complete the analogy, only when people who believe he can turn invisible are watching.

    Once a study is made public, the consensus of the consciousnesses that are aware of it changes. Previously, the majority of the minds that knew about the study were believers in the paranormal, but now, the majority believe in the materialistic scientific model, which does not predict psychic powers to exist. Hence, further replications fail.

    This explanation is quite a common one, and it’s also used by psychics, dowsers and telepaths who fail tests under controlled conditions — essentially, skeptical vibes interfere with psychic powers. In order to experience something in your life, you have to believe it to be true wholeheartedly; if you don’t believe in ghosts you’ll never see one, or at least, your experience would be easy to explain away.

    Reasoning forward, if you’re a skeptic scientist studying the paranormal, you’re destined for the life of a debunker, while if you’re a believer scientist studying the paranormal, you’re destined to be debunked by the skeptics. Presumably, paranormal scientist believers who later changed their tack never really believed.

    But as Goldacre points out, there’s another explanation for this — publication bias. Science is probabilistic; it’s all about the likelihood of a certain hypothesis being correct, which gets closer and closer to proven with more positive studies, but it can never get there. Along the way, some studies will support a hypothesis and some won’t.

    Even the historic discovery of the Higgs boson earlier this year was like this. They didn’t get a picture on screen and go “Oh shit! There it is!” They continuously calculate a percentage chance that it exists by analysing the data produced by the particle bashing, and when that probability reaches a certain level, they release the findings.

    Many of the particle collisions didn’t contribute to that probability, and so it is in human studies — only now we have a larger problem. The collider is continuously doing its work and analysing data, each collision adding to the total pool. But in peer-reviewed science, the pool of data tends to be skewed towards positive results. This is publication bias.

    Prior to this study, there will have been lots and lots of tests of precognition and psychic powers. But, as Goldacre points out in the talk, try going to a leading journal and saying “Hey, I’ve got a study here saying that students can’t predict the future! Want to publish it?” It’s unlikely to work.

    But combine these two things — the probabilistic nature of science and the tendency of journals to only publish positive results, and you get a problem — fluke findings are more likely to make their way into papers.
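    Goldacre’s point can be demonstrated with a toy simulation. Assume no psychic powers exist, run many null studies at the conventional p < .05 threshold, and let journals publish only the “positive” results (everything below is illustrative; no real data is involved):

```python
import random

def run_study(n=100):
    """One null-effect precognition study: n participants guess a coin flip.

    Returns True (a "positive" result) if the hit rate deviates from chance
    by more than ~1.96 standard errors, i.e. a false positive at p < .05.
    """
    hits = sum(random.random() < 0.5 for _ in range(n))
    se = (0.25 / n) ** 0.5  # standard error of a proportion at p = 0.5
    return abs(hits / n - 0.5) > 1.96 * se

random.seed(42)
results = [run_study() for _ in range(1000)]
published = sum(results)  # journals take only the "significant" studies
# A few per cent of null studies come out "significant" by chance alone --
# and those are the ones that reach print, so the published record looks
# like evidence for an effect that doesn't exist.
print(published, "of", len(results), "null studies were publishable flukes")
```

    Scale that up to a field’s worth of unpublished file drawers, and a handful of statistical flukes starts to look like a consistent literature.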

    The Bem study was later repeated by Stuart Ritchie, Richard Wiseman and Christopher French, who failed to replicate the findings. They had trouble getting this one published, but eventually did. I wonder if they would have gotten their replication published if Bem’s study had never existed.

    Still, all this is consistent with the collective consciousness idea, because we don’t know what happened in every single study on precognition ever conducted. If we did (which, ironically, would probably require paranormal powers), we’d be able to say the collective consciousness idea is dead wrong: many studies conducted prior to this one were unsuccessful, therefore it’s nothing to do with belief and shared reality and everything to do with good old-fashioned publication bias. For now I’ll let you make up your own mind on that one.

    Moving to more earthly concerns than the nature of reality, this problem of publication bias isn’t limited to the supernatural. Every study looking at it has noted its prevalence, and the publication of studies in favour of certain drugs while negative results were buried has cost hundreds of thousands of lives. But I’ll let Ben Goldacre tell you all about that:

  • 6 ways that the influence of Facebook has changed our lives

    I’m writing this partly for posterity — maybe in 10 years, when we’re living our entire lives in the Facebook Virtual Reality Matrix, we’ll look back and say “Remember when it was just a social networking site?” And partly out of old-fashioned curiosity. I should disclose that I’m one of the five or six people in the world who doesn’t have a Facebook account.

    Ubiquity

    Even if you don’t use Facebook, you know about it. It has close to a billion users, about 1/7th of the human population of the planet! And if you use Facebook, you really use it: 3.2 billion likes or comments are generated every single day, while in the first quarter of 2011 over 300 million photos were uploaded each day. Each day!

    Entertainment

    A study this year tried to find out what was driving the eight hours a month that Americans spend in front of Facebook. It tested the five established categories for online activity: information seeking, interpersonal communication, self-expression, passing time and entertainment. Only information seeking wasn’t relevant to Facebook, with the biggest factors being entertainment and passing time. In other words, we use Facebook mainly because we’re bored!

    Business

    Over 4 million businesses have pages on Facebook now. With a billion people to sell to and easy content sharing, why wouldn’t they? If you can write a good piece that people like and share, they’re doing your marketing for you. Facebook itself is the second top earner of online display ads (behind the mighty Goog), although its growth forecast was cut last month by about a billion dollars.

    Lex-Appeal

    Through shock and awe Facebook has invaded our vernacular. It can be a noun — “Are you on Facebook?” A verb “Look, a goat that sounds like a man, I’m going to Facebook that!” It even has a gerund: “Are you still Facebooking?” Other aspects of Facebook vernacular have also found their way into the dictionary, like “Unfriend.” Yes, unfriend is a word and has been since 2009.

    Email is for dinosaurs now

    Email is passé now? You’re kidding me. Yet it makes sense — why log in to Gmail when you can message your friends on Facebook? They probably check Facebook more often than email, giving you more chance of a reply, and you don’t have to open a new tab. I remember when people would say to me “I don’t have email,” and I’d think “Dinosaur.” Now I’m the dinosaur. Hey, don’t get cocky, Facebooker. In 20 years you’ll be trying to double-click your quantum mind-control matrix interface and your kids will be laughing at you.

    Don’t search us, we’ll search you

    This is one that I find particularly interesting. People expect less and less to go out and find news and content they find interesting; they expect it to come to them. And the more that sites know about you, the better they can get at delivering what you want. Facebook is not the only one involved in this process — even search engines now deliver results not based on an objective search of the web, but on your past searches and browsing history. But the nature of Facebook necessitates this. Although most people post things on Facebook that they like, not necessarily what they think their friends will like, birds of a feather flock together, making it a safe bet anyway.

  • Is porn bad for you?

    Gary Wilson of YourBrainOnPorn.com believes that it is.

    The empirical evidence for this is getting there but still somewhat thin. There’s a hilarious reason for that — researchers can’t find enough men who haven’t watched porn in order to form a comparison group! However, there’s some mileage to the idea and it warrants further study.


    You should see the other pics I considered using.

    Wilson’s premise is one that I discussed previously in the Tugging the Human Instinct post from a while back. Actually, the same reasoning can be applied to much that’s fucked up about modern life (and points to the solutions too). It goes like this:

    Our culture has evolved far more quickly than our biology. We’re no longer living in the environment that we’re most suited for. Parts of our brain are wired to respond to certain things that were beneficial to our survival and replication. Our culture now rewards people (monetarily) if they can find ways to activate these areas with superstimuli, which tend to come with negative side effects. Pornography, particularly online pornography, is one such superstimulus.

    To be more specific, we’re adapted for life in tribes of 100-150 people, which would occasionally come into contact with other similarly sized wandering tribes (this is where our instincts towards in-group/out-group behaviour stem from, be it my sports team is better than yours, my martial art is more effective than yours, my religion is the true one, and so on). I don’t know how many tribes you’d bump into as a hunter-gatherer, but given a life expectancy of around 30 and excluding women below breeding age, you’d probably see no more than a few thousand women, and only maybe 60 or so on a regular basis.

    If you go to a porn site, you can see 60 women of above-average attractiveness in a few minutes. This overloads your brain in a sense, tricking it into thinking you’re part of the hottest tribe ever!

    And if you get bored of one woman, you can load another up in a second. This level of novelty is also a superstimulus. It’s this combination of availability and instant novelty that creates the dependence and the psychological issues.

    There’s a little more to it than that neurologically, but that’s the gist of it. If you’re interested in learning more, check out Wilson’s TED talk, conveniently located right here:

    It’s ironic that he did a TED talk, since if there’s such a thing as “Information Porn,” that site is its biggest pimp!

  • Evolution has no goal

    Have you seen “The One with Ross’s Library Book?” It’s an episode of Friends where Ross finds out that people in his university are going to the library and doing naughty adult things in front of his thesis. So he stakes it out, and when a hot blonde comes along he cynically questions her reasons for being there: “I suppose you’re here to read up on Merriam’s views on evolution?”


    Evolution has no goal: not even Judo.

    To his surprise, she actually is there for a book and not for some nookie, and replies with “Actually I find Merriam’s views too progressionist.” Ross agrees and then they also get it on in front of Ross’s poor thesis.

    As well as being pretty funny, it’s also accurate. John C. Merriam was a palaeontologist who believed the purpose (so to speak) of evolution was to increase the efficiency and/or complexity of organisms. This is what “progressionist” means in this context: organisms make “progress,” showing improvements over the generations that came before them.

    Although progressionism hasn’t been completely killed off, it has long since fallen out of favour. You can sometimes see hints of it in papers and books, but it’s hard to say whether these represent statements of position or just layman’s-terms shorthand. Progress of a sort does happen in short runs, such as with human intelligence, but for the idea to be accepted you’d have to see consistent, long-term, one-directional changes in an organism, for which there’s no conclusive evidence.

    Despite lacking decisive evidence to back it up, this idea crops up all over the place. A common example is the claim that since we’ve created a safe society in which everyone is more likely to survive and reproduce, we’ve stopped evolving. This is progressionism, because it assumes that, in order to evolve, the “best” (aka the “fittest”) need to pass their genes on while the weak must fail to. In a society where the “weak” have just as much chance of reproducing as the “strong” (or perhaps more; many competent people put career ahead of family), this weeding-out process cannot happen. Hence, “progress” has been halted.

    However, that’s incorrect, because the “fittest” in “survival of the fittest” just means “most likely to reproduce in a given environment.” So the career-minded go-getter who put money ahead of family until her 40s and then found she couldn’t have kids due to various complications is less well fitted to her environment, evolutionarily speaking, than someone who lives on benefits and does nothing but pop out six kids (none of whom can be quiet in public).

    From the progressionist point of view, Mrs Career is the fittest. But if she doesn’t have kids, her genes won’t reach the next generation. If less intelligent people were more likely to reproduce in a given environment, it means that intelligence is a hindrance, an evolutionary disability. The direction of evolution isn’t guided by anthropocentric values, so be careful about projecting them onto it.

    OK, now you have some geeky things to say next time you’re watching Friends!

  • Is food addictive?

    Last year an edition of the journal Addiction was dedicated to food addiction. But whether food addiction actually exists is not an easy question to answer.

    One common definition of addiction is that use of a substance or participation in an activity continues even after it has a detrimental effect on your life. So you keep taking heroin instead of eating, and that’s detrimental. Could the same be said about food? Certainly, some people are unable to control their eating to the point that it becomes detrimental, leading to health problems like heart disease and diabetes.

    Of course, not everyone who takes drugs becomes addicted, and likewise not everyone who eats does so to consistent excess. Part of the answer lies in the interaction between the substance or activity and the body – some people’s brains react differently. The brain has reward pathways that trigger a dopamine release whenever we do something that was, in our ancestral past, beneficial to our survival and replication.


    Have you noticed this trend in “food porn,” where people are taking photos of their food and posting it online? Photo by SteFou

    For example, because sugar was rare back then (there was no Cave Mart), and because it was contained in food that was nutritious (that is, fruit), our ancestors who gorged on sugary food when they found it did better – they got more nutrition than those who ate a couple of berries and left it at that.

    But now, of course, sugary food is not only plentiful, but its correlation with nutrients has diminished greatly. Yes, you can get any fruit you want, but who satisfies their sweet tooth with an apple? No one; we head for the cake and chocolate, and take the fat and other crap that goes with it.

    Yet for some people, food isn’t much of a big deal. The reason for that might be variation in this desire to seek out food – when some people eat, they get a bigger dopamine release, a bigger reward response than other people, which encourages eating and keeps calories high.

    This has been demonstrated in neuroimaging studies – obese people tend to show altered reward and tolerance responses to food. Though it’s not the whole story, as insulin resistance and variations in other hormones also tend to be present. Furthermore, it’s hard to say if the food itself is what’s triggering the changes in reward response, because it’s hard to find people who eat junk food but aren’t also exposed to marketing messages or stress, and haven’t dieted in the past, all of which may mess around with things.

    Also, maybe “food addiction” is too broad a term. You don’t see people whose lives have been affected by their inability to stop eating vegetables, for instance. Although sugar addiction has not been demonstrated in humans, fat and salt may have some addictive potential, though there isn’t much data on that yet. And that’s not even mentioning additives and other junk that gets put in food.

    So you’ve got a few issues here. First, you have inherent variation in the way people respond to food, neurologically. Then you have environmental factors that change the way people do the same. The people who have a high reward response to food are perfectly fine in certain environments, such as the proverbial active hunter-gatherer lifestyle, but put them in the modern world and things are different because of the things I mention above. Only by combining the two can you get a sensible idea of who’s at risk.

    Remember, what we now consider to be obviously addictive was once up for debate, including nicotine and cocaine, and now we’re discussing things like food, gambling, sex and even the internet. Maybe one day we’ll look back and say: yes, when a person has addictive potential x and they’re in food-abundant, marketing-heavy environment y, they have a z per cent chance of displaying addiction. Or in other words, put human population x in environment y and z per cent of people will become addicted. Or maybe psychologists are just looking for another disease they can cure you of. 🙂

    See also:

    Is fast food addictive?

  • How to dance according to science (includes videos!)

    The theory of sexual selection proposes that certain traits evolved due to the preferences of the other sex. These preferences may evolve because the trait is an indicator of genetic fitness, for example through being related to better health. Random genetic mutations that lead an individual to better display this trait make that person “sexier” to the other sex, and hence the gene is more likely to make it into the next generation.


    T1000 Getting jiggy with it. John Connor, get down!!

    Many such traits are physical characteristics, as we’ve discussed before, but research on numerous species suggests that certain variations in movement patterns can also be “sexy,” particularly when displayed by males and preferred by females, as seen in some birds, ungulates and crustaceans, for instance.

    We humans seem to use this fitness indicator too — married couples dance together by tradition, strippers dance instead of just standing there taking their clothes off, and I’ve never heard someone say that they don’t want a partner who’s a good dancer! So maybe dance serves to indicate beneficial traits in humans too? A study from 2010 tested this idea.

    Confounds

    A problem with testing this scientifically is the set of confounds that tend to go along with good dancers. For example, if you got a load of people to dance in front of participants, then asked them to rate the dancers’ attractiveness, things like facial attractiveness, clothing or height might get in the way.

    To isolate the effect of dancing alone, the researchers had males dance for 30 seconds using a motion-capture system. The movements were then mapped onto an avatar, a faceless humanoid shape that kind of looks like the T1000 from Terminator 2 when it’s in liquid metal mode. Females then rated the avatars on their dancing quality.

    The best dancer

    The results indicated that the following are preferable to females in a male dancer:

    • Variability and amplitude of movements of the head, neck and trunk
    • Faster leg movements
    • More and quicker right-knee bending and twisting

    Here’s the good dancer:

    I know, it looks ridiculous to me too. Here’s the bad dancer:

    These are really only preliminary results, and more tests are needed to confirm the effect of this type of movement. Then it’s necessary to figure out if and how these particular movements could be signals of fitness and health. But in the meantime, now you know what to do on the dance floor!

    And here’s (kind of) an attempt by a YouTuber to reenact the good dancer. He seems to have thrown a few of his own moves in, making it only slightly cheesier…

  • Can being an expert undermine your performance?

    As with bilingualism, it’s generally assumed that being an expert is completely beneficial and has no downsides for performance. However, we know that expertise tends to be domain-specific; for example, chess grandmasters can memorise chess boards far more quickly and easily than novices, but on standard cognitive tests they tend to fare no better. In fact, if you arrange chess pieces in positions that would never be encountered in an actual game, their recall is again no better than that of chess novices, showing just how domain-specific expertise can be. But surely within a given domain, expertise can only be beneficial?

    Castel, McCabe, Roediger and Heitman suggest not. They gave 40 students a memory test consisting of eleven animal names and eleven body parts. The twist here was that all the animal names were also NFL team names, like dolphins, colts, seahawks and bears. After the memory test, participants were given an NFL quiz, and the group was split into two, those scoring above and below the median on this test, to give high expertise and low expertise groups in the domain of NFL knowledge.

    The results on the memory test for the two groups were then compared. Indeed, the NFL experts remembered more of the animal names than the non-experts, while there was no difference between groups on the body parts test. So far so good. However, the researchers also checked for incorrect answers — NFL animal team names and body parts that were not part of the original list. The results indicated that the experts were much more likely to make incorrect guesses than the non-experts. The authors suggest that this represents memory errors: the domain-relevant knowledge of the experts got in the way of their accurate recall of the animal names. Since there was no difference between groups in body-part expertise, false answers were about even between groups on that test.

    Is this really the case though? Or was it that the experts consciously noticed that the animal names belonged to NFL teams and simply reeled off as many as they could remember during recall? Perhaps it was not a case of the existing schema interfering with memory, but a recognition that they already knew these names — so why bother taking the extra effort to think back and recall? Why not just reel off the schema? I wonder if the results would be the same if participants were told that they would score 1 point for a correct answer, but lose 1 point for an incorrect one, which might increase the incentive to actually recall. In other words, maybe this effect is a conscious strategy used in situations where there’s no cost to an incorrect answer.

    However, there are other studies that support the authors’ conclusions, which I haven’t read so perhaps my question has been answered before or since. Either way, it’s an interesting thought that the knowledge base acquired by experts might be detrimental in certain tasks.

    ref:
    Castel AD, McCabe DP, Roediger HL 3rd, & Heitman JL (2007). The dark side of expertise: domain-specific memory errors. Psychological science, 18 (1), 3-5 PMID: 17362368

  • Is it better to use pictures or words when learning languages?

    The Rosetta Stone people are making a killing through their concept of “natural” language learning. That is, their angle is that with their product, you supposedly learn a new language in the same way you learned your first, which allegedly makes the process easier.

    To accomplish this, they use pictures. So you see and hear a foreign word, and a collection of pictures, and you pick the one you think the word represents.

    This makes nice, intuitive sense, although a skeptic might note that pictures are also an easy way to grow the product line: pictures are universal, while using words would basically mean rewriting the whole thing for each country you sell in. So it would be good to see a few tests of this learning method.

    You’d certainly expect pictures to be more effective, but results have been mixed. Shana K. Carpenter and Kellie M. Olson devised a few studies to tease out the answers.

    A Good Old-Fashioned RCT

    Using Swahili as the target language, they first ran a standard randomised controlled trial. Half the participants were shown a Swahili word alongside its English translation, while the other half got the Swahili word with a picture, probably something like this:


    kolb (Photo Credit: EpSos.de)

    (Just to clarify, “kolb” is the only relevant part. Don’t run up to four-legged animals in Kenya shouting “Here Photo Credit! Heeere Photo Credit!!”)
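    The random assignment at the heart of an RCT like this is simple to sketch. Here's a minimal illustration in Python; the participant IDs, seed, and condition labels are my own invention, not details from the paper:

    ```python
    import random

    def assign_conditions(participants, seed=42):
        """Randomly split participants into two equal-sized conditions."""
        pool = list(participants)
        rng = random.Random(seed)   # fixed seed makes the split reproducible
        rng.shuffle(pool)
        half = len(pool) // 2
        return {
            "picture_swahili": pool[:half],   # sees picture + Swahili word
            "english_swahili": pool[half:],   # sees English + Swahili word
        }

    groups = assign_conditions(range(40))
    # Every participant lands in exactly one condition, 20 per group.
    ```

    The point of the shuffle is that any pre-existing differences (prior language experience, motivation, and so on) get spread evenly across conditions rather than piling up in one group.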

    The results showed no difference in the number of words learned: pictures were no more effective than words. Why could this be? One possibility is that the pictures weren’t encoded into memory very well. To test this, participants were also asked to free-recall as many of the pictures (or English words) from the study as they could. People who had been shown images rather than words remembered significantly more items, indicating that the lack of benefit from pictures was not caused by insufficient encoding.

    So if the pictures themselves are easier to recall than plain words, why weren’t their paired Swahili words easier to remember too? On to the second experiment…

    The Multimedia Heuristic

    A heuristic is a basic rule of thumb the brain uses to save processing time. Think of it like a stereotype: rather than spend the energy to take people as they are, it’s easier to assume they possess the characteristics associated with the groups they belong to. There’s probably a survival element here, since life-or-death situations demand quick responses, so we have a built-in time-saving “automatic” reasoning system.

    The multimedia heuristic is people’s assumption that text combined with images is easier to remember than text alone. It seems a reasonable rule of thumb, yet the evidence doesn’t support it. Maybe when people see a picture with the foreign word, the energy-conserving multimedia heuristic kicks in and the brain allocates fewer resources to processing and encoding that word. Why bother with the effort? It’s got a picture with it!

    So the test was repeated, but this time participants were asked, for each item, whether they thought they’d remember it in five minutes. The whole procedure was run three times. In the first round, the pictures group was overconfident, and as before there was no difference in performance between the groups. In the second round, however, both groups showed a dramatic drop in confidence (perhaps after seeing their first-round results), and the pictures group did indeed recall more words than the words-only group! The same was found in the third round.

    So it works! Perhaps once their overconfidence was punctured, the multimedia heuristic no longer kicked in and the brain devoted more resources to the learning.

    Don’t be overconfident!

    In the third experiment, participants were split into two groups, each of which was tested on both picture-Swahili and English-Swahili word pairs. However, one group was given a little message warning them not to be overconfident:

    People are typically overconfident in how well they know something. For example, people might say that they are 50% confident that they will remember a Swahili word, but later on the test, they only remember 20% of those words. It is very important that you try to NOT be overconfident. When you see a Swahili word, try very hard to learn it as best you can. Even if it feels like the word will be easy to remember, do not assume that it will be. When you see a Swahili word with a picture, try your best to link the Swahili word to that picture. When you see a Swahili word with an English translation, try your best to link the Swahili word to that English translation.

    Confidence was tested in the same way as the previous test, and indeed, the group receiving the warning reported lower confidence. Did this affect results?

    Of course! People who received the warning performed better than people who didn’t on the picture task, but not on the words task. Since overconfidence is not an issue when remembering word pairs, this both implicates the multimedia heuristic and suggests a way to improve the learning of second-language words: don’t be overconfident!

    Maybe you can now remember the Swahili word for dog, presented earlier? It’s “kolb.” If you’re thinking “Photo Credit,” I apologise profusely!

    Reference:

    Carpenter, S. K., & Olson, K. M. (2012). Are pictures good for learning new vocabulary in a foreign language? Only if you think they are not. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(1), 92-101. DOI: 10.1037/a0024828