Category: Psychology

  • The evolution of gratitude

    We humans intuitively know how important it is to express gratitude. “Thank you” is one of the first phrases we teach our children, and one of the first things we learn when visiting a foreign country. There’s a part of us that would just feel bad if we didn’t express gratitude when someone has done something for us (well, most of us at least).

    Try going a week without saying “thank you” to anyone. Not your spouse, not a work colleague, not the barista at Starbucks. Imagine how uncomfortable you’d feel! Somehow we just know that this would be upsetting to others, and also bad for us too – we’d get a reputation for being rude, and people would be less likely to help us out in the future.

    Where did this strong instinct and understanding come from? Is it something we simply learn from a young age, drilled into us? Maybe, but some researchers think it’s an evolved trait – we’re simply born with a gratitude instinct.

    A paper from 2008 by Michael McCullough, Marcia Kimeldorf, and Adam Cohen called “An Adaptation for Altruism? The Social Causes, Social Effects, and Social Evolution of Gratitude” looked into this idea a bit.

    Gratitude: a prosocial emotion

    First, they define gratitude – it’s a positive emotion that comes when we feel we have benefited from someone else’s actions. The researchers label gratitude as a prosocial emotion – in other words, the whole reason it’s there is to nudge humans to act in prosocial ways.

    There’s a lot of research supporting this idea. One experiment was particularly clever. Researchers created a situation where some of their participants thought that they’d received some money from another participant. Other participants thought they got the same money by random chance.

    Then, the participants were given some more money and told that they could either keep it all for themselves or share it with their partner. As expected, the ones who thought they’d been gifted the money earlier were far more likely to share it in the second part of the experiment.

    When asked why they wanted to share the money instead of just keeping it all for themselves, they simply said they wanted to express their thanks to the other person, and this was their way of doing it.

    So as we see, gratitude makes people kinder – at least, towards people who have already helped them in some way. We call this reciprocity – a fancy word meaning you scratch my back, I’ll scratch yours.

    Gratitude is everywhere

    So let’s get back to the topic of evolution.

    If gratitude was an evolved emotion, you’d expect to find it everywhere – and you do. Just as there’s no culture without anger, love, or joy, we haven’t found one without gratitude. But we can go back even further than that – some researchers have even looked for evidence of gratitude in animals.

    Now, this is very tricky to do. Is your cat grateful after you feed her? It’s easy to imagine that she’s happy about it, but grateful? Since cats can’t say “Thank you!”, how would you know?

    Well, it’s hard to say for sure. But one way is to think about the idea of reciprocity. In another study, researchers set up a food puzzle for chimpanzees that couldn’t be completed alone – the chimps needed help from another chimp. Watching them work on this puzzle, the researchers observed that chimps were most likely to help another chimp if that chimp had helped them out previously.

    Did the chimps do this because a feeling of gratitude compelled them? Who knows. Emotions are a common tool of evolution, used to nudge behaviour – so it’s certainly possible. But it doesn’t have to be. If the helping was nudged by an emotion, it might have come from a feeling of indebtedness, for example.

    Why would gratitude evolve?

    Another way of thinking through whether a particular trait is an evolved adaptation is to ask: what purpose does it serve?

    We have to be careful not to just make up nice-sounding evolutionary stories here, but with that said, why might gratitude evolve? According to the Selfish Gene view of evolution, there would have to be a benefit to the genes that lead us to feel gratitude. McCullough, Kimeldorf, and Cohen thought this boils down to the concept of reciprocity again.

    As a tribal species, we couldn’t survive on our own – we needed help from other people. This can lead to imbalances – what if you’re doing all the helping, but no one’s helping you? That wouldn’t be fun. Maybe you’ve had a job like that. On the other hand, if everyone’s helping you out, but you’re not lifting a finger to help anyone, then that’s fantastic for you (you’d be an arsehole, but it’s still good for you) and bad for everyone else.

    So, perhaps we’d evolve a way of balancing the books, so to speak. A way to know who we should be helping, and by how much. The researchers think gratitude might have been one part of the solution to this problem – that is, maybe it evolved to help facilitate fairer exchanges.

    If this was true, we’d expect two things. First, we’d expect the strength of the gratitude we experience to be proportional to the benefit we got. Second, we’d expect to feel more gratitude towards strangers than to people we’re related to.

    To the first point, this is something we know intuitively, but studies back it up too. One experiment was set up in a similar way to the money-sharing one earlier – participants got a gift that they were told was from another person, and later had a chance to pay the giver back. This time, the researchers varied the amount people received in the first place – people receiving larger gifts were more generous when they had the chance to repay the favour than those who received smaller gifts.

    To the second point, why would we expect to feel less gratitude towards relatives? It’s because we share genes with our relatives, and so there’s another system already “built in” to make sure we help them (according to a theory known as kin selection). So, gratitude doesn’t need to “switch on” as much to achieve helping between relatives (although it would help our relationships if we expressed it more often anyway).

    A study from way back in 1977 suggests there might be something to this, although it was based on hypothetical scenarios (basically just asking people: if a stranger or a relative helped you out, who would you be more grateful towards?). So this study at least doesn’t give super-solid support for this point.

    So where does that leave us? Darwin had suggested that gratitude was a universal emotion, which other primates experience. We’re not quite at the point where we can say that’s true, but based on behaviour, so far it seems they at least experience something that does the same job.

    References:

    Bar-Tal, D., Bar-Zohar, Y., Hermon, & Greenberg, M. (1977). Reciprocity behavior in the relationship between donor and recipient and between harm-doer and victim. Sociometry, 40, 293–298. doi:10.2307/3033537

    McCullough, M. E., Kimeldorf, M. B., & Cohen, A. D. (2008). An adaptation for altruism? The social causes, social effects, and social evolution of gratitude. Current Directions in Psychological Science, 17(4), 281–285.

    Suchak, M., Eppley, T. M., Campbell, M. W., & de Waal, F. B. (2014). Ape duos and trios: Spontaneous cooperation with free partner choice in chimpanzees. PeerJ, 2, e417.

    Tsang, J. (2007). Gratitude for small and large favors: A behavioral test. The Journal of Positive Psychology, 2, 157–167. doi:10.1080/17439760701229019

  • Richard Feynman on thinking processes

    Feynman said that there are no miracle people, and anyone can do what he did if they put their mind to it (my thoughts here). Yet there’s one domain in which Feynman clearly had a natural gift — curiosity! This is exemplified by the little experiments he describes in the video below, where he learned how accurate his sense of time was and what things affected it. He’d count to a minute in his head and learn that when he got to 48, a minute had passed. Then he tested what else he could do while counting, and found he could read but not talk.

    At the end of the video he says “Now I’m starting to talk like a psychologist, and I know nothing about that!” Let’s test that theory. Here’s the video.

    For the lazy, when Feynman told mathematician John Tukey about this, Tukey could do the reverse — talk but not read. The reason was that Feynman would talk to himself in his head, while Tukey would see an image of a clock ticking over. Feynman suggests this could be because people think differently, and if you’re having trouble getting a point across, it might be because what you’re saying is more difficult to translate into the other person’s favoured modality than into your own.

    I don’t know if he’s right about that latter point, but he’s certainly right about the rest. We have multiple cognitive “modules” in the brain which are specialised for different functions, and it’s possible to bring different modules to bear on a task. For example, our working memory, which is the cognitive process in use whenever you’re consciously “doing” something (like Feynman’s counting task), has a number of different components. I discuss these here. Each of these components has limitations, but your brain can use all of them at the same time.

    When Feynman started counting in his head he was employing the phonological loop, and when counting lines in a book he was using the visuo-spatial sketch pad. These are different “modules,” which is why he could do both tasks at the same time. Talking uses the phonological loop, so when he tried that, he was asking too much of the module (which in most people would be fully occupied by the counting), causing him to mess up the counting task.

    For Tukey, the reverse is true. He visualised a clock, occupying the visuo-spatial sketch pad but leaving the phonological loop free. So he could talk freely but as soon as he tried to read, he messed up.

    Some experiments even take advantage of this fact by having participants count out loud as they perform some other task, occupying the phonological loop while some other cognitive module is tested.

    It’s also true that different people have different preferences in terms of how they process information, and cultural differences play a big role in this. So at the end of the video, Feynman was being a little unfair on himself when he said he knew nothing about psychology!

  • Alan Wallace on scientific dogmatism and materialism

    Alan Wallace, a Buddhist and writer on consciousness and meditation, talks about what he sees as the dogmatism and idolatry of the current, materialistic scientific paradigm.

    While there are some questions about materialism that no one has been able to answer, I don’t agree that the focus on materialism is a form of idolatry. It’s just the framework into which all the other empirical data best fits. If another model came along that fit the data better, or data came along that did not fit the model, the prevailing paradigm would change. It would change slowly, I’m sure, because paradigms do, but it would change. It’s a bit unfair to talk about current scientific models as if they are not works in progress — even if they are slow, perhaps too slow, to change.

    Since there’s a finite amount of time and money that can be invested into consciousness research, it makes more sense to start your investigations from the standpoint of the most supported, the most accepted and the most validated paradigm, which is the material model. So you start from here, you make assumptions from here and then test them. A difficult question then becomes, at what point do you know that you’ve exhausted all the avenues of this model, and should start looking to others?

    Wallace says that a better way to study consciousness is to use our immediate experience, through our own observations, because this is a direct experience of consciousness, unlike second-hand self-report or brain imaging data. But I don’t see how this can answer the fundamental question – whether consciousness emerges from matter, as the materialistic view proposes, or whether matter emerges from consciousness, as the Buddhist and other views propose. How would introspection answer that?

    Observing the mind might well let you understand it; it might show you, as Wallace describes, a blissful second “layer” of consciousness, which he claims does not arise from matter. But how is it possible to know this from introspection? If you answer “You have to experience it to know,” then that’s an argument from authority (the authority of people who have already experienced it) and I won’t be convinced by that. But at the very least it’s testable, and a million times better than “you must have faith.” That it takes years and years of meditation to test this hypothesis is somewhat inconvenient, but at least it’s falsifiable.

    But let’s say I do experience it. How do I know it does not arise from matter? How can introspection separate something that does not arise from matter and never did, from something that does but has changed through years of mental training?

  • The problem with the gaming/cognitive functioning link

    As someone who spent countless hours in his youth playing Doom, Street Fighter II and other effective ways of making time speed up, I really want the link between computer gaming and enhanced cognitive functioning, which I’ve mentioned before, to be true. It would validate every hadoken, justify every gib. But although the evidence is promising – encouraging even – it’s not quite there yet. Walter Boot, Daniel Blakely and Daniel Simons published a review in 2011 pointing out the distance we have yet to go before we can be sure about StarCraft’s place in our cognitive training routine.

    Firstly, we have the problem of demand characteristics in some of the non-experimental studies — the ones that take a group of gamers and compare them to non-gamers on various cognitive abilities. Gamers need to come out on top here to even consider video games as cognitive enhancers, of course, but even if they do, it doesn’t mean that games are causing the difference. Perhaps the gamers had these cognitive advantages to begin with, and that’s why they take so well to the games. Or perhaps they were more motivated to perform well during the testing.

    Many such studies specifically advertise for experienced gamers. Other research has shown that if you think you’re likely to perform well on a certain task, you’re sometimes more likely to do so. This problem is particularly relevant when you consider that many gamers will be aware of the news reports linking gaming to cognitive enhancement, and may have some idea that this is what the researcher is testing.

    The way around this is normally to do an experiment — take a group of people, preferably non-gamers, and give them a battery of cognitive tests. Then randomly split them into two groups, tell one group to play video games for a few weeks and the other group not to, then give the same tests again. You’ll then see if the video gamers have improved relative to the non-gaming group.
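
    To make the design concrete, here’s a minimal sketch of how the gain-score comparison might look, with everything (group sizes, score distributions, effect sizes) invented purely for illustration:

    ```python
    # Toy analysis of a pre/post training experiment (all numbers invented).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 30  # participants per group (hypothetical)

    pre_gaming = rng.normal(100, 15, n)   # pre-test scores, gaming group
    pre_control = rng.normal(100, 15, n)  # pre-test scores, control group

    # Suppose gamers gain ~5 points on retest and controls ~2; the control
    # group matters because practice effects alone produce gains.
    post_gaming = pre_gaming + rng.normal(5, 5, n)
    post_control = pre_control + rng.normal(2, 5, n)

    # Compare gains rather than raw post-test scores, so each participant
    # serves as their own baseline.
    t, p = stats.ttest_ind(post_gaming - pre_gaming, post_control - pre_control)
    print(f"t = {t:.2f}, p = {p:.3f}")
    ```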

    But the same problems exist as with the non-experimental studies. The gamers know they have been gaming and might deduce that they are supposed to perform better on the cognitive tests in a follow-up. This is why placebo control groups are used — both groups play video games, but the placebo group plays one that is not expected to bring any cognitive benefits, usually a slower-paced game like Tetris. However, if the tests used more closely resemble the action video game than Tetris, you can make the case that an expectancy effect is still in play. The design of the experiment is not sufficient to pry the two possibilities apart conclusively (you could, for example, ask participants whether they expected to improve, although even this has its own problems), even though it might make more sense intuitively that the video games are working.

    Further muddying the waters, some studies have failed to find a difference between gaming and non-gaming groups in both experimental and non-experimental tests.

    Where to go from here

    This might be disappointing, but there is some evidence of cognitive benefits caused by video games. We just don’t know why, or what conditions or individual differences are most amenable to such effects. Boot, Blakely and Simons propose that future studies should meet the following criteria (no study yet published has managed to meet them all):

    • Covert recruitment (participants aren’t told the nature of the study)
    • The paper should detail the recruitment method
    • Experimental studies should be conducted
    • Participants should be screened for familiarity with the idea that gaming brings cognitive benefits, and for whether they expected the gaming they did in the study to enhance their test results
    • The placebo control games should offer equal expectancy effects on the performance of the cognitive tests
    • Neuroimaging should be used to help pry apart expectancy effects versus actual cognitive changes

    If gaming has any chance of non-domain-specific cognitive enhancement, the results could be used to help fight age-related cognitive decline, help people in their personal development (working memory may be more closely linked to academic success than IQ), and give teenagers the world over valid excuses not to get off the PlayStation. So it’s worth spending the time and money getting to the bottom of this.

    Now if you’ll excuse me I have to go play Call of Duty. For science.


  • How to improve social anxiety by training your attention

    In 2009, Brad Schmidt and colleagues published a paper on a clever treatment for social anxiety disorder. Before I describe it, a short “spoiler” alert…

    If, as I suspect, you are reading this looking for a self-help treatment for social anxiety, I recommend that you do not read this article, as knowing the nature of the experiment might negate its effects (or it may not; I don’t know, but knowing surely can’t help you, so let’s stay on the safe side).

    Instead, try to get hold of the computer program used in the study. The best lead I have is Richard McNally’s lab, which tested an iPhone, iPad and Android app of the program. There might be an ongoing study you can take part in, or you could try requesting a copy of the app for your own use.

    End of spoiler alert

    Hypersensitivity to threats is a feature of social anxiety disorder. Where one person sees a disgusted facial expression and ignores it to continue chit-chatting, the person with SAD will focus on this facial expression and take it as evidence that they are being poorly judged.

    They are negative evaluation detectives, scanning and interpreting social situations in a way that paints them negatively. For whatever reason, an adaptive behaviour — making sure we’re not pissing off our allies — has become maladaptive, leading to anxiety.

    A potential treatment, then, would be to re-train the attention not to focus on negative facial expressions so much. This is what the program aims to do. Here’s how it works.

    Participants are presented with two pictures of people, one displaying a threatening facial expression, the other a neutral one. The pictures stay for a while and then disappear, and one picture leaves a letter in its place. Participants press a key to indicate which face left the letter behind. They are told to do this as fast as they can.

    The trick is that 80% of the time the letter appears behind the non-threatening face, so that over time participants are trained to move their attention away from threatening faces. With less attention paid to them, there’s less opportunity to infer negative judgements. The fact that participants have to press the keys quickly is important here, acting like a “gamification” effect to increase engagement and attention.
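
    If it helps, here’s a toy sketch of the contingency behind a single trial – not the actual program from the study, just the logic described above, with hypothetical names:

    ```python
    # Sketch of one attention-training trial (a toy model, not the real app).
    import random

    def make_trial():
        sides = ["left", "right"]
        random.shuffle(sides)
        threat_side, neutral_side = sides

        # The key manipulation: the probe letter replaces the NEUTRAL face
        # on 80% of trials, so fast responding pays off when attention is
        # already directed away from the threatening face.
        probe_side = neutral_side if random.random() < 0.8 else threat_side
        return {"threat": threat_side, "neutral": neutral_side, "probe": probe_side}

    # Over many trials, roughly 80% of probes land on the neutral face:
    trials = [make_trial() for _ in range(1000)]
    share = sum(t["probe"] == t["neutral"] for t in trials) / len(trials)
    print(f"Probes at neutral face: {share:.0%}")
    ```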

    Participants completed eight 15-minute sessions on the program, two per week for four weeks. Could such a short, simple game really make a real-world difference in social anxiety disorder? Well, this is only one test and it needs to be repeated, but the results were impressive. After four weeks, 72% of participants no longer met the criteria to be diagnosed with social anxiety disorder, compared with 11% in the control group. The results held at a follow-up four months later.
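
    To put a rough number on how big that difference is: the group sizes aren’t given here, but from the remission rates alone you can compute a back-of-the-envelope odds ratio (not a substitute for the paper’s actual statistics):

    ```python
    # Rough odds ratio from the reported remission rates (72% vs 11%).
    p_treatment, p_control = 0.72, 0.11

    odds_treatment = p_treatment / (1 - p_treatment)  # ~2.57
    odds_control = p_control / (1 - p_control)        # ~0.12
    print(f"Odds ratio: {odds_treatment / odds_control:.1f}")  # ~20.8
    ```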

    So, yes, so far it seems it can.

  • Is porn bad for you?

    Gary Wilson of YourBrainOnPorn.com believes that it is.

    The empirical evidence for this is getting there but still somewhat thin. There’s a hilarious reason for that — researchers can’t find enough men who haven’t watched porn in order to form a comparison group! However, there’s some mileage to the idea and it warrants further study.


    Wilson’s premise is one that I discussed previously in the Tugging the Human Instinct post. Actually, the same reasoning can be applied to much that’s fucked up about modern life (and points to the solutions too). It goes like this:

    Our culture has evolved far more quickly than our biology. We’re no longer living in the environment that we’re most suited for. Parts of our brain are wired to respond to certain things that were beneficial to our survival and replication. Our culture now rewards people (monetarily) if they can find ways to activate these areas with superstimuli, which tend to come with negative side effects. Pornography, particularly online pornography, is one such superstimulus.

    To be more specific, we’re adapted for life in tribes 100–150 strong, which would occasionally come into contact with other similarly sized wandering tribes (this is where our instincts towards in-group/out-group behaviour stem from, be it my sports team is better than yours, my martial art is more effective than yours, my religion is the true one, and so on). I don’t know how many tribes you’d bump into as a hunter-gatherer, but given a life expectancy of around 30 and excluding women below breeding age, you’d probably see no more than a few thousand women in your lifetime, and only maybe 60 or so on a regular basis.

    If you go to a porn site, you can see 60 women of above-average attractiveness in a few minutes. This overloads your brain in a sense, tricking it into thinking you’re part of the hottest tribe ever!

    And if you get bored of one woman, you can load another up in a second. This level of novelty is also a superstimulus. It’s this combination of availability and instant novelty that creates the dependence and the psychological issues.

    There’s a little more to it than that neurologically, but that’s the gist of it. If you’re interested in learning more, check out Wilson’s TED talk, conveniently located right here:

    It’s ironic that he did a TED talk, since if there’s such a thing as “Information Porn,” that site is its biggest pimp!

  • Is food addictive?

    Last year an edition of the journal Addiction was dedicated to food addiction. But whether food addiction actually exists is not an easy question to answer.

    Sometimes the definition is that use of a substance or participation in an activity continues even after it has a detrimental effect on your life. So you keep taking heroin instead of eating, and that’s detrimental. Could the same be said about food? Certainly, some people are unable to control their eating to the point that it becomes detrimental, leading to health problems like heart disease and diabetes.

    But of course, not everyone who takes drugs becomes addicted, and likewise not everyone who eats does so to consistent excess. There’s an interaction between the substance or activity and the body – some people’s brains react differently. The brain has reward pathways which trigger a dopamine release whenever we do something that was, in our ancestral past, beneficial to our survival and replication.

    (Have you noticed the “food porn” trend, where people take photos of their food and post it online?)

    For example, because sugar was rare back then (there was no Cave Mart), and because it was contained in food that was nutritious (that is, fruit), our ancestors who gorged on sugary food when they found it did better – they got more nutrition than those who ate a couple of berries and left it at that.

    But now, of course, sugary food is not only plentiful, but its correlation with nutrients has diminished greatly. Yes, you can get any fruit you want, but who satisfies their sweet tooth with an apple? No one; we head for the cake and chocolate, and take the fat and other crap that goes with it.

    Yet for some people, food isn’t a big deal. The reason might be variation in this desire to seek out food – when some people eat, they get a bigger dopamine release, a bigger reward response, than other people, which encourages eating and keeps calorie intake high.

    This has been demonstrated in neuroimaging studies – obese people tend to show altered reward and tolerance responses to food, though that’s not the whole story, as insulin resistance and variations in other hormones also tend to be present. Furthermore, it’s hard to say if the food itself is what’s triggering the changes in reward response, because it’s hard to find people who eat junk food but haven’t also been exposed to marketing messages and stress, or dieted in the past, all of which may mess around with things.

    Also, maybe “food addiction” is too broad a term. You don’t see people whose lives have been affected by their inability to stop eating vegetables, for instance. Although sugar addiction has not been demonstrated in humans, fat and salt may have some addictive potential, though there isn’t much data on that yet. And that’s not even mentioning additives and other junk that gets put in food.

    So you’ve got a few issues here. First, you have inherent variation in the way people respond to food, neurologically. Then you have environmental factors that change the way people do the same. People who have a high reward response to food are perfectly fine in certain environments, such as the proverbial active hunter-gatherer lifestyle, but put them in the modern world and things are different, for the reasons mentioned above. Only by combining the two can you get a sensible idea of who’s at risk.

    Remember, what we now consider to be obviously addictive was once up for debate, including nicotine and cocaine, and now we’re discussing things like food, gambling, sex and even the internet. Maybe one day we’ll look back and say: when a person has addictive potential x and they’re in food-abundant, marketing-heavy environment y, they have a z per cent chance of displaying addiction. Or in other words, put human population x in environment y and z per cent of people will become addicted. Or maybe psychologists are just looking for another disease they can cure you of. 🙂

    See also:

    Is fast food addictive

  • Can being an expert undermine your performance?

    As with bilingualism, it’s generally assumed that being an expert is completely beneficial and has no downsides for performance. However, we know that expertise tends to be domain-specific. For example, chess grandmasters can memorise chess boards far more quickly and easily than novices, but on standard cognitive tests they tend to fare no better. In fact, if you arrange chess pieces into positions that would never be encountered in an actual game, their recall is again no better than that of chess novices, showing just how domain-specific expertise can be. But surely within a given domain, expertise can only be beneficial?

    Castel, McCabe, Roediger and Heitman suggest not. They gave 40 students a memory test consisting of eleven animal names and eleven body parts. The twist here was that all the animal names were also NFL team names, like dolphins, colts, seahawks and bears. After the memory test, participants were given an NFL quiz, and the group was split into two – those scoring above and below the median on this quiz – to give high-expertise and low-expertise groups in the domain of NFL knowledge.

    The results on the memory test for the two groups were then compared. Indeed, the NFL experts remembered more of the animal names than the non-experts, while there was no difference between groups on the body parts test. So far so good. However, the researchers also tested for incorrect answers — NFL animal team names and body parts that were not part of the original list. The results indicated that the experts were much more likely to make incorrect guesses than the non-experts. The authors suggest that this represents memory errors: the experts’ domain-relevant knowledge got in the way of their accurate recall of the animal names. Since there was no difference between groups in body-part expertise, false answers were about even between groups on that test.

    Is this really the case though? Or was it that the experts consciously noticed that the animal names belonged to NFL teams and simply reeled off as many as they could remember during recall? Perhaps it was not a case of the existing schema interfering with memory, but a recognition that they already knew these names, so why bother taking the extra effort to think back and recall? Why not just reel off my schema? I wonder if the results would be the same if participants were told that they would score 1 point for a correct answer, but minus 1 point for an incorrect one, which might increase the incentive to actually recall. In other words, maybe this effect is a conscious strategy used in situations where there’s no cost to an incorrect answer.
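
    My hunch can be put in simple expected-value terms. Suppose an expert reels off n schema names, of which a fraction p were actually on the list (the numbers below are invented for illustration):

    ```python
    # Expected score for "reeling off the schema" under two scoring rules.
    def expected_score(n, p, penalty=False):
        hits = n * p          # guessed names that were on the list
        misses = n * (1 - p)  # guessed names that were not
        return hits - misses if penalty else hits

    n, p = 11, 0.5  # hypothetical: half the reeled-off names were on the list
    print(expected_score(n, p))                # 5.5 -> guessing always pays
    print(expected_score(n, p, penalty=True))  # 0.0 -> guessing only pays if p > 0.5
    ```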

    However, there are other studies that support the authors’ conclusions, which I haven’t read, so perhaps my question has been answered before or since. Either way, it’s an interesting thought that the knowledge base acquired by experts might be detrimental in certain tasks.

    ref:
    Castel, A. D., McCabe, D. P., Roediger, H. L., III, & Heitman, J. L. (2007). The dark side of expertise: Domain-specific memory errors. Psychological Science, 18(1), 3–5. PMID: 17362368

  • Where is my mind? Is the materialistic model of reality incorrect?

    My belief about the nature of reality is that the only “thing” that exists is matter. That is, there is no soul, no heaven and no hell. Effects aren’t caused without an interaction between different pieces of matter, and consciousness exists within the confines of the physical head that gives rise to it.

    However, although I used to be extremely firm in this position, I am now less sure, because of one question. I don’t know how to answer it from a materialist perspective. Maybe there’s just a really simple answer that I’m missing, but I’ve spoken to many people about this and no one has given it to me. Maybe you can. So here’s the question.

    Where is the cat?

    I can make a picture of a cat in my head; I can close my eyes and think of it. So I’m perceiving this image of a cat.

    Where is the image? Where is the cat?

    I first heard this question (well, I added the cat part myself) in a lecture on the mind/body problem, and my initial answer was that the cat is simply a 1:1 correlate of certain neurological activity in the brain. That is, if you open up my head you won’t see a picture of a cat, but you’d see something that’s the equivalent of it, sort of like how the dots and dashes of Morse code are not English characters, but are equivalents of them. From a materialistic perspective, you’d theoretically be able to interpret the activity in my brain through some technology, and recreate the image of the cat that I am picturing on a screen.

    In fact, we’re past theorising on this, as a famous experiment last year that was widely reported as “Mind Reading” in the media demonstrated. Here’s what they did:

    1) Measure brain activity as someone watches a load of YouTube videos.
    2) Link up the brain imaging data with the image on the screen, creating a sort of database whereby such-and-such brain activity relates to, say, a red object in the middle of the screen, such-and-such relates to a certain shape moving to the left, and so on. I’m probably over-simplifying, but that’s the gist.
    3) Have the same person watch a new set of YouTube videos, again while in the scanner measuring brain activity.
    4) Use the database created in step 2 to predict what the person was seeing in step 3 (a toy sketch of this logic follows below).
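
    To make steps 2 and 4 concrete, here’s a toy sketch of the general encode-then-identify logic – a heavy simplification of what the authors actually did, with invented dimensions and simulated data:

    ```python
    # Toy version of the encode-then-identify idea (not the authors' pipeline).
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_features, n_voxels = 200, 50, 300

    # Steps 1-2: watch known clips; learn a stimulus -> brain-activity mapping.
    train_stimuli = rng.normal(size=(n_train, n_features))
    true_weights = rng.normal(size=(n_features, n_voxels))
    train_brain = train_stimuli @ true_weights + rng.normal(scale=0.5, size=(n_train, n_voxels))
    encoder = Ridge(alpha=1.0).fit(train_stimuli, train_brain)

    # Step 3: record brain activity while an unknown clip is watched.
    candidates = rng.normal(size=(20, n_features))  # library of possible clips
    true_index = 7
    new_brain = candidates[true_index] @ true_weights + rng.normal(scale=0.5, size=n_voxels)

    # Step 4: predict the brain response for each candidate clip and pick
    # the one that best matches what was actually recorded.
    predicted = encoder.predict(candidates)
    scores = [np.corrcoef(p, new_brain)[0, 1] for p in predicted]
    print("Identified clip:", int(np.argmax(scores)))  # prints 7
    ```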

    Here’s how the reconstructions compared to the original videos:

    It’s important to note that the brain may not code imagined images in the same way as those you see with your own eyes, and also that each person’s brain will likely code the image of the cat in different ways (hence the need for steps 1 and 2), but, since all of the activity of the mind is thought to have a direct neural correlate, the principle is the same.

    So when I was asked “where” my mental image of the cat is, that’s why I responded in this way — the image is located in the brain, just in a different format.

    But really, I’m not satisfied with that answer. Because in my mind I can see (well maybe not see, but certainly perceive) the cat; not the equivalent neural ‘code’, but the actual cat. I know where the neural code is, but I don’t know where the cat is.

    I can’t think how the materialistic model can explain where the cat is. Doesn’t this mean then that there’s more to reality than the purely materialistic? That the materialistic model is incomplete? What am I missing?

    To use a computer analogy, the words you are reading now (hello!) are represented in a chip in your computer as a string of 0s and 1s. That’s like the neural code in your brain. But the actual words are represented on the screen in front of your eyes. What’s the equivalent of the screen in the case of the cat? Where is it?

    I’m actually asking this to you – do you know where the cat is? Am I making a simple mistake? Please leave a comment and help me out!

    Where is reality?

    That’s probably enough for one day, but just to take this one step further: we know that what we see is not the world. The image we see is a mental construction of the world, and psychology has identified numerous examples of how we each see the world a little differently. An obvious example is colour-blindness. Since the brain is constructing the world we see around us, and if we assume that the neural code and the image are different things… where is reality?

    Ref:

    Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641–1646. PMID: 21945275

  • Playing Tetris reduces the effects of traumatic events

    A study in 2009 tested the effect of Tetris on memory consolidation. Forty participants were split into two groups. After watching a film showing traumatic material (surgery, people drowning, etc.), one group sat quietly while the other was told to play Tetris for 10 minutes. All participants then kept a diary for a week in which they recorded their flashbacks. The Tetris group reported fewer flashbacks, and scored lower on the Impact of Event Scale.

    Before we go on to the reasons and implications of this, might I suggest some listening music for the rest of the post?

    The idea is that if you keep the visuo-spatial system busy, it has a harder time consolidating the memory. Other tests have suggested that the visuo-spatial element is key – verbal tasks, for instance, seem to make flashbacks worse. This is thought to be because PTSD and flashbacks stem from two kinds of mental processing – visuo-spatial and verbal/narrative. If the ratio is too heavy on the visuo-spatial side, you’re more likely to get flashbacks. This is also in line with other research on the beneficial effects of expressive writing as a form of therapy — perhaps writing helps to bring the ratio back towards the verbal/narrative side.

    So if you want to reduce unpleasant flashbacks, it’s best to carry a Gameboy around with you at all times, and if something traumatic happens, just whip it out and play for 10 minutes.

    But since that’s pretty inconvenient for most people, there might be another way. Other research suggests that memories can be altered when you recall them – that is, the memory is loaded up and then “re-saved” when you’re done reminiscing/ruminating, and it’s possible to make “edits” to the “file” while it’s “loaded.” Maybe this plus Tetris could equal something useful in therapy or self-help sessions. The idea is, you think back over the traumatic event and hold it in mind. Then, start playing Tetris. I don’t know if this would work, but if the authors of this paper are right that Eye Movement Desensitization and Reprocessing (EMDR) therapy (holding a memory in mind while moving your eyes around) works due to demand on the visuo-spatial system, then Tetris should too. Another advantage is you don’t have to feel like a twit sitting there moving your eyes while thinking bad thoughts.

    Other suggestions would be basically to do the opposite of anything related to accelerated learning during this session. So no exercise beforehand, which helps sprout new neurons and aids learning, and no sleep afterwards, which also helps with memory creation. It’s like Accelerated Unlearning. How’s that for a book title? All I need to do is take this simple idea, explain it in five pages, then throw in 245 pages of anecdotes and other filler. Bestseller!

    Interestingly, this only seems to affect involuntary flashbacks: when a questionnaire about the film was administered after one week, the groups scored about even. Which is good, since otherwise it would cause problems for students who use computer games as a study break!