Blog

  • What the hell is Bonferroni correction?

    That’s a question every psychology student has asked at one time or another! Well, I’ll tell you.

    In order to understand Bonferroni, there is some prerequisite knowledge you need to possess. You need to understand null hypothesis significance testing, p values, and Type I/Type II errors. If you understand these things, read on. If not, read on also, but this will make less sense to you (I haven’t yet covered these things on this blog, so you’ll have to do some Googling, or buy my book where it is all explained in the absolute best way that is humanly possible. Ahem).

    You’re still with me! That’s good. I wonder what percentage of readers have already pressed the back button? Hmm.

    So, Bonferroni correction. You know that with the alpha set at .05 we’re looking for a less than 5% chance of getting a result at least as extreme as ours by chance, assuming the null hypothesis is true. 5% is an arbitrary significance level (or ‘alpha’); not so high that we’re making too many Type I errors (concluding there’s an effect where there isn’t one), but not so low that we’re making too many Type II errors (concluding there’s no effect where there is one).

    Imagine that we did 20 studies, and in each one we got a p value of exactly .05. A 5% chance of a fluke result on each of 20 studies means it’s odds on – about a 64% chance, in fact – that at least one of these results really was a fluke. Now think about how many thousands of studies have been done over the years! This demonstrates the importance of replicating studies – fluke findings have definitely happened and will continue to happen.
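    If you want to check that for yourself, the chance of at least one false positive across n independent tests is 1 − (1 − α)^n. Here’s a quick sketch in Python (assuming the tests are independent, which real studies rarely are perfectly):

```python
# Probability of at least one false positive ("fluke") across n
# independent tests, each run at alpha = .05, when every null
# hypothesis is actually true.
alpha = 0.05

for n in (1, 5, 20):
    p_any_fluke = 1 - (1 - alpha) ** n
    print(f"{n:>2} tests: {p_any_fluke:.0%} chance of at least one fluke")
```

    For 20 tests, that comes out at roughly 64% – well past a coin flip.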

    However, this situation isn’t limited to findings spread over multiple papers. Sometimes larger papers roll several studies and/or analyses into one, and you get a similar predicament. Simply put, the more tests you do in a paper, the greater the chance that one of them came about through pure chance.

    This would be a bad thing – a theory that is modified as a result of an incorrect finding would, of course, be a weaker reflection of reality, and any decisions that were made based on that theory (academic or not) would also be weaker.

    So, we need a way to play a little safer when doing multiple tests and comparisons, and we do this by changing the alpha – we look for lower p values than we normally would before we’re happy to say that something is statistically significant.

    This is what Bonferroni correction does – it alters the alpha. You simply divide .05 by the number of tests that you’re doing, and use that as your threshold. If you’re doing five tests, you look for .05 / 5 = .01. If you’re doing 24 tests, you look for .05 / 24 ≈ .002.
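    That really is all there is to the calculation. As a sketch (the function name here is mine, not a standard library call):

```python
# Bonferroni correction: divide the overall alpha by the number of tests.
def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / n_tests

print(bonferroni_alpha(0.05, 5))   # 0.01
print(bonferroni_alpha(0.05, 24))  # about 0.002 (0.00208...)
```

    An equivalent convention you’ll sometimes see in software is to leave the alpha alone and multiply each p value by the number of tests instead – same decision either way.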

    Bonferroni correction might strike you as a little conservative – and it is. It is quite a strict measure to take, and although it does quite a good job of protecting you from Type I errors, it leaves you a little more vulnerable to Type II errors. Again, this is yet another reason that studies need to be replicated.

    There you go! An answer to an age-old question. Up next: does the light in the fridge stay on when the door is closed?

  • The 28 best psychology blogs on the internet, organised by topic


    Here are the best blogs in each of the following respective areas, according to me.

    There are some sub-fields I know nothing about (clinical topics mainly), so this list is not comprehensive in that regard.

    I’ll update this as time goes by. Feel free to repost this on your own website to spread the word about these excellent resources.

    General Psychology Blogs

    PsyBlog – Jeremy Dean at the top of the list, as it should be.

    Research Digest – The BPS Research Digest is a superb resource, for students, researchers and people with a general interest in psychology. Christian Jarrett has done a great job with this.

    GenerallyThinking.com – A great psychology blog written by a handsome, accomplished, and modest psychology student (i.e., me).

    PsychFutures Blog – By psych students, for psych students. You can sign up and make a contribution yourself!

    Social Psychology

    Social Psychology Eye – Blog of the Social & Personality Psychology Compass; some brilliant articles in here.

    Situationist – How “the situation” influences our behaviour.

    What Makes Them Click by Susan Weinschenk – A blog about understanding people. The ‘100 things you should know about people’ series is fantastic; I wish I’d thought of it first!

    Exercise Psychology

    Physical Exercise and Psychology Blog – Not that well known, and has been quiet for a while, but Sean Webster has some good posts and links on here about the cross-over between psych and physical exercise.

    Neuroscience / Cognitive Science

    Neurocritic – A pleasingly critical look at neuroscience and related fields.

    Neuronarrative – No list of psych blogs is complete without David DiSalvo’s Neuronarrative.

    Neurophilosophy – Good stuff here on all aspects of neuroscience.

    Neuroconscience – Brain plasticity particularly in relation to media and technology use.

    The Mouse Trap – Great stuff from Sandy; you’ll find some general posts on psychology with a preference towards neuroscience, and a recent interest in positive psychology too!

    Mind Hacks – Big, popular, successful, long-lasting, accessible – it’s Mind Hacks!

    Cognitive Daily – Sadly, Cognitive couple Greta and Dave closed shop in Jan 2010, but the archives are still up and valuable.

    Psychiatry

    Frontier psychiatrist – Mental illness for the masses!

    Psychotherapy Brown Bag – Absolutely fantastic blog, even if psychotherapy isn’t your bag. Lots in here on the right way to think as a scientific psychologist, critical thinking, etc. Well worth reading if you’re a psych student.

    Positive Psychology

    Positive Psychology News Daily – The best positive psychology site on the web, written mostly by current and former MAPP students.

    Curious – Todd Kashdan’s interesting blog. Check it out if you’re curious.

    The Good Life – The Psychology Today blog of Christopher Peterson, sports fan and VIA strengths co-founder.

    The Meaning in Life – Michael Steger’s blog, more focus on meaning, as you might have guessed.

    Persuasion

    Influence PEOPLE – Brian Ahern’s excellent ethical persuasion blog. Also his podcasts are here.

    Persuasion Theory – Matt Fox’s blog applies the science of persuasion to various fields, mainly marketing. Lots of practical tips here.

    Sex

    Jena Pincott – Author Jena Pincott’s interesting and sometimes cheeky blog.

    Dr Petra – More focus on sex education and policy issues here. Thoughtful stuff.

    Critical Thinking

    Bad Science – Ben Goldacre’s site. Not a psychology site actually, but if you’re a psychology student you should immediately subscribe to Ben’s blog and let his critical attitude rub off on you through the power of web-osmosis.


  • Between participants and within participants designs explained

    If you’re doing a study using two or more groups, you’ve got two options: You can use different people in each group (between participants design), or you can use the same participants in each group (within participants design). There are pros and cons to each.

    Say I’m an office manager and I want to measure the effect of distraction on workplace performance. My workers always have that damned radio playing all those cheesy love songs, and I think it’s distracting them and costing me valuable profit. So I hire a researcher to find out for sure.

    He might use a between participants design. On one floor of my office he bans the radio. On another floor, he lets them carry on as usual. At the end of the week, we work out which floor got the most work done. Pretty simple.

    Or, he might use a within participants design. He only looks at one floor. For a week he measures their workrate while the radio is there, then the week after he takes it away and measures workrate again.

    What’s the best way?

    There’s no right or wrong answer. If he used a between participants design, he’s got different people on each floor. Maybe one floor is populated by people who are better workers than the other floor. To truly test the effect of the radio, and nothing else, the conditions – and the people – in the test would have to be exactly the same. Normally in psychology, researchers try to get large numbers of people in each group, and assign people randomly to each one. That way it’s expected that, since most human traits fall on a normal distribution, the groups will be pretty similar to each other on average.

    But to get them exactly the same, you’d have to use the same people! That’s a within participants design. This brings its own problems with it. In this particular example, there might be temporal effects – differences in the environment week by week. For instance, maybe there were an unusually high number of birthdays in the office on the second week, and they went out celebrating a few times, leaving them tired at work. Or maybe on the first week, they were a bit nervous about having a researcher watching over them, but by the second one they had gotten over it.

    Temporal effects aren’t limited to within participants designs, of course – in a between participants design you might test the groups at different times, though to avoid temporal effects you shouldn’t do this unless you have no other choice.

    Within participants designs are also vulnerable to something called practice effects. If I’m measuring the effect of caffeine on some cognitive ability, such as working memory, I might test people when they first step into the lab, then give them a triple espresso, and test them again.

    Is this a between or within participants design? It’s within participants – testing the same people twice. But, the second time they do the test they know what to expect; they have had a little practice. So they might improve on the second test purely through this practice effect, rather than the caffeine.

    Alternatively, maybe the results were influenced in the opposite direction – maybe they got bored of doing the test twice and didn’t put as much effort in the second time around.

    There’s a way of getting around this – counterbalancing. You split the sample in two, and half of them would get the espresso before the first test, while the other half would get it before the second test. In this case you’d have to leave a few hours between tests so that the caffeine wears off, but both conditions – with caffeine and without caffeine – would be equally susceptible to practice effects, so we can be more certain that any difference is due to the effect of the caffeine.
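    The counterbalancing scheme could be sketched like this (hypothetical participant IDs and made-up group sizes – this just illustrates the assignment, not a real lab protocol):

```python
import random

# Split the sample in two: half do the test without caffeine first,
# half do it with caffeine first, so any practice effect falls on
# both conditions equally.
participants = list(range(1, 21))  # 20 hypothetical participant IDs
random.shuffle(participants)       # random assignment to orders

half = len(participants) // 2
orders = {}
for p in participants[:half]:
    orders[p] = ("no caffeine", "caffeine")
for p in participants[half:]:
    orders[p] = ("caffeine", "no caffeine")

# Each condition now comes first for exactly half the sample.
firsts = [order[0] for order in orders.values()]
print(firsts.count("caffeine"), firsts.count("no caffeine"))  # 10 10
```

    With the orders balanced like this, a practice effect boosts each condition’s “second test” equally, so it cancels out when you compare the two conditions.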

    Once more, just to clarify

    In a between participants design, a given participant is allocated to one group or the other, but not both.

    In a within participants design, a given participant takes part in both conditions.

    Advantages of between participants design:

    Helps to avoid practice effects and other ‘carry-over’ problems that result from taking the same test twice.

    It’s possible to test both groups at the same time.

    Disadvantages of between participants design:

    Individual differences may vary between the groups

    Vulnerable to group assignment bias (though you would use random assignment wherever possible to compensate)

    Advantages of within participants design:

    Half the number of participants need to be recruited

    They offer closer matching of the individual differences of the participants.

    Disadvantages of within participants design:

    Practice and other ‘carry-over’ effects may contaminate the results (though you would use counterbalancing where possible to compensate)

    Visualising this in SPSS

    When you’re putting data into SPSS, a row always indicates a single participant. Data from two different participants will never appear on the same row. Therefore, in a within participants design, our coffee experiment would have two columns for our data – one for with caffeine, one for without.

    But, if we had used a between participants design, we would have ONE column for the data, plus another column saying which group that participant was in.
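    To make the two layouts concrete, here’s a sketch with made-up scores (plain Python rather than SPSS, but the row-and-column shape is exactly the same):

```python
# Within participants: one row per person, one score column per condition.
within_rows = [
    {"participant": 1, "score_caffeine": 14, "score_no_caffeine": 11},
    {"participant": 2, "score_caffeine": 12, "score_no_caffeine": 12},
]

# Between participants: one score column, plus a column saying which
# group each person was in.
between_rows = [
    {"participant": 1, "group": "caffeine", "score": 14},
    {"participant": 2, "group": "no caffeine", "score": 11},
]

for row in within_rows + between_rows:
    print(row)
```

    Notice that in the within layout the condition lives in the column names, while in the between layout it lives in a grouping variable – which is why SPSS needs the extra column in the second case.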

    Alternative names

    Between participants is also known as independent measures design, or between subjects design.

    Within participants is also known as repeated measures design, or within subjects design.

    Further info…

    Are you finding the stats section of your course a little difficult? It’s hard to understand at first, but I’ve explained the bulk of what you need to know in plain English, in the study guide. Have a look.

  • Four ways to boost your creativity when writing

    Whether you’re writing a novel, a blog, or an essay, the biggest problem has to be writer’s block. It’s so annoying when you’ve actually gotten past the initial procrastination hurdle (not an easy task in itself), you’re sat at your desk and you WANT to write – but it’s just not coming out. Or, what is coming out isn’t to your liking.

    Trust me, I know how that feels; I started this blog post in 2005.

    Here are a few tips you can try to give your creativity a little boost.

    1) Get the right emotions for the job

    Photo credit: Joe Shlabotnik

    Our emotions serve evolutionarily adapted purposes. We have different emotions because they each solve different evolutionary problems. Anger helps to stop people transgressing against us, love helps us keep a mate, fear keeps us out of danger, and so on. Because they are specialised, emotions have different effects on our perceptual system and the ways we think. ‘Negative’ emotions tend to narrow our thought-action repertoire, while ‘positive’ emotions tend to broaden it (Fredrickson, 2001 – PDF).

    In other words, when you’re in a lower mood, you’re more likely to look at the little details; when your mood is high, you’ll see the bigger picture and be more creative.

    So whenever you’re brainstorming or coming up with new ideas, boost your mood with something uplifting, heart warming, or side-splitting.

    Whenever you’re editing or proof-reading, put some downbeat music on, and get to work.

    (We might also suppose that continuing a long argument with your spouse is a bad idea – you’re both focusing too much on the things that put you into that negative state. Sleep on it and discuss in the morning.)

    2) Take breaks

    Whenever you’re doing work that requires focused, mental effort, you’re draining your mental willpower reserves. Once these are depleted, performance suffers and you suddenly can’t be bothered to work. Take regular breaks and do something that requires no effort to attend to – spending time in nature has proven to be a useful exercise to this end (See the extensive work of the Kaplans). Also, make sure you don’t go hungry while working – the fuel that willpower runs on is glucose (Gailliot and Baumeister, 2007) – empty stomach equals poorer mental performance.

    3) Use novelty to your advantage

    Short answer: If you want a novel idea, expose your brain to novelty.

    Long answer: The area of the brain associated with novelty is thought to be the substantia nigra/ventral tegmental area (SN/VTA). The VTA is part of the dopamine system, in other words, the reward system. When we perceive novelty, our brain signals us to explore, because it is always looking for rewards. The brain likes novelty.

    What’s more, these brain areas are connected to the hippocampus, which is involved in learning. You can see where I’m going with this; enhanced learning might occur in the context of novelty. On top of that, a novel environment exposes you to a different set of primes, which themselves trigger different areas of your brain.

    There’s the science, here’s the simple advice – go somewhere new to write. Try a park, a library, a coffee shop you’ve never been to. Try a car park, a zoo, a big wheel, try wearing different clothes, talking to different people. Use novelty to your advantage and see if your brain doesn’t come up with some new ideas.

    4) Try the SCAMPER method

    Luciano Passuello of LiteMind discusses the SCAMPER method of creative problem solving. This is going to be more effective when you have a specific writing problem you are ‘stuck’ on. It is essentially a list of questions which help you to look at the problem from 7 different angles, each represented by the SCAMPER acronym. Not all of these angles will be appropriate to your specific writing problem, but I’ve found it really useful at various times. I’m interested to hear how you get on with this method, so let me know if you use it – leave a comment with your experiences!

    Get to it

    I’m certain that by using one or more of these techniques (all four if necessary) you’ll be able to make at least SOME headway on your writing. Let me know how you get on!

  • The incredible reason why you should be exercising regularly

    I think everyone is sold on the idea that exercise is good for the body, assuming no contraindications. Everyone who can, should do it – it makes you physically healthier, stronger, etc.

    Fewer people are aware of its effect on mood, though, which I have discussed before. Physical exercise makes you happier, and more likely to overcome stressful setbacks that you encounter throughout your life. I believe it was Tal Ben-Shahar who said “Not exercising is like taking depressants.”

    Fewer people still are aware of another benefit to exercise. It’s even good for the brain. Is there nothing it can’t do?

    Take dementia for instance. Laurin et al. (2001) looked at a huge sample of randomly selected Canadian men and women. 6,434 of these were ‘cognitively normal’ at baseline; that is, no dementia. Five years later, 4,615 people completed a follow-up that asked them about their exercise habits, along with other tests, such as screening for cognitive impairment.

    High levels of activity were associated with reduced risks of cognitive impairment, dementia (of any type), and Alzheimer’s disease. The odds of having Alzheimer’s among those who exercised were half those of people who did no exercise at all!

    So it seems that regular physical activity might be a preventative factor in age related cognitive decline and Alzheimer’s. This is pretty big.

    But the benefits of exercise do not seem to stop at prevention – they may actually have an augmenting effect on cognitive function in healthy adults; and the evidence for this is getting stronger.

    Take Winters et al (2007) for instance. They took a group of people, got them to run around a bit, and then tested their learning performance, both immediately afterwards and long term. They found that vocabulary learning was 20% faster after intense exercise, along with increases in a protein called brain-derived neurotrophic factor, or BDNF; sustained BDNF levels were linked to greater learning success. BDNF is something of a holy grail of cognitive enhancement, helping to support the survival of existing neurons as well as encouraging the growth of new neurons and synapses. This protein may explain both the preventative and enhancement effects of exercise on cognitive function.

    So when you’re wondering whether to get off your ass and get to the gym today, keep this in mind – you’re not only keeping your body healthy, you’re improving your mental function and preventing cognitive decline.

    This might also play a role in the timing of your exercise sessions. Try working out immediately prior to any time you need to learn. It should improve your performance.

  • Obesity, junk food, and the brain – tugging the human instinct in an unhealthy direction

    Ever wonder about the effect modern life has on us? Unbridled freedom, choice and… fat? Yep. According to the BBC back in ’06, we were on course for 20% child obesity in 2010. I haven’t checked that fact, nor the actual 2010 figure, but I doubt anything has been done to solve this problem. It doesn’t appear that way on a walk down an average street, anyway.

    Circumstances do kind of conspire against us though. Massive corporations spend millions learning how to alter the behaviour of consumers. Marketing departments and salesmen, in their own field, know as much about human psychology as scientists do. It’s almost empirical: you advertise here, and in this way, at this time, and watch the results. Companies know exactly what buttons to press to have us spending money on junk food.

    Not only that, but junk food may be a bit of a ‘hack’ in itself. As the theory goes, changes in our diet have occurred at a much faster rate than our genome is able to adapt to. Our bodies don’t ‘know’ food is abundant; as far as they are concerned, we’re hunter-gatherers in the Pleistocene and food is scarce indeed. Our most successful ancestors were the ones with the strongest taste for nutrient- and calorie-dense food (fat, sugar); they were more motivated to seek and eat food, and hence more likely to pass their sugar-loving genes on to subsequent generations.

    At the same time as food has been systematically refined into that which we can resist the least, our environment has been undergoing a similar shift. We can’t resist conserving our energy, resting. Wasting energy could be fatal 40,000 years ago, so if we have food and shelter nearby, we tend not to move (except perhaps for sex). Combine McDonald’s with La-Z-Boy, throw in a TV for entertainment, and you can see the results.

    You just want to slap some people, don’t you?

    I’m not saying there isn’t a degree of personal responsibility here; there is. My point is that the health and fitness deck is not exactly stacked in our favour. Our natural inclinations are being pulled in an unhealthy direction.

    Do I see a similar thing going on with the brain?

    If I can’t remember the name of that song, Google is at my fingertips to relieve me of the burden of recall. If I had an iWhatever, I could do this wherever I was. Is this another example of circumstances moving us away from optimal functioning?

    It makes intuitive sense, and strong arguments have been put forward both for and against, but I’m not sure the evidence either way is deep enough to form a solid opinion yet. Some studies have shown cognitive gains related to media usage.

    On the other hand, there’s the multi-tasking study that many bloggers have picked up – where people who were classed as heavy multi-taskers were not so good on a test of task switching ability. But how this relates to general internet usage (for instance) isn’t all that clear.

    But even assuming there is a detrimental effect going on (which as I say, I think is a big assumption at the moment), there might be a common solution, which I’ll talk about next time…

  • Is Emotional Intelligence really an intelligence?

    Some people argue that Emotional Intelligence is actually a set of skills. This makes me think, why is it called emotional intelligence, and not Emotional Skill, or something like that? Is it really an intelligence? Or if a set of skills can form an “emotional intelligence”, then can any set of skills be considered an intelligence?

    Intelligence is “the ability to carry out abstract thought, as well as the general ability to learn and adapt to the environment” (Mayer, Salovey and Caruso, 2004, p. 198). Many researchers now dispute the concept of ‘g’ – a common general factor that influences intelligence in every domain – which is what we generally think of when we think of intelligence. The concept of IQ also seems very narrow, missing a range of behaviours that you might intuitively consider indicative of intelligence. Currently, these researchers favour the idea of multiple intelligences, each covering a different domain separately, one of which is emotional intelligence; but we also have social intelligence, IQ, verbal intelligence, spatial intelligence, and so on.

    What distinguishes these intelligences from each other? And does emotional intelligence fit the bill, or is it better considered only as a set of skills? According to Mayer, Salovey and Caruso (2004) intelligences…

    * Process a distinct type of information

    Emotional intelligence certainly ticks this box. Emotions are conveyed not only verbally, but through our body language, behaviour, and facial expressions; and in the latter case, the information appears to be a human universal, consistent across culture (Ekman, 2003). Whether you go to modern, pre-industrial, or tribal societies, everyone smiles when happy, frowns when sad, etc. It’s a common ‘language’.

    * Must be operationalised in a ‘test’ format, for which there are more-or-less right answers

    If we take an IQ test, there is one correct answer to each question. I took the MSCEIT (an EI test; see Mayer, Salovey, Caruso, and Sitarenios, 2003) a while back, and I definitely didn’t see that – not through the whole test.

    There was one section in particular which Andy Roberts of Breath London, who was administering my feedback, said is frequently questioned.

    There were a load of pictures, for instance, a bunch of grey squares, and you had to answer “How much happiness is shown in this picture?”, and things like that. Is there really a right or wrong answer to that?

    Another one was a picture of a rock in a lake. Maybe I’m missing something, but how can that be happy? It’s a rock in a lake. Apparently there is a right answer, which is judged by consensus and expert criteria. I’m sorry, but I don’t care how many people think that a rock is happy – it isn’t. And who can be an expert on the happiness of rocks in lakes? Or are we supposed to say how happy it makes us? Because what if rocks in lakes just don’t make me smile? Does that mean I can’t recognise happiness in people? On the other hand, if it’s measured by consensus, are we sure that identifying the happiness shown by a rock in a lake carries over to identifying emotions in people? For example, would people with average EI give a different score to people with high EI? How would we know?

    However, the other items on the test such as facial expression and emotion name recognition, certainly would have closer associations with actual emotional expression (and clearer right/wrong answers).

    * Shows patterns of correlations similar to other intelligences

    Apparently EI is ‘factorially unified’ and correlates modestly with other intelligences. So it’s a distinct construct, but at the same time you wouldn’t necessarily expect someone very high in EI to be very low in, say, verbal intelligence.

    By the way, if you’re wondering why intelligences can correlate, but we can’t find ‘g’, a general factor of intelligence, read this page, for a comprehensive explanation. To quote the author “This doubtless more than exhausts your interest in reading about the subject; it has certainly exhausted my interest in writing about it.”

    * It should develop with age

    According to Mayer, Salovey and Caruso (2004), there is evidence that EI develops with age, meeting this criterion for an intelligence.

    So it looks like EI does tick all the boxes – not completely inside the lines – but mostly so. Which isn’t to say that it’s not a set of skills – after all you could break down an IQ test into various cognitive skills – but it’s not only that.

    By the way, Bob Sternberg, the big name in intelligence research, the guy who made the call for the study of multiple intelligences back in the 1980s, also of triangular theory of love fame, has a very interesting definition of intelligence:

    “I define [intelligence] as your skill in achieving whatever it is you want to attain in your life within your sociocultural context, by capitalizing on your strengths and compensating for, or correcting, your weaknesses.”

    I’ve talked about strengths and weaknesses a lot. Perhaps, some years down the line, there will be a strengths intelligence – the ability to recognise and apply our personal strengths in our everyday lives. Maybe when the strengths models are better developed, we will be able to compare them against the criteria for an intelligence.

    How much strength is shown by this rock in a lake?

    References:

    Ekman, P. (2003). Emotions Revealed. New York: H. Holt.

    Mayer, J. D., Salovey, P., & Caruso, D. R. (2004). Emotional intelligence: Theory, findings, and implications. Psychological Inquiry, 15, 197-215.

    Mayer, J. D., Salovey, P., Caruso, D. R., & Sitarenios, G. (2003). Measuring emotional intelligence with the MSCEIT V2.0. Emotion, 3, 97-105.

  • Tim Ferriss, Email, and science reporting – Weeding out the bad data on productivity

    As many of you know, I’m in the process of writing a book: a psychology study guide to be precise. One of the things it will cover is time management, planning, productivity; that sort of thing. While researching this chapter (the last one to do, I might add), I remembered an interesting study that Tim Ferriss mentioned on his site. It found that people who were distracted during an IQ test by email and ringing phones actually did worse than people who were high on marijuana. That’s just the sort of quirky study I’d love to cite. It’s interesting, and it would stick out in people’s minds.

    In Tim’s words:

    “In 2005, a psychiatrist at King’s College in London administered IQ tests to three groups: the first did nothing but perform the IQ test, the second was distracted by e-mail and ringing phones, and the third was stoned on marijuana. Not surprisingly, the first group did better than the other two by an average of 10 points. The e-mailers, on the other hand, did worse than the stoners by an average of 6 points.”

    I wanted to find the original study for my book. With the help of Matt Fox of Persuasion Theory fame, I tracked down a Beyond the Biz article entitled “Study offers dope on e-mail madness” (good title; not as good as mine), which had more details of the study. Some highlights:


    “E-MAIL HAS A MORE DETRIMENTAL EFFECT on British workers’ IQs than smoking marijuana, according to a Hewlett-Packard study conducted by TNS Research. Dr. Glenn Wilson, a psychiatrist at King’s College at London University, monitored worker IQ throughout the day in 80 clinical trials. Wilson said the IQ of workers who tried to juggle e-mail messages and other work fell by 10 points – more than double the four-point fall seen after smoking pot, and about equal to missing a full night’s sleep.”

    We have a name! And it’s the well-known Dr Glenn Wilson, no less. And he monitored IQ throughout the day? In 80 trials? Wow, must be a solid finding! The article continues:

    “He added the IQ drop was more significant in men in the study, which was conducted with 1,100 people. Wilson advocates companies helping employees solve the problem. “Companies should encourage a more balanced and appropriate way of working,” he said.”

    OK, so now we’re talking about ‘the’ study with 1,100 people. Is that in each of the 80 trials? That would make 88,000 participants. That’s a lot of pot! Either something’s amiss here or Dr Wilson has his ethics committee wrapped around his little finger! So maybe those 1,100 people were spread across the 80 trials. That would make an average of 13.75 participants per trial. I’m not sure which is more unlikely – a psychiatrist running 80 trials with fewer than 5 participants per group, or 29,333 people getting stoned in the name of science.
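    Just to double-check that arithmetic (variable names are mine, the numbers are the articles’):

```python
# Sanity-checking the numbers the articles throw around.
trials = 80
survey_size = 1100

print(trials * survey_size)       # 88000 people if every trial had 1,100
print(survey_size / trials)       # 13.75 per trial if 1,100 spread over 80
print(trials * survey_size // 3)  # ~29333 per condition across three groups
```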

    This is not looking altogether kosher.

    In a section of his site called “The Truth,” Tim Ferriss cites a New York Magazine article on Dec 4th 2006, called “Can’t Get No Satisfaction.” For your convenience, here is a link directly to the page of the magazine in question. For your greater convenience, here is a quote of the relevant section:


    “In 2005, a psychiatrist at King’s College London did a study in which one group was asked to take an IQ test while doing nothing, and a second group to take an IQ test while distracted by e-mails and ringing telephones. The uninterrupted group did better by an average of ten points, which wasn’t much of a surprise. What was a surprise is that the e-mailers also did worse, by an average of six points, than a group in a similar study that had been tested while stoned.”

    As you can see, New York Magazine clearly states that there were TWO groups in the study. The marijuana group data was from another study. Comparing data from one study to data from a ‘similar’ study is not something you should do without firm justification. They didn’t even give references.

    But whatever, even though the marijuana aspect is pretty central to these articles, I’m more interested in the effects of distracting emails so I can write about it in my book. So I did some further digging to find the original study.

    Conveniently, Glenn Wilson has a report on his site here (it’s the ‘infomania’ link at the bottom). This cleared everything up.

    Wilson was commissioned by Porter-Novelli to supervise an in-house experiment, which would accompany a survey of 1,000 people. Wilson had nothing to do with the survey, and just helped out with the experimental side of things.

    His study tested 8 employees in two conditions – a ‘quiet’ condition and a ‘distracting’ condition (mobile phones ringing, emails arriving; usual office stuff). He measured IQ with a matrices-type test (which I’m not familiar with), as well as physiological measurements. He did find that distraction reduced IQ performance and increased stress, but 8 participants? That’s a pilot at best. Hardly newsworthy.

But no blame falls on poor Wilson here; he just did the job he was asked to do. It’s in the reporting of the study that things went a little bit silly.

Somewhere along the line, “8 participants” became “80 clinical trials”. That is absolutely laughable. The 1,100 participants obviously came from the 1,000-participant survey mentioned above. But it is shocking how people will literally lie like this.

    Wilson leaves a note at the end of his report:


    “This study was widely misrepresented in the media, with the number of participants for the two aspects of the report being confused and the impression given that it was a published report (the only publication was a press release from Porter-Novelli). Comparisons were made with the effects of marijuana and sleep loss based on previously published studies not conducted by me. The legitimacy of these comparisons is doubtful because the infomania effect is almost certainly one of temporary distraction, whereas sleep loss and marijuana effects on IQ might conceivably be more fundamental, even permanent.”

And he later said on the matter:


This “infomania study” has been the bane of my life. I was hired by H-P for one day to advise on a PR project and had no anticipation of the extent to which it (and my responsibility for it) would get over-hyped in the media. There were two parts to their “research” (1) a Gallup-type survey of around 1000 people who admitted mis-using their technology in various ways (e.g. answering e-mails and phone calls while in meetings with other people), and (2) a small in-house experiment with 8 subjects (within-S design) showing that their problem solving ability (on matrices type problems) was seriously impaired by incoming e-mails (flashing on their computer screen) and their own mobile phone ringing intermittently (both of which they were instructed to ignore) by comparison with a quiet control condition. This, as you say, is a temporary distraction effect – not a permanent loss of IQ. The equivalences with smoking pot and losing sleep were made by others, against my counsel, and 8 Ss somehow became “80 clinical trials”.

    Since then, I’ve been asked these same questions about 20 times per day and it is driving me bonkers.

    You’ve gotta feel for the guy.

    So even though I’m a little late to the party I thought this was worth mentioning, just to highlight that you cannot trust something just because it has been published – even in so-called ‘reputable’ places.

    Remember, this was an in-house study with 8 participants which had *nothing to do with marijuana whatsoever*. And look what happened:

And my personal favourite:

    • Email destroys the mind faster than marijuana – study (The Register)

    DESTROYS the mind! Are you kidding me?!

    All these articles mention the marijuana link too.

    So whose fault is this? I don’t know who started it. I can’t be bothered to check. It wasn’t Tim Ferriss. But everyone is guilty of not checking their sources. This time the only victims were Dr Glenn Wilson’s inbox and a few concerned citizens, but you can imagine how the same process on more serious topics could be quite destructive.

    It would be nice if the sources above updated or removed the articles in light of the fact that they are, well, a load of bullshit. Out of curiosity over what would happen, I sent the following email to most of the above sources:

    To whom it may concern,

    I’m writing with regards to your article:

    XXXXXX

    Which discusses the effects of marijuana on IQ. I thought I would bring to your attention the fact that your article is incorrect. The study in question did not actually look into the effects of marijuana at all, although this is certainly an honest mistake, as many news outlets have made the same error.
    The true study was:

    * A small test involving 8 participants
    * Looking at distractions of email/phone against quiet conditions
    * Not connected with the Institute of Psychiatry at Kings, except for the fact that the person hired to supervise the study, Dr Glenn Wilson, is a member of that faculty

    Further information is available here:

    http://itre.cis.upenn.edu/~myl/languagelog/archives/002493.html

    In the interests of accurate news reporting I am bringing this matter to your attention so that you can make the necessary alterations to your article.

    With kind regards,
    Warren Davies

As you can see, I sound very polite and formal. I thought I would be taken more seriously that way, as opposed to an informal “Hey dude! FYI….”. Let’s see if anyone will agree to change their article to reflect the truth. Tim Ferriss is difficult to contact by definition, but I’ve sent him a message on Twitter just in case.

    Now, if you’ll excuse me, I need to get to work on that book chapter I was talking about. Right after I check my email and finish this bong, that is.

  • Who needs self-esteem anyway?

    I discovered an interesting paper by Ryan and Brown (1), which got me thinking. This paper proposes a view of self-esteem that I hadn’t come across before.

First, they explain their view of the self. Most researchers use the ‘self-as-object’ definition – we have a self-concept, which can be complex, simple, positive or negative based on our own appraisal and evaluation of it. These evaluative ‘schemas’ make up self-esteem.

    A second perspective is the ‘self-as-process’ idea, in which the self is not the object of evaluation, but is the process of assimilating and integrating experience. From this perspective, it is not important whether self-esteem is low or high; what is important is what is going on when these evaluations are made.

Staying with the self-determination theory (SDT) tradition, they argue from the self-as-process position that concern with the worth of the self is a byproduct of psychological need deprivation. In other words, most people don’t sit around thinking “How worthy am I?”; yet others obsess over this, and compare themselves continuously. The fact that they do this, Ryan and Brown propose, means there is a psychological need unfulfilled (the three psychological needs being relatedness to others, autonomy, and competence).

    For example, an individual lacking in relatedness with others may try to conform to the standards of other people in order to gain their acceptance. They might get it, but the quest for self-esteem hinders their authenticity and personal growth. Likewise, a person may seek self-esteem in achievement, if they are insecure about their competence.

So if a person has self-esteem, it is because their basic psychological needs are fulfilled. Therefore self-esteem can be beneficial to an individual – but only if they don’t need it! Seeking self-esteem for its own sake may lead to conflicts in the basic needs, and therefore only temporary satisfaction.

    Ryan and Brown suggest, based on this, that a life lived without concern for self-esteem might be optimal. When something bad happens, we are disappointed, but we do not integrate this into our self-concept and disparage ourselves (“I’m a loser!”). Likewise, when things go well, we are pleased, but again the self is not conceptualised as an object to be praised (“I’m awesome!”).

    This phenomenon of not integrating positive or negative events with the self may be related to another interesting construct – locus of evaluation. This is the degree that an individual has integrated a set of standards or values by which to judge their actions, versus the extent that they rely on an external frame of reference. For example, if I had an internal locus of evaluation for blog writing, it would not matter how much traffic or tweets I got from this post – I do not judge my performance on that, so it would not affect me. Conversely, if I had an external locus of evaluation, I would be highly affected by the traffic I got, for I would need that external reference to know how well I did.

    An external locus of evaluation is correlated with low self-esteem (2), just as the theories of SDT and self-esteem would predict: the need for another person to set the standard for our self-evaluations is a hallmark of the introjected style of motivation – this indicates the deprivation of a psychological need, and hence low self-esteem.

    You can envision a dark side to an internal locus of evaluation too; if your own judgement is just plain wrong, for instance. But, in any case, this just seems to deflect the issue; if an internal locus of evaluation is a buffer protecting self-esteem, it would preserve both the positive and negative forms of self-esteem alike. The problem seems to be with the concept of the self itself.

    To take this further, Ryan and Brown bring in ideas from Buddhist philosophy, which go something like this: when we form a self-concept (self-as-object) we often forget that this ‘me’ is merely a creation of thought, and is only one of an infinite number of possible ways that we can construe the self.

    We know there is some truth to this idea from CBT and Seligman’s explanatory style of optimism – our self-concept, our self-esteem, the emotions we experience – even mental disorder in some cases – can be traced back to particular thoughts, beliefs and judgements we hold of ourselves.

    Here’s the point: if the self can be constructed in any number of ways – which appears to be the case – is this really the self that we want ourselves and others to esteem? Perhaps the fickleness of the self-as-object construction is the reason that self-esteem is not a reliable route to well-being or growth.

    But if not these constructions, then what? Is there a deeper ‘self’, beneath these constructions? Mindfulness is proposed as an answer – as long as we hold to a construction of the self-as-object to esteem, there will always be situations where we do not live up to the values on which the self is based. The idea is to disidentify with the self-as-object, and simply to have awareness of the processes that the self is made up of, without ever saying “That is me.”

And coming full circle, this is in accordance with the idea of healthy self-regulation – someone who has their basic psychological needs met does not strive for self-esteem.

    References:

    (1) Ryan, R., & Brown, K. (2003). Why We Don’t Need Self-Esteem: On Fundamental Needs, Contingent Love, and Mindfulness: Comment. Psychological Inquiry, 14(1), 71-76

    (2) Bucus, D. (2008). Defining the self: Locus of evaluation, self-esteem, and personality. Dissertation Abstracts International Section A, 69, 122.

  • Can humour be learned?

    How many therapists does it take to change a lightbulb? Only one, but it takes six months and the bulb has to want to change!

    Humour has a potentially valuable place in therapy; a large number of papers argue for the benefits of it in a therapeutic setting. There is also a lot of information about humour styles out there, and what type of humour is appropriate in different settings.

    BUT… is there any work on how to teach humour skills to professionals? It’s alright to advise a talk therapist to “use humour,” but humour could potentially have a severely negative effect too, if used ineffectively. So it seems that proper training may be needed.

And also, in a broader sense, humour is useful in sales, business, teaching; even in romance! So what about general humour training? We have comedy improv clubs and what-not, and these might be effective in their own way, but they’re not going to convince the scientists and practitioners to send their patients to a comedy club. For that, there would need to be theoretical papers, randomised controlled trials, and so on, which is ironic since these are some of the least amusing things you’re likely to come across. But I decided to see if there was any science in this area. Can humour be learned, or is it just a gift?

    The Controlled Trial

    “When I first said I wanted to be a comedian, everybody laughed. They’re not laughing now.” – Bob Monkhouse

As I looked through the research, unfortunately, I found very few studies. I managed to find one controlled test of a humour training intervention. Nevo, Aharonson, and Klingman (1) subjected 101 female teachers to a 20-hour humour training program, consisting of 14 individual units. At the end of the test, the treatment group saw greater improvements in measures of ‘humour production’ as rated by peers, compared to pre-test measurements and the control group. And subjectively, the participants felt that the program was moderately effective. So it does seem, from this one study, that humour can be learned.

An interesting finding, however, was that ‘trait-level’ measures of humour were less sensitive to changes following the program, which raises the question of how long these benefits last. Maybe you have to keep practising to keep your game up.

    The Uncontrolled Trial

McGhee (2) devised an 8-week humour development program, aimed at a lay audience. Franzini (3) mentions that this program is backed by a self-report follow-up, but as yet I have not found it. If anyone knows of it, please let me know.

    General skill or scripted sessions?

    The local yoga club was in high spirits, despite the horrific Superglue accident. (credit: lululemon athletica)

Here’s another point – do we train professionals in humour production, then let them loose on their clients? Or do we systematically develop a set of lines and comments, a sort of therapist’s jokebook, that are tested and proven to be funny, appropriate, non-triggering, and so on? By doing so, we might increase the ‘hit-rate’ of the humour, but we may lose a certain authenticity to the interaction’s normal, organic flow. Carl Rogers and others have suggested that an authentic relationship between client and counsellor is an essential part of the therapeutic process – structured humour may get in the way of this. But in other fields this may work well.

    Theoretical benefits of studying humour training

    As well as looking for practical benefits in applied settings, there might be other uses to studying humour training. One in particular could be in linking the findings with the evolutionary fields. Evolutionary psychologists have been trying to find the adaptive function of humour and laughter for a while, and they’ve focused a lot of attention on attraction.

    They suggest that humour evolved, essentially, as a way of attracting a mate by displaying the health of your brain and your immunity to social pressure through your fantastic wit, hence making your genes something of a commodity to members of the opposite sex.

So far, researchers have found certain types of humour to be more attractive than others; self-deprecating humour is apparently the best one, as long as you’re already high-status (4). If you’re not already seen as high-status, self-deprecating humour has the opposite effect on your perceived attractiveness (these researchers are also to be commended for the use of the word “diss” in the title of a scientific paper). But if humour training can be measured somehow, it would be a better way to test these theories too – for example, are people viewed as more attractive, do they meet more partners, etc., after a humour training program, all other things being equal?

    Overall, there is little research in this area, despite many papers noting the benefits of humour in a range of professional and personal settings, and this could be a fruitful area for future study.

    References:

    (1) Nevo, O., Aharonson, H., & Klingman, A. (1998). The development and evaluation of a systematic program for improving sense of humor. In W Ruch (Ed.), The sense of humor: Explorations of a personality characteristic (pp.385-404). Berlin: Mouton de Gruyter.

(2) McGhee, P.E. (1994). How to develop your sense of humour: An 8-step humour development training program. Dubuque, IA: Kendall/Hunt.

(3) Franzini, L. (2001). Humor in Therapy: The Case for Training Therapists in Its Uses and Risks. The Journal of General Psychology, 128(2), 170-193.

    (4) Greengross, G., & Miller, G.F. (2008). Dissing Oneself versus Dissing Rivals: Effects of Status, Personality, and Sex on the Short-Term and Long-Term Attractiveness of Self-Deprecating and Other-Deprecating Humor. Journal of Evolutionary Psychology, 6(3), 393-408.