Category Archives: Data


PredPol – Predicting crime through data mining

Not too long ago in LA, crime was going up while the number of officers was going down. The LAPD had to try something different if they wanted to make a dent in this, so they looked to an anthropologist and mathematicians from UCLA, Santa Clara University, and UC Irvine.

PredPol mines vast amounts of crime data and predicts where crimes will occur. Unlike the “hot spot” system, which identifies historically crime-heavy areas, PredPol is updated in real time and gives predictions for the next 12 hours. Officers in LA would go to these “boxes,” sometimes as small as 500 feet by 500 feet, just to make their presence known and look out for criminal activity.

According to PredPol’s Proven Results page, the system was twice as effective as trained crime analysts. In the areas in which PredPol was tested, crime dropped by 13% while other areas showed a 0.4% increase.

PredPol works because, although an individual’s behaviour is very difficult to predict, once you put people in herds the trends and averages become very apparent. If you know the factors that contribute to a certain behaviour, you can work out a probability of that behaviour occurring. The more factors you know and the more accurately you know them, the better your prediction will be.

PredPol is being rolled out further afield, including in the UK.

It’d be interesting to see how far you can take this. Imagine a day when PRISM-style data mining is legal and totally accepted, and governments can access all data. Combine that with “quantified self” monitoring (it won’t be long before neuroimaging becomes cheap and portable enough to be the latest personal informatics tool), and you could pretty much predict anything, couldn’t you?

What the hell is Bonferroni correction?

That’s a question every psychology student has asked at one time or another! Well, I’ll tell you.

In order to understand Bonferroni, there is some prerequisite knowledge you need to possess. You need to understand what null hypothesis significance testing is, p values, and Type I/Type II errors. If you understand these things, read on. If not, read on also, but this will make less sense to you (I haven’t yet covered these things on this blog, so you’ll have to do some Googling, or buy my book where it is all explained in the absolute best way that is humanly possible. Ahem).

You’re still with me! That’s good. I wonder what percentage of readers have already pressed the back button? Hmm.

So, Bonferroni correction. You know that with a p value set at .05, we’re looking for a less than 5% chance of getting our result (or one more extreme) by chance, assuming the null hypothesis is true. 5% is an arbitrary significance level (or ‘alpha’); not so high that we make too many Type I errors (concluding there’s an effect where there isn’t one), but not so low that we make too many Type II errors (concluding there isn’t an effect where there is one).

Imagine that we did 20 studies, and in each one we got a p value of exactly .05. With a 5% chance of a fluke result in each study, the chance that at least one of the 20 really was a fluke is about 64% – it’s odds on. Now think about how many thousands of studies have been done over the years! This demonstrates the importance of replicating studies – fluke findings have definitely happened and will continue to happen.
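A quick way to see the scale of the problem – a sketch in Python, assuming every null hypothesis is true and the tests are independent:

```python
# Chance of at least one fluke (Type I error) across multiple
# independent tests, each run at alpha = .05, when all nulls are true.
alpha = 0.05
for n_tests in (1, 5, 20, 100):
    p_at_least_one_fluke = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:>3} tests: {p_at_least_one_fluke:.0%}")
```

With 20 tests the figure comes out at roughly 64%, and by 100 tests a fluke somewhere is close to a certainty.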

However this situation isn’t limited to findings spread over multiple papers. Sometimes in larger papers with several studies and/or analyses rolled into one, you might get a similar predicament. Simply, the more tests you do in a paper, the more chance there is that one of them will have come about through pure chance.

This would be a bad thing – a theory that is modified as a result of an incorrect finding would, of course, be a weaker reflection of reality, and any decisions that were made based on that theory (academic or not) would also be weaker.

So, we need a way to play a little safer when doing multiple tests and comparisons, and we do this by changing the alpha – we look for lower p values than we normally would before we’re happy to say that something is statistically significant.

This is what Bonferroni correction does – it alters the alpha. You simply divide .05 by the number of tests that you’re doing, and go by that. If you’re doing five tests, you look for p values below .05 / 5 = .01. If you’re doing 24 tests, you look for .05 / 24 ≈ .002.
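The arithmetic is simple enough to fit in a couple of lines. A minimal sketch in Python (the function names are mine, not any particular stats library’s):

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / n_tests

def is_significant(p_value, alpha=0.05, n_tests=1):
    """Is this p value significant once we correct for n_tests tests?"""
    return p_value < bonferroni_alpha(alpha, n_tests)

print(bonferroni_alpha(0.05, 5))          # 0.01
print(is_significant(0.03))               # True: one test, .03 < .05
print(is_significant(0.03, n_tests=5))    # False: .03 is not below .01
print(is_significant(0.001, n_tests=24))  # True: .001 < .05/24
```

Notice how the same p value of .03 flips from significant to non-significant once we account for doing five tests.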

Bonferroni correction might strike you as a little conservative – and it is. It is quite a strict measure to take, and although it does quite a good job of protecting you from Type I errors, it leaves you a little more vulnerable to Type II errors. Again, this is yet another reason that studies need to be replicated.

There you go! An answer to an age-old question. Up next: does the light in the fridge stay on when the door is closed?

Between participants and within participants designs explained

If you’re doing a study using two or more groups, you’ve got two options: You can use different people in each group (between participants design), or you can use the same participants in each group (within participants design). There are pros and cons to each.

Say I’m an office manager and I want to measure the effect of distraction on workplace performance. My workers always have that damned radio playing all those cheesy love songs, and I think it’s distracting them and costing me valuable profit. So I hire a researcher to find out for sure.

He might use a between participants design. On one floor of my office he bans the radio. On another floor, he lets them carry on as usual. At the end of the week, we work out which floor got the most work done. Pretty simple.

Or, he might use a within participants design. He only looks at one floor. For a week he measures their work rate while the radio is playing, then the week after he takes it away and measures their work rate again.

What’s the best way?

There’s no right or wrong answer. If he used a between participants design, he’s got different people on each floor. Maybe one floor is populated by people who are better workers than the other floor. To truly test the effect of the radio, and nothing else, the conditions – and the people – in the test would have to be exactly the same. Normally in psychology, researchers try to get large numbers of people in each group, and assign people randomly to each one. That way it’s expected that, since most human traits fall on a normal distribution, the groups will be pretty similar to each other on average.
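You can see the logic of random assignment in a quick simulation – a sketch with made-up numbers, assuming a hypothetical pool of 200 workers whose underlying “work rates” are normally distributed:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# Hypothetical work rates: mean 100, standard deviation 15.
work_rates = [random.gauss(100, 15) for _ in range(200)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(work_rates)
group_a, group_b = work_rates[:100], work_rates[100:]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(group_a), 1), round(mean(group_b), 1))
```

The two group means come out close to each other – with large enough groups, randomisation tends to balance out individual differences on average, though never perfectly.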

But to get them exactly the same, you’d have to use the same people! That’s a within participants design. This brings its own problems with it. In this particular example, there might be temporal effects – differences in the environment week by week. For instance, maybe there were an unusually high number of birthdays in the office on the second week, and they went out celebrating a few times, leaving them tired at work. Or maybe on the first week, they were a bit nervous about having a researcher watching over them, but by the second one they had gotten over it.

Temporal effects aren’t limited to within participants designs, of course – in a between participants design you might test the groups at different times, though you should avoid doing this unless you have no other choice.

Within participants designs are also vulnerable to something called practice effects. If I’m measuring the effect of caffeine on some cognitive ability, such as working memory, I might test people when they first step into the lab, then give them a triple espresso, and test them again.

Is this a between or within participants design? It’s within participants – testing the same people twice. But, the second time they do the test they know what to expect; they have had a little practice. So they might improve on the second test purely through this practice effect, rather than the caffeine.

Alternatively, maybe the results were influenced in the opposite direction – maybe they got bored of doing the test twice and didn’t put as much effort in the second time around.

There’s a way of getting around this – counterbalancing. You split the sample in two, and half of them would get the espresso before the first test, while the other half would get it before the second test. In this case you’d have to leave a few hours between tests so that the caffeine wears off, but both conditions – with caffeine and without caffeine – would be equally susceptible to practice effects, so we can be more certain that any difference is due to the effect of the caffeine.
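Counterbalancing the caffeine study can be sketched in a few lines of Python – the participant IDs here are made up for illustration:

```python
import random

random.seed(42)  # fixed seed so the split is repeatable

# Hypothetical sample of 20 participants, split at random so that
# half get the espresso before test 1 and half before test 2.
participants = list(range(1, 21))
random.shuffle(participants)
half = len(participants) // 2

caffeine_first = participants[:half]   # espresso before the first test
caffeine_second = participants[half:]  # espresso before the second test

print(sorted(caffeine_first))
print(sorted(caffeine_second))
```

Because each half experiences the practice effect in a different condition, the effect cancels out when the two conditions are compared.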

Once more, just to clarify

In a between participants design, a given participant is allocated to one group or the other, but not both.

In a within participants design, a given participant takes part in both conditions.

Advantages of between participants design:

Helps to avoid practice effects and other ‘carry-over’ problems that result from taking the same test twice.

It’s possible to test both groups at the same time.

Disadvantages of between participants design:

Individual differences may vary between the groups

Vulnerable to group assignment bias (though you would use random assignment wherever possible to compensate)

Advantages of within participants design:

Half as many participants need to be recruited.

They offer closer matching of the individual differences of the participants.

Disadvantages of within participants design:

Practice and other ‘carry-over’ effects may contaminate the results (though you would use counterbalancing where possible to compensate)

Visualising this in SPSS

When you’re putting data into SPSS, a row always represents a single participant. Data from two different participants will never appear on the same row. Therefore, in a within participants design, our caffeine experiment would have two columns for the data – one for scores with caffeine, one for scores without.

But, if we had used a between participants design, we would have ONE column for the data, plus another column saying which group that participant was in.
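Here are the two layouts side by side as a rough Python sketch (not SPSS syntax – the scores are made up):

```python
# Within participants ("wide" layout): one row per participant,
# with a separate data column for each condition.
within_rows = [
    # (participant, no_caffeine_score, with_caffeine_score)
    (1, 12, 15),
    (2, 9, 11),
    (3, 14, 14),
]

# Between participants ("long" layout): one data column,
# plus a grouping column saying which condition the row belongs to.
between_rows = [
    # (participant, group, score)
    (1, "no_caffeine", 12),
    (2, "no_caffeine", 9),
    (3, "with_caffeine", 15),
    (4, "with_caffeine", 11),
]

for row in within_rows:
    print(row)
for row in between_rows:
    print(row)
```

In the within layout every participant contributes two scores on one row; in the between layout every score gets its own row, tagged with its group.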

Alternative names

Between participants is also known as independent measures design, or between subjects design.

Within participants is also known as repeated measures design, or within subjects design.

Further info…

Are you finding the stats section of your course a little difficult? It’s hard to understand at first, but I’ve explained the bulk of what you need to know in plain English, in the study guide. Have a look.