The Joy of Confidence Intervals

In my last blog I mentioned that Null Hypothesis Significance Testing (NHST) was a bad idea (despite most of us having been taught it, using it, and possibly teaching it to future generations). I also said that confidence intervals are poorly understood. Coincidentally, a colleague of mine, knowing that I was of the ‘burn NHST at the stake’ brigade, recommended this book by Geoff Cumming. It turns out that within the first 5 pages it gives the most beautiful example of why confidence intervals tell us more than NHST. I’m going to steal Geoff’s argument blatantly, but with the proviso that anyone reading this blog should buy his book, preferably two copies.
OK, imagine you’ve read Chapter 8 of my SPSS/SAS or R book, in which I suggest that, rather than cast rash judgments on a man for placing an eel up his anus to cure constipation, we use science to evaluate the efficacy of the man’s preferred intervention. You randomly allocate people with constipation to a treatment as usual (TAU) group or to placing an eel up their anus (intervention). You then find a good lawyer.
Imagine there were 10 studies (you can assume they are of a suitably high quality with no systematic differences between them) that had reported such scientific endeavors. They have a measure of constipation as their outcome (let’s assume it’s a continuous measure). A positive difference between means indicates that the intervention was better than the control (TAU) group at reducing constipation.
Here are the results:
Study       Difference between means      t          p
Study 1     4.193                         3.229      0.002*
Study 2     2.082                         1.743      0.086
Study 3     1.546                         1.336      0.187
Study 4     1.509                         0.890      0.384
Study 5     3.991                         2.894      0.006*
Study 6     4.141                         3.551      0.001*
Study 7     4.323                         3.745      < 0.001*
Study 8     2.035                         1.479      0.155
Study 9     6.246                         4.889      < 0.001*
Study 10    0.863                         0.565      0.577
* p < .05
OK, here’s a quiz. Which of these statements best reflects your interpretation of these data:
  •  A. The evidence is equivocal, we need more research.
  •  B. All of the mean differences show a positive effect of the intervention, therefore, we have consistent evidence that the treatment works.
  •  C. Five of the studies show a significant result (p < .05), but the other 5 do not. Therefore, the studies are inconclusive: some suggest that the intervention is better than TAU, but others suggest there's no difference. The fact that half of the studies showed no significant effect means that the treatment is not (on balance) more successful in reducing symptoms than the control.
  •  D. I want to go for C, but I have a feeling it’s a trick question.

Some of you, or at least those of you brought up to worship at the shrine of NHST, probably went for C. If you didn’t, then good for you. If you did, then don’t feel bad, because if you believe in NHST then that’s exactly the answer you should give.
Now let’s look at the 95% confidence intervals for the mean differences in each study:
Note the mean differences correspond to those we have already seen (I haven’t been cunning and changed the data). Thinking about what confidence intervals show us, which of the statements A to D above best fits your view?
Hopefully, many of you who thought C before now think B. If you still think C, then I will explain why you should go for B:
A 95% confidence interval is constructed such that, if we repeated the study over and over, 95 out of every 100 such intervals would contain the true population value and 5 out of 100 would miss it. In other words, the interval gives us a plausible range of values for the population difference. Looking at our 10 studies, the five with p < .05 have intervals that exclude zero entirely: in those studies the evidence suggests that the population difference between group means is NOT zero (zero would mean no difference between the groups); in other words, that there is an effect in the population.
The other five intervals (studies 2, 3, 4, 8 and 10) do contain zero, but every one of them is centred on a positive difference, and for most of them the bulk of the interval sits above zero. So even where the interval crosses zero, the plausible population values lean towards a positive effect. Taken together, the 10 studies provide strong and consistent evidence that the population difference between means is greater than zero, reflecting a positive effect of the intervention compared to TAU.
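If you want to check this for yourself, you can recover the confidence intervals (approximately) from the table alone, because the standard error of each difference is just the mean difference divided by its t value. Below is a minimal sketch in Python; the degrees of freedom are not reported in the table, so the df = 58 used here is an assumption that only nudges the interval widths slightly.

```python
# A minimal sketch: recover approximate 95% CIs from the reported mean
# differences and t values. df = 58 is an assumed value (not reported in
# the table); the qualitative picture is the same for any plausible df.
from scipy import stats

studies = {  # study number: (mean difference, t)
    1: (4.193, 3.229), 2: (2.082, 1.743), 3: (1.546, 1.336),
    4: (1.509, 0.890), 5: (3.991, 2.894), 6: (4.141, 3.551),
    7: (4.323, 3.745), 8: (2.035, 1.479), 9: (6.246, 4.889),
    10: (0.863, 0.565),
}

df = 58                                  # assumption
t_crit = stats.t.ppf(0.975, df)          # two-tailed 5% critical value

for study, (diff, t) in studies.items():
    se = diff / t                        # standard error implied by diff and t
    lo, hi = diff - t_crit * se, diff + t_crit * se
    print(f"Study {study:2d}: diff = {diff:5.2f}, 95% CI [{lo:5.2f}, {hi:5.2f}]")
```

Whatever plausible df you plug in, the intervals from the five ‘significant’ studies exclude zero and the other five contain it but sit mostly above it, which is exactly the pattern described above.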
The main point that Cumming makes (he talks about meta-analysis too, but I’m bored of typing now) is that the dichotomous significant/non-significant thinking fostered by NHST can lead you to radically different conclusions from those you would draw if you simply looked at the data with a nice, informative confidence interval. In short, confidence intervals rule, and NHST sucks.
More important, it should not be the case that the way we picture the data/results completely alters our conclusions. Given we’re stuck with NHST at least for now, we could do worse than use CIs as the necessary pinch of salt required when interpreting significance tests.
Hopefully, that explains some of the comments in my previous blog. I’m off to buy a second copy of Geoff’s book …

Top 5 Statistical Faux Pas

In a recent article (Nieuwenhuis et al., 2011, Nature Neuroscience, 14, 1105-1107), neuroscientists were shown to be statistically retarded … or something like that. Ben Goldacre wrote an article about this in the Guardian newspaper, which caused a bit of a kerfuffle amongst British psychologists because in the first published version he accidentally lumped psychologists in with neuroscientists. Us psychologists, being the sensitive souls that we are, decided that we didn’t like being called statistically retarded; we endure a lot of statistics classes during our undergraduate and postgraduate degrees, and if we learnt nothing in them then the unbelievable mental anguish will have been for nothing.
Neuroscientists may have felt much the same, but unfortunately for them Nieuwenhuis, at the request of the British Psychological Society’s publication The Psychologist, confirmed that the sample of papers he reviewed contained no psychologists. The deafening sonic eruption of people around the UK not giving a shit could be heard in Fiji.
The main finding from the Nieuwenhuis paper was that neuroscientists often make the error of treating the difference between a significant and a non-significant result as though it were itself significant. Hang on, that’s confusing. Let’s say group A’s anxiety levels change significantly over time (p = .049) and group B’s do not (p = .060): neuroscientists tend to conclude that the change in anxiety in group A is different from that in group B, whereas the average psychologist would know that you need to test directly whether the change in group A differs from the change in group B (i.e., look for a significant group × time interaction).
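To make that concrete, here is a hedged sketch with made-up numbers (they are not from the Nieuwenhuis paper): two groups’ anxiety changes, each with a standard error, analysed the wrong way (two separate tests) and the right way (testing whether the changes differ).

```python
# Illustrative numbers only: change scores and their standard errors for
# two independent groups. One change is 'significant' and one is not, but
# that alone says nothing about whether the changes differ.
import numpy as np
from scipy import stats

change_a, se_a = 3.0, 1.5   # group A's change over time and its SE (invented)
change_b, se_b = 2.0, 1.6   # group B's change over time and its SE (invented)

# The tempting (wrong) route: two separate tests
p_a = 2 * stats.norm.sf(abs(change_a / se_a))      # ~ .046, 'significant'
p_b = 2 * stats.norm.sf(abs(change_b / se_b))      # ~ .21, 'not significant'

# The right route: test the difference between the changes (the interaction)
diff = change_a - change_b
se_diff = np.sqrt(se_a**2 + se_b**2)               # SEs combine for independent groups
p_diff = 2 * stats.norm.sf(abs(diff / se_diff))    # ~ .65, no evidence the changes differ

print(p_a, p_b, p_diff)
```

One p either side of .05 tells you nothing about whether the two changes differ from each other; you have to test that difference directly.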
My friend Thom Baguley wrote a nice blog about it. He asked whether psychologists were entitled to feel smug about not making the Nieuwenhuis error, and politely pointed out some errors that we do tend to make. His blog inspired me to write my top 5 common mistakes, which should remind scientists of every variety that we probably shouldn’t meddle with things that we don’t understand: statistics, for example.

5. Median splits

OK, I’m starting by cheating because this one is in Thom’s blog too, but scientists (psychologists especially) love nothing more than butchering perfectly good continuous variables with the rusty meat cleaver that is the median (or some other arbitrary blunt instrument). Imagine 4 children aged 2, 8, 9, and 16. You do a median split to compare ‘young’ (younger than 8.5) with ‘old’ (older than 8.5). What you’re saying here is that a 2-year-old is identical to an 8-year-old, a 9-year-old is identical to a 16-year-old, and an 8-year-old is completely different in every way from a 9-year-old. If that doesn’t convince you that it’s a curious practice, then read DeCoster, Gallucci, & Iselin (2011) or MacCallum, Zhang, Preacher, & Rucker (2002).
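If you prefer to see the damage in numbers, here is a quick simulation (the data are invented purely for illustration): dichotomising a continuous predictor at its median throws information away, which shows up as a weakened correlation with the outcome.

```python
# Simulated data (not real children): a continuous predictor, an outcome
# related to it, and the same predictor butchered with a median split.
import numpy as np

rng = np.random.default_rng(42)
age = rng.uniform(2, 16, size=500)                 # continuous predictor
outcome = 2 * age + rng.normal(0, 8, size=500)     # outcome related to age

old = (age > np.median(age)).astype(float)         # median-split version: 'young' vs 'old'

r_continuous = np.corrcoef(age, outcome)[0, 1]
r_split = np.corrcoef(old, outcome)[0, 1]          # point-biserial correlation
print(f"r using continuous age:   {r_continuous:.2f}")
print(f"r after the median split: {r_split:.2f}")  # smaller: information has been thrown away
```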

4. Confidence intervals

Using confidence intervals is a good idea – the APA statistics task force says so – except that no-one understands them. Well, behavioural neuroscientists, medics and psychologists don’t (Belia, Fidler, Williams, & Cumming, 2005) (see a nice summary of the Belia paper here). I think many scientists would struggle to state correctly what a CI represents, and many textbooks (including the first edition of my own Discovering Statistics Using SPSS) give completely incorrect, but commonly reproduced, explanations of what a CI means.

3. Assuming normally distributed data

I haven’t done it, but I reckon that if you asked the average scientist what the assumptions of tests based on the normal distribution were, most would tell you that you need normally distributed data. You don’t. You typically need a normally distributed sampling distribution, or normally distributed residuals/errors. The beauty of the central limit theorem is that in large samples the sampling distribution will be normal anyway, so your sample data can be shaped exactly like a blue whale giving a large African elephant a piggyback and it won’t make a blind bit of difference.
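A quick simulation makes the point (the exponential ‘population’ below is just an arbitrary, horribly skewed example): the raw scores look nothing like a normal distribution, but the sampling distribution of the mean does.

```python
# Simulated illustration of the central limit theorem: heavily skewed raw
# scores, yet an approximately normal sampling distribution of the mean.
import numpy as np

rng = np.random.default_rng(1)
n_per_sample, n_samples = 100, 10_000

samples = rng.exponential(scale=2.0, size=(n_samples, n_per_sample))  # skewed 'population'
sample_means = samples.mean(axis=1)                                   # 10,000 sample means

def skew(x):
    """Simple moment-based skewness: roughly 0 for a symmetric (e.g. normal) distribution."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

print(f"skew of the raw scores:   {skew(samples.ravel()):.2f}")   # ~ 2 (very skewed)
print(f"skew of the sample means: {skew(sample_means):.2f}")      # ~ 0.2 (close to normal)
```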

2. Homogeneity of variance matters

Despite people like me teaching the next generation of scientists all about how homogeneity of variance/homoscedasticity should be carefully checked, the reality is that we should probably just do robust tests or use a bootstrap anyway and free ourselves from the Iron Maiden of assumptions that perforates our innards on a daily basis. Also, in regression, heteroscedasticity doesn’t really affect anything important (according to Gelman & Hill, 2007).
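For what it’s worth, here is a minimal percentile-bootstrap sketch for the difference between two means with gloriously unequal variances (the data are simulated purely for illustration, and the percentile bootstrap is just one of several robust options).

```python
# Percentile bootstrap for a difference between two independent means.
# The groups are simulated with very different variances on purpose.
import numpy as np

rng = np.random.default_rng(7)
group_a = rng.normal(10, 2, size=40)     # small variance
group_b = rng.normal(8, 6, size=40)      # large variance: homogeneity is toast

n_boot = 10_000
boot_diffs = np.empty(n_boot)
for i in range(n_boot):
    resample_a = rng.choice(group_a, size=group_a.size, replace=True)
    resample_b = rng.choice(group_b, size=group_b.size, replace=True)
    boot_diffs[i] = resample_a.mean() - resample_b.mean()

ci_low, ci_high = np.percentile(boot_diffs, [2.5, 97.5])
print(f"observed difference: {group_a.mean() - group_b.mean():.2f}")
print(f"95% bootstrap CI:    [{ci_low:.2f}, {ci_high:.2f}]")
```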

1. Hypothesis testing

In at number 1 as the top statistical faux pas is null hypothesis significance testing (NHST) itself. With the honorable exceptions of physicists and a few others from the harder sciences, most scientists use NHST. Lots has been written on why this practice is a bad idea (e.g., Meehl, 1978). To sum up:
  •  It stems from a sort of hideous experiment in which two quite different statistical philosophies were placed together on a work bench and joined using a staple gun.
  •  A p-value is the probability of data at least as extreme as yours given that the null hypothesis is true – and the null is essentially never exactly true – which means that you can’t really get anything useful from a p-value other than a publication in a journal.
  •  It results in the kind of ridiculous situation in which people completely reject ideas because their p was .06, but lovingly embrace and copulate with other ideas because their p value was .049.
  •  ps depend on sample size, and consequently you find researchers who have just studied 1000 participants joyfully rubbing their crotch at a pitifully small and unsubstantive effect that, because of their large sample, has crept below the magical fast-track to publication that is p < .05 (see the sketch below).
  •  No-one understands what a p-value is, not even research professors or people teaching statistics (Haller & Krauss, 2002).
Physicists must literally shit their pants with laughter at this kind of behaviour.
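To see the sample-size point in action, here is a back-of-the-envelope calculation (the effect size and sample sizes are made up for illustration): the same pitifully small standardized effect is nowhere near significant with 50 people per group, but sails under p < .05 with 5000 per group.

```python
# The same tiny standardized mean difference (d = 0.1), two very different
# sample sizes, two very different p-values. For a two-sample t-test with
# equal group sizes and SD = 1, t = d * sqrt(n / 2).
from scipy import stats

d = 0.1                                   # a pitifully small effect
for n in (50, 5000):                      # participants per group
    t = d * (n / 2) ** 0.5
    p = 2 * stats.t.sf(t, df=2 * n - 2)   # two-tailed p
    print(f"n per group = {n:5d}: t = {t:.2f}, p = {p:.2g}")
```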
Surely, the interaction oversight (or the ‘missing in interaction’ you might say) faux pas of the neuroscientists is the least of their (and our) worries.

References

  • Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10, 389-396.
  • DeCoster, J., Gallucci, M., & Iselin, A.-M. R. (2011). Best practices for using median splits, artificial categorization, and their continuous alternatives. Journal of Experimental Psychopathology, 2(2), 197-209.
  • Gelman, A., & Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge: Cambridge University Press.
  • Haller, H., & Krauss, S. (2002). Misinterpretations of significance: A problem students share with their teachers? Methods of Psychological Research Online, 7(1), 1-20.
  • MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7(1), 19-40.
  • Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.