## Assumptions Part 1: Normality

… I didn’t grow a pair of breasts. If you didn’t read my last blog that comment won’t make sense, but it turns out that people like breasts so I thought I’d mention them again. I haven’t written a lot of blogs, but my frivolous blog about growing breasts as a side effect of some pills was (by quite a large margin) my most viewed blog. It’s also the one that took me the least time to write and that I put the least thought into. I think the causal factor might be the breasts.

This blog isn’t about breasts, it’s about normality. Admittedly the normal distribution looks a bit like a nipple-less breast, but it’s not one: I’m very happy that my wife does not sport two normal distributions upon her lovely chest. I like stats, but not that much …

### Assumptions

Anyway, I recently stumbled across this paper. The authors sent a sample of postgrads (with at least 2 years’ research experience) a bunch of data analysis scenarios and asked them how they would analyze the data. They were interested in whether, and how, these people checked the assumptions of the tests they chose to use. The good news was that they chose the correct tests (although given that all of the scenarios basically required a general linear model of some variety, that wasn’t hard). However, not many of them checked assumptions. The conclusion was that people don’t understand assumptions or how to test them.

I get asked about assumptions a lot. I also have to admit to hating the chapter on assumptions in my SPSS and R books. Well, hate is a strong word, but I think it toes a very conservative and traditional line. In my recent update of the SPSS book (out early next year before you ask) I completely re-wrote this chapter. It takes a very different approach to thinking about assumptions.

Most of the models we fit to data sets are based on the general linear model (GLM), which means that any assumption that applies to the GLM (i.e., regression) applies to virtually everything else. You don’t really need to memorize a list of different assumptions for different tests: if it’s a GLM (e.g., ANOVA, regression, etc.) then you need to think about the assumptions of regression (there’s a quick sketch of the ANOVA–regression equivalence after this list). The most important ones are:

• Linearity
• Normality (of residuals)
• Homoscedasticity (aka homogeneity of variance)
• Independence of errors.
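
If you want to see the GLM equivalence for yourself, here’s a minimal sketch in Python (the simulated data, group means and seed are all invented for illustration, and it assumes pandas, statsmodels and scipy are available) showing that a one-way ANOVA and a dummy-coded regression produce the identical F statistic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], 30),
    "y": np.concatenate([rng.normal(m, 1, 30) for m in (0.0, 0.5, 1.0)]),
})

# the 'ANOVA' fitted as a regression with dummy-coded group membership
fit = smf.ols("y ~ C(group)", data=df).fit()

# the same data put through a classic one-way ANOVA
anova = stats.f_oneway(*(g["y"].to_numpy() for _, g in df.groupby("group")))

print(fit.fvalue, anova.statistic)  # identical F: ANOVA is a GLM wearing a different hat
```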

### What Does Normality Affect?

For this post I’ll discuss normality. If you’re thinking about normality, then there are three parts of your analysis that rely on it:

1. Parameter estimates: That could be an estimate of the mean, or a b in regression (and a b in regression can represent differences between means). Models have error (i.e., residuals), and if these residuals are normally distributed in the population then using the method of least squares to estimate the parameters (the bs) will produce better estimates than other methods.
2. Confidence intervals: whenever you have a parameter estimate, you usually want to compute a confidence interval (CI) because it’ll give you some idea of what the population value of the parameter is. We use values of the standard normal distribution to compute the confidence interval, and using those values makes sense only if the parameter estimate actually comes from a normal distribution (there’s a small sketch of this after the list).
3. Significance tests: we often test parameters against a null value (usually we’re testing whether b is different from 0). For this process to work, we assume that the parameter estimates have a normal distribution. We assume this because the test statistics that we use (such as the t, F and chi-square), have distributions related to the normal. If parameter estimates don’t have a normal distribution then p-values won’t be accurate.
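
To make point 2 concrete, here’s a tiny sketch (the estimate and standard error are made-up numbers, not from any real analysis) of how a 95% CI leans on the standard normal distribution:

```python
from scipy import stats

b, se = 0.42, 0.11             # hypothetical parameter estimate and its standard error
z = stats.norm.ppf(0.975)      # 1.96ish: the 97.5th percentile of the standard normal
print(b - z * se, b + z * se)  # 95% CI; sensible only if b's sampling distribution is normal
```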

### What Does The Assumption Mean?

People often think that your data need to be normally distributed, and that’s what many people test. However, that’s not the case. What matters is that the residuals in the population are normal, and that the sampling distribution of parameters is normal. The trouble is, we don’t have access to the sampling distribution of parameters or population residuals; therefore, we have to guess at what might be going on by testing the data instead.
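
By way of illustration, here’s a minimal sketch of what ‘testing the data instead’ might look like in practice (the data are simulated, and the whole thing assumes numpy, scipy and statsmodels): fit the model, then look at its residuals, not the raw outcome.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2 + 0.5 * x + rng.normal(0, 1, 100)  # invented data with normal errors

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(stats.shapiro(fit.resid))          # normality check on the residuals, not on y
# (a Q-Q plot of fit.resid is usually more informative than any formal test's p-value)
```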

### When Does The Assumption Matter?

The central limit theorem tells us that, no matter what distribution things have, the sampling distribution will be normal if the sample is large enough. How large is ‘large enough’ is another matter entirely and depends a bit on what test statistic you want to use, so bear that in mind. Oversimplifying things a bit, though, we could say:

1. Confidence intervals: For confidence intervals around a parameter estimate to be accurate, that estimate must come from a normal distribution. The central limit theorem tells us that in large samples the estimate will have come from a normal distribution regardless of what the sample or population data look like. Therefore, if we are interested in computing confidence intervals then we don’t need to worry about the assumption of normality if our sample is large enough. (There is still the question of how large is large enough, though.) You can easily construct bootstrap confidence intervals these days, so if your interest is confidence intervals then why not stop worrying about normality and use bootstrapping instead? (There’s a sketch of a bootstrap CI after this list.)
2. Significance tests: For significance tests of models to be accurate the sampling distribution of what’s being tested must be normal. Again, the central limit theorem tells us that in large samples this will be true no matter what the shape of the population. Therefore, the shape of our data shouldn’t affect significance tests provided our sample is large enough. (How large is large enough depends on the test statistic and the type of non-normality. Kurtosis, for example, tends to screw things up quite a bit.) You can make a similar argument for using bootstrapping to get a robust p-value, if p is your thing.
3. Parameter Estimates: The method of least squares will always give you an estimate of the model parameters that minimizes error, so in that sense you don’t need to assume normality of anything to fit a linear model and estimate the parameters that define it (Gelman & Hill, 2007). However, there are other methods for estimating model parameters, and if you happen to have normally distributed errors then the estimates that you obtained using the method of least squares will have less error than the estimates you would have got using any of these other methods.
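
Since I keep banging on about bootstrapping, here’s a minimal sketch of a percentile bootstrap CI for a regression slope (the data are simulated with deliberately heavy-tailed errors; all numbers invented) that never once mentions a normal distribution:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 50)
y = 2 + 0.5 * x + rng.standard_t(3, 50)       # heavy-tailed, distinctly non-normal errors

boots = []
for _ in range(2000):
    i = rng.integers(0, len(x), len(x))       # resample cases with replacement
    boots.append(np.polyfit(x[i], y[i], 1)[0])  # refit and keep the slope

print(np.percentile(boots, [2.5, 97.5]))      # 95% percentile bootstrap CI for b
```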

### Summary

If all you want to do is estimate the parameters of your model then normality doesn’t really matter. If you want to construct confidence intervals around those parameters, or compute significance tests relating to those parameters, then the assumption of normality matters in small samples; but because of the central limit theorem we don’t really need to worry about this assumption in larger samples. The question of how large is large enough is a complex issue, but at least you know now what parts of your analysis will go screwy if the normality assumption is broken.

This blog is based on excerpts from the forthcoming 4th edition of ‘Discovering Statistics Using SPSS: and sex and drugs and rock ‘n’ roll’.

## Top 5 Statistical Faux Pas

In a recent article (Nieuwenhuis, et al., 2011, Nature Neuroscience, 14, 1105-1107), neuroscientists were shown to be statistically retarded … or something like that. Ben Goldacre wrote an article about this in the Guardian newspaper, which caused a bit of a kerfuffle amongst British psychologists because in the first published version he accidentally lumped psychologists in with neuroscientists. Us psychologists, being the sensitive souls that we are, decided that we didn’t like being called statistically retarded; we endure a lot of statistics classes during our undergraduate and postgraduate degrees, and if we learnt nothing in them then the unbelievable mental anguish will have been for nothing.

Neuroscientists may have felt much the same, but unfortunately for them Nieuwenhuis, at the request of the British Psychological Society publication The Psychologist, declared the sample of papers that he reviewed free of psychologists. The deafening sonic eruption of people around the UK not giving a shit could be heard in Fiji.

The main finding from the Nieuwenhuis paper was that neuroscientists often make the error of thinking that a non-significant difference is different from a significant one. Hang on, that’s confusing. Let’s say group A’s anxiety levels change significantly over time (p = .049) and group B’s do not (p = .060): neuroscientists tend to assume that the change in anxiety in group A is different to that in group B, whereas the average psychologist would know that you need to test whether the change in group A differs from the change in group B (i.e., look for a significant interaction).
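
To make the fix concrete, here’s a minimal sketch (entirely simulated data, and it ignores the repeated-measures structure for simplicity) of testing the group × time interaction directly rather than eyeballing two separate p-values:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 40
df = pd.DataFrame({
    "group": np.repeat(["A", "B"], 2 * n),
    "time": np.tile(np.repeat(["pre", "post"], n), 2),
})
drop = {"A": 0.5, "B": 0.4}  # invented, near-identical changes in anxiety over time
df["anxiety"] = [rng.normal(-drop[g] if t == "post" else 0.0, 1.0)
                 for g, t in zip(df["group"], df["time"])]

fit = smf.ols("anxiety ~ C(group) * C(time)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # the C(group):C(time) row is the test you want
```
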
My friend Thom Baguley wrote a nice blog about it. He asked whether psychologists were entitled to feel smug about not making the Nieuwenhuis error, and politely pointed out some errors that we do tend to make. His blog inspired me to write my top 5 common mistakes, which should remind scientists of every variety that we probably shouldn’t meddle with things that we don’t understand. Statistics, for example.

### 5. Median splits

OK, I’m starting by cheating because this one is in Thom’s blog too, but scientists (psychologists especially) love nothing more than butchering perfectly good continuous variables with the rusty meat cleaver that is the median (or some other arbitrary blunt instrument). Imagine 4 children aged 2, 8, 9, and 16. You do a median split to compare ‘young’ (younger than 8.5) and ‘old’ (older than 8.5). What you’re saying here is that a 2-year-old is identical to an 8-year-old, a 9-year-old is identical to a 16-year-old, and an 8-year-old is completely different in every way to a 9-year-old. If that doesn’t convince you that it’s a curious practice then read DeCoster, Gallucci, & Iselin (2011) or MacCallum, Zhang, Preacher, & Rucker (2002).
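
You can see the butchery in a few lines of Python (using the four ages from the example above):

```python
import numpy as np

ages = np.array([2, 8, 9, 16])
old = ages > np.median(ages)  # median split at 8.5
print(list(zip(ages.tolist(), old.tolist())))
# [(2, False), (8, False), (9, True), (16, True)]: the 2- and 8-year-old are now
# 'identical', while the 8- and 9-year-old are 'completely different'
```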

### 4. Confidence intervals

Using confidence intervals is a good idea – the APA statistics task force say so – except that no-one understands them. Well, behavioural neuroscientists, medics and psychologists don’t (Belia, Fidler, Williams, & Cumming, 2005; see a nice summary of the Belia paper here). I think many scientists would struggle to explain correctly what a CI represents, and many textbooks (including the first edition of my own Discovering Statistics Using SPSS) give completely incorrect, but commonly reproduced, explanations of what a CI means.
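
For what it’s worth, here’s a minimal simulation (the population values are invented) of what a 95% CI does mean: across repeated samples, about 95% of intervals constructed this way capture the true value; any single interval either does or doesn’t.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
true_mu, n, hits = 100.0, 30, 0
for _ in range(10_000):
    s = rng.normal(true_mu, 15, n)
    lo, hi = stats.t.interval(0.95, n - 1, loc=s.mean(), scale=stats.sem(s))
    hits += lo <= true_mu <= hi
print(hits / 10_000)  # ~0.95: the long-run capture rate, not a 95% probability
                      # that one particular interval contains the true mean
```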

### 3. Assuming normally distributed data

I haven’t done it, but I reckon if you asked the average scientist what the assumptions of tests based on the normal distribution were, most would say that you need normally distributed data. You don’t. You typically need a normally distributed sampling distribution or normally distributed residuals/errors. The beauty of the central limit theorem is that in large samples the sampling distribution will be normal anyway, so your sample data can be shaped exactly like a blue whale giving a large African elephant a piggyback and it won’t make a blind bit of difference.
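
If you don’t believe me, here’s a minimal simulation (the blue whale is approximated by a grotesquely skewed distribution, because numpy doesn’t do cetaceans): the raw scores are wildly non-normal, but the sampling distribution of the mean comes out pretty much symmetric.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
population = rng.exponential(1.0, 1_000_000) ** 2  # grotesquely skewed scores

# sampling distribution of the mean for samples of n = 100
means = np.array([rng.choice(population, 100).mean() for _ in range(5_000)])
print(stats.skew(population), stats.skew(means))   # huge skew vs close to 0
```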

### 2. Homogeneity of variance matters

Despite people like me teaching the next generation of scientists all about how homogeneity of variance/homoscedasticity should be carefully checked, the reality is that we should probably just do robust tests or use a bootstrap anyway, and free ourselves from the Iron Maiden of assumptions that perforates our innards on a daily basis. Also, in regression, heteroscedasticity doesn’t really affect anything important (according to Gelman & Hill, 2007).
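
For example (a minimal sketch with invented groups), Welch’s t-test in scipy drops the homogeneity of variance assumption entirely:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a = rng.normal(0.0, 1.0, 40)  # made-up groups with wildly unequal spreads
b = rng.normal(0.5, 4.0, 40)

print(stats.ttest_ind(a, b, equal_var=False))  # Welch's t: no homoscedasticity needed
```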

### 1. Hypothesis testing

In at number 1 as the top statistical faux pas is null hypothesis significance testing (NHST). With the honourable exceptions of physicists and a few others from the harder sciences, most scientists use NHST. Lots has been written on why this practice is a bad idea (e.g., Meehl, 1978). To sum up: (1) it stems from a sort of hideous experiment in which two quite different statistical philosophies were placed together on a work bench and joined using a staple gun; (2) a p-value is the probability of something given that something that is never true (the null hypothesis) is true, which means that you can’t really get anything useful from a p-value other than a publication in a journal; (3) it results in the kind of ridiculous situations in which people completely reject ideas because their p was .06, but lovingly embrace and copulate with other ideas because their p value was .049; (4) ps depend on sample size, and consequently you find researchers who have just studied 1000 participants joyfully rubbing their crotch at a pitifully small and unsubstantive effect that, because of their large sample, has crept below the magical fast-track to publication that is p < .05; (5) no-one understands what a p-value is, not even research professors or people teaching statistics (Haller & Krauss, 2002). Physicists must literally shit their pants with laughter at this kind of behaviour.
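
Point 4 is easy to demonstrate with a minimal simulation (the effect size and seed are invented): the same pitifully small difference that is nowhere near significance at n = 100 sails under p < .05 at n = 10,000.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
for n in (100, 10_000):
    a = rng.normal(0.00, 1, n)
    b = rng.normal(0.05, 1, n)  # the same pitifully small difference each time
    print(n, stats.ttest_ind(a, b).pvalue)
# with most seeds: non-significant at n = 100, comfortably p < .05 at n = 10,000
```
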
Surely, the interaction oversight (or the ‘missing in interaction’ you might say) faux pas of the neuroscientists is the least of their (and our) worries.

## References

• Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10, 389-396.
• DeCoster, J., Gallucci, M., & Iselin, A.-M. R. (2011). Best practices for using median splits, artificial categorization, and their continuous alternatives. Journal of Experimental Psychopathology, 2(2), 197-209.
• Gelman, A., & Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge: Cambridge University Press.
• Haller, H., & Krauss, S. (2002). Misinterpretations of significance: A problem students share with their teachers? Methods of Psychological Research Online, 7(1), 1-20.
• MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7(1), 19-40.
• Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.