Max or No Max?

There’s been a recent spat between the heavy metal bands Sepultura and Soulfly. For those unaware of the history, 50% of Sepultura used to be the Cavalera brothers (Max and Igor) until Max (the frontman and guitarist) left the band in 1996 and formed Soulfly. The full story is here. There’s a lot of bad blood even 20 years later, and according to a recent story on MetalSucks, Soulfly’s manager (and Max’s wife) Gloria Cavalera recently posted a fairly pointed post on her Facebook page. This got picked up by my favourite podcast (the MetalSucks podcast). What has this got to do with me, or statistics? Well, one of the presenters of the MetalSucks podcast asked me this over Twitter:

After a very brief comment about needing to operationalise ‘better’, I decided that rather than reading book proofs I’d do a tongue-in-cheek analysis of what is better, Max or no Max, and here it is.

First we need to operationalise ‘better’. I have done this by accepting subjective opinion as determining ‘better’, and specifically ratings of albums on amazon.com (although I am English, MetalSucks is US based, so I thought I’d pander to them and take ratings from the US site). Our question then becomes ‘is Max or no Max rated higher by the sorts of people who leave reviews on Amazon?’. We have operationalised our question and turned it into a scientific statement, which we can test with data. [There are all sorts of problems with using these ratings, not least of which is that they tend to be positively biased, they likely reflect a certain type of person who reviews, and reviews often reflect things other than the music (e.g., arrived quickly 5*), and so on … but fuck it, this is not serious science, just a bit of a laugh.]

Post Sepultura: Max or No Max

The first question is whether post-max Sepultura or Soulfly are rated higher. Figure 1 shows that the data are hideously skewed with people tending to give positive reviews and 4-5* ratings. Figure 2 shows the mean ratings by year post Max’s departure (note they released albums in different years so the dots are out of synch, but it’s a useful timeline). Figure 2 seems to suggest that after the first couple of albums, both bands are rated fairly similarly: the Soulfly line is higher but error bars overlap a lot for all but the first albums.

Figure 1: Histograms of all ratings for Soulfly and (Non-Max Era) Sepultura

Figure 2: Mean ratings of Soulfly and (Non-Max Era) Sepultura by year of album release

There are a lot of ways you could look at these data. The first thing is the skew. That messes up estimates of confidence intervals and significance tests … but our sample is likely big enough that we can rely on the central limit theorem to do its magic and let us assume that the sampling distribution is normal (beautifully explained in my new book!)

I’m going to fit three models. The first is an intercept-only model (a baseline with no predictors); the second allows intercepts to vary across albums (which allows ratings to vary by album, which seems like a sensible thing to do because albums will vary in quality); the third predicts ratings from the band (Sepultura vs Soulfly).

  library(nlme)
  maxModel2a<-gls(Rating ~ 1, data = sepvssoul, method = "ML")                     # intercept only
  maxModel2b<-lme(Rating ~ 1, random = ~1|Album, data = sepvssoul, method = "ML")  # + random intercepts for albums
  maxModel2c<-update(maxModel2b, .~. + Band)                                       # + fixed effect of band
  anova(maxModel2a, maxModel2b, maxModel2c)

By comparing models we can see:

           Model df      AIC      BIC    logLik   Test  L.Ratio p-value
maxModel2a     1  2 2889.412 2899.013 -1442.706                        
maxModel2b     2  3 2853.747 2868.148 -1423.873 1 vs 2 37.66536  <.0001
maxModel2c     3  4 2854.309 2873.510 -1423.155 2 vs 3  1.43806  0.2305

That album ratings varied significantly (not surprising; the p-value is < .0001), but that band did not significantly predict ratings overall (p = .231). If you like you can look at the summary of the model by executing:
  
  summary(maxModel2c)

Which gives us this output:

Linear mixed-effects model fit by maximum likelihood
 Data: sepvssoul 
       AIC     BIC    logLik
  2854.309 2873.51 -1423.154

Random effects:
 Formula: ~1 | Album
        (Intercept) Residual
StdDev:   0.2705842 1.166457

Fixed effects: Rating ~ Band 
               Value Std.Error  DF   t-value p-value
(Intercept) 4.078740 0.1311196 882 31.107015  0.0000
BandSoulfly 0.204237 0.1650047  14  1.237765  0.2362
 Correlation: 
            (Intr)
BandSoulfly -0.795

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-2.9717684 -0.3367275  0.4998698  0.6230082  1.2686186 

Number of Observations: 898
Number of Groups: 16 

The difference in ratings between Sepultura and Soulfly was b = 0.20. Ratings for Soulfly were higher, but not significantly so (if we allow ratings to vary over albums; if you take that random effect out you’ll get a very different picture, because that variability will go into the fixed effect of ‘band’).
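If you want to see that for yourself, a quick check (my own addition, not part of the original write-up, and maxModel2d is just a name I’ve made up) is to drop the random intercepts and predict ratings from band alone; the album-to-album variability then gets absorbed into the band effect:

  maxModel2d<-gls(Rating ~ Band, data = sepvssoul, method = "ML")   # same fixed effect, no random intercepts for Album
  summary(maxModel2d)
  anova(maxModel2a, maxModel2d)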

Max or No Max

Just because this isn’t fun enough, we could also look at whether either Sepultura (post-1996) or Soulfly can compete with the Max-era Sepultura heyday.
I’m going to fit three models, but this time including the early Sepultura albums (with Max). The models are the same as before except that the fixed effect of band now has three levels: Sepultura Max, Sepultura No Max and Soulfly:

  maxModela<-gls(Rating ~ 1, data = maxvsnomax, method = "ML")
  maxModelb<-lme(Rating ~ 1, random = ~1|Album, data = maxvsnomax, method = "ML")
  maxModelc<-update(maxModelb, .~. + Band)
  anova(maxModela, maxModelb, maxModelc)

By comparing models we can see:

          Model df      AIC      BIC    logLik   Test   L.Ratio p-value
maxModela     1  2 4686.930 4697.601 -2341.465                         
maxModelb     2  3 4583.966 4599.973 -2288.983 1 vs 2 104.96454  <.0001
maxModelc     3  5 4581.436 4608.114 -2285.718 2 vs 3   6.52947  0.0382

That album ratings varied significantly (not surprising; the p-value is < .0001), and that band did significantly predict ratings overall (p = .038). If you like you can look at the summary of the model by executing:
  
  summary(maxModelc)

Which gives us this output:

Linear mixed-effects model fit by maximum likelihood
 Data: maxvsnomax 
       AIC      BIC    logLik
  4581.436 4608.114 -2285.718

Random effects:
 Formula: ~1 | Album
        (Intercept) Residual
StdDev:     0.25458 1.062036

Fixed effects: Rating ~ Band 
                         Value Std.Error   DF  t-value p-value
(Intercept)           4.545918 0.1136968 1512 39.98281  0.0000
BandSepultura No Max -0.465626 0.1669412   19 -2.78916  0.0117
BandSoulfly          -0.262609 0.1471749   19 -1.78433  0.0903
 Correlation: 
                     (Intr) BndSNM
BandSepultura No Max -0.681       
BandSoulfly          -0.773  0.526

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-3.3954974 -0.3147123  0.3708523  0.6268751  1.3987442 

Number of Observations: 1534
Number of Groups: 22  

The difference in ratings between Sepultura without Max and Sepultura with him was b = -0.47, significant at p = .012 (ratings for post-Max Sepultura are significantly worse than for Max-era Sepultura). The difference in ratings between Soulfly and Max-era Sepultura was b = -0.26 and not significant (p = .09) (ratings for Soulfly are not significantly worse than for Max-era Sepultura). A couple of points here: p-values are silly, so don’t read too much into them, but the parameter (the b), which quantifies the effect, is a bit smaller for Soulfly.
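Before the bootstrap below, a quick model-based way to put confidence intervals around those bs (my own addition; it relies on the usual normal-theory approximations rather than anything robust) is nlme’s intervals() function:

  intervals(maxModelc, which = "fixed")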

Confidence Intervals

Interestingly if you write yourself a little bootstrap routine to get some robust confidence intervals around the parameters:
  

  library(boot)

  # statistic for boot(): refit the model to each bootstrap sample of rows
  # and return the fixed-effect estimates
  boot.lme <- function(data, indices){
    data <- data[indices,]   # select obs. in bootstrap sample
    model <- lme(Rating ~ Band, random = ~1|Album, data = data, method = "ML")
    fixef(model)             # return coefficient vector
  }

  maxModel.boot<-boot(maxvsnomax, boot.lme, 1000)
  maxModel.boot
  boot.ci(maxModel.boot, index = 1, type = "perc")
  boot.ci(maxModel.boot, index = 2, type = "perc")
  boot.ci(maxModel.boot, index = 3, type = "perc")


Then you find these confidence intervals for the three betas (intercept, Post-Max Sepultura vs. Max Era-Sepultura, Soulfly vs. Max-Era-Sepultura):

BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL : 
boot.ci(boot.out = maxModel.boot, type = "perc", index = 1)

Intervals : 
Level     Percentile     
95%   ( 4.468,  4.620 )  
Calculations and Intervals on Original Scale

BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL : 
boot.ci(boot.out = maxModel.boot, type = "perc", index = 2)

Intervals : 
Level     Percentile     
95%   (-0.6153, -0.3100 )  
Calculations and Intervals on Original Scale

BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL : 
boot.ci(boot.out = maxModel.boot, type = "perc", index = 3)

Intervals : 
Level     Percentile     
95%   (-0.3861, -0.1503 )  
Calculations and Intervals on Original Scale

The difference in ratings between Sepultura without Max compared to with him was b = -0.47 [-0.62, -0.31]. The difference in ratings between Soulfly compared to Max-era Sepultura was b = -0.26 [-0.39, -0.15]. This suggests that both Soulfly and post-Max Sepultura yield negative parameters that reflect (to the degree that you believe that a confidence interval tells you about the population parameter …) a negative effect in the population. In other words, both bands are rated worse than Max-era Sepultura.

Summary

Look, this is just a bit of fun and an excuse to show you how to use a bootstrap on a multilevel model, and how you can use data to try to answer pointless questions thrown at you on Twitter. Based on this hastily thrown together analysis that makes a lot of assumptions about a lot of things, my 120 character twitter response will be: Sepultura Max better than everything, but post 1996 Max is no better than No Max;-)

R Code

Data

You can generate the data using this R code (data correct as of today):
morbid<-c(rep(1,2), rep(2, 4), rep(3, 3), rep(4, 8), rep(5, 36))
  Schizo<-c(rep(1,1), rep(2, 2), rep(3, 4), rep(4, 10), rep(5, 33))
  remains<-c(2, rep(3, 5), rep(4, 9), rep(5, 104))
  Arise<-c(rep(2, 2), rep(4, 16), rep(5, 89))
  Chaos<-c(rep(1,4), rep(2, 2), rep(3, 9), rep(4, 20), rep(5, 120))
  Roots<-c(rep(1,9), rep(2, 8), rep(3, 17), rep(4, 24), rep(5, 94))
  Against<-c(rep(1,16), rep(2, 14), rep(3, 11), rep(4, 20), rep(5, 32))
  Nation<-c(rep(1,3), rep(2, 7), rep(3, 6), rep(4, 22), rep(5, 19))
  Roorback<-c(rep(1,6), rep(2, 6), rep(3, 5), rep(4, 13), rep(5, 20))
  Dante<-c(rep(1,1), rep(2, 3), rep(3, 4), rep(4, 8), rep(5, 30))
  Alex<-c(rep(1,1), rep(2, 1), rep(3, 3), rep(4, 6), rep(5, 18))
  Kairos<-c(rep(1,3), rep(2, 2), rep(3, 2), rep(4, 6), rep(5, 33))
  Mediator<- c(rep(1,0), rep(2, 3), rep(3, 4), rep(4, 6), rep(5, 21))
  
  
  morbid<-data.frame(rep("Morbid", length(morbid)), rep(1986, length(morbid)), morbid)
  Schizo<-data.frame(rep("Schizo", length(Schizo)), rep(1987, length(Schizo)), Schizo)
  Remains<-data.frame(rep("remains", length(remains)), rep(1989, length(remains)), remains)
  Arise<-data.frame(rep("Arise", length(Arise)), rep(1991, length(Arise)), Arise)
  Chaos<-data.frame(rep("Chaos", length(Chaos)), rep(1993, length(Chaos)), Chaos)
  Roots<-data.frame(rep("Roots", length(Roots)), rep(1996, length(Roots)), Roots)
  Against<-data.frame(rep("Against", length(Against)), rep(1998, length(Against)), Against)
  Nation<-data.frame(rep("Nation", length(Nation)), rep(2001, length(Nation)), Nation)
  Roorback<-data.frame(rep("Roorback", length(Roorback)), rep(2003, length(Roorback)), Roorback)
  Dante<-data.frame(rep("Dante", length(Dante)), rep(2006, length(Dante)), Dante)
  Alex<-data.frame(rep("Alex", length(Alex)), rep(2009, length(Alex)), Alex)
  Kairos<-data.frame(rep("Kairos", length(Kairos)), rep(2011, length(Kairos)), Kairos)
  Mediator<-data.frame(rep("Mediator", length(Mediator)), rep(2013, length(Mediator)), Mediator)
  
  
  names(morbid)<-c("Album", "Year", "Rating")
  names(Schizo)<-c("Album", "Year", "Rating")
  names(Remains)<-c("Album", "Year", "Rating")
  names(Arise)<-c("Album", "Year", "Rating")
  names(Chaos)<-c("Album", "Year", "Rating")
  names(Roots)<-c("Album", "Year", "Rating")
  names(Against)<-c("Album", "Year", "Rating")
  names(Nation)<-c("Album", "Year", "Rating")
  names(Dante)<-c("Album", "Year", "Rating")
  names(Alex)<-c("Album", "Year", "Rating")
  names(Kairos)<-c("Album", "Year", "Rating")
  names(Mediator)<-c("Album", "Year", "Rating")

  SepMax<-rbind(morbid, Schizo, Remains, Arise, Chaos, Roots)
  SepMax$Band<-"Sepultura Max"
  SepMax$Max<-"Max"
  
  SepNoMax<-rbind(Against, Nation, Dante, Alex, Kairos, Mediator)
  SepNoMax$Band<-"Sepultura No Max"
  SepNoMax$Max<-"No Max"
  
  
  
  soulfly<-c(rep(1,8), rep(2, 9), rep(3, 4), rep(4, 16), rep(5, 89))
  primitive<-c(rep(1,11), rep(2, 5), rep(3, 5), rep(4, 19), rep(5, 53))
  three<-c(rep(1,1), rep(2, 10), rep(3, 12), rep(4, 7), rep(5, 19))
  prophecy<-c(rep(1,2), rep(2, 5), rep(3, 5), rep(4, 25), rep(5, 42))
  darkages<-c(rep(1,1), rep(2, 1), rep(3, 5), rep(4, 18), rep(5, 36))
  conquer<-c(rep(1,1), rep(2, 0), rep(3, 5), rep(4, 5), rep(5, 31))
  omen<-c(rep(1,0), rep(2, 2), rep(3, 1), rep(4, 6), rep(5, 17))
  enslaved<-c(rep(1,1), rep(2,1), rep(3, 4), rep(4, 2), rep(5, 30))
  savages<-c(rep(1,0), rep(2, 2), rep(3, 3), rep(4, 10), rep(5, 27))
  archangel<-c(rep(1,3), rep(2, 2), rep(3, 4), rep(4, 7), rep(5, 21))

  
  soulfly<-data.frame(rep("Soulfly", length(soulfly)), rep(1998, length(soulfly)), soulfly)
  primitive<-data.frame(rep("Primitive", length(primitive)), rep(2000, length(primitive)), primitive)
  three<-data.frame(rep("Three", length(three)), rep(2002, length(three)), three)
  prophecy<-data.frame(rep("Prophecy", length(prophecy)), rep(2004, length(prophecy)), prophecy)
  darkages<-data.frame(rep("Darkages", length(darkages)), rep(2005, length(darkages)), darkages)
  conquer<-data.frame(rep("Conquer", length(conquer)), rep(2008, length(conquer)), conquer)
  omen<-data.frame(rep("Omen", length(omen)), rep(2010, length(omen)), omen)
  enslaved<-data.frame(rep("Enslaved", length(enslaved)), rep(2012, length(enslaved)), enslaved)
  savages<-data.frame(rep("Savages", length(savages)), rep(2013, length(savages)), savages)
  archangel<-data.frame(rep("Archangel", length(archangel)), rep(2015, length(archangel)), archangel)
  
  names(soulfly)<-c("Album", "Year", "Rating")
  names(primitive)<-c("Album", "Year", "Rating")
  names(three)<-c("Album", "Year", "Rating")
  names(prophecy)<-c("Album", "Year", "Rating")
  names(darkages)<-c("Album", "Year", "Rating")
  names(conquer)<-c("Album", "Year", "Rating")
  names(omen)<-c("Album", "Year", "Rating")
  names(enslaved)<-c("Album", "Year", "Rating")
  names(savages)<-c("Album", "Year", "Rating")
  names(archangel)<-c("Album", "Year", "Rating")

  
  
  Soulfly<-rbind(soulfly, primitive, three, prophecy, darkages, conquer, omen, enslaved, savages, archangel)
  Soulfly$Band<-"Soulfly"
  Soulfly$Max<-"Max"
  
  
  maxvsnomax<-rbind(SepMax, SepNoMax, Soulfly)
  maxvsnomax$Band<-factor(maxvsnomax$Band)
  maxvsnomax$Max<-factor(maxvsnomax$Max)
  maxvsnomax$Album<-factor(maxvsnomax$Album)
  
  sepvssoul<-subset(maxvsnomax, Band != "Sepultura Max")
  sepvssoul$Band<-factor(sepvssoul$Band)
  sepvssoul$Album<-factor(sepvssoul$Album)

Graphs in this post

library(ggplot2)

m <- ggplot(sepvssoul, aes(x = Rating, fill = Band, colour = Band))
  m + geom_histogram(binwidth = 1, alpha = .2, position = "identity") + coord_cartesian(xlim = c(0, 6)) + scale_y_continuous(breaks = seq(0, 400, 50)) + labs(x = "Rating", y = "Frequency", colour = "Band", fill = "Band") + theme_bw()

  line <- ggplot(sepvssoul, aes(Year, Rating, colour = Band))
  line + stat_summary(fun.y = mean, geom = "point") + stat_summary(fun.data = mean_cl_boot, geom = "errorbar", width = 0.2) + labs(x = "Year", y = "Mean Rating on Amazon.com") + coord_cartesian(ylim = c(1, 5)) + scale_x_continuous(breaks = seq(1998, 2015, 1)) + theme_bw() + stat_summary(fun.y = mean, geom = "line", aes(group = Band))

The referee’s a …..

I’m a bit late on this particular bandwagon, but it’s a busy time what with the start of term and finishing writing a textbook and all that. The textbook is also my excuse for having not written a blog for a year, well, that and the fact I rarely have anything interesting to say.

Anyway, those of you who follow football (or soccer for our US friends) will know the flak that referees get. I am a regular at the stadium of the team I’ve supported* since the age of 3 and, like at most football stadiums, I regularly hear the chant of ‘The referee’s a wanker’ echoing around the ground. Referees have a tough job: the game is fast, often faster than they are, players can be a bit cheaty, and we have the benefit of TV replays which they (for some utterly inexplicable reason) do not. Nevertheless it’s annoying when they get stuff wrong.

The more specific referee-related thing I often hear resonating around the particular stadium that I attend is ‘Oh dear me, Mike Dean is refereeing … he hates us quite passionately’ or something similar with a lot more swearing. This particular piece of folk wisdom reached dizzy new heights the weekend before last when he managed to ignore Diego Costa trying to superglue his hands to Laurent Koscielny’s face shortly before chest-bumping him to the floor, and then sent Gabriel off for … well, I’m not really sure what. Indeed, the FA weren’t really sure what for either because they rescinded the red card and banned Costa for 3 matches. You can read the details here.

So, does Mike Dean really hate Arsenal? In another blog 7amkickoff tried to answer this question using data collated about Mike Dean. You can read it here. This blog has inspired me to go one step further and try to answer this question as a scientist would.

How does a scientist decide if Mike Dean really is a wanker?

Scientists try to work with scientific statements. ‘Mike Dean is a wanker’ may be the folklore at the Emirates, but it is not a scientific statement because we probably can’t agree on a working definition of what a wanker is (at least not in the refereeing sense). However, we can turn it into a scientific statement by changing the question to something like ‘Do Arsenal lose more games when Mike Dean (MD) referees?’ This is what 7amkickoff tried to do. 7amkickoff presented data from 2009-2015 and concluded that ‘Arsenal’s win% in the Premier League since 2009 is 57% so a 24% win rate is quite anomalous’ (the 24% is the win rate under Dean). There are several problems with this simplistic view, a couple of which are:
  1. MD tends to referee Arsenal’s tougher games (opponents such as Chelsea, Manchester United and Manchester City), so we need to compare Arsenal’s win rate under MD to our win rate against only the same opponents as in the MD games. (In case you’re interested the full list of clubs is Birmingham, Blackburn, Burnley, C Palace, Charlton, Chelsea, Fulham, Man City, Man Utd, Newcastle, QPR, Stoke City, Tottenham, Watford, West Brom, West Ham, Wigan). If we compare against a broader set of opponents then it isn’t a fair comparison to MD: we are not comparing like for like.
  2.  How do we know that a win rate of 24% under MD isn’t statistically comparable to a win rate of 57%? They seem different, but couldn’t such a difference simply happen because of the usual random fluctuations in results? Or indeed because Arsenal are just bad at winning against ‘bigger’ teams and MD happens to officiate those games (see the point above)?
I’m going to use the same database as 7amkickoff and extend this analysis. I collected data from football-lineups.com for the past 10 years (2005-6 up to the Chelsea fixture on 19th Sept 2015). I am only looking at English Premier League (EPL) games. I have filtered the data to include only the opponents mentioned above, so we’ll get a fair comparison of Arsenal’s results under MD against their results against those same clubs when MD is not officiating. The data are here as a CSV file.
A scientist essentially sets up two competing hypotheses. The first is that there is no effect of MD, that is Arsenal’s results under MD are exactly the same as under other referees. This is known as the null hypothesis. The second is the opposite: that there is an effect of MD, that is Arsenal’s results under MD are different compared to other referees. This is known as the alternative hypothesis. We can use statistical tests to compare these hypotheses.

A frequentist approach

To grab the file into R run:
arse<-read.csv(file.choose(), header = T)
This creates a data frame called arse. 
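(The CSV linked above is already restricted to those opponents, but if you were starting from a full set of fixtures the filtering step would look something like this; this is my own illustration, and the column name Opponent is the one used in the models later on:)

mdOpponents<-c("Birmingham", "Blackburn", "Burnley", "C Palace", "Charlton", "Chelsea", "Fulham", "Man City", "Man Utd", "Newcastle", "QPR", "Stoke City", "Tottenham", "Watford", "West Brom", "West Ham", "Wigan")
arse<-subset(arse, Opponent %in% mdOpponents)   # keep only fixtures against opponents that MD has refereed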
If we look at the results of all EPL Arsenal fixtures against the aforementioned teams since 2005, we get this by running the following code in R:
MD.tab <- xtabs(~MD+result, data=arse)
MD.tab
   result
       Draw Lose Win
Other   33   43  115
MD      14   11   9
One way to test whether the profile of results is different for MD is to see whether there is a significant relationship between the variable ‘MD vs Other’ and ‘result’. This asks the question of whether the profile of draws to wins and losses is statistically different for MD than for ‘other’. If we assume that games are independent events (which is a simplifying assumption because they won’t be) we can use a chi-square test.
You can fit this model in R by running:
library(gmodels)
CrossTable(MD.tab, fisher = TRUE, chisq = TRUE, sresid = TRUE, format = "SPSS")

Total Observations in Table:  225 

             | result 
          MD |     Draw  |     Lose  |      Win  | Row Total | 
-------------|-----------|-----------|-----------|-----------|
           0 |       33  |       43  |      115  |      191  | 
             |    1.193  |    0.176  |    0.901  |           | 
             |   17.277% |   22.513% |   60.209% |   84.889% | 
             |   70.213% |   79.630% |   92.742% |           | 
             |   14.667% |   19.111% |   51.111% |           | 
             |   -1.092  |   -0.419  |    0.949  |           | 
-------------|-----------|-----------|-----------|-----------|
           1 |       14  |       11  |        9  |       34  | 
             |    6.699  |    0.988  |    5.061  |           | 
             |   41.176% |   32.353% |   26.471% |   15.111% | 
             |   29.787% |   20.370% |    7.258% |           | 
             |    6.222% |    4.889% |    4.000% |           | 
             |    2.588  |    0.994  |   -2.250  |           | 
-------------|-----------|-----------|-----------|-----------|
Column Total |       47  |       54  |      124  |      225  | 
             |   20.889% |   24.000% |   55.111% |           | 
-------------|-----------|-----------|-----------|-----------|

Statistics for All Table Factors

Pearson's Chi-squared test 
------------------------------------------------------------
Chi^2 =  15.01757     d.f. =  2     p =  0.0005482477 

Fisher's Exact Test for Count Data
------------------------------------------------------------
Alternative hypothesis: two.sided
p =  0.0005303779 


       Minimum expected frequency: 7.102222 

The win rate under MD is 26.47%, compared with 41.18% draws and 32.35% losses; under other referees it is 60.21%, 17.28% and 22.51% respectively. These are the comparable values to 7amkickoff, but looking at 10 years and including only opponents that feature in MD games. The critical part of the output is the p = .0005. This is the probability that we would get the chi-square value we have IF the null hypothesis were true. In other words, if MD had no effect whatsoever on Arsenal’s results, the probability of getting a test statistic of 15.02 would be 0.000548. In other words, very small indeed. Scientists do this sort of thing all of the time, and generally accept that if p is less than .05 then this supports the alternative hypothesis. That is, we can assume it’s true. (In actual fact, although scientists do this they shouldn’t, because this probability value tells us nothing about either hypothesis, but that’s another issue …) Therefore, by conventional scientific method, we would accept that the profile of results under MD is significantly different than under other referees. Looking at the values in the cells of the table, we can actually see that the profile is significantly worse under MD than under other referees in comparable games. Statistically speaking, MD is a wanker (if you happen to support Arsenal).

As I said though, we made simplifying assumptions. Let’s look at the raw results, and this time factor in the fact that results with the same opponents will be more similar than results involving different opponents. That is, we can assume that Arsenal’s results against Chelsea will be more similar to each other than they will be to results against a different club like West Ham. What we do here is nest results within opponents. In doing so, we statistically model the fact that results against the same opposition will be similar to each other. This is known as a logistic multilevel model. What I am doing is predicting the outcome of winning the game (1 = win, 0 = not win) from whether MD refereed (1 = MD, 0 = other). I have fitted a random intercepts model, which says overall results will vary across different opponents, but also a random slopes model which entertains the possibility that the effect of MD might vary by opponent. The R code is this:

library(lme4)
win.ri<-glmer(win ~ 1 + (1|Opponent), data = arse, family = binomial, na.action = na.exclude)
win.md<-update(win.ri, .~. + MD)
win.rs<-update(win.ri, .~ MD + (MD|Opponent))
anova(win.ri, win.md, win.rs)
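(One assumption worth flagging: these models use a binary variable called win. If your copy of the data only has the result column, you could create it yourself; the coding below is my guess at the labels in the file, so adjust it to match:)

arse$win<-as.numeric(arse$result == "Win")   # 1 = win, 0 = draw or loss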

The summary of models shows that the one with the best fit is random intercepts and MD as a predictor. I can tell this because the model with MD as a predictor significantly improves the fit of the model (the p of .0024 is very small), but adding in the random slopes component hardly improves the fit of the model at all (the p of .992 is very close to 1).

Data: arse
Models:
win.ri: win ~ 1 + (1 | Opponent)
win.md: win ~ (1 | Opponent) + MD
win.rs: win ~ MD + (MD | Opponent)
       Df    AIC    BIC  logLik deviance  Chisq Chi Df Pr(>Chisq)   
win.ri  2 302.41 309.24 -149.20   298.41                            
win.md  3 295.22 305.47 -144.61   289.22 9.1901      1   0.002433 **
win.rs  5 299.20 316.28 -144.60   289.20 0.0153      2   0.992396  

If we look at the best fitting model (win.md) by running:

summary(win.md)

We get this:

Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) [‘glmerMod’]
 Family: binomial  ( logit )
Formula: win ~ (1 | Opponent) + MD
   Data: arse

     AIC      BIC   logLik deviance df.resid 
   295.2    305.5   -144.6    289.2      222 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-1.6052 -0.7922  0.6046  0.7399  2.4786 

Random effects:
 Groups   Name        Variance Std.Dev.
 Opponent (Intercept) 0.4614   0.6792  
Number of obs: 225, groups:  Opponent, 17

Fixed effects:
            Estimate Std. Error z value Pr(>|z|)   
(Intercept)   0.5871     0.2512   2.337  0.01943 * 
MDMD         -1.2896     0.4427  -2.913  0.00358 **

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
     (Intr)
MDMD -0.225

The important thing is whether the variable MD (which represents MD vs. other referees) is a ‘significant predictor’. Looking at the p-value of .00358, we can assess the probability that we would get an estimate as large as -1.2896 if the null hypothesis were true (i.e. if MD had no effect at all on Arsenal’s results). Again, because this value is less than .05, scientists would typically conclude that MD is a significant predictor of whether Arsenal win or not, even factoring in dependency between results when the opponent is the same. The value of the estimate (-1.2896) tells us about the strength and direction of the relationship: because our predictor variable is MD = 1, other = 0, and our outcome is win = 1 and no win = 0, the negative value means that as MD increases (in other words, as we move from having ‘other’ as a referee to having MD as a referee) the outcome decreases (that is, the probability of winning decreases). So, this analysis shows that MD significantly decreases the probability of Arsenal winning. The model is more complex but the conclusion is the same as the previous, simpler, model: statistically speaking, MD is a wanker (if you happen to support Arsenal).
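To make that a little more concrete, you can convert those estimates from the log-odds scale into predicted probabilities of winning for a typical opponent (i.e. with the random intercept at zero). This is my own addition, using the estimates from the output above:

plogis(0.5871)             # predicted probability of an Arsenal win with other referees: about .64
plogis(0.5871 - 1.2896)    # predicted probability of an Arsenal win when MD referees: about .33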

A Bayesian Approach

The frequentist approach above isn’t universally accepted because these probability values that I keep mentioning don’t allow us to test directly the plausibility of the null and alternative hypotheses. A different approach is to use a Bayes factor. Bayes factors quantify the ratio of the probability of the data under the alternative hypothesis relative to the null. A value of 1, therefore, means that the observed data are equally probable under the null and alternative hypotheses, and values below 1 suggest that the data are more probable under the null hypothesis relative to the alternative. Bayes factors provide information about the probability of the data under the null hypothesis (relative to the alternative), whereas significance tests (as above) provide no evidence at all about the status of the null hypothesis. Bayes factors are pretty easy to do using Richard Morey’s excellent BayesFactor package in R (Morey & Rouder, 2014). There’s a lot more technical stuff to consider, which I won’t get into … for example, to do this properly we really need to set some prior belief in our hypothesis. However, Morey’s package uses some default priors that represent relatively open-minded beliefs (about MD in this case!).
To get a Bayes Factor for our original contingency table you’d run:
library(BayesFactor)
bayesMD = contingencyTableBF(MD.tab, sampleType = "indepMulti", fixedMargin = "cols")
bayesMD
and get this output:
Bayes factor analysis
--------------
[1] Non-indep. (a=1) : 34.04903 ±0%

Against denominator:
  Null, independence, a = 1 
Bayes factor type: BFcontingencyTable, independent multinomial


The resulting Bayes factor of 34.05 is a lot bigger than 1. The fact it is bigger than 1 suggests that the probability of the data under the alternative hypothesis is 34 times greater than the probability of the data under the null. In other words, we should shift our prior beliefs towards the idea that MD is a wanker by a factor of about 34. Yet again, statistically speaking, MD is a wanker (if you happen to support Arsenal).
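(If you want to check how sensitive that conclusion is to the default prior, contingencyTableBF() lets you change the prior concentration, which is the a = 1 shown in the output above. The argument name below is from my reading of the package documentation, so do check ?contingencyTableBF:)

contingencyTableBF(MD.tab, sampleType = "indepMulti", fixedMargin = "cols", priorConcentration = 2)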

Conclusion

The aim of this blog is to take a tongue in cheek look at how we can apply statistics to a real-world question that vexes a great many people. (OK, the specific Mike Dean question only vexes Arsenal fans, but football fans worldwide are vexed by referees on a regular basis). With some publicly available data you can test these hypotheses, and reach conclusions based on data rather than opinion. That’s the beauty of statistical models.
I’ll admit that to the casual reader this blog is going to be a lot more tedious than 7amkickoff’s. However, what I lack in readability I gain in applying some proper stats to the data. Whatever way you look at it, Mike Dean, statistically speaking, does predict poorer outcomes for Arsenal. There could be any number of other explanations for these data, but you’d think that the FA might want to have a look at that. Arsenal’s results under Mike Dean are significantly different to those under other referees, and these models show that this is more than simply the random fluctuations in results that happen with any sports team.

On a practical note, next time I’m at the Emirates and someone says ‘Oh shit, it’s Mike Dean, he hates us …’ I will be able to tell him or her confidently that, statistically speaking, they have a point … well, at least in the sense that Arsenal are significantly more likely to lose when he referees!

————-
*Incidentally, I’m very much of the opinion that people are perfectly entitled to support a different team to me, and that the football team you support isn’t a valid basis for any kind of negativity. If we all supported the same team it would be dull, and if everyone supported ‘my’ team, it’d be even harder for me to get tickets to games. So, you know, let’s be accepting of our differences and all that ….

Perhaps my oxytocin was low when I read this paper.

I’m writing a new textbook on introductory statistics, and I decided to include an example based on Paul Zak‘s intriguing work on the role of the hormone oxytocin in trust between strangers. In particular, I started looking at his 2005 paper in Nature. Yes, Nature, one of the top science journals. A journal with an impact factor of about 38, and a rejection rate probably fairly close to 100%. It’s a journal you’d expect to publish pretty decent stuff and to subject articles to fairly rigorous scrutiny.

Before I begin, I have no axe to grind here with anyone. I just want to comment on some stuff I found and leave everyone else to do what they like with that information. I have no doubt Dr. Zak has done a lot of other work on which he bases his theory, I’m not trying to undermine that work, I’m more wanting to make a point about the state of the top journals in science. All my code here is for R.

The data I was looking at relates to an experiment in which participants were asked to invest money in a trustee (a stranger). If they invested, then the total funds for the investor and trustee increased. If the trustee shares the proceeds then both players end up better off, but if the trustee does not repay the investors’ trust by splitting the fund then the trustee ends up better off and the investor worse off.  The question is, will investors trust the trustees to split the funds? If they do then they will invest, if not they won’t. Critically, one group of investors were exposed to exogenous oxytocin (N = 29), whereas a placebo group were not (N = 29). The oxytocin group invested significantly more than the placebo control group suggesting that oxytocin had causally influenced their levels of trust in the trustee. This is the sort of finding that the media loves.

The paper reports a few experiments, I want to focus on the data specifically related to trust shown in Figure 2a (reproduced below):

 The good thing about this figure is that it shows relative frequencies, which means that with the aid of a ruler and a spreadsheet we can reconstruct the data. Based on the figure the raw data is as follows:

placebo<-c(3, 3, 4, 4, 4, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9, 11, 11, 11, 12, 12, 12, 12, 12, 12)

oxy<-c(3, 4, 4, 6, 6, 7, 8, 8, 8, 8, 9, 9, 10, 10, 10, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12)

The problem being that this gives us only N = 26 in the placebo group. Bollocks!

Ok, well perhaps they reported the wrong N. Here’s their table of descriptives:

Let’s have a look at the descriptives I get (you’ll need the pastecs package):

> round(stat.desc(placebo, basic = F), 1)
      median         mean      SE.mean CI.mean.0.95          var      std.dev     coef.var 
         7.5          7.9          0.6          1.3         10.2          3.2          0.4 
> round(stat.desc(oxy, basic = F), 1)
      median         mean      SE.mean CI.mean.0.95          var      std.dev     coef.var 
        10.0          9.6          0.5          1.1          8.1          2.8          0.3 
For the oxytocin group the mean, median and SD match, but for the placebo group they don’t. Hmmm. So, there must be missing cases. Based on where the median is, I guessed that the three missing cases might be values of 10. In other words, Figure 2a in the paper ought to look like this:
So, the placebo group data now looks like this:

placebo<-c(3, 3, 4, 4, 4, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 10, 11, 11, 11, 12, 12, 12, 12, 12, 12)

Let’s look at the descriptives now:
> round(stat.desc(placebo, basic = F), 1)
      median         mean      SE.mean CI.mean.0.95          var      std.dev     coef.var 
         8.0          8.1          0.6          1.2          9.5          3.1          0.4 
They now match the table in the paper. So, this gives me confidence that I have probably correctly reconstructed the raw data despite Figure 2’s best efforts to throw me off the scent. Of course, I could be wrong. Let’s see if my figures match their analysis. They did a Mann-Whitney U test that yielded a p-value of 0.05887, which they halved to report a one-tailed p of .029. Notwithstanding the fact that you probably shouldn’t do one-tailed tests, ever, they conclude a significant between-group difference. I replicate their p-value, which again convinces me that my reconstructed data match their raw data.
Put the data into a data frame and do a Wilcoxon rank-sum test (which is equivalent to the Mann-Whitney test):
zak<-data.frame(gl(2, 29), c(placebo, oxy))
names(zak)<-c("Group", "MU")
zak$Group<-factor(zak$Group, labels = c("Placebo", "Oxytocin"))

> wilcox.test(MU~Group, zak)

Wilcoxon rank sum test with continuity correction

data:  MU by Group
W = 301, p-value = 0.05887
alternative hypothesis: true location shift is not equal to 0
So, all well and good. There are better ways to look at this than a Mann-Whitney test though, so let’s use some of Rand Wilcox’s robust tests. First off, a trimmed mean and bootstrap:
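(A practical note of my own: yuenbt() and pb2gen() used below are not in base R; they are Rand Wilcox's functions, which you need to source into R first, for example from his function library:)

source("http://dornsife.usc.edu/assets/sites/239/docs/Rallfun-v20")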

> yuenbt(placebo, oxy, tr=.2, alpha=.05, nboot = 2000, side = T)
$ci
[1] -3.84344384  0.05397015

$test.stat
[1] -1.773126

$p.value
[1] 0.056

$est.1
[1] 8.315789

$est.2
[1] 10.21053

$est.dif
[1] -1.894737

$n1
[1] 29

$n2
[1] 29
Gives us a similar p-value to the Mann-Whitney and a confidence interval for the difference that contains zero. However, the data are very skewed indeed:
Perhaps we’re better off comparing the medians of the two groups. (Medians are nowhere near as biased by skew as means):

> pb2gen(placebo, oxy, alpha=.05, nboot=2000, est=median)
[1] “Taking bootstrap samples. Please wait.”
$est.1
[1] 8

$est.2
[1] 10

$est.dif
[1] -2

$ci
[1] -6  1

$p.value
[1] 0.179

$sq.se
[1] 2.688902

$n1
[1] 29

$n2
[1] 29
The p-value is now a larger .179 (which even if you decide to halve it won’t get you below the, not very magic, .05 threshold for significance). So, the means might be significantly different depending on your view of one-tailed tests, but the medians certainly are not. Of course null hypothesis significance testing is bollocks anyway (see my book, or this blog) so maybe we should just look at the effect size. Or maybe we shouldn’t because the group means and SDs will be biased by the skew (SDs especially because you square the initial bias to compute it). Cohen’s d is based on means and SDs so if these statistics are biased then d will be too. I did it anyway though, just for a laugh:
> d<-(mean(oxy)-mean(placebo))/sd(placebo)
> d
[1] 0.4591715
I couldn’t be bothered to do the pooled estimate variant, but because there is a control group in this study it makes sense to use the control group SD to standardise the mean difference (because the SD in this condition shouldn’t be affected by the experimental manipulation). The result? About half a standard deviation difference (notwithstanding the bias). That’s not too shabby, although it’d be even more un-shabby if the estimate wasn’t biased. Finally (and notwithstanding various objections to using uninformed priors in Bayesian analysis), we can use the BayesFactor package to work out a Bayes factor:
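(A note of mine: ttestBF() lives in the BayesFactor package, so install and load that first:)

library(BayesFactor)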
> ttestBF(oxy, y = placebo)
Bayes factor analysis
--------------
[1] Alt., r=0.707 : 1.037284 ±0%

Against denominator:
  Null, mu1-mu2 = 0 
Bayes factor type: BFindepSample, JZS
The Bayes factor is 1.03, which means that there is almost exactly equal evidence for the null hypothesis as the alternative hypothesis. In other words, there is no support for the hypothesis that oxytocin affected trust.
So, Nature, one of the top journals in science, published a paper where the evidence for oxytocin affecting trust was debatable at best. Let’s not even get into the fact that this was based on N = 58. Like I said, I’m sure Dr. Zak has done many other studies and I have no interest in trying to undermine what he or his colleagues do; I just want to comment on this isolated piece of research and offer an alternative interpretation of the data. There’s lots of stuff I’ve done myself that, with the benefit of experience, I’d rather forget about, so, hey, I’m not standing on any kind of scientific high ground here. What I would say is that in this particular study, based on the things least affected by the shape of the data (medians) and an (admittedly uninformed-priors) Bayesian analysis, there is not really a lot of evidence that oxytocin affected trust.


Bat Cunnilingus and Spurious Results

Thanks to my brother I became aware of this article about bat cunnilingus today. My brother, knowing how entertained I had been by a bat fellating itself at Disney on my honeymoon, decided that this would be just the sort of article that I’d like. Knowing the kind of examples I put in my textbooks, I suspect his email is the first of many. Anyway, this article suggests that male bats engage in cunnilingus with females. Female bats, that is. They have videos to prove it. Male bats like cunnilingus so much that they don’t just use it as foreplay, but they get stuck in afterwards as well. The article has two main findings:
  1. The more that males (bats) engage in cunnilingus, the longer the duration of intercourse, r = .91 (see their Figure 1). They tested this with a Pearson r, based on 8 observations (although each observation was the average of several data points, I think – it’s not entirely clear how the authors came about these averages). They conclude that this behaviour may be due to sperm competition, in that males sort of ‘clean out’ the area before they procreate. They argue that the longer intercourse duration supports this argument because longer duration = greater chance of paternity.
  2.  The more that males engage in pre-intercourse cunnilingus, the less they engage in post-intercourse cunnilingus, r = -.79 (see their Figure 2). They again tested this with a Pearson r, based on 8 observations. They conclude “… the negative relationship between the duration of pre- and post-copulatory cunnilingus suggests the possibility of sperm competition. Thus, the male removes the sperm of competitors but apparently not its own – lick for longer before but less after copulation, if paternity uncertainty is high.”

There are several leaps of faith in their conclusions. Does longer duration necessarily imply greater chance of paternity? Also, if sperm competition is at the root of all of this then I’m not sure post-coital cunnilingus is really ever an adaptive strategy: consuming your own sperm hardly seems like the best method of impregnating your mate. Yes, I’m still talking about bats here; what you choose to do in your own free time is none of my business.

Enough about Cunnilingus, what about Statistics?

Anyway, I want to make a statistical point, not an evolutionary one. It struck me when looking at this article that finding 2 is probably spurious. Figure 2 looks very much like two outliers are making a relationship out of nothing. With only 8 observations they are likely to have a large impact. With 8 observations it is, of course, hard to say what’s an outlier and what isn’t; however, the figure was like a bat tongue to my brain: my juices were flowing. Fortunately, with so few observations we can approximate the data from the two figures. I emphasise that these are quick estimates of values based on the graphs and a ruler. We have three variables estimated from the paper: pre (pre-intercourse cunnilingus), post (post-intercourse cunnilingus), duration (duration of intercourse).
We’re going to use R; if you don’t know how to use it then some imbecile called Andy Field has written a book on it that you could look at. We can input these approximate scores as:
pre<-c(46, 50, 51, 52, 53, 53.5, 55, 59)
post<-c(162, 140, 150, 140, 142, 138, 152, 122)
duration<-c(14.4, 14.4, 14.9, 15.5, 15.4, 15.2, 15.3, 16.4)
bat<-data.frame(pre, post, duration)
Ok, so that’s the three variables entered, and I’ve made a data frame from them called bat.
Let’s load some packages that will be useful (if you don’t have them installed then install them) and source Rand Wilcox’s very helpful robust functions:
library(ggplot2)
library(Hmisc)
library(boot)
source("http://dornsife.usc.edu/assets/sites/239/docs/Rallfun-v20")
Let’s draw some rough replicas of Figures 1 and 2 from the paper:
qplot(pre, post, data = bat, ylim = c(85, 200), xlim = c(44, 60)) + stat_smooth(method = "lm")
qplot(pre, duration, data = bat, ylim = c(13, 18), xlim = c(44, 60)) + stat_smooth(method = "lm")

Figure 1: Quick replica of Maruthupandian & Marimuthu (2013) Figure 1
Figure 2: Quick replica of Maruthupandian & Marimuthu (2013) Figure 2
As you can see the figures – more or less – replicate their data. OK, let’s replicate their correlation coefficient estimates by executing:
rcorr(as.matrix(bat), type = "pearson")
gives us
           pre  post duration
pre       1.00 -0.78     0.91
post     -0.78  1.00    -0.75
duration  0.91 -0.75     1.00
n= 8

P
         pre    post   duration
pre             0.0225 0.0018 
post     0.0225        0.0308 
duration 0.0018 0.0308
So, we replicate the pre-intercourse cunnilingus (Pre) – duration correlation of .91 exactly (p = .0018), and we more or less get the right value for the pre-post correlation: we get r = -.78 (p = .0225) instead of -.79, but we’re within rounding error. So, it looks like our estimates of the raw data from the figures in the article aren’t far off the real values at all. Just goes to show how useful graphs in papers can be for extracting the raw data.
If I’m right and the data in Figure 2 are affected by the first and last case, we’d expect something different to happen if we calculate the correlation based on ranked scores (i.e. Spearman’s rho). Let’s try it:
rcorr(as.matrix(bat), type = "spearman")
           pre  post duration
pre       1.00 -0.50     0.78
post     -0.50  1.00    -0.51
duration  0.78 -0.51     1.00
n= 8
P
         pre    post   duration
pre             0.2039 0.0229 
post     0.2039        0.1945 
duration 0.0229 0.1945 
The Pre-duration correlation has dropped from .91 to .78 (p = .0229) but it’s still very strong. However, the pre-post correlation has changed fairly dramatically from -.79 to r = -.50 (p = .204) and is now not significant (although I wouldn’t want to read too much into that with an n = 8).
What about if we try a robust correlation such as the percentile bend correlation described by Wilcox (2012)? Having sourced Wilcox’s functions we can do this easily enough by executing:
pbcor(bat$pre, bat$post)
pbcor(bat$pre, bat$duration)
For the pre post correlation we get:
$cor
[1] -0.3686576
$test
[1] -0.9714467
$siglevel
[1] 0.368843
$n
[1] 8
It has changed even more dramatically from -.79 to r = -.37 (p = .369) and is now not significant (again, I wouldn’t want to read too much into that with an n = 8, but it is now a much smaller effect than they’re reporting in the paper).
The pre-duration correlation fares much better:
$cor
[1] 0.852195
$test
[1] 3.989575
$siglevel
[1] 0.007204076
$n
[1] 8
This is still very strong and significant, r = .85 (p = .007). We could also try to estimate the population value of these two relationships by computing a robust confidence interval using bootstrapping. We’ll stick with Pearson r seeing as that’s the estimate that they used in the paper but you could try this with the other measures if you like:
bootr<-function(data, i){
cor(data[i, 1],data[i, 2])
}
bootprepost<-boot(bat[1:2], bootr, 2000)
boot.ci(bootprepost)
bootpreduration<-boot(bat[c(1, 3)], bootr, 2000)
boot.ci(bootpreduration)
If we look at the BCa intervals, we get this for the pre-post correlation:
Level     Percentile            BCa         
95%   (-0.9903,  0.5832 )   (-0.9909,  0.5670 )
and this for the pre-duration correlation:
Level     Percentile            BCa         
95%   ( 0.6042,  0.9843 )   ( 0.5022,  0.9773 )
In essence then, the true relationship between pre- and post-intercourse cunnilingus duration lies somewhere between about –1 and 0.6. In other words, it could be anything really, including zero. For the relationship between pre-intercourse bat cunnilingus and intercourse duration, the true relationship is somewhere between r = .5 and .98. In other words, it’s likely to be – at the very worst – a strong and positive association.

Conclusions

What does this tell us? Well, I’m primarily interested in using this as a demonstration of why it’s important to look at your data and to employ robust tests. I’m not trying to knock the research. Of course, in the process I am, inevitably, sort of knocking it. So, I think there are three messages here:
  1. Their first finding that the more that males (bats) engage in cunnilingus, the longer the duration of intercourse is very solid. The population value could be less than the estimate they report, but it could very well be that large too. At the very worst it is still a strong association.
  2.  The less good news is that finding 2 (that the more that males engage in pre-intercourse cunnilingus, the less they engage in post-intercourse cunnilingus) is probably wrong. The best case scenario is that the finding is real but the authors have overestimated it. The worst case scenario is that the population effect is in the opposite direction to what the authors report, and there is, of course, a middle ground: that there is simply no effect at all in the population (or one so small as to be not remotely important).
  3. It’s also worth pointing out that the two extreme cases that have created this spurious result may in themselves be highly interesting observations. There may be important reasons why individual bats engage in unusually long or short bouts of cunnilingus (either pre or post intercourse) and finding out what factors affect this will be an interesting thing that the authors should follow up.

References

Wilcox, R. R. (2012). Introduction to robust estimation and hypothesis testing (3rd ed.). Burlington, MA: Elsevier.

SPSS is not dead

This blog was published recently showing that the use of R continues to grow in academia. One of the graphs (Figure 1) showed citations (using google scholar) of different statistical packages in academic papers (to which I have added annotations).
At face value, this graph implies a very rapid decline in SPSS use since 2005. I sent a tongue in cheek tweet about this graph, and this perhaps got interpreted as meaning that I thought SPSS use was on the decline. So, I thought I’d write this blog. The thing about this graph is that it deals with citations in academic papers. The majority of people do not cite the package they use to analyse their data, so this might just reflect a decline in people stating that they used SPSS in papers. Also, it might be that users of software such as R are becoming more inclined to cite the package to encourage others to use it (stats package preference does, for some people, mimic the kind of religious fervour that causes untold war and misery; most packages have their pros and cons and some people should get a grip). Also, looking at my annotations on Figure 1 you can see that the decline in SPSS is in no way matched by an upsurge in the use of R/Stata/Systat. This gap implies some mysterious ghost package that everyone is suddenly using but is not included on this graph. Or perhaps people are just ditching SPSS for qualitative analysis or doing it by hand :-)
If you really want to look at the decline/increase of package use then there are other metrics you could use. This article details lots of them. For example you could look at how much people talk about packages online (Figure 2).
Figure 2: online talk of stats packages (Image from http://r4stats.com/popularity)
Based on this R seems very popular and SPSS less so. However, the trend for SPSS is completely stable between 2005-2010 (the period of decline in the Figure 1). Discussion of R is on the increase though. Again though you can’t really compare R and SPSS here because R is more difficult to use than SPSS (I doubt that this is simply my opinion, I reckon you could demonstrate empirically that the average user prefers the SPSS GUI to R’s command interface if you could be bothered). People are, therefore, more likely to seek help on discussion groups for R than they are for SPSS. It’s perhaps not an index of popularity so much as usability. 
There are various other interesting metrics discussed in the aforementioned article. Perhaps the closest we can get to an answer to package popularity (but not decline in use) is survey data on what tools people use for data mining. Figure 3 shows that people most frequently report R, SPSS and SAS. Of course this is a snapshot and doesn’t tell us about usage change. However, it shows that SPSS is still up there. I’m not sure what types of people were surveyed for this figure, but I suspect it was professional statisticians/business analysts rather than academics (who would probably not describe their main purpose as data mining). This would also explain the popularity of R, which is very popular amongst people who crunch numbers for a living.
Figure 3: Data mining/analytic tools reported in use on Rexer Analytics survey during 2009 (from http://r4stats.com/popularity).
To look at the decline or not of SPSS in academia what we really need is data about campus licenses over the past few years. There were mumblings about Universities switching from SPSS after IBM took over and botched the campus agreement, but I’m not sure how real those rumours were. In any case, the teething problems from the IBM take over seem to be over (at least most people have stopped moaning about them). Of course, we can’t get data on campus licenses because it’s sensitive data that IBM would be silly to put in the public domain. I strongly suspect campus agreements have not declined though. If they have, IBM will be doing all that they can (and they are an enormously successful company) to restore them because campus agreements are a huge part of SPSS’s business.
Also, I doubt campus agreements have declined because they will stop for one of two main reasons: (1) SPSS isn’t used by anyone anymore, or (2) the cost becomes prohibitive. These two reasons are related, obviously – the point at which they stop the agreement will be a function of cost and campus usage. In terms of campus usage, if you grew up using SPSS as an undergraduate or postgraduate, you’re unlikely to switch software later in your academic career (unless you’re a geek like me who ‘enjoys’ learning R). So, I suspect the demand is still there. In terms of cost, as I said, I doubt IBM are daft enough to price themselves out of the market.
So, despite my tongue in cheek tweet, I very much doubt that there is a mass exodus from SPSS. Why would there be? Although some people tend to be a bit snooty about SPSS, it’s a very good bit of software: A lot of what it does, it does very well. There are things I don’t like about it (graphs, lack of robust methods, their insistence on moving towards automated analysis), but there’s things I don’t like about R too. Nothing is perfect, but SPSS’s user-friendly interface allows thousands of people who are terrified of stats to get into it and analyse data and, in my book, that’s a very good thing.

Factor Analysis for Likert/Ordinal/Non-normal Data

My friend Jeremy Miles sent me this article by Basto and Periera (2012) this morning with the subject line ‘this is kind of cool’. Last time I saw Jeremy, my wife and I gatecrashed his house in LA for 10 days to discuss writing the R book that’s about to come out. During that stay we talked about lots of things, none of which had anything to do with statistics, or R for that matter. It’s strange then that with the comforting blanket of the Atlantic ocean between us, we only ever talk about statistics, or rant at each other about statistics, or R, or SPSS, or each other.
Nevertheless, I’m always excited to see a message from Jeremy because it’s usually interesting, frequently funny, and only occasionally insulting about me. Anyway, J was right, this article was actually kind of cool (in a geeky stats sort of way). The reason that the article is kind of cool is because it describes an SPSS interface for doing various cool factor analysis (FA) or principal components analysis (PCA) things in SPSS, such as analysis of correlation matrices other than those containing Pearson’s r, and parallel analysis/MAP. It pretty much addresses two questions that I get asked a lot:
  1. My data are Likert/not normal, can I do a PCA/FA on them?
  2. I’ve heard about Velicer’s minimum average partial (MAP) criterion and parallel analysis; can you do them in SPSS?
PCA/FA is not something I use, and the sum total of my knowledge is in my SPSS/SAS/R book. Some of that isn’t even my knowledge, it’s Jeremy’s, because he likes to read my PCA chapter and get annoyed about how I’ve written it. The two questions are briefly answered in the book, sort of.
The answer to question 1 is to apply the PCA to a matrix of polychoric correlations (for Likert/ordinal/skewed data) or tetrachoric correlations (for dichotomous data) rather than the matrix of Pearson’s r. This is mentioned so briefly that you might miss it, on p. 650 of the SPSS book (3rd ed) and p. 772 (in the proofs at least) of the R book.
The answer to question 2 is in Jane Superbrain 17.2 in the books, in which I very briefly explain parallel analysis and point to some syntax to do it that someone else wrote, and I don’t talk about MAP at all.
I cleverly don’t elaborate on how you would compute polychoric correlations, or indeed tetrachoric ones, and certainly don’t show anyone anything about MAP. In part this is because the books are already very large, but in the case of the SPSS book it’s because SPSS won’t let you do PCA on any correlation matrix other than one containing Pearson’s r, and MAP/parallel analysis have, let’s just say, been overlooked in the software. Until now, that is.
Basto and Periera (2012) have written an interface for doing PCA on correlation matrices containing things other than Pearson’s r, and you can do MAP, parallel analysis and a host of other things. I recommend the article highly if PCA is your kind of thing.
However, the interesting thing is that underneath Basto and Periera’s interface SPSS isn’t doing anything – all of the stats are computed using R. In the third edition of the SPSS book I excitedly mentioned the R plugin for SPSS a few times. I was mainly excited because at the time I’d never used it, and I stupidly thought it was some highly intuitive interface that enabled you to access R from within SPSS without knowing anything about R. My excitement dwindled when I actually used it. It basically involves installing the plugin which may or may not work. Even if you get it working you simply type:
BEGIN PROGRAM R.
END PROGRAM.
and stick a bunch of R code in between. It seemed to me that I might as well just use R and save myself the pain of trying to locate the plugin and actually get it working (it may be better now – I haven’t tried it recently). Basto and Periera’s interface puts a user-friendly dialog box around a bunch of R commands.
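For completeness, a full plugin block might look something like this. It’s a minimal sketch from memory, so treat the details as assumptions; spssdata.GetDataFromSPSS() is, as far as I recall, the plugin function that pulls the active SPSS dataset into an R data frame.
BEGIN PROGRAM R.
# Pull the active SPSS dataset into an R data frame (assumed plugin function)
dat <- spssdata.GetDataFromSPSS()
# ... any R code you like goes here, e.g. a quick summary of the data
print(summary(dat))
END PROGRAM.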
I’m absolutely not knocking Basto and Periera’s interface – I think it will be incredibly useful to a huge number of people who don’t want to use R commands, and very neatly provides a considerable amount of extra functionality to SPSS that would otherwise be unavailable. I’m merely making the point that it’s shame that having installed the interface, SPSS will get the credit for R’s work.
Admittedly it will be handy to have your data in SPSS, but you can do this in R with a line of code:
library(foreign)   # read.spss() comes from the foreign package
data<-read.spss(file.choose(), to.data.frame = TRUE)
This opens a dialog box for you to select an SPSS file, and then puts it in a data frame that I have unimaginatively called ‘data’. Let’s imagine we opened Example1.sav from Basto and Periera’s paper. These data are now installed in an object called data.
Installing the SPSS interface involves installing R, installing the R plugin for SPSS, then installing the widget itself through the utilities menu. Not in itself hard, but by the time you have done all of this I reckon you could type and execute this command in R:
library(polycor)   # hetcor() comes from the polycor package
rMatrix<-hetcor(data)$correlations
This creates a matrix (called rMatrix) containing polychoric correlations of the variables in the data frame (which, remember, was called data).
You probably also have time to run these commands:
library(nFactors)   # nScree() and plotnScree() come from the nFactors package
parallelAnalysis<-nScree(rMatrix)
parallelAnalysis
That’s your parallel analysis done on the polychoric correlation matrix (first command) and displayed on the screen (second command). The results will mimic the values in Figure 4 in Basto and Periera. If you want to generate Figure 3 from their paper as well then execute:
plotnScree(parallelAnalysis)
It’s not entirely implausible that the R plugin for SPSS will still be downloading or installing at this point, so to relieve the tedium you could execute this command:
library(psych)   # VSS(), principal() and print.psych() come from the psych package
mapResults<-VSS(rMatrix, n.obs = 590, fm = "pc")
mapResults
That’s the MAP analysis done on the polychoric correlation matrix using the VSS() function in R. n.obs is just the number of observations in the data frame, and fm = "pc" tells it to do PCA rather than FA. The results will mimic the values in Figures 5 and 6 of Basto and Periera.
The R plugin isn’t working so you’re frantically googling for a solution. While you do that, a small gerbil called Horace marches through the cat flap that you didn’t realise you had, jumps on the table and types this into the R console:
PCA<-principal(rMatrix, nfactors = 4, rotate = "varimax")
print.psych(PCA, cut = 0.3, sort = TRUE)
This will create an object called PCA, which contains the results of a PCA on your polychoric correlation matrix, extracting 4 factors and rotating using varimax (as Basto and Periera do for Example 1). You’ll get basically the same results as their Figure 13.
You probably also have time to make some tea.
Like I said, my intention isn’t to diss Basto and Periera’s interface. I think it’s great, incredibly useful and opens up the doors to lots of useful things that I haven’t mentioned. My intention is instead to show you that you can do some really complex things in R with little effort (apart from the 6 month learning curve of how to use it obviously). Even a gerbil called Horace can do it. Parallel analysis was done using 33 characters: a line of code. PCA on polychoric correlations sounds like the access code to a secure unit in a mental institution, but it’s three short lines of code. Now that is pretty cool.


PS The R code here needs the following packages installed: psych, nFactors, polycor (for hetcor()) and foreign.

Bias in End of Year Album Polls?

So, in rolls 2012 and out rolls another year. I like new year: it’s a time to fill yourself with optimism about the exciting things that you will do. Will this be the year that I write something more interesting than a statistics book, for example? It’s also a time of year to reflect upon all of the things that you thought you’d do last year but didn’t. That’s a bit depressing, but luckily 2011 was yesterday and today is a new year and a new set of hopes that have yet to fail to come to fruition.
It’s also the time of year that magazines publish their end-of-year polls. Two magazines that I regularly read are Metal Hammer and Classic Rock (because, in case it isn’t obvious from my metal-themed website and podcasts, I’m pretty fond of heavy metal and classic rock). The ‘album of the year’ polls in these magazines are an end-of-year treat for me: it’s an opportunity to see what highly rated albums I overlooked, to wonder at how an album that I hate has turned up in the top 5, or to pull a bemused expression at how my favourite album hasn’t made it into the top 20. At my age, it’s good to get annoyed about pointless things.
Anyway, for years I have had the feeling that these end of year polls are biased. I don’t mean in any nefarious way, but simply that reviewers who contribute to these polls tend to rate recently released albums more highly than ones released earlier in the year. Primacy and recency effects are well established in cognitive psychology: if you ask people to remember a list of things, they find it easier to recall items at the start or end of the list. Music journalists are (mostly) human so it’s only reasonable that reviewers will succumb to these effects, isn’t it?
I decided to actually take some time off this winter Solstice, and what happens when you take time off? In my case, you get bored and start to wonder whether you can test your theory that end of year polls are biased. The next thing you know, you’re creating a spreadsheet with Metal Hammer and Classic Rock’s end of year polls in it, then you’re on Wikipedia looking up other useful information about these albums, then, when you should be watching the annual re-run of Mary Poppins you find that you’re getting R to do a nonparametric bootstrap of a mediation analysis. The festive period has never been so much fun.
Anyway, I took the lists of top 20 albums from both Metal Hammer and Classic Rock magazine. I noted down each album’s position in the poll (1 = best album of the year, 20 = 20th best album of the year), and I also found out what month each album was released. From this information I could calculate how many months before the poll the album came out (0 = the album came out the same month as the poll, 12 = the album came out 12 months before the poll). I called this variable Time.since.release.
My theory implies that an album’s position in the end of year list (position) will be predicted from how long before the poll the album was released. A recency effect would mean that albums released close to the end of the year (i.e. a low score on Time.since.release) will be higher up the end of year poll (remember that the lower the score, the higher up the poll the album is). So, we predict a positive relationship between position and Time.since.release.
Let’s look at the scatterplot:
Both magazines show a positive relationship: albums higher up the poll (i.e. low score on position) tend to have been released more recently (i.e., low score on the number of months ago that the album was released). This effect is much more pronounced though for Metal Hammer than for Classic Rock.
To quantify the relationship between position in the poll and time since the album was released we can look at a simple correlation coefficient. Our position data are a rank, not interval/ratio, so it makes sense to use Spearman’s rho, and we have a fairly small sample so it makes sense to bootstrap the confidence interval. For Metal Hammer we get (note that because of the bootstrapping you’ll get different results if you try to replicate this) rho = .428 (bias = –0.02, SE = 0.19) with a 95% bias corrected and accelerated confidence interval that does not cross zero (BCa 95% = .0092, .7466). The confidence interval is pretty wide, but tells us that the correlation in the population is unlikely to be zero (in other words, the true relationship between position in the poll and time since release is likely to be more than no relationship at all). Also, because rho is an effect size we can interpret its size directly, and .428 is a fairly large effect. In other words, Metal Hammer reviewers tend to rank recent albums higher than albums released a long time before the poll. They show a relatively large recency effect.
What about Classic Rock? rho = .038 (bias = –0.001, SE = 0.236) with a BCa 95% CI = –.3921, .5129. The confidence interval is again wide, but this time crosses zero (in fact, zero is more or less in the middle of it). This CI tells us that the true relationship between position in the poll and time since release could be zero, in other words no relationship at all. We can again interpret rho directly, and .038 is a very small effect (it’s close to zero). In other words, Classic Rock reviewers do not tend to rank recent albums higher than albums released a long time before the poll. They show virtually no recency effect. This difference is interesting (especially given that there is overlap between contributors to the two magazines!).
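In case you want to play along at home, the correlation part boils down to something like this in R. It’s a minimal sketch using the boot package; the data frame hammer and its variables Position and Time.since.release are made-up names for illustration rather than my actual file.
library(boot)
# Statistic: Spearman's rho computed on a bootstrap resample of the rows
bootSpearman <- function(data, indices){
  cor(data$Position[indices], data$Time.since.release[indices],
      method = "spearman")
}
# 2000 bootstrap resamples of the (hypothetical) poll data frame
bootRho <- boot(hammer, bootSpearman, R = 2000)
bootRho                          # rho, bias and bootstrap standard error
boot.ci(bootRho, type = "bca")   # bias corrected and accelerated 95% CI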
It then occurred to me, because I spend far too much time thinking about this sort of thing, that perhaps it’s simply the case that better albums come out later in the year and this explains why Metal Hammer reviewers rate them higher. ‘Better’ is far too subjective a variable to quantify; however, it might be reasonable to assume that bands that have been around for a long time will produce good material (not always true, as Metallica’s recent turd floating in the loo-loo demonstrates). Indeed, Metal Hammer’s top 10 contained Megadeth, Anthrax, Opeth, and Machine Head: all bands who have been around for 15-20 years or more. So, in the interests of fairness to Metal Hammer reviewers, let’s see whether ‘experience’ mediates the relationship between position in the poll and time since the album’s release. Another 30 minutes on the internet and I had collated the number of studio albums produced by each band in the top 20. The number of studio albums seems like a reasonable proxy for experience (and better than years as a band, because some bands produce an album every 8 years and others every 2). So, I did a mediation analysis with a nonparametric bootstrap (thanks to the mediation package in R). The indirect effect was 0.069 with a 95% CI = –0.275, 0.537. The proportion of the effect explained by mediation was about 1%. In other words, the recency bias in the Metal Hammer end of year poll could not be explained by the number of albums that bands in the poll had produced in the past (i.e. experience). Basically, despite my best efforts to give them the benefit of the doubt, Metal Hammer critics are actually biased towards giving high ratings to more recently released albums.
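For the curious, the gist of that mediation analysis in R is below. Again this is a sketch with made-up names (a hypothetical data frame hammer with columns Position, Time.since.release and Albums); the mediation package’s mediate() function takes care of the nonparametric bootstrap.
library(mediation)
# 'hammer' is the hypothetical data frame described above
# Model predicting the mediator (experience, i.e. number of studio albums)
mediatorModel <- lm(Albums ~ Time.since.release, data = hammer)
# Model predicting poll position from time since release and experience
outcomeModel <- lm(Position ~ Time.since.release + Albums, data = hammer)
# Nonparametric bootstrap of the indirect effect
medFit <- mediate(mediatorModel, outcomeModel,
                  treat = "Time.since.release", mediator = "Albums",
                  boot = TRUE, sims = 5000)
summary(medFit)   # indirect effect, its CI and the proportion mediated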
These results might imply many things:
  • Classic Rock reviewers are more objective when creating their end of year polls (because they over-ride the natural tendency to remember more recent things, like albums).
  • Classic Rock reviewers are not human because they don’t show the recency effects that you expect to find in normal human beings. (An interesting possibility, but we need more data to test it …)
  • Metal Hammer should have a ‘let’s listen to albums released before June’ party each November to remind their critics of albums released earlier in the year. (Can I come please and have some free beer?)
  • Metal Hammer should inversely weight reviewers’ ranks by the time since release so that albums released earlier in the year get weighted more heavily than recently released albums. (Obviously, I’m offering my services here …)
  • I should get a life.

Ok, that’s it. I’m sure this is of no interest to anyone other than me, but it does at least show how you can use statistics to answer pointless questions. A bit like what most of us scientists do for a large portion of our careers. Oh, and if I don’t get a ‘Spirit of the Hammer’ award for 15 years’ worth of infecting students of statistics with my heavy metal musings then there is no justice in the world. British Psychological Society awards and National Teaching Fellowships are all very well, but I need a Spirit of the Hammer on my CV (or at least a Defender of the Faith).

Have a rockin’ 2012
andy
P.P.S. My top 11 (just to be different) albums of 2011 (the exact order is a bit rushed):
  1. Opeth: Heritage
  2. Wolves in the Throne Room: Celestial Lineage
  3. Anthrax: Worship Music
  4. Von Hertzen Brothers: Stars Aligned
  5. Liturgy: Split LP (although the Oval side is shit)
  6. Ancient Ascendent: The Grim Awakening
  7. Graveyard: Hisingen Blues
  8. Foo Fighters: Wasting Light
  9. Status Quo: Quid Pro Quo
  10. Manowar: Battle Hymns MMXI
  11. Mastodon: The Hunter

Meta-Analysis and SEM

Someone recently asked me about how to incorporate results from Structural Equation Models into a meta-analysis. The short answer is ‘with great difficulty’, although that’s not terribly helpful.
One approach is to do the meta-analysis at the correlation level, which is good if the SEM studies report the zero-order correlations (which hopefully they do). That is, you use meta-analysis to estimate pooled values of the correlations between variables. Imagine you had three variables (anxiety, parenting, age) and a bunch of papers that report relationships between some of these variables. Let’s say you have 53 papers in total, and 52 report the correlation between age and anxiety: you use these 52 studies to get a pooled value of r for that relationship. If only 9 of the studies report the correlation between another pair of variables (say, anxiety and parenting), then you use only those 9 to get a pooled r for this association, and so on. You stick these pooled values into a correlation matrix and then do an SEM on it to test whatever model you want to test. So, the meta-analysis part of the analysis informs the correlation matrix on which the resulting SEM is based; in effect you’re running an SEM on data that others have collected (and you have pooled together). This is called meta-analytic structural equation modeling (MASEM; Cheung & Chan, 2005, 2009). You can implement it with Mike Cheung’s metaSEM package in R, and Mike Cheung has some great resources here.
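To give a flavour of the mechanics, the two-stage approach in the metaSEM package looks roughly like this, as far as I understand the package. This is only a sketch: the two made-up ‘studies’, the sample sizes and the tiny model (anxiety predicted by age and parenting) are invented for illustration, and with so few studies a random-effects model will grumble; Mike Cheung’s documentation has proper worked examples.
library(metaSEM)
# Two made-up studies reporting correlations among age, parenting and anxiety
# (NA = that correlation wasn't reported in the study)
vars <- c("Age", "Parenting", "Anxiety")
study1 <- matrix(c(  1, .30, .25,
                   .30,   1, .40,
                   .25, .40,   1), 3, 3, dimnames = list(vars, vars))
study2 <- matrix(c(  1,  NA, .20,
                    NA,   1,  NA,
                   .20,  NA,   1), 3, 3, dimnames = list(vars, vars))
myCors <- list(study1, study2)
myNs <- c(120, 85)
# Stage 1: pool the correlation matrices (random-effects model)
stage1 <- tssem1(myCors, myNs, method = "REM", RE.type = "Diag")
summary(stage1)
# Stage 2: fit the SEM to the pooled matrix. A holds the regression paths and
# S the variances/covariances (RAM notation); strings like "0.1*b1" are free
# parameters with start values, plain numbers are fixed.
A <- as.mxMatrix(matrix(c(0,         0,         0,
                          0,         0,         0,
                          "0.1*b1",  "0.1*b2",  0),
                        3, 3, byrow = TRUE, dimnames = list(vars, vars)),
                 name = "A")
S <- as.mxMatrix(matrix(c(1,          "0.1*c12", 0,
                          "0.1*c12",  1,         0,
                          0,          0,         "0.5*e3"),
                        3, 3, byrow = TRUE, dimnames = list(vars, vars)),
                 name = "S")
stage2 <- tssem2(stage1, Amatrix = A, Smatrix = S, diag.constraints = FALSE)
summary(stage2)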

References

  • Cheung, M. W. L., & Chan, W. (2005). Meta-analytic structural equation modeling: A two-stage approach. Psychological Methods, 10, 40-64.
  • Cheung, M. W. L., & Chan, W. (2009). A two-stage approach to synthesizing covariance matrices in meta-analytic structural equation modeling. Structural Equation Modeling, 16, 28-53.