There’s been a recent spat between the heavy metal bands Sepultura and Soulfly. For those unaware of the history, 50% of Sepultura used to be the Cavalera brothers (Max and Igor) until Max (the frontman and guitarist) left the band in 1996 and formed Soulfly. The full story is here. There’s a lot of bad blood even 20 years later, and according to a recent story on MetalSucks, Soulfly’s manager (and Max’s wife) Gloria Cavalera recently posted a fairly pointed message on her Facebook page. This got picked up by my favourite podcast (the MetalSucks Podcast). What has this got to do with me, or statistics? Well, one of the presenters of the MetalSucks Podcast asked me this over Twitter:
After a very brief comment about needing to operationalise ‘better’, I decided that, rather than reading book proofs, I’d do a tongue-in-cheek analysis of what is better: Max or no Max. Here it is.
First we need to operationalise ‘better’. I have done this by accepting subjective opinion as determining ‘better’, and specifically ratings of albums on amazon.com (although I am English, MetalSucks is US based, so I thought I’d pander to them and take ratings from the US site). Our question then becomes ‘is Max or no Max rated higher by the sorts of people who leave reviews on Amazon?’. We have operationalised our question and turned it into a scientific statement, which we can test with data. [There are all sorts of problems with using these ratings, not least of which is that they tend to be positively biased, that they likely reflect a certain type of person who reviews, and that reviews often reflect things other than the music (e.g., arrived quickly 5*), and so on … but fuck it, this is not serious science, just a bit of a laugh.]
Post Sepultura: Max or No Max
Figure 1: Histograms of all ratings for Soulfly and (Non-Max Era) Sepultura
Figure 2: Mean ratings of Soulfly and (Non-Max Era) Sepultura by year of album release
There are a lot of ways you could look at these data. The first thing to note is the skew. That messes up estimates of confidence intervals and significance tests … but our sample is likely big enough that we can rely on the central limit theorem to do its magic and assume that the sampling distribution is normal (beautifully explained in my new book!).
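If you want a quick sense of why that hand-wave is reasonable, here’s a small simulation sketch (toy data, not the album ratings): even when the population is heavily skewed, the sampling distribution of the mean behaves itself at decent sample sizes.

```r
# Toy sketch (not the album data): means of samples of n = 100 drawn from
# a very skewed population (exponential, mean 1, sd 1).
set.seed(666)
sample_means <- replicate(5000, mean(rexp(100)))

mean(sample_means)  # close to the population mean of 1
sd(sample_means)    # close to the theoretical SE, 1/sqrt(100) = 0.1
```

A histogram of `sample_means` looks pretty normal despite the skewed population, which is exactly the magic being relied on above.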
I’m going to fit three models. The first is an intercept-only model (a baseline with no predictors); the second allows intercepts to vary across albums (which allows ratings to vary by album, which seems sensible because albums will vary in quality); the third predicts ratings from the band (Sepultura vs. Soulfly).
library(nlme)  # for gls() and lme()
maxModel2a <- gls(Rating ~ 1, data = sepvssoul, method = "ML")                    # intercept only
maxModel2b <- lme(Rating ~ 1, random = ~1|Album, data = sepvssoul, method = "ML") # + random intercepts over albums
maxModel2c <- update(maxModel2b, .~. + Band)                                      # + fixed effect of band
anova(maxModel2a, maxModel2b, maxModel2c)
By comparing models we can see:
Model df AIC BIC logLik Test L.Ratio p-value
maxModel2a 1 2 2889.412 2899.013 -1442.706
maxModel2b 2 3 2853.747 2868.148 -1423.873 1 vs 2 37.66536 <.0001
maxModel2c 3 4 2854.309 2873.510 -1423.155 2 vs 3 1.43806 0.2305
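(As an aside, the L.Ratio column is nothing mysterious: it’s twice the difference between the log-likelihoods of adjacent models, and the p-value comes from a chi-square distribution with df equal to the difference in the models’ df. You can reproduce it from the table above:)

```r
# Log-likelihoods copied from the anova() table above
ll_a <- -1442.706  # intercept-only model
ll_b <- -1423.873  # + random intercepts over albums
ll_c <- -1423.155  # + fixed effect of band

lr_ab <- 2 * (ll_b - ll_a)  # ~37.67, the '1 vs 2' L.Ratio
lr_bc <- 2 * (ll_c - ll_b)  # ~1.44, the '2 vs 3' L.Ratio (rounding aside)

pchisq(lr_ab, df = 1, lower.tail = FALSE)  # < .0001
pchisq(lr_bc, df = 1, lower.tail = FALSE)  # ~ .23
```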
That album ratings varied significantly (not surprising; p < .0001), but that band did not significantly predict ratings overall (p = .231). If you like, you can look at the summary of the model by executing:
summary(maxModel2c)
Which gives us this output:
Linear mixed-effects model fit by maximum likelihood
Data: sepvssoul
AIC BIC logLik
2854.309 2873.51 -1423.154
Random effects:
Formula: ~1 | Album
(Intercept) Residual
StdDev: 0.2705842 1.166457
Fixed effects: Rating ~ Band
Value Std.Error DF t-value p-value
(Intercept) 4.078740 0.1311196 882 31.107015 0.0000
BandSoulfly 0.204237 0.1650047 14 1.237765 0.2362
Correlation:
(Intr)
BandSoulfly -0.795
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-2.9717684 -0.3367275 0.4998698 0.6230082 1.2686186
Number of Observations: 898
Number of Groups: 16
The difference in ratings between Sepultura and Soulfly was b = 0.20. Ratings for Soulfly were higher, but not significantly so (if we allow ratings to vary over albums; if you take that random effect out you’ll get a very different picture, because that variability will go into the fixed effect of ‘band’).
Max or No Max
maxModela <- gls(Rating ~ 1, data = maxvsnomax, method = "ML")                    # intercept only
maxModelb <- lme(Rating ~ 1, random = ~1|Album, data = maxvsnomax, method = "ML") # + random intercepts over albums
maxModelc <- update(maxModelb, .~. + Band)                                        # + fixed effect of band
anova(maxModela, maxModelb, maxModelc)
By comparing models we can see:
Model df AIC BIC logLik Test L.Ratio p-value
maxModela 1 2 4686.930 4697.601 -2341.465
maxModelb 2 3 4583.966 4599.973 -2288.983 1 vs 2 104.96454 <.0001
maxModelc 3 5 4581.436 4608.114 -2285.718 2 vs 3 6.52947 0.0382
That album ratings varied significantly (not surprising; p < .0001), and that band did significantly predict ratings overall (p = .038). If you like, you can look at the summary of the model by executing:
summary(maxModelc)
Which gives us this output:
Linear mixed-effects model fit by maximum likelihood
Data: maxvsnomax
AIC BIC logLik
4581.436 4608.114 -2285.718
Random effects:
Formula: ~1 | Album
(Intercept) Residual
StdDev: 0.25458 1.062036
Fixed effects: Rating ~ Band
Value Std.Error DF t-value p-value
(Intercept) 4.545918 0.1136968 1512 39.98281 0.0000
BandSepultura No Max -0.465626 0.1669412 19 -2.78916 0.0117
BandSoulfly -0.262609 0.1471749 19 -1.78433 0.0903
Correlation:
(Intr) BndSNM
BandSepultura No Max -0.681
BandSoulfly -0.773 0.526
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-3.3954974 -0.3147123 0.3708523 0.6268751 1.3987442
Number of Observations: 1534
Number of Groups: 22
The difference in ratings between Sepultura without Max and Sepultura with him was b = -0.47, significant at p = .012 (ratings for post-Max Sepultura are significantly worse than for Max-era Sepultura). The difference in ratings between Soulfly and Max-era Sepultura was b = -0.26 and not significant (p = .09); ratings for Soulfly are not significantly worse than for Max-era Sepultura. A couple of points here: p-values are silly, so don’t read too much into them, but the parameter (the b), which quantifies the effect, is a bit smaller for Soulfly.
Confidence Intervals
library(boot)  # for boot() and boot.ci()

# Statistic for the bootstrap: refit the model to each resample and return
# the fixed-effect estimates. (Note this resamples individual ratings,
# ignoring the clustering of ratings within albums.)
boot.lme <- function(data, indices){
  data <- data[indices,]  # select observations in the bootstrap sample
  model <- lme(Rating ~ Band, random = ~1|Album, data = data, method = "ML")
  fixef(model)            # return the vector of fixed-effect coefficients
}

maxModel.boot <- boot(maxvsnomax, boot.lme, R = 1000)
maxModel.boot
boot.ci(maxModel.boot, index = 1, type = "perc")
boot.ci(maxModel.boot, index = 2, type = "perc")
boot.ci(maxModel.boot, index = 3, type = "perc")
Then you find these confidence intervals for the three bs (intercept, post-Max Sepultura vs. Max-era Sepultura, Soulfly vs. Max-era Sepultura):
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates
CALL :
boot.ci(boot.out = maxModel.boot, type = "perc", index = 1)
Intervals :
Level Percentile
95% ( 4.468, 4.620 )
Calculations and Intervals on Original Scale
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates
CALL :
boot.ci(boot.out = maxModel.boot, type = "perc", index = 2)
Intervals :
Level Percentile
95% (-0.6153, -0.3100 )
Calculations and Intervals on Original Scale
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates
CALL :
boot.ci(boot.out = maxModel.boot, type = "perc", index = 3)
Intervals :
Level Percentile
95% (-0.3861, -0.1503 )
Calculations and Intervals on Original Scale
The difference in ratings between Sepultura without Max and Sepultura with him was b = -0.47 [-0.62, -0.31]. The difference in ratings between Soulfly and Max-era Sepultura was b = -0.26 [-0.39, -0.15]. This suggests that both Soulfly and post-Max Sepultura yield negative parameters that reflect (to the degree that you believe that a confidence interval tells you about the population parameter …) a negative effect in the population. In other words, both bands are rated worse than Max-era Sepultura.