Having random slopes at level 1 is not a necessary condition for examining cross-level interactions. All that is necessary is that you have two predictors that vary at different levels, plus their interaction.
EDIT: I looked over the Hofmann paper posted in the comments and I think I see the source of confusion here.
Hofmann describes a situation in which one builds a model by starting with the simplest "empty" random-intercept model and then working up term by term to the full HLM, where the very last term added is the predictor representing the cross-level interaction. Under such an approach, it is true that in the model just prior to the cross-level interaction model (i.e., the model that is identical except that the cross-level interaction term is omitted), there must be variation in the level-1 slopes in order for a level-2 predictor to moderate those slopes. Intuitively, if every group has exactly the same level-1 slope, then it is not possible for us to predict variation in these slopes from another predictor in the dataset, because there is no variation to predict.
Notice that this is not a statement about the cross-level interaction model itself, but rather a statement about a different model which omits the cross-level interaction term. In the cross-level interaction model itself, it is entirely possible for there to be no variation in the level-1 slopes. This would essentially mean that all of the seemingly random variation in the level-1 slopes that we observed in the previous model can be accounted for by adding the cross-level interaction term to the model.
I illustrate just such a situation below with some simulated data in R, where there is a cross-level interaction between x, varying at level 1, and z, varying at level 2:
# generate data -----------------------------------------------------------
set.seed(12345)
dat <- merge(data.frame(group=rep(1:30, each=30),
                        x=runif(900, min=-.5, max=.5),
                        error=rnorm(900)),
             data.frame(group=1:30,
                        z=runif(30, min=-.5, max=.5),
                        randInt=rnorm(30)))
dat <- within(dat, y <- randInt + 5*x*z + error)
# Model with the x:z interaction ------------------------------------------
library(lme4)
mod1 <- lmer(y ~ x*z + (1|group) + (0+x|group), data=dat)
mod1
# Linear mixed model fit by REML
# Formula: y ~ x * z + (1 | group) + (0 + x | group)
#    Data: dat
#   AIC  BIC logLik deviance REMLdev
#  2658 2692  -1322     2640    2644
# Random effects:
#  Groups   Name        Variance   Std.Dev.
#  group    (Intercept) 8.5326e-01 9.2372e-01
#  group    x           5.4449e-20 2.3334e-10
#  Residual             9.9055e-01 9.9526e-01
# Number of obs: 900, groups: group, 30
#
# Fixed effects:
#             Estimate Std. Error t value
# (Intercept) -0.13311    0.17283  -0.770
# x            0.09808    0.11902   0.824
# z           -0.24705    0.51424  -0.480
# x:z          5.39969    0.35257  15.315
#
# Correlation of Fixed Effects:
#     (Intr) x      z
# x   -0.010
# z    0.103  0.008
# x:z  0.007  0.137 -0.005
Notice that the variance of the random x slopes is essentially zero (5.4449e-20), even though the data were generated with a strong cross-level interaction. Now compare the model that omits the interaction term:
# Model without the x:z interaction ---------------------------------------
mod2 <- lmer(y ~ x + z + (1|group) + (0+x|group), data=dat)
mod2
# Linear mixed model fit by REML
# Formula: y ~ x + z + (1 | group) + (0 + x | group)
#    Data: dat
#   AIC  BIC logLik deviance REMLdev
#  2726 2755  -1357     2713    2714
# Random effects:
#  Groups   Name        Variance Std.Dev.
#  group    (Intercept) 0.85503  0.92468
#  group    x           3.46811  1.86229
#  Residual             0.99607  0.99803
# Number of obs: 900, groups: group, 30
#
# Fixed effects:
#             Estimate Std. Error t value
# (Intercept) -0.14148    0.17312  -0.817
# x           -0.05178    0.36056  -0.144
# z           -0.26570    0.51509  -0.516
#
# Correlation of Fixed Effects:
#   (Intr) x
# x -0.004
# z  0.103  0.002
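Here the slope variance is substantial (3.46811): the level-1 slopes do vary across groups, but that variation is wholly accounted for once the x:z term enters the model. To make the contrast concrete, here is a self-contained sketch (assuming a reasonably recent version of lme4, whose exact estimates may differ slightly from the older output shown above) that extracts the two slope-variance estimates and runs a likelihood ratio test of the interaction:

```r
# Reproduce the simulation and compare the two models directly
library(lme4)

set.seed(12345)
dat <- merge(data.frame(group=rep(1:30, each=30),
                        x=runif(900, min=-.5, max=.5),
                        error=rnorm(900)),
             data.frame(group=1:30,
                        z=runif(30, min=-.5, max=.5),
                        randInt=rnorm(30)))
dat <- within(dat, y <- randInt + 5*x*z + error)

mod1 <- lmer(y ~ x*z + (1|group) + (0+x|group), data=dat)
mod2 <- lmer(y ~ x + z + (1|group) + (0+x|group), data=dat)

# Estimated variance of the random x slopes in each model
vc1 <- as.data.frame(VarCorr(mod1))
vc2 <- as.data.frame(VarCorr(mod2))
slope1 <- vc1$vcov[!is.na(vc1$var1) & vc1$var1 == "x"]
slope2 <- vc2$vcov[!is.na(vc2$var1) & vc2$var1 == "x"]
slope1  # near zero: interaction term absorbs the slope variation
slope2  # substantial: slope variation left "unexplained"

# Likelihood ratio test (models are refit with ML for the comparison)
anova(mod2, mod1)
```

The anova() comparison shows that the x:z term is doing the explanatory work: the same between-group differences in slopes that appear as random variance in mod2 appear as the fixed cross-level interaction in mod1.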
Jack Westfall