For a multilevel model fitted to survey data, I would like to report on the "importance" of each independent variable in the model. Would you agree that the change in likelihood when a predictor is removed from the model reflects the "predictive power" of that predictor? In a plain linear regression model, the change in R-squared would be a similar measure, I suppose. I know that there are alternative effect-size measures, yet I'm still puzzled by this problem.
What exactly would be wrong with simply looking at the loss of predictive power of a model (R-squared or -2 log-likelihood) after eliminating one independent variable? It is quite different from "effect size" measures in experimental settings, but does such a loss measure make any sense for comparing the predictive power of a set of independent variables? If predictions become much better when a particular variable is included, doesn't that in some way point to the "importance" of that variable?
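
To make concrete what I mean, here is a minimal sketch of the comparison in Python with statsmodels; the data file, variable names and grouping variable are all made up for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: 'y' is the outcome, 'x1' and 'x2' are predictors,
# and 'cluster' identifies the survey clusters (file and column names are made up).
df = pd.read_csv("survey.csv")

# Fit by maximum likelihood (reml=False) so that the log-likelihoods of models
# with different fixed effects are directly comparable.
full = smf.mixedlm("y ~ x1 + x2", df, groups=df["cluster"]).fit(reml=False)
reduced = smf.mixedlm("y ~ x2", df, groups=df["cluster"]).fit(reml=False)

# Drop in -2 log-likelihood (deviance) attributable to x1:
# this is just the likelihood-ratio statistic for x1.
delta = -2 * (reduced.llf - full.llf)
print(f"Change in -2 log-likelihood when x1 is removed: {delta:.2f}")
```

The resulting drop in -2 log-likelihood is the quantity I would like to interpret as the "importance" of x1.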
One objection to the above idea that I could come up with is the following. The change in -2 log-likelihood is by and large equal to the square of the Wald t-value of a predictor's effect. And, obviously, the size of a t-value also depends on the size of the standard error of the effect, which in turn reflects the precision of its sample estimate and has nothing to do (I guess) with the "importance" or "predictive power" of the effect.
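
In formulas, the (asymptotic) relation I am referring to is

$$
t = \frac{\hat{\beta}}{\widehat{\operatorname{SE}}(\hat{\beta})}, \qquad \Delta\!\left(-2\log L\right) \approx t^{2},
$$

so a small effect estimated very precisely can produce the same drop in deviance as a large effect estimated imprecisely.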
Thanks for any suggestions!

