OP: ReneeBK

Standard algorithms for doing hierarchical linear regression?


#1 (OP)
ReneeBK posted on 2014-4-15 05:01:46

Are there standard algorithms (as opposed to programs) for doing hierarchical linear regression? Do people usually just do MCMC, or are there more specialized, perhaps partially closed-form, algorithms?

#2
ReneeBK posted on 2014-4-15 05:02:08
There's Harvey Goldstein's iterative generalized least squares (IGLS) algorithm, for one, and also its minor modification, restricted iterative generalized least squares (RIGLS), which gives unbiased estimates of the variance parameters.

These algorithms are still iterative, so not closed form, but they are computationally simpler than MCMC or full maximum likelihood: you just iterate until the parameter estimates converge. (A minimal sketch of the iteration follows the references below.)

  • Goldstein H. Multilevel Mixed Linear-Model Analysis Using Iterative Generalized Least-Squares. Biometrika 1986; 73(1):43-56. doi: 10.1093/biomet/73.1.43
  • Goldstein H. Restricted Unbiased Iterative Generalized Least-Squares Estimation. Biometrika 1989; 76(3):622-623. doi: 10.1093/biomet/76.3.622
  • Stephen W. Raudenbush, Anthony S. Bryk. Hierarchical linear models: applications and data analysis methods. (2nd edition) Sage, 2002.
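To make the iteration concrete, here is a minimal IGLS sketch for a single random-intercept model, written from the description in Goldstein (1986). The function name igls() and its arguments are hypothetical, and a real implementation would exploit the block structure of V rather than forming the n^2 x n^2 Kronecker product below:

# Minimal IGLS sketch, assuming y is an n-vector, X an n x p fixed-effects
# design, and Z an n x q group-indicator matrix (hypothetical helper)
igls <- function(y, X, Z, maxiter = 100, tol = 1e-8) {
    n <- length(y)
    sig_u <- sig_e <- var(y) / 2                     # crude starting values
    # Designs for the variance regression: vec(ZZ') and vec(I_n)
    D <- cbind(as.vector(Z %*% t(Z)), as.vector(diag(n)))
    for (iter in seq_len(maxiter)) {
        V    <- sig_u * Z %*% t(Z) + sig_e * diag(n)
        Vinv <- solve(V)
        # Step 1: GLS estimate of the fixed effects given the current V
        beta <- solve(t(X) %*% Vinv %*% X, t(X) %*% Vinv %*% y)
        r <- y - X %*% beta
        # Step 2: GLS regression of the residual cross-products on D
        # (RIGLS would add X (X'V^-1 X)^-1 X' to rr' before vectorizing)
        W <- kronecker(Vinv, Vinv)
        theta <- solve(t(D) %*% W %*% D, t(D) %*% W %*% as.vector(r %*% t(r)))
        delta <- max(abs(theta - c(sig_u, sig_e)))
        sig_u <- theta[1]; sig_e <- theta[2]
        if (delta < tol) break                       # parameters converged
    }
    list(beta = beta, sigma2_u = sig_u, sigma2_e = sig_e, iterations = iter)
}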

#3
ReneeBK posted on 2014-4-15 05:02:58
The lme4 package in R uses iteratively reweighted least squares (IRLS) and penalized iteratively reweighted least squares (PIRLS). See the PDFs here:

http://rss.acs.unt.edu/Rdoc/library/lme4/doc/index.html
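For example, a standard two-level fit with lme4 looks like this (using the sleepstudy data that ships with the package); lmer() runs the penalized least-squares machinery described in those documents internally:

library(lme4)
# Random intercept and slope for Days within each Subject
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(fit)     # fixed effects and variance components
VarCorr(fit)     # random-effect (co)variances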

#4
ReneeBK posted on 2014-4-15 05:03:37
Douglas Bates and Steven Walker have created a GitHub project whose goal is to implement the PIRLS algorithm above in pure R code: github.com/lme4/lme4pureR. If you consider the base lmer() function in the lme4 package, you would normally have to read through a fair amount of C++ code to understand the implementation of PIRLS (which may be challenging for those of us not so well versed in C++ programming). – Chris

#5
ReneeBK posted on 2014-4-15 05:05:23
Another good source for "computing algorithms" for HLMs (again, to the extent that you view them as essentially the same specification as LMMs) would be:

McCulloch, C., Searle, S., Neuhaus, J. (2008). Generalized Linear and Mixed Models. 2nd Edition. Wiley. Chapter 14 - Computing.

Algorithms they list for computing LMMs include:

  • EM algorithm
  • Newton-Raphson algorithm

Algorithms they list for GLMMs include:

  • Numerical quadrature (Gauss-Hermite quadrature; see the sketch after this list)
  • EM algorithm
  • MCMC algorithms (as you mention)
  • Stochastic approximation algorithms
  • Simulated maximum likelihood

Other algorithms for GLMMs that they suggest include:

  • Penalized quasi-likelihood (PQL) methods
  • Laplace approximations
  • PQL/Laplace with bootstrap bias correction
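Several of these are available off the shelf in R. As one illustration (the model and data below are the standard lme4 example cbpp, not from McCulloch et al.), adaptive Gauss-Hermite quadrature is exposed through glmer()'s nAGQ argument, while MASS::glmmPQL implements penalized quasi-likelihood:

library(lme4)
# Binomial GLMM fitted by 10-point adaptive Gauss-Hermite quadrature;
# nAGQ = 1 would give the Laplace approximation instead
fit_gh <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                data = cbpp, family = binomial, nAGQ = 10)
summary(fit_gh)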

#6
ReneeBK posted on 2014-4-15 05:06:17
If you consider the HLM to be a type of linear mixed model, you could consider the EM algorithm. Pages 22-23 of the following course notes show how to implement the classic EM algorithm for the mixed model:

http://www.stat.ucla.edu/~yuille/courses/stat153/emtutorial.pdf

#7
ReneeBK posted on 2014-4-15 05:06:41
###########################################################
#     Classical EM algorithm for the linear mixed model   #
###########################################################
# Model: y = X beta + Z u + e, with u ~ N(0, var1 * I_q) and
# e ~ N(0, var0 * I_n); y is an n x 1 matrix, x the n x p fixed-effects
# design, z the n x q random-effects design.
# The original S-PLUS code called tr() and ginverse(); here tr() is
# defined explicitly and ginverse() is replaced by MASS::ginv().

library(MASS)                        # for ginv()

tr <- function(m) sum(diag(m))       # matrix trace

em.mixed <- function(y, x, z, beta, var0, var1,
                     maxiter = 2000, tolerance = 1e-10)
{
    n    <- nrow(y)
    q1   <- ncol(z)                  # number of random effects (ncol, not nrow)
    conv <- 1
    L0   <- loglike(y, x, z, beta, var0, var1)
    i    <- 0
    cat("  Iter.       sigma0                 sigma1        Likelihood", fill = TRUE)
    repeat {
        if (i > maxiter) { conv <- 0; break }
        # E-step: moments under the current V = var1 * ZZ' + var0 * I
        V     <- c(var1) * z %*% t(z) + c(var0) * diag(n)
        Vinv  <- solve(V)
        xb    <- x %*% beta
        resid <- y - xb
        temp1 <- Vinv %*% resid
        # Expected sufficient statistics for the two variance components
        s0 <- c(var0)^2 * t(temp1) %*% temp1 + c(var0) * n -
              c(var0)^2 * tr(Vinv)
        s1 <- c(var1)^2 * t(temp1) %*% z %*% t(z) %*% temp1 + c(var1) * q1 -
              c(var1)^2 * tr(t(z) %*% Vinv %*% z)
        w  <- xb + c(var0) * temp1   # E(y - Zu | y), the "complete-data" response
        # M-step: update the variance components and the fixed effects
        var0 <- s0 / n
        var1 <- s1 / q1
        beta <- ginv(t(x) %*% x) %*% t(x) %*% w
        L1   <- loglike(y, x, z, beta, var0, var1)
        if (L1 < L0) {
            print("log-likelihood must increase; L1 < L0, stopping.")
            conv <- 0
            break
        }
        i <- i + 1
        cat("  ", i, "  ", var0, "  ", var1, "  ", L1, fill = TRUE)
        if (abs(L1 - L0) < tolerance) break   # check for convergence
        L0 <- L1
    }
    list(beta = beta, var0 = var0, var1 = var1, Loglikelihood = L0)
}

#########################################################
#  loglike calculates the log-likelihood (up to an      #
#  additive constant) for the linear mixed model        #
#########################################################
loglike <- function(y, x, z, beta, var0, var1)
{
    n     <- nrow(y)
    V     <- c(var1) * z %*% t(z) + c(var0) * diag(n)
    Vinv  <- ginv(V)
    xb    <- x %*% beta
    resid <- y - xb
    temp1 <- Vinv %*% resid
    (-0.5) * (log(det(V)) + t(resid) %*% temp1)
}
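As a quick sanity check (hypothetical simulated data, not from the course notes), the functions above can be run on a small random-intercept problem:

set.seed(1)
g <- rep(1:10, each = 5)                  # 10 groups of 5 observations
z <- model.matrix(~ factor(g) - 1)        # 50 x 10 group-indicator matrix
x <- cbind(1, rnorm(50))                  # intercept plus one covariate
y <- x %*% c(2, 1) + z %*% rnorm(10) + matrix(rnorm(50, sd = 0.5))
fit <- em.mixed(y, x, z, beta = c(0, 0), var0 = 1, var1 = 1)
fit$var1                                  # should be near 1 (random-intercept variance)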
