Thread starter: oliyiyi

[Book Recommendation] Bayesian Methods in Health Economics

http://www.crcpress.com/product/isbn/9781439895559

Features
  • Provides an overview of Bayesian methods for cost-effectiveness analysis, and includes all necessary background on economics and Bayesian statistics
  • Presents three detailed case studies of the cost-effectiveness analysis of health care interventions
  • Includes several worked examples that guide the reader through the process of health economic evaluation
  • Contains extensive coverage of the practice of Bayesian analysis using software such as JAGS and R, specifically applied to health economic analysis
  • Systematically describes the methodological issues related to the application of Bayesian inference and the decision-making process in health economics
  • Designed as a reference for students and practitioners working in the field of health economic evaluations and medical statistics

The book is linked to code with which to replicate the examples, and an associated R package (BCEA) can be used in real applications to produce systematic health economic evaluations of Bayesian models.
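As a minimal sketch of how such an evaluation is typically set up with BCEA (the effectiveness and cost draws below are made up purely for illustration; only the bcea() call itself mirrors the scripts posted later in this thread):

library(BCEA)

## hypothetical posterior simulations of effectiveness (e.g. QALYs) and cost for two interventions
n.sims <- 1000
e <- cbind(rnorm(n.sims, 0.60, 0.05), rnorm(n.sims, 0.65, 0.05))
c <- cbind(rnorm(n.sims, 5500, 500), rnorm(n.sims, 6500, 600))
treats <- c("Standard care", "New intervention")

## ref=2 makes the second column the reference intervention; Kmax sets the willingness-to-pay grid
m <- bcea(e = e, c = c, ref = 2, interventions = treats, Kmax = 50000)
summary(m)   # ICER, expected incremental benefit, CEAC, EVPI
plot(m)      # cost-effectiveness plane, CEAC and EVPI in a single layout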


Summary

Health economics is concerned with the study of the cost-effectiveness of health care interventions. This book provides an overview of Bayesian methods for the analysis of health economic data. After an introduction to the basic economic concepts and methods of evaluation, it presents Bayesian statistics using accessible mathematics. The next chapters describe the theory and practice of cost-effectiveness analysis from a statistical viewpoint, and Bayesian computation, notably MCMC. The final chapter presents three detailed case studies covering cost-effectiveness analyses based on individual data from clinical trials, evidence synthesis and hierarchical models, and Markov models. The text uses WinBUGS and JAGS, with datasets and code available online.
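As a hedged reminder of the quantities such an analysis revolves around (standard definitions, not a quote from the book): for effectiveness e_t and cost c_t of intervention t and willingness to pay k,

\[
\mathrm{ICER} = \frac{\mathrm{E}[\Delta_c]}{\mathrm{E}[\Delta_e]}, \qquad
\mathrm{NB}_t(k) = k\,e_t - c_t, \qquad
\mathrm{EIB}(k) = \mathrm{E}\!\left[\mathrm{NB}_2(k) - \mathrm{NB}_1(k)\right],
\]

so the reference intervention is deemed cost-effective at those values of k (the Kmax = 50000 grid used in the scripts below) for which EIB(k) > 0.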

Hidden content of this post

Bayesian Methods in Health Economics.zip (2.75 MB, requires: 55 forum coins). This attachment includes:
  • Bayesian Methods in Health Economics.pdf





Keywords: health economics, Bayesian methods, book recommendation


Reply #2
fjrong (verified) posted on 2014-11-28 11:51:43 from mobile
Bayesian Methods in Health Economics. Check out my book (published with CRC; also available on Amazon, including in ebook format).


  • R/JAGS code to run the examples (all files at once here)
    • Instructions (probably best to read before downloading the files)
    • Utilities
      • Utils.R: script containing some utility functions, for example to draw traceplots of MCMC chains or to estimate the parameters of suitable distributions so that they have a given mean and standard deviation [needed to run most of the other scripts]
    • Chapter 2
      • MCMC.R: script to run the Gibbs sampling simulations and check convergence, as in Figure 2.10
      • modelNormal.R: script to run the analysis of the Normal model (pages 69-73)
      • modelNormal.txt: JAGS code for the Bayesian model (pages 69-73)
      • phbirths.dta: Dataset used for the Normal model example - courtesy of German Rodriguez
    • Chapter 3
      • HEexample.R: script to run the Bayesian model to analyse the health economic problem described in the chapter (chemotherapy) and the several cost-effectiveness analyses presented throughout the chapter. This example is used throughout chapter 4 as well
      • model.txt: JAGS code for the basic cost-effectiveness analysis
      • modelEVPPI_rho.txt: JAGS code for the analysis of the Expected Value of Partial Perfect Information (EVPPI) for the parameter ρ

      • modelEVPPI_gamma.txt: JAGS code for the analysis of the Expected Value of Partial Perfect Information (EVPPI) for the parameter γ
    • Chapter 4
      • modelNormal.R: script to run the analysis of the Normal model (pages 129-141); continues the analysis from chapter 2
      • modelNormalBlocking.txt: JAGS code to run the model using blocking to improve convergence (page 133)
      • modelNormal2.txt: JAGS code to compute the predictive distribution (page 135)
    • Chapter 5
      • Example 1: RCT of acupuncture for chronic headache in primary care
        • acupuncture.R: script to run the cost-effectiveness analysis of acupuncture. Based on this paper
        • dataRCTacupuncture.csv: Dataset used for the acupuncture example - courtesy of David Wonderling, Richard Nixon and Richard Grieve
        • actptRCT.txt: JAGS code to run the normal/normal (on the logit/log scale) model
        • actptRCT_gamma.txt: JAGS code to run the normal/gamma (on the logit/natural scale) model
        • actptRCT_logN.txt: JAGS code to run the normal/log-normal (on the logit/natural scale) model
      • Example 2: Neuraminidase inhibitors to reduce influenza in healthy adults
        • EvSynth.R: script to run the cost-effectiveness analysis based on evidence synthesis for the influenza treatment with neuraminidase
        • EvSynth.txt: JAGS code to run the evidence synthesis model
      • Example 3: Markov model for the treatment of asthma
        • MarkovModel.R: script to run the cost-effectiveness analysis for the treatment of asthma
        • MarkovModel.txt: JAGS code to run the conjugate Markov model






Reply #3
ynsxx (verified) posted on 2014-11-28 12:32:38
### Set of utility functions to perform Bayesian estimations and health economic analysis

############################################################
## Produce a traceplot to assess convergence of a MCMC run
## node is a string with the name of the node to be plotted
## model is the name of the object containing the MCMC simulations
## Copyright Gianluca Baio 2012
mytraceplot <- function(node,model=m,title="",lab=""){
        xlab <- "Iteration"
        cmd <- ifelse(class(model)=="rjags",mdl <- model$BUGSoutput,mdl <- model)
        col <- colors()
        if (mdl$n.chains==1) {
                plot(mdl$sims.array[,1,node],t="l",col=col[which(col=="blue")],xlab=xlab,
                        ylab=lab,main=title)
        }
        if (mdl$n.chains==2) {
                cols <- c("blue","red")
                plot(mdl$sims.array[,1,node],t="l",col=col[which(col==cols[1])],xlab=xlab,
                        ylab=lab,main=title,ylim=range(mdl$sims.array[,1:2,node]))
                points(mdl$sims.array[,2,node],t="l",col=col[which(col==cols[2])])
        }
        if (mdl$n.chains>2) {
                cols <- c("blue","red","green","magenta","orange","brown","azure")
                plot(mdl$sims.array[,1,node],t="l",col=col[which(col==cols[1])],xlab=xlab,
                        ylab=lab,main=title,
                        ylim=range(mdl$sims.array[,1:mdl$n.chains,node]))
                for (i in 2:mdl$n.chains) {
                        points(mdl$sims.array[,i,node],t="l",col=col[which(col==cols[i])])
                }
        }
}

############################################################
## Compute the value of parameters (a,b) for a Beta distribution to have mean and sd (m,s)
## Copyright Gianluca Baio 2012
betaPar <- function(m,s){
        a <- m*( (m*(1-m)/s^2) -1 )
        b <- (1-m)*( (m*(1-m)/s^2) -1 )
        list(a=a,b=b)
}

############################################################
## Compute the value of parameters (mulog,sigmalog) for a logNormal distribution to have mean and sd (m,s)
## Copyright Gianluca Baio 2012
lognPar <- function(m,s) {
        s2 <- s^2
        mulog <- log(m) - .5*log(1+s2/m^2)
        s2log <- log(1+(s2/m^2))
        sigmalog <- sqrt(s2log)
        list(mulog=mulog,sigmalog=sigmalog)
}

############################################################
## Compute the parameters of a Beta distribution, given a prior guess for:
##  mode = the mode of the distribution
##  upp  = an upper bound value for the distribution
##  prob = the estimated probability that (theta <= upp)
## Based on "Bayesian ideas and data analysis", page 100.
## Optimisation method to identify the values of a,b that give required conditions on the Beta distribution
## Copyright Gianluca Baio 2012
betaPar2 <- function(mode,upp,prob){
        N <- 10000
        b <- 1:N
        a <- (1+mode*(b-2))/(1-mode)
        sim <- qbeta(prob,a,b)
        m <- ifelse(prob>=.5,max(which(sim>=upp)),min(which(sim>=upp)))
        M <- ifelse(prob>=.5,min(which(sim<=upp)),max(which(sim<=upp)))

        b <- min(m,M)+(b/N)
        a <- (1+mode*(b-2))/(1-mode)
        sim <- qbeta(prob,a,b)
        m <- ifelse(prob>=.5,max(which(sim>=upp)),min(which(sim>=upp)))
        M <- ifelse(prob>=.5,min(which(sim<=upp)),max(which(sim<=upp)))
        a <- ifelse(m==M,a[m],mean(a[m],a[M]))
        b <- ifelse(m==M,b[m],mean(b[m],b[M]))

        step <- 0.001
        theta <- seq(0,1,step)
        density <- dbeta(theta,a,b)

        norm.dens <- density/sum(density)
        cdf <- cumsum(norm.dens)
        M <- min(which(cdf>=.5))
        m <- max(which(cdf<=.5))

        theta.mode <- theta[which(density==max(density))]
        theta.mean <- a/(a+b)
        theta.median <- mean(theta[m],theta[M])
        theta.sd <- sqrt((a*b)/(((a+b)^2)*(a+b+1)))
        beta.params <- c(a,b,theta.mode,theta.mean,theta.median,theta.sd)
        res1 <- beta.params[1]
        res2 <- beta.params[2]
        theta.mode <- beta.params[3]
        theta.mean <- beta.params[4]
        theta.median <- beta.params[5]
        theta.sd <- beta.params[6]
        list(res1=res1,res2=res2,theta.mode=theta.mode,theta.mean=theta.mean,
                theta.median=theta.median,theta.sd=theta.sd)
}

############################################################
## Posterior summary
## Creates summary statistics à la WinBUGS --- currently works only for vectors
## Copyright Gianluca Baio 2012
stats <- function(x){
        c(mean(x),sd(x),quantile(x,.025),median(x),quantile(x,.975))
}
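A quick, hedged sanity check of the moment-matching helpers above (the numerical values are the same ones used later in the chemotherapy example; the expected results follow directly from the formulas inside betaPar and lognPar):

source("Utils.R")   # or the online copy: http://www.statistica.it/gianluca/BMHE/WebMaterial/Utils.R

## Beta prior with mean 0.5 and sd sqrt(0.125), as used for pi and gamma in the Chapter 3 example
bp <- betaPar(m=0.5, s=sqrt(0.125))
bp$a; bp$b                                                    # both 0.5, i.e. a Beta(0.5,0.5) prior

## log-Normal prior with mean 120 and sd 20, as used for the ambulatory cost c.amb
lp <- lognPar(m=120, s=20)
exp(lp$mulog + lp$sigmalog^2/2)                               # recovers the mean: 120
sqrt(exp(lp$sigmalog^2)-1)*exp(lp$mulog + lp$sigmalog^2/2)    # recovers the sd: 20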


Reply #4
acode posted on 2014-11-28 12:59:35
## PROGRAMS THE GIBBS SAMPLING - pag 64-67

# Define the number of observations & simulations
n <- 30

# Load the data
#y <- rnorm(n,4,10)        # This would generate random values
y <- c(1.2697,7.7637,2.2532,3.4557,4.1776,6.4320,-3.6623,7.7567,5.9032,7.2671,
        -2.3447,8.0160,3.5013,2.8495,0.6467,3.2371,5.8573,-3.3749,4.1507,4.3092,
        11.7327,2.6174,9.4942,-2.7639,-1.5859,3.6986,2.4544,-0.3294,0.2329,5.2846)

# Defines the hyper-parameters to build the full conditionals
ybar <- mean(y)
mu_0 <- 0
sigma2_0 <- 10000
alpha_0 <- 0.01
beta_0 <- 0.01

# Initialises the parameters
mu <- tau <- numeric()
sigma2 <- 1/tau

mu[1] <- rnorm(1,0,3)
tau[1] <- runif(1,0,3)
sigma2[1] <- 1/tau[1]

# Gibbs sampling (samples from the full conditionals)
nsim <- 1000
for (i in 2:nsim) {
        sigma_n <- sqrt(1/(1/sigma2_0 + n/sigma2[i-1]))
        mu_n <- (mu_0/sigma2_0 + n*ybar/sigma2[i-1])*sigma_n^2
        mu[i] <- rnorm(1,mu_n,sigma_n)

        alpha_n <- alpha_0+n/2
        beta_n <- beta_0 + sum((y-mu[i])^2)/2
        tau[i] <- rgamma(1,alpha_n,beta_n)
        sigma2[i] <- 1/tau[i]
}

## Creates a bivariate contour, using the package mvtnorm
require(mvtnorm)
theta <- c(mean(mu),mean(sqrt(sigma2)))
sigma <- c(var(mu),var(sqrt(sigma2)))
rho <- cor(mu,sqrt(sigma2))
ins <- c(-10,10)
x1 <- seq(theta[1]-abs(ins[1]),theta[1]+abs(ins[1]),length.out=1000)
x2 <- seq(theta[2]-abs(ins[2]),theta[2]+abs(ins[2]),length.out=1000)
all <- expand.grid(x1, x2)
Sigma <- matrix(c(sigma[1], sigma[1]*sigma[2]*rho, sigma[1]*sigma[2]*rho, sigma[2]), nrow=2, ncol=2)
f.x <- dmvnorm(all, mean = theta, sigma = Sigma)
f.x2 <- matrix(f.x, nrow=length(x1), ncol=length(x2))

## Plots of the results at different simulation lengths
lw <- c(1,1.6)

# First 10 iterations
plot(mu,sqrt(sigma2),col="white",pch=20,cex=.3,xlim=range(mu),ylim=range(sqrt(sigma2)),xlab=expression(mu),ylab=expression(sigma),
        main="After 10 iterations")
for (i in 1:(9)) {
        points(c(mu[i],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i]),t="l",col="grey60")
        points(c(mu[i+1],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i+1]),t="l",col="grey60")
}
text(jitter(mu[1:10]),jitter(sqrt(sigma2[1:10])),1:10,col="grey20")
contour(x = x1, y = x2, z = f.x2, nlevels=5,add=T,col="black",drawlabels=FALSE,lwd=lw[1])


# First 30 iterations
plot(mu,sqrt(sigma2),col="white",pch=20,cex=.3,xlim=range(mu),ylim=range(sqrt(sigma2)),xlab=expression(mu),ylab=expression(sigma),
        main="After 30 iterations")
for (i in 1:29) {
        points(c(mu[i],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i]),t="l",col="grey60")
        points(c(mu[i+1],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i+1]),t="l",col="grey60")
}
text(jitter(mu[1:30]),jitter(sqrt(sigma2[1:30])),1:30,col="grey20")
contour(x = x1, y = x2, z = f.x2, nlevels=5,add=T,col="black",drawlabels=FALSE,lwd=lw[1])


# First 100 iterations
plot(mu,sqrt(sigma2),col="white",pch=20,cex=.3,xlim=range(mu),ylim=range(sqrt(sigma2)),xlab=expression(mu),ylab=expression(sigma),
        main="After 100 iterations")
for (i in 1:99) {
        points(c(mu[i],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i]),t="l",col="grey60")
        points(c(mu[i+1],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i+1]),t="l",col="grey60")
}
text(jitter(mu[1:100]),jitter(sqrt(sigma2[1:100])),1:100,col="grey20")
contour(x = x1, y = x2, z = f.x2, nlevels=5,add=T,col="black",drawlabels=FALSE,lwd=lw[1])


# All 1000 iterations
plot(mu,sqrt(sigma2),col="grey60",pch=20,cex=.3,xlim=range(mu),ylim=range(sqrt(sigma2)),xlab=expression(mu),ylab=expression(sigma),
        main="After 1000 iterations")
for (i in 1:(nsim-1)) {
        points(c(mu[i],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i]),t="l",col="grey60")
        points(c(mu[i+1],mu[i+1]),c(sqrt(sigma2)[i],sqrt(sigma2)[i+1]),t="l",col="grey60")
}
text(jitter(mu),jitter(sqrt(sigma2)),1:nsim,col="grey28")
contour(x = x1, y = x2, z = f.x2, nlevels=5,add=T,col="black",drawlabels=FALSE,lwd=lw[2])
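For reference, the two draws inside the loop above are the standard semi-conjugate full conditionals for a Normal model with unknown mean mu and precision tau = 1/sigma^2 (a hedged note in LaTeX; the symbols match the hyper-parameters mu_0, sigma2_0, alpha_0 and beta_0 in the script):

\[
\mu \mid \tau, y \sim \mathrm{Normal}\left(\mu_n,\ \sigma_n^2\right), \qquad
\sigma_n^2 = \left(\frac{1}{\sigma_0^2} + n\tau\right)^{-1}, \qquad
\mu_n = \sigma_n^2\left(\frac{\mu_0}{\sigma_0^2} + n\bar{y}\,\tau\right)
\]

\[
\tau \mid \mu, y \sim \mathrm{Gamma}\left(\alpha_0 + \frac{n}{2},\ \beta_0 + \frac{1}{2}\sum_{i=1}^{n}(y_i-\mu)^2\right)
\]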


Reply #5
托茨卡纳 posted on 2014-11-28 13:44:23
## ANALYSIS OF THE NORMAL MODEL - pag 69-73

# define the working directory assuming - the user is already in ~/WebMaterial/ch2
working.dir <- paste(getwd(),"/",sep="")

# Load data from .dta format
library(foreign)
data <- read.dta("phbirths.dta")
attach(data)

# Arrange the data and define relevant variables
y <- grams
N <- length(y)
X <- gestate
k <- 5000

## Run JAGS
library(R2jags)

## 1. Uncentred model
dataJags <- list("N","y","X","k")                        # define the data
filein <- paste(working.dir,"modelNormal.txt",sep="")    # define the model file
params <- c("alpha","beta","sigma")                      # define the parameters to be monitored
inits <- function(){                                     # create a function to randomly simulate
        list(alpha=rnorm(1),beta=rnorm(1),lsigma=rnorm(1))   #   initial values for the required nodes
}

n.iter <- 10000                                          # define the number of iterations
n.burnin <- 9500                                         # define the burn-in
n.thin <- floor((n.iter-n.burnin)/500)                   # compute the thinning to have 1000 iterations monitored

# Launch JAGS and print the summary table
m.1 <- jags(dataJags, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
print(m.1,digits=3,intervals=c(0.025, 0.975))


# 2. Uncentred model with thinning (run for longer chains)
n.iter <- 50000                                          # increase number of iterations to improve convergence
n.burnin <- 9500                                         # same burn-in
n.thin <- floor((n.iter-n.burnin)/500)                   # compute thinning, this time > 1

# Launch JAGS and print the summary table
m.2 <- jags(dataJags, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
print(m.2,digits=3,intervals=c(0.025, 0.975))


# 3. Centred model
X <- gestate-mean(gestate)                               # create centred covariate to improve convergence
dataJags <- list("N","y","X","k")                        # need to re-define data list

n.iter <- 10000                                          # doesn't need so many iterations, now
n.burnin <- 9500
n.thin <- floor((n.iter-n.burnin)/500)                   # thinning computed and set to 1 again

# Launch JAGS and print the summary table
m.3 <- jags(dataJags, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
print(m.3,digits=3,intervals=c(0.025, 0.975))


## Make plots
source("../Utils.R")                                     # load utility functions

# 1. Uncentred model
# Traceplot
par(mfrow=c(2,1))
mytraceplot("alpha",title="Uncentred model",lab=expression(alpha),model=m.1)
mytraceplot("beta",lab=expression(beta),model=m.1)

# Makes ACF plot
alpha.u <- c(m.1$BUGSoutput$sims.array[,1,1],m.1$BUGSoutput$sims.array[,2,1])
beta.u <- c(m.1$BUGSoutput$sims.array[,1,2],m.1$BUGSoutput$sims.array[,2,2])
txt <- substitute("Autocorrelation function for "*alpha* " - Uncentred model")
acf(alpha.u,main=txt,ylab="")

# Computes correlation for this model
cor(alpha.u,beta.u)
txt2 <- substitute("Scatterplot for "*alpha* " and "*beta* " - Uncentred model")
plot(m.1$BUGSoutput$sims.list$alpha,m.1$BUGSoutput$sims.list$beta,xlab=expression(alpha),
        ylab=expression(beta),main=txt2)


# 2. Uncentred model with thinning
# Traceplot
par(mfrow=c(2,1))
mytraceplot("alpha",title="Uncentred model with thinning",lab=expression(alpha),model=m.2)
mytraceplot("beta",lab=expression(beta),model=m.2)

# Makes ACF plot
alpha.l <- c(m.2$BUGSoutput$sims.array[,1,1],m.2$BUGSoutput$sims.array[,2,1])
beta.l <- c(m.2$BUGSoutput$sims.array[,1,2],m.2$BUGSoutput$sims.array[,2,2])
txt <- substitute("Autocorrelation function for "*alpha* " - Uncentred model with thinning")
acf(alpha.l,main=txt,ylab="")

# Computes correlation for this model
cor(alpha.l,beta.l)
txt4 <- substitute("Scatterplot for "*alpha* " and "*beta* " - Uncentred model with thinning")
plot(m.2$BUGSoutput$sims.list$alpha,m.2$BUGSoutput$sims.list$beta,xlab=expression(alpha),
        ylab=expression(beta),main=txt4)


# 3. Centred model
par(mfrow=c(2,1))
mytraceplot("alpha",model=m.3,title="Centred model",lab=expression(alpha))
mytraceplot("beta",model=m.3,lab=expression(beta))

# Makes ACF plot
alpha.c <- c(m.3$BUGSoutput$sims.array[,1,1],m.3$BUGSoutput$sims.array[,2,1])
beta.c <- c(m.3$BUGSoutput$sims.array[,1,2],m.3$BUGSoutput$sims.array[,2,2])
txt <- substitute("Autocorrelation function for "*alpha* " - Centred model")
acf(alpha.c,main=txt,ylab="")

# Plots Scatterplot for correlation
cor(alpha.c,beta.c)
txt3 <- substitute("Scatterplot for "*alpha* " and "*beta* " - Centred model")
plot(m.3$BUGSoutput$sims.list$alpha,m.3$BUGSoutput$sims.list$beta,xlab=expression(alpha),
        ylab=expression(beta),main=txt3)
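The model file modelNormal.txt is not reproduced anywhere in this thread. Purely as a hypothetical sketch (this is not the book's actual file), a JAGS model consistent with the monitored nodes alpha, beta and sigma, the initialised node lsigma and the data list N, y, X, k might look like the following, where k (= 5000 in the script) is assumed to act as a vague prior standard deviation for the regression coefficients:

model {
        for (i in 1:N) {
                y[i] ~ dnorm(mu[i], tau)
                mu[i] <- alpha + beta*X[i]    # birth weight regressed on gestational age
        }
        prec.coef <- pow(k, -2)               # assumption: k is the prior sd of alpha and beta
        alpha ~ dnorm(0, prec.coef)
        beta ~ dnorm(0, prec.coef)
        lsigma ~ dunif(-10, 10)               # hypothetical flat prior on the log-sd
        sigma <- exp(lsigma)
        tau <- pow(sigma, -2)
}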


Reply #6
lhf8059 posted on 2014-11-28 14:28:25
## HEALTH ECONOMIC EVALUATION --- EXAMPLE THROUGHOUT CHAPTER 3 AND CHAPTER 4

## Loosely based on Fox-Rushby & Cairns (2005). Economic Evaluation. Open University Press
## Parameters of the model
# pi = probability of side effects (specific to each treatment)
# rho = reduction in probability of side effects for t=2
# phi = probability of ambulatory visit, given side effects (constant across treatments)
# c.amb = cost of ambulatory visit after side effects
# c.hosp = cost of hospitalisation after side effects
# c.drug = cost of each treatment (known constant)

# define the working directory assuming - the user is already in ~/WebMaterial/ch2
working.dir <- paste(getwd(),"/",sep="")
source("http://www.statistica.it/gianluca/BMHE/WebMaterial/Utils.R")        # loads utility functions

## Generates data for the number of side effects under t=0
N.studies <- 5
sample.size <- 32
prop.pi <- .24
n <- rpois(N.studies,sample.size)
se <- rbinom(N.studies,n,prop.pi)

## Generates the data for the number of patients with side effects requiring ambulatory care
prop.gamma <- .619
amb <- rbinom(N.studies,se,prop.gamma)

## Computes the hyperparameters for the informative prior distributions
m.pi <- .5
s.pi <- sqrt(.125)
a.pi <- betaPar(m.pi,s.pi)$a
b.pi <- betaPar(m.pi,s.pi)$b

m.gamma <- .5
s.gamma <- sqrt(.125)
a.gamma <- betaPar(m.gamma,s.gamma)$a
b.gamma <- betaPar(m.gamma,s.gamma)$b

m.rho <- 0.8
s.rho <- 0.2
tau.rho <- 1/s.rho^2

mu.amb <- 120
sd.amb <- 20
m.amb <- lognPar(mu.amb,sd.amb)$mulog
s.amb <- lognPar(mu.amb,sd.amb)$sigmalog
tau.amb <- 1/s.amb^2

mu.hosp <- 5500
sd.hosp <- 980
m.hosp <- lognPar(mu.hosp,sd.hosp)$mulog
s.hosp <- lognPar(mu.hosp,sd.hosp)$sigmalog
tau.hosp <- 1/s.hosp^2

c.drug <- c(110,520)

# Number of patients in the population
N <- 1000


## Run JAGS
library(R2jags)
dataJags <- list("se","amb","n","N.studies","a.pi","b.pi","a.gamma",        # data list
                "b.gamma","m.amb","tau.amb","m.hosp",
                "tau.hosp","m.rho","tau.rho","N")
filein <- "http://www.statistica.it/gianluca/BMHE/WebMaterial/ch3/model.txt"        # model file
params <- c("pi","gamma","c.amb","c.hosp","rho","SE","A","H")               # parameters list
inits <- function(){                                                        # randomly initialise relevant variables
        SE=rbinom(2,N,.2)
        list(pi=c(runif(1),NA),gamma=runif(1),c.amb=rlnorm(1),
        c.hosp=rlnorm(1),rho=runif(1),SE=SE,A=rbinom(2,SE,.2))
}

n.iter <- 20000
n.burnin <- 9500
n.thin <- floor((n.iter-n.burnin)/250)

chemo <- jags(dataJags, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
## NB: If initial values for SE are not compatible with the rest of the model, JAGS stops with an error.
##     The solution is to re-launch the whole model until it does work.

print(chemo,digits=3,intervals=c(0.025, 0.975))
attach.bugs(chemo$BUGSoutput)


## Cost-effectiveness analysis
# Defines the variables of cost and effectiveness
e <- c <- matrix(NA,chemo$BUGSoutput$n.sims,2)
e <- N - SE
for (t in 1:2) {
        c[,t] <- c.drug[t]*(N-SE[,t]) + (c.amb+c.drug[t])*A[,t] + (c.hosp+c.drug[t])*H[,t]
}

library(BCEA)
treats <- c("Old Chemotherapy","New Chemotherapy")
m <- bcea(e=e,c=c,ref=2,interventions=treats,Kmax=50000)
summary(m)


## Computes EVPPI
# Analysis of Expected Value of Partial Information using Strong & Oakley's or Sadatsafavi et al.'s methods
inp <- CreateInputs(chemo)        # Create the inputs (parameters & matrix of MCMC simulations for all of them)
x.so <- evppi("rho",inp$mat,m,n.blocks=50)        # Uses S&O method
plot(x.so)
x.sal <- evppi("rho",inp$mat,m,n.seps=2)          # Uses Sadatsafavi et al method
plot(x.sal)
# Compare the two methods
plot(x.so$k,x.so$evi,t="l",lwd=2,xlab="Willingness to pay",ylab="",main="Expected value of partial information")
points(x.so$k,x.so$evppi,t="l",col="blue")
points(x.sal$k,x.sal$evppi,t="l",col="red")
legend("topleft",c("EVPI","EVPPI for rho (S&O)","EVPPI for rho (S et al)"),cex=.7,bty="n",lty=1,
        col=c("black","blue","red"))


# Analysis of Expected Value of Partial Information using the two-stage approach
# Main parameters:
# rho = reduction in side effects
# gamma = chance of ambulatory care
rho.sim <- rho
gamma.sim <- gamma

detach.bugs()
inits <- NULL

# EVPPI for rho
# Replicates the C/E analysis for a fixed value of rho at each iteration
Ustar.phi <- matrix(NA,chemo$BUGSoutput$n.sims,length(m$k))
Rhat <- matrix(NA,chemo$BUGSoutput$n.sims,length(chemo$BUGSoutput$summary[,"Rhat"])-1)
for (i in 1:chemo$BUGSoutput$n.sims) {
        rho <- rho.sim[i]

        dataJags <- list("se","amb","n","N.studies","a.pi","b.pi","a.gamma","b.gamma","m.amb",
                "tau.amb","m.hosp","tau.hosp","N","rho")
        filein <- "modelEVPPI_rho.txt"
        params <- c("pi","gamma","c.amb","c.hosp","SE","A","H")
        n.iter <- 10000
        n.burnin <- 9750
        n.thin <- floor((n.iter-n.burnin)/250)
        chemo.evppi <- jags(dataJags, inits, params, model.file=filein,
                n.chains=2, n.iter, n.burnin, n.thin,
                DIC=TRUE, working.directory=working.dir, progress.bar="text")
        Rhat[i,] <- chemo.evppi$BUGSoutput$summary[,"Rhat"]
        attach.bugs(chemo.evppi$BUGSoutput)

        # Performs the economic analysis given the simulations for all the
        # other parameters, conditionally on rho
        e.temp <- c.temp <- matrix(NA,chemo$BUGSoutput$n.sims,2)
        e.temp <- N - SE
        for (t in 1:2) {
                c.temp[,t] <- c.drug[t]*(N-SE[,t]) + (c.amb+c.drug[t])*A[,t] + (c.hosp+c.drug[t])*H[,t]
        }
        m.evppi <- bcea(e=e.temp,c=c.temp,ref=2,interventions=treats,Kmax=50000,Ktable=25000)

        # Computes the maximum expected utility for that iteration for each value of k
        Ustar.phi[i,] <- apply(m.evppi$Ustar,2,mean)
        rm(m.evppi); detach.bugs()
        print(i)
}

# Computes the average value of the maximum expected utility for each value of k
Vstar.phi <- apply(Ustar.phi,2,mean)
Umax <- apply(apply(m$U,c(2,3),mean),1,max)

# Computes the EVPPI for each value of k
EVPPI_rho <- Vstar.phi - Umax

# Plot the results
plot(m$k,m$evi,t="l",xlab="Willingness to pay",ylab="")
points(m$k,EVPPI_rho,t="l",lty=2)
text <- c("EVPI",expression(paste("EVPPI for ",rho,sep="")))
legend("topleft",text,lty=1:2,cex=.9,box.lty=0)

# Checks for convergence in all (inner) simulations
par(mfrow=c(4,3))
for (i in 1:12) {
        plot(Rhat[,i],t="l",main=rownames(chemo.evppi$BUGSoutput$summary)[i],ylim=c(.99,1.15))
        abline(h=1.1)
}


# EVPPI for gamma
# Replicates the C/E analysis for a fixed value of gamma at each iteration
Ustar.phi <- matrix(NA,chemo$BUGSoutput$n.sims,length(m$k))
Rhat <- matrix(NA,chemo$BUGSoutput$n.sims,length(chemo$BUGSoutput$summary[,"Rhat"])-1)
for (i in 1:chemo$BUGSoutput$n.sims) {
        gamma <- gamma.sim[i]

        dataJags <- list("se","amb","n","N.studies","a.pi","b.pi","m.rho","tau.rho","m.amb",
                "tau.amb","m.hosp","tau.hosp","N","gamma")
        filein <- "modelEVPPI_gamma.txt"
        params <- c("pi","rho","c.amb","c.hosp","SE","A","H")
        n.iter <- 10000
        n.burnin <- 9750
        n.thin <- floor((n.iter-n.burnin)/250)
        chemo.evppi <- jags(dataJags, inits, params, model.file=filein,
                n.chains=2, n.iter, n.burnin, n.thin,
                DIC=TRUE, working.directory=working.dir, progress.bar="text")
        Rhat[i,] <- chemo.evppi$BUGSoutput$summary[,"Rhat"]
        attach.bugs(chemo.evppi$BUGSoutput)

        # Performs the economic analysis given the simulations for all the
        # other parameters, conditionally on gamma
        e.temp <- c.temp <- matrix(NA,chemo$BUGSoutput$n.sims,2)
        e.temp <- N - SE
        for (t in 1:2) {
                c.temp[,t] <- c.drug[t]*(N-SE[,t]) + (c.amb+c.drug[t])*A[,t] + (c.hosp+c.drug[t])*H[,t]
        }
        m.evppi <- bcea(e=e.temp,c=c.temp,ref=2,interventions=treats,Kmax=50000,Ktable=25000)

        # Computes the maximum expected utility for that iteration for each value of k
        Ustar.phi[i,] <- apply(m.evppi$Ustar,2,mean)
        rm(m.evppi); detach.bugs()
        print(i)
}

# Computes the average value of the maximum expected utility for each value of k
Vstar.phi <- apply(Ustar.phi,2,mean)
Umax <- apply(apply(m$U,c(2,3),mean),1,max)

# Computes the EVPPI for each value of k
EVPPI_gamma <- Vstar.phi - Umax

# Plot the results
plot(m$k,m$evi,t="l",xlab="Willingness to pay",ylab="")
points(m$k,EVPPI_gamma,t="l",lty=2)
text <- c("EVPI",expression(paste("EVPPI for ",gamma,sep="")))
legend("topleft",text,lty=1:2,cex=.9,box.lty=0)


# Checks for convergence in all (inner) simulations
par(mfrow=c(4,3))
for (i in 1:12) {
        plot(Rhat[,i],t="l",main=rownames(chemo.evppi$BUGSoutput$summary)[i],ylim=c(.99,1.15))
        abline(h=1.1)
}



## Model averaging
# M1 = complete model
# M2 = rho =: 0.3
# M3 = rho =: 1

# analysis for M2
detach.bugs()
rho <- 1

dataJags <- list("se","n","amb","N.studies","a.pi","b.pi","a.gamma","b.gamma","m.amb","tau.amb","m.hosp","tau.hosp","N","rho")
filein <- "modelEVPPI.txt"
params <- c("pi","gamma","c.amb","c.hosp","rho","SE","A","H")
inits <- function(){
        SE=rbinom(2,N,.2)
        list(pi=c(runif(1),NA),gamma=runif(1),c.amb=rlnorm(1),c.hosp=rlnorm(1),SE=SE,A=rbinom(2,SE,.2))
}

n.iter <- 20000
n.burnin <- 9500
n.thin <- floor((n.iter-n.burnin)/250)
chemo2 <- jags(dataJags, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
print(chemo2,digits=3,intervals=c(0.025, 0.975))
attach.bugs(chemo2$BUGSoutput,overwrite=TRUE)

## Cost-effectiveness analysis
# Defines the variables of cost and effectiveness
e <- c <- matrix(NA,chemo$BUGSoutput$n.sims,2)
e <- N - SE
for (t in 1:2) {
        c[,t] <- c.drug[t]*(N-SE[,t]) + (c.amb+c.drug[t])*A[,t] + (c.hosp+c.drug[t])*H[,t]
}

m2 <- bcea(e=e,c=c,ref=2,interventions=treats,Kmax=50000)
fileout <- "Model_rho_fixed.ps"
postscript(fileout,paper="special",width=6,height=6,horizontal=F)
plot(m2)
dev.off()

## Compares the models based on the DIC
d <- w <- numeric()
d[1] <- chemo$BUGSoutput$DIC
d[2] <- chemo2$BUGSoutput$DIC
dmin <- min(d)

# Computes the model weights
w <- exp(-.5*(d-dmin))/sum(exp(-.5*(d-dmin)))

# Computes the model average health economic analysis
SE <- w[1]*chemo$BUGSoutput$sims.list$SE + w[2]*chemo2$BUGSoutput$sims.list$SE
c.hosp <- w[1]*chemo$BUGSoutput$sims.list$c.hosp + w[2]*chemo2$BUGSoutput$sims.list$c.hosp
c.amb <- w[1]*chemo$BUGSoutput$sims.list$c.amb + w[2]*chemo2$BUGSoutput$sims.list$c.amb
A <- w[1]*chemo$BUGSoutput$sims.list$A + w[2]*chemo2$BUGSoutput$sims.list$A
H <- w[1]*chemo$BUGSoutput$sims.list$H + w[2]*chemo2$BUGSoutput$sims.list$H

e <- c <- matrix(NA,chemo$BUGSoutput$n.sims,2)
e <- N - SE
for (t in 1:2) {
        c[,t] <- c.drug[t]*(N-SE[,t]) + (c.amb+c.drug[t])*A[,t] + (c.hosp+c.drug[t])*H[,t]
}
m.avg <- bcea(e=e,c=c,ref=2,interventions=treats,Kmax=50000)
plot(m.avg)

par(mfrow=c(2,2))
contour(m)
contour(m2)
contour(m.avg)
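Two hedged notes on what the long script above computes (standard definitions, written to match the variable names in the code). The nested loops implement a two-stage Monte Carlo estimate of the expected value of partial perfect information for a parameter phi (here rho or gamma), at each willingness to pay k:

\[
\mathrm{EVPPI}(\phi; k) = \mathrm{E}_{\phi}\!\left[\max_{t}\ \mathrm{E}_{\theta\mid\phi}\,\mathrm{NB}_t(k)\right] - \max_{t}\ \mathrm{E}_{\theta}\,\mathrm{NB}_t(k),
\]

which is the quantity Vstar.phi - Umax. The final section weights the two fitted models by their DIC,

\[
w_j = \frac{\exp\{-\tfrac{1}{2}(\mathrm{DIC}_j - \mathrm{DIC}_{\min})\}}{\sum_{l}\exp\{-\tfrac{1}{2}(\mathrm{DIC}_l - \mathrm{DIC}_{\min})\}},
\]

which is the vector w used to average the posterior simulations before re-running bcea().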


Reply #7
yangke74 (verified) posted on 2014-11-28 14:29:08

Basic Cost-Effectiveness Analysis using JAGS

## Health economic evaluation --- example for Chapter 3
## Loosely based on Fox-Rushby & Cairns (2005). Economic Evaluation. Open University Press

model {
        for (s in 1:N.studies) {
                se[s] ~ dbin(pi[1],n[s])
                amb[s] ~ dbin(gamma,se[s])
        }
        pi[1] ~ dbeta(a.pi,b.pi)
        pi[2] <- pi[1]*rho
        rho ~ dnorm(m.rho,tau.rho)
        gamma ~ dbeta(a.gamma,b.gamma)
        c.amb ~ dlnorm(m.amb,tau.amb)
        c.hosp ~ dlnorm(m.hosp,tau.hosp)

# Predictive distributions on the clinical outcomes
        for (t in 1:2) {
                SE[t] ~ dbin(pi[t],N)
                A[t] ~ dbin(gamma,SE[t])
                H[t] <- SE[t] - A[t]
        }
}
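For clarity (a hedged note, but taken directly from the R scripts earlier in the thread rather than from the book's text), the predictive nodes SE, A and H generated by this model feed the economic summaries as

\[
e_t = N - SE_t, \qquad
c_t = c^{\mathrm{drug}}_t\,(N - SE_t) + (c^{\mathrm{amb}} + c^{\mathrm{drug}}_t)\,A_t + (c^{\mathrm{hosp}} + c^{\mathrm{drug}}_t)\,H_t
\]

for t = 1 (old chemotherapy) and t = 2 (new chemotherapy), with N = 1000 patients and c.drug = (110, 520).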


Reply #8
oink-oink posted on 2014-11-28 14:36:56

Computing Expected Value of Partial Perfect Information Using JAGS (this listing appears to correspond to modelEVPPI_rho.txt: rho enters as data, so it has no prior here)

## Health economic evaluation --- example for Chapter 3 --- Model for the EVPPI
## Loosely based on Fox-Rushby & Cairns (2005). Economic Evaluation. Open University Press

model {
        for (s in 1:N.studies) {
                se[s] ~ dbin(pi[1],n[s])
                amb[s] ~ dbin(gamma,se[s])
        }
        pi[1] ~ dbeta(a.pi,b.pi)
        pi[2] <- pi[1]*rho
        gamma ~ dbeta(a.gamma,b.gamma)
        c.amb ~ dlnorm(m.amb,tau.amb)
        c.hosp ~ dlnorm(m.hosp,tau.hosp)

        for (t in 1:2) {
                SE[t] ~ dbin(pi[t],N)
                A[t] ~ dbin(gamma,SE[t])
                H[t] <- SE[t] - A[t]
        }
}


Reply #9
rmatrix posted on 2014-11-28 14:53:57

Expected Value of Partial Perfect Information using JAGS (this listing appears to correspond to modelEVPPI_gamma.txt: gamma enters as data, so it has no prior here)

## Health economic evaluation --- example for Chapter 3 --- Model for the EVPPI
## Loosely based on Fox-Rushby & Cairns (2005). Economic Evaluation. Open University Press

model {
        for (s in 1:N.studies) {
                se[s] ~ dbin(pi[1],n[s])
                amb[s] ~ dbin(gamma,se[s])
        }
        pi[1] ~ dbeta(a.pi,b.pi)
        pi[2] <- pi[1]*rho
        rho ~ dnorm(m.rho,tau.rho)
        c.amb ~ dlnorm(m.amb,tau.amb)
        c.hosp ~ dlnorm(m.hosp,tau.hosp)

        for (t in 1:2) {
                SE[t] ~ dbin(pi[t],N)
                A[t] ~ dbin(gamma,SE[t])
                H[t] <- SE[t] - A[t]
        }
}



Reply #10
li_mao posted on 2014-11-28 15:30:37

Normal Model using R

## ANALYSIS OF THE NORMAL MODEL - pag 129-141

# define the working directory assuming - the user is already in ~/WebMaterial/ch2
working.dir <- paste(getwd(),"/",sep="")

# Load data from .dta format
library(foreign)
data <- read.dta("../ch2/phbirths.dta")
attach(data)

# Arrange the data and define relevant variables
y <- grams
N <- length(y)
X <- gestate
k <- 5000

# Run JAGS
library(R2jags)

## 4. Uncentred model with blocking (for chapter 4)
## NB Continues the analysis from chapter 2 (check modelNormal.R in the directory /ch2)
X <- gestate
m0 <- c(0,0)
prec <- 1/100000*matrix(c(1,0,0,1),nrow=2,ncol=2)
dataJags <- list("m0","prec","N","y","X","k")
filein <- "modelNormalBlocking.txt"
params <- c("alpha","beta","sigma")
inits <- function(){
        list(lsigma=rnorm(1),coef=rnorm(2,0,1))
}

n.iter <- 10000
n.burnin <- 9500
n.thin <- floor((n.iter-n.burnin)/500)
m.4 <- jags(dataJags, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
print(m.4,digits=3,intervals=c(0.025, 0.975))



## 5. Predictive distribution (for chapter 4)
# 5a. Uncentered version with thinning
X.star <- 28
dataJags2 <- list("N","y","X","k","X.star")
filein <- paste(working.dir,"modelNormal2.txt",sep="")
params <- c("alpha","beta","sigma","y.star")
inits <- function(){
        list(alpha=rnorm(1),beta=rnorm(1),lsigma=rnorm(1),y.star=runif(1))
}

n.iter <- 50000
n.burnin <- 9500
n.thin <- floor((n.iter-n.burnin)/500)
m.5 <- jags(dataJags2, inits, params, model.file=filein,
        n.chains=2, n.iter, n.burnin, n.thin,
        DIC=TRUE, working.directory=working.dir, progress.bar="text")
print(m.5,digits=3,intervals=c(0.025, 0.975))

# 5b. Centred model with missing data
y <- c(y,NA)
N <- length(y)
X <- c(X,28)
# still call centred variable X so can still use same model file
#Xbar <- mean(X)
#X <- X - Xbar
dataJags3 <- list("N","y","X","k")
filein <- "../ch2/modelNormal.txt"
params3 <- c("alpha","beta","sigma","y[1116]")
inits <- function(){
        list(alpha=rnorm(1),beta=rnorm(1),lsigma=rnorm(1),
                y=c(rep(NA,(N-1)),rnorm(1)))
}
n.iter <- 50000
n.burnin <- 9500
n.thin <- floor((n.iter-n.burnin)/500)

m.6 <- jags(dataJags3, inits, params3, model.file=filein,
        n.chains=2,n.iter=n.iter,n.burnin=n.burnin,n.thin=n.thin,DIC=TRUE)
print(m.6,digits=3,intervals=c(0.025, 0.975))
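A hedged note on what parts 5a and 5b above estimate: both target the posterior predictive distribution of the birth weight y* of a new child with gestational age X* = 28 weeks,

\[
p(y^{*} \mid y) = \int p(y^{*} \mid \theta,\, X^{*}=28)\, p(\theta \mid y)\, d\theta, \qquad \theta = (\alpha, \beta, \sigma),
\]

once by monitoring an explicit node y.star defined in modelNormal2.txt (5a), and once via the missing-data trick of appending an NA response with covariate value 28, which JAGS then imputes as the node y[1116] (5b).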

