
[Bayesian monograph] Doing Bayesian Data Analysis: A Tutorial with R and BUGS


#1 [OP] 农村固定观察点, posted 2014-6-20 21:51:56


Doing Bayesian Data Analysis: A Tutorial with R and BUGS


John Kruschke



Book Description
Publication Date: Oct. 27, 2010
ISBN-10: 0123814855 | ISBN-13: 978-0123814852
Edition: 1
There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis tractable and accessible to a wide audience. Doing Bayesian Data Analysis: A Tutorial with R and BUGS is for first-year graduate students or advanced undergraduates and provides an accessible approach, as all mathematics is explained intuitively and with concrete examples. It assumes only algebra and 'rusty' calculus. Unlike other textbooks, this book begins with the basics, including essential concepts of probability and random sampling, and gradually climbs all the way to advanced hierarchical modeling methods for realistic data. The text provides complete examples with the R programming language and BUGS software (both freeware), beginning with basic programming examples and working up gradually to complete programs for complex analyses and presentation graphics. These templates can be easily adapted to a large variety of students' own research needs. The textbook bridges students from their undergraduate training into modern Bayesian methods.
-Accessible, including the basics of essential concepts of probability and random sampling
-Examples with R programming language and BUGS software
-Comprehensive coverage of all scenarios addressed by non-Bayesian textbooks: t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis)
-Coverage of experiment planning
-R and BUGS computer programming code on website
-Exercises have explicit purposes and guidelines for accomplishment
Product Details
  • Hardcover: 672 pages
  • Publisher: Academic Press; 1 edition (Oct. 27 2010)
  • Language: English
  • ISBN-10: 0123814855
  • ISBN-13: 978-0123814852
  • Product Dimensions: 23.6 x 19.3 x 3.3 cm
  • Shipping Weight: 1.2 Kg
Product Description

Review
"I think it fills a gaping hole in what is currently available, and will serve to create its own market as researchers and their students transition towards the routine application of Bayesian statistical methods." -Prof. Michael Lee, University of California, Irvine, and President of the Society for Mathematical Psychology
"Kruschke's text covers a much broader range of traditional experimental designs ... [it] has the potential to change the way most cognitive scientists and experimental psychologists approach the planning and analysis of their experiments." -Prof. Geoffrey Iverson, University of California, Irvine, and past President of the Society for Mathematical Psychology
"John Kruschke has written a book on Statistics. It's better than others for reasons stylistic. It also is better because it is Bayesian. To find out why, buy it -- it's truly amazin'!" -James L. (Jay) McClelland, Lucie Stern Professor & Chair, Dept. of Psychology, Stanford University

From the Back Cover
There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis: A Tutorial with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all scenarios addressed by non-Bayesian textbooks: t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, correlation, multiple regression, and chi-square (contingency table analysis).
This book is intended for first year graduate students or advanced undergraduates. It provides a bridge between undergraduate training and modern Bayesian methods for data analysis, which is becoming the accepted research standard. Prerequisite is knowledge of algebra and basic calculus.
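The book's pedagogy starts from first principles: a posterior over a parameter is just the prior reweighted by the likelihood. In that spirit, here is a minimal grid-approximation sketch for estimating a coin's bias; this example is illustrative of the approach, not code from the book.

```r
# Grid approximation of a posterior for a coin's bias theta
# (illustrative sketch; numbers are hypothetical, not from the book).
theta = seq( 0 , 1 , length=1001 )         # grid of candidate bias values
prior = dbeta( theta , 1 , 1 )             # uniform Beta(1,1) prior
z = 7 ; N = 10                             # suppose 7 heads in 10 flips
likelihood = theta^z * (1-theta)^(N-z)     # Bernoulli likelihood at each grid point
posterior = prior * likelihood
posterior = posterior / sum(posterior)     # normalize over the grid
# The posterior mean should closely match the conjugate Beta(1+z,1+N-z)
# result, (1+z)/(2+N) = 8/12:
postMean = sum( theta * posterior )
```

With a fine enough grid this matches the closed-form conjugate answer; the book uses the same idea before introducing MCMC for models where no closed form exists.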

  • https://bbs.pinggu.org/thread-1048913-1-1.html
  • https://sites.google.com/site/doingbayesiandataanalysis/1st-ed-stuff
  • https://github.com/boboppie/kruschke-doing_bayesian_data_analysis




#2 zendream, posted 2014-6-23 08:42:39
graphics.off()
rm(list=ls(all=TRUE))
fnroot = "ANOVAonewayBrugs"
library(BRugs)         # Kruschke, J. K. (2010). Doing Bayesian data analysis:
                       # A Tutorial with R and BUGS. Academic Press / Elsevier.
#------------------------------------------------------------------------------
# THE MODEL.

modelstring = "
# BUGS model specification begins here...
model {
  for ( i in 1:Ntotal ) {
    y[i] ~ dnorm( mu[i] , tau )
    mu[i] <- a0 + a[x[i]]
  }
  #
  tau <- pow( sigma , -2 )
  sigma ~ dunif(0,10) # y values are assumed to be standardized
  #
  a0 ~ dnorm(0,0.001) # y values are assumed to be standardized
  #
  for ( j in 1:NxLvl ) { a[j] ~ dnorm( 0.0 , atau ) }
  atau <- 1 / pow( aSD , 2 )
  aSD <- abs( aSDunabs ) + .1
  aSDunabs ~ dt( 0 , 0.001 , 2 )
}
# ... end BUGS model specification
" # close quote for modelstring
# Write model to a file, and send to BUGS:
writeLines(modelstring,con="model.txt")
modelCheck( "model.txt" )

#------------------------------------------------------------------------------
# THE DATA.

# Specify data source:
dataSource = c( "McDonaldSK1991" , "SolariLS2008" , "Random" )[1]
# Load the data:

if ( dataSource == "McDonaldSK1991" ) {
  fnroot = paste( fnroot , dataSource , sep="" )
  datarecord = read.table( "McDonaldSK1991data.txt", header=T ,
                           colClasses=c("factor","numeric") )
  y = as.numeric(datarecord$Size)
  Ntotal = length(datarecord$Size)
  x = as.numeric(datarecord$Group)
  xnames = levels(datarecord$Group)
  NxLvl = length(unique(datarecord$Group))
  contrastList = list( BIGvSMALL = c(-1/3,-1/3,1/2,-1/3,1/2) ,
                       ORE1vORE2 = c(1,-1,0,0,0) ,
                       ALAvORE = c(-1/2,-1/2,1,0,0) ,
                       NPACvORE = c(-1/2,-1/2,1/2,1/2,0) ,
                       USAvRUS = c(1/3,1/3,1/3,-1,0) ,
                       FINvPAC = c(-1/4,-1/4,-1/4,-1/4,1) ,
                       ENGvOTH = c(1/3,1/3,1/3,-1/2,-1/2) ,
                       FINvRUS = c(0,0,0,-1,1) )
}

if ( dataSource == "SolariLS2008" ) {
  fnroot = paste( fnroot , dataSource , sep="" )
  datarecord = read.table("SolariLS2008data.txt", header=T ,
                           colClasses=c("factor","numeric") )
  y = as.numeric(datarecord$Acid)
  Ntotal = length(datarecord$Acid)
  x = as.numeric(datarecord$Type)
  xnames = levels(datarecord$Type)
  NxLvl = length(unique(datarecord$Type))
  contrastList = list( G3vOTHER = c(-1/8,-1/8,1,-1/8,-1/8,-1/8,-1/8,-1/8,-1/8) )
}

if ( dataSource == "Random" ) {
  fnroot = paste( fnroot , dataSource , sep="" )
  #set.seed(47405)
  ysdtrue = 4.0
  a0true = 100
  atrue = c( 2 , -2 ) # sum to zero
  npercell = 8
  datarecord = matrix( 0, ncol=2 , nrow=length(atrue)*npercell )
  colnames(datarecord) = c("y","x")
  rowidx = 0
  for ( xidx in 1:length(atrue) ) {
    for ( subjidx in 1:npercell ) {
      rowidx = rowidx + 1
      datarecord[rowidx,"x"] = xidx
      datarecord[rowidx,"y"] = ( a0true + atrue[xidx] + rnorm(1,0,ysdtrue) )
    }
  }
  datarecord = data.frame( y=datarecord[,"y"] , x=as.factor(datarecord[,"x"]) )
  y = as.numeric(datarecord$y)
  Ntotal = length(y)
  x = as.numeric(datarecord$x)
  xnames = levels(datarecord$x)
  NxLvl = length(unique(x))
  # Construct list of all pairwise comparisons, to compare with NHST TukeyHSD:
  contrastList = NULL
  for ( g1idx in 1:(NxLvl-1) ) {
    for ( g2idx in (g1idx+1):NxLvl ) {
      cmpVec = rep(0,NxLvl)
      cmpVec[g1idx] = -1
      cmpVec[g2idx] = 1
      contrastList = c( contrastList , list( cmpVec ) )
    }
  }
}

# Specify the data in a form that is compatible with BRugs model, as a list:
ySDorig = sd(y)
yMorig = mean(y)
z = ( y - yMorig ) / ySDorig
datalist = list(
  y = z ,
  x = x ,
  Ntotal = Ntotal ,
  NxLvl = NxLvl
)
# Get the data into BRugs:
modelData( bugsData( datalist ) )

#------------------------------------------------------------------------------
# INITIALIZE THE CHAINS.

# Autocorrelation within chains is large, so use several chains to reduce
# degree of thinning. But we still have to burn-in all the chains, which takes
# more time with more chains (on serial CPUs).
nchain = 5
modelCompile( numChains = nchain )

if ( F ) {
   modelGenInits() # often won't work for diffuse prior
} else {
  # initialization based on data
  theData = data.frame( y=datalist$y , x=factor(x,labels=xnames) )
  a0 = mean( theData$y )
  a = aggregate( theData$y , list( theData$x ) , mean )[,2] - a0
  ssw = aggregate( theData$y , list( theData$x ) ,
                  function(x){var(x)*(length(x)-1)} )[,2]
  sp = sqrt( sum( ssw ) / length( theData$y ) )
  genInitList <- function() {
    return(
        list(
            a0 = a0 ,
            a = a ,
            sigma = sp ,
            aSDunabs = sd(a)
        )
    )
  }
  for ( chainIdx in 1 : nchain ) {
    modelInits( bugsInits( genInitList ) )
  }
}

#------------------------------------------------------------------------------
# RUN THE CHAINS

# burn in
BurnInSteps = 10000
modelUpdate( BurnInSteps )
# actual samples
samplesSet( c( "a0" , "a" , "sigma" , "aSD" ) )
stepsPerChain = ceiling(5000/nchain)
thinStep = 750
modelUpdate( stepsPerChain , thin=thinStep )

#------------------------------------------------------------------------------
# EXAMINE THE RESULTS

source("plotChains.R")
source("plotPost.R")

checkConvergence = T
if ( checkConvergence ) {
   sumInfo = plotChains( "a0" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "a" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "sigma" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "aSD" , saveplots=T , filenameroot=fnroot )
}

# Extract and plot the SDs:
sigmaSample = samplesSample("sigma")
aSDSample = samplesSample("aSD")
windows()
layout( matrix(1:2,nrow=2) )
par( mar=c(3,1,2.5,0) , mgp=c(2,0.7,0) )
plotPost( sigmaSample , xlab="sigma" , main="Cell SD" , breaks=30 )
plotPost( aSDSample , xlab="aSD" , main="a SD" , breaks=30 )
dev.copy2eps(file=paste(fnroot,"SD.eps",sep=""))

# Extract a values:
a0Sample = samplesSample( "a0" )
chainLength = length(a0Sample)
aSample = array( 0 , dim=c( datalist$NxLvl , chainLength ) )
for ( xidx in 1:datalist$NxLvl ) {
   aSample[xidx,] = samplesSample( paste("a[",xidx,"]",sep="") )
}

# Convert to zero-centered b values:
mSample = array( 0, dim=c( datalist$NxLvl , chainLength ) )
for ( stepIdx in 1:chainLength ) {
    mSample[,stepIdx] = ( a0Sample[stepIdx] + aSample[,stepIdx] )
}
b0Sample = apply( mSample , 2 , mean )
bSample = mSample - matrix(rep( b0Sample ,NxLvl),nrow=NxLvl,byrow=T)
# Convert from standardized b values to original scale b values:
b0Sample = b0Sample * ySDorig + yMorig
bSample = bSample * ySDorig

# Plot b values:
windows(datalist$NxLvl*2.75,2.5)
layout( matrix( 1:datalist$NxLvl , nrow=1 ) )
par( mar=c(3,1,2.5,0) , mgp=c(2,0.7,0) )
for ( xidx in 1:datalist$NxLvl ) {
    plotPost( bSample[xidx,] , breaks=30 ,
              xlab=bquote(beta*1[.(xidx)]) ,
              main=paste("x:",xnames[xidx]) )
}
dev.copy2eps(file=paste(fnroot,"b.eps",sep=""))

# Display contrast analyses
nContrasts = length( contrastList )
if ( nContrasts > 0 ) {
   nPlotPerRow = 5
   nPlotRow = ceiling(nContrasts/nPlotPerRow)
   nPlotCol = ceiling(nContrasts/nPlotRow)
   windows(3.75*nPlotCol,2.5*nPlotRow)
   layout( matrix(1:(nPlotRow*nPlotCol),nrow=nPlotRow,ncol=nPlotCol,byrow=T) )
   par( mar=c(4,0.5,2.5,0.5) , mgp=c(2,0.7,0) )
   for ( cIdx in 1:nContrasts ) {
       contrast = matrix( contrastList[[cIdx]],nrow=1) # make it a row matrix
       incIdx = contrast!=0
       histInfo = plotPost( contrast %*% bSample , compVal=0 , breaks=30 ,
                xlab=paste( round(contrast[incIdx],2) , xnames[incIdx] ,
                            c(rep("+",sum(incIdx)-1),"") , collapse=" " ) ,
                cex.lab = 1.0 ,
                main=paste( "X Contrast:", names(contrastList)[cIdx] ) )
   }
   dev.copy2eps(file=paste(fnroot,"xContrasts.eps",sep=""))
}

#==============================================================================
# Do NHST ANOVA and t tests:

theData = data.frame( y=y , x=factor(x,labels=xnames) )
aovresult = aov( y ~ x , data = theData ) # NHST ANOVA
cat("\n------------------------------------------------------------------\n\n")
print( summary( aovresult ) )
cat("\n------------------------------------------------------------------\n\n")
print( model.tables( aovresult , "means" ) , digits=4 )
windows()
boxplot( y ~ x , data = theData )
cat("\n------------------------------------------------------------------\n\n")
print( TukeyHSD( aovresult , "x" , ordered = FALSE ) )
windows()
plot( TukeyHSD( aovresult , "x" ) )
if ( T ) {
  for ( xIdx1 in 1:(NxLvl-1) ) {
    for ( xIdx2 in (xIdx1+1):NxLvl ) {
      cat("\n----------------------------------------------------------\n\n")
      cat( "xIdx1 = " , xIdx1 , ", xIdx2 = " , xIdx2 ,
           ", M2-M1 = " , mean(y[x==xIdx2])-mean(y[x==xIdx1]) , "\n" )
      print( t.test( y[x==xIdx2] , y[x==xIdx1] , var.equal=T ) ) # t test
    }
  }
}
cat("\n------------------------------------------------------------------\n\n")

#==============================================================================
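The script above converts the baseline-plus-deflection samples (a0, a[]) into sum-to-zero b values on the R side, after sampling: it forms the cell means m[j] = a0 + a[j] at each MCMC step, takes their grand mean as b0, and subtracts it. That arithmetic can be checked in isolation on toy numbers (the values below are made up purely for illustration):

```r
# Toy check of the post-hoc sum-to-zero conversion (hypothetical values,
# standing in for real MCMC samples; one column per chain step).
NxLvl = 3 ; chainLength = 4
a0Sample = c( 10 , 11 , 9 , 10 )            # baseline samples
aSample = matrix( c( 1,-2, 1,
                     2,-1,-1,
                     0, 1,-1,
                     3, 0,-3 ) , nrow=NxLvl ) # deflection samples
mSample = sweep( aSample , 2 , a0Sample , "+" ) # cell means m[j] = a0 + a[j]
b0Sample = apply( mSample , 2 , mean )          # grand mean at each step
bSample = sweep( mSample , 2 , b0Sample , "-" ) # zero-centered deflections
# Every column of bSample now sums to zero by construction.
```

The second script in this thread does the same conversion inside the BUGS model itself, which lets convergence be checked directly on b0 and b[].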

#3 zendream, posted 2014-6-23 08:43:10
graphics.off()
rm(list=ls(all=TRUE))
fnroot = "ANOVAonewayBrugsSTZ"
library(BRugs)         # Kruschke, J. K. (2011). Doing Bayesian data analysis:
                       # A Tutorial with R and BUGS. Academic Press / Elsevier.
#------------------------------------------------------------------------------
# THE MODEL.

modelstring = "
# BUGS model specification begins here...
model {
  for ( i in 1:Ntotal ) {
    y[i] ~ dnorm( mu[i] , tau )
    mu[i] <- a0 + a[x[i]]
  }
  #
  tau <- pow( sigma , -2 )
  sigma ~ dunif(0,10) # y values are assumed to be standardized
  #
  a0 ~ dnorm(0,0.001) # y values are assumed to be standardized
  #
  for ( j in 1:NxLvl ) { a[j] ~ dnorm( 0.0 , atau ) }
  atau <- 1 / pow( aSD , 2 )
  aSD <- abs( aSDunabs ) + .1
  aSDunabs ~ dt( 0 , 0.001 , 2 )
  # Convert a0,a[] to sum-to-zero b0,b[] :
  for ( j in 1:NxLvl ) { m[j] <- a0 + a[j] }
  b0 <- mean( m[1:NxLvl] )
  for ( j in 1:NxLvl ) { b[j] <- m[j] - b0 }
}
# ... end BUGS model specification
" # close quote for modelstring
# Write model to a file, and send to BUGS:
writeLines(modelstring,con="model.txt")
modelCheck( "model.txt" )

#------------------------------------------------------------------------------
# THE DATA.

# Specify data source:
dataSource = c( "McDonaldSK1991" , "SolariLS2008" , "Random" )[1]
# Load the data:

if ( dataSource == "McDonaldSK1991" ) {
  fnroot = paste( fnroot , dataSource , sep="" )
  datarecord = read.table( "McDonaldSK1991data.txt", header=T ,
                           colClasses=c("factor","numeric") )
  y = as.numeric(datarecord$Size)
  Ntotal = length(datarecord$Size)
  x = as.numeric(datarecord$Group)
  xnames = levels(datarecord$Group)
  NxLvl = length(unique(datarecord$Group))
  contrastList = list( BIGvSMALL = c(-1/3,-1/3,1/2,-1/3,1/2) ,
                       ORE1vORE2 = c(1,-1,0,0,0) ,
                       ALAvORE = c(-1/2,-1/2,1,0,0) ,
                       NPACvORE = c(-1/2,-1/2,1/2,1/2,0) ,
                       USAvRUS = c(1/3,1/3,1/3,-1,0) ,
                       FINvPAC = c(-1/4,-1/4,-1/4,-1/4,1) ,
                       ENGvOTH = c(1/3,1/3,1/3,-1/2,-1/2) ,
                       FINvRUS = c(0,0,0,-1,1) )
}

if ( dataSource == "SolariLS2008" ) {
  fnroot = paste( fnroot , dataSource , sep="" )
  datarecord = read.table("SolariLS2008data.txt", header=T ,
                           colClasses=c("factor","numeric") )
  y = as.numeric(datarecord$Acid)
  Ntotal = length(datarecord$Acid)
  x = as.numeric(datarecord$Type)
  xnames = levels(datarecord$Type)
  NxLvl = length(unique(datarecord$Type))
  contrastList = list( G3vOTHER = c(-1/8,-1/8,1,-1/8,-1/8,-1/8,-1/8,-1/8,-1/8) )
}

if ( dataSource == "Random" ) {
  fnroot = paste( fnroot , dataSource , sep="" )
  #set.seed(47405)
  ysdtrue = 4.0
  a0true = 100
  atrue = c( 2 , -2 ) # sum to zero
  npercell = 8
  datarecord = matrix( 0, ncol=2 , nrow=length(atrue)*npercell )
  colnames(datarecord) = c("y","x")
  rowidx = 0
  for ( xidx in 1:length(atrue) ) {
    for ( subjidx in 1:npercell ) {
      rowidx = rowidx + 1
      datarecord[rowidx,"x"] = xidx
      datarecord[rowidx,"y"] = ( a0true + atrue[xidx] + rnorm(1,0,ysdtrue) )
    }
  }
  datarecord = data.frame( y=datarecord[,"y"] , x=as.factor(datarecord[,"x"]) )
  y = as.numeric(datarecord$y)
  Ntotal = length(y)
  x = as.numeric(datarecord$x)
  xnames = levels(datarecord$x)
  NxLvl = length(unique(x))
  # Construct list of all pairwise comparisons, to compare with NHST TukeyHSD:
  contrastList = NULL
  for ( g1idx in 1:(NxLvl-1) ) {
    for ( g2idx in (g1idx+1):NxLvl ) {
      cmpVec = rep(0,NxLvl)
      cmpVec[g1idx] = -1
      cmpVec[g2idx] = 1
      contrastList = c( contrastList , list( cmpVec ) )
    }
  }
}

# Specify the data in a form that is compatible with BRugs model, as a list:
ySDorig = sd(y)
yMorig = mean(y)
z = ( y - yMorig ) / ySDorig
datalist = list(
  y = z ,
  x = x ,
  Ntotal = Ntotal ,
  NxLvl = NxLvl
)
# Get the data into BRugs:
modelData( bugsData( datalist ) )

#------------------------------------------------------------------------------
# INITIALIZE THE CHAINS.

# Autocorrelation within chains is large, so use several chains to reduce
# degree of thinning. But we still have to burn-in all the chains, which takes
# more time with more chains (on serial CPUs).
nchain = 5
modelCompile( numChains = nchain )

if ( F ) {
   modelGenInits() # often won't work for diffuse prior
} else {
  # initialization based on data
  theData = data.frame( y=datalist$y , x=factor(x,labels=xnames) )
  a0 = mean( theData$y )
  a = aggregate( theData$y , list( theData$x ) , mean )[,2] - a0
  ssw = aggregate( theData$y , list( theData$x ) ,
                  function(x){var(x)*(length(x)-1)} )[,2]
  sp = sqrt( sum( ssw ) / length( theData$y ) )
  genInitList <- function() {
    return(
        list(
            a0 = a0 ,
            a = a ,
            sigma = sp ,
            aSDunabs = sd(a)
        )
    )
  }
  for ( chainIdx in 1 : nchain ) {
    modelInits( bugsInits( genInitList ) )
  }
}

#------------------------------------------------------------------------------
# RUN THE CHAINS

# burn in
BurnInSteps = 10000
modelUpdate( BurnInSteps )
# actual samples
samplesSet( c( "a0" , "a" , "b0" , "b" , "sigma" , "aSD" ) )
stepsPerChain = ceiling(5000/nchain)
thinStep = 40
modelUpdate( stepsPerChain , thin=thinStep )

#------------------------------------------------------------------------------
# EXAMINE THE RESULTS

source("plotChains.R")
source("plotPost.R")

checkConvergence = T
if ( checkConvergence ) {
   # Merely for insight, take a look at the a0,a[] values:
   sumInfo = plotChains( "a0" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "a" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "b0" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "b" , saveplots=T , filenameroot=fnroot )
   readline("Press any key to clear graphics and continue")
   graphics.off()
   sumInfo = plotChains( "sigma" , saveplots=T , filenameroot=fnroot )
   sumInfo = plotChains( "aSD" , saveplots=T , filenameroot=fnroot )
}

# Extract and plot the SDs:
sigmaSample = samplesSample("sigma")
aSDSample = samplesSample("aSD")
windows()
layout( matrix(1:2,nrow=2) )
par( mar=c(3,1,2.5,0) , mgp=c(2,0.7,0) )
histInfo = plotPost( sigmaSample , xlab="sigma" , main="Cell SD" , breaks=30 , col="skyblue" )
histInfo = plotPost( aSDSample , xlab="aSD" , main="a SD" , breaks=30 , col="skyblue" )
savePlot( file=paste(fnroot,"SD.eps",sep="") , type="eps" )

readline("Press any key to clear graphics and continue")
graphics.off()

# Extract b values:
b0Sample = samplesSample( "b0" )
chainLength = length(b0Sample)
bSample = array( 0 , dim=c( datalist$NxLvl , chainLength ) )
for ( xidx in 1:datalist$NxLvl ) {
   bSample[xidx,] = samplesSample( paste("b[",xidx,"]",sep="") )
}
# Convert from standardized b values to original scale b values:
b0Sample = b0Sample * ySDorig + yMorig
bSample = bSample * ySDorig

# Plot b values:
windows(datalist$NxLvl*2.75,2.5)
layout( matrix( 1:datalist$NxLvl , nrow=1 ) )
par( mar=c(3,1,2.5,0) , mgp=c(2,0.7,0) )
for ( xidx in 1:datalist$NxLvl ) {
    histInfo = plotPost( bSample[xidx,] , breaks=30 , col="skyblue" ,
              xlab=bquote(beta*1[.(xidx)]) ,
              main=paste("x:",xnames[xidx]) )
}
savePlot( file=paste(fnroot,"b.eps",sep="") , type="eps" )

# Display contrast analyses
nContrasts = length( contrastList )
if ( nContrasts > 0 ) {
   nPlotPerRow = 5
   nPlotRow = ceiling(nContrasts/nPlotPerRow)
   nPlotCol = ceiling(nContrasts/nPlotRow)
   windows(3.75*nPlotCol,2.5*nPlotRow)
   layout( matrix(1:(nPlotRow*nPlotCol),nrow=nPlotRow,ncol=nPlotCol,byrow=T) )
   par( mar=c(4,0.5,2.5,0.5) , mgp=c(2,0.7,0) )
   for ( cIdx in 1:nContrasts ) {
       contrast = matrix( contrastList[[cIdx]],nrow=1) # make it a row matrix
       incIdx = contrast!=0
       histInfo = plotPost( contrast %*% bSample , compVal=0 , breaks=30 ,
                xlab=paste( round(contrast[incIdx],2) , xnames[incIdx] ,
                            c(rep("+",sum(incIdx)-1),"") , collapse=" " ) ,
                cex.lab = 1.0 ,
                main=paste( "X Contrast:", names(contrastList)[cIdx] ),
                col="skyblue" )
   }
   savePlot( file=paste(fnroot,"xContrasts.eps",sep="") , type="eps" )
}

#==============================================================================
# Do NHST ANOVA and t tests:

theData = data.frame( y=y , x=factor(x,labels=xnames) )
aovresult = aov( y ~ x , data = theData ) # NHST ANOVA
cat("\n------------------------------------------------------------------\n\n")
print( summary( aovresult ) )
cat("\n------------------------------------------------------------------\n\n")
print( model.tables( aovresult , "means" ) , digits=4 )
windows()
boxplot( y ~ x , data = theData )
cat("\n------------------------------------------------------------------\n\n")
print( TukeyHSD( aovresult , "x" , ordered = FALSE ) )
windows()
plot( TukeyHSD( aovresult , "x" ) )
if ( T ) {
  for ( xIdx1 in 1:(NxLvl-1) ) {
    for ( xIdx2 in (xIdx1+1):NxLvl ) {
      cat("\n----------------------------------------------------------\n\n")
      cat( "xIdx1 = " , xIdx1 , ", xIdx2 = " , xIdx2 ,
           ", M2-M1 = " , mean(y[x==xIdx2])-mean(y[x==xIdx1]) , "\n" )
      print( t.test( y[x==xIdx2] , y[x==xIdx1] , var.equal=T ) ) # t test
    }
  }
}
cat("\n------------------------------------------------------------------\n\n")

#==============================================================================

#4 Nicolle (student-verified), posted 2014-10-26 10:44:41
Notice: the author has been banned or deleted; this content was automatically hidden.

#5 Nicolle (student-verified), posted 2014-10-26 10:47:34
Notice: the author has been banned or deleted; this content was automatically hidden.

#6 chestertsee, posted 2015-10-19 18:06:58
Is there a second edition?

#7 shihuachen, posted 2016-11-8 23:19:43
Thanks for sharing.

#8 WFMZZ, posted 2016-12-2 09:52:41
Thanks for sharing.
