Original poster: ReneeBK

[Q&A] Meta-Analysis: fixed and random effects models using SPSS syntax

****************************************************************************
*** Meta-Analysis: Fixed and Random Effects Models
*** Valentim R. Alferes (University of Coimbra, Portugal)
*** valferes@fpce.uc.pt
**
** This syntax does a meta-analysis on a set of studies comparing two
** independent means. It produces results for both fixed and random effects
** models, using Cohen's d statistic, with or without Hedges' correction.
**
** The user has TEN MODES FOR ENTERING SUMMARY DATA (see PART 1):
**
** Mode 1 - Study No., N1, M1, SD1, N2, M2, SD2.
** Mode 2 - Study No., N1, M1, N2, M2, SD_POOL.
** Mode 3 - Study No., Direction of Effect, Difference, N1, SD1, N2, SD2.
** Mode 4 - Study No., Direction of Effect, Difference, N1, N2, SD_POOL.
** Mode 5 - Study No., DF, M1, SD1, M2, SD2.
** Mode 6 - Study No., DF, M1, M2, SD_POOL.
** Mode 7 - Study No., Direction of Effect, DF, Difference, SD1, SD2.
** Mode 8 - Study No., Direction of Effect, DF, Difference, SD_POOL.
** Mode 9 - Study No., Direction of Effect, N1, N2, T_OBS.
** Mode 10 - Study No., Direction of Effect, DF, T_OBS.
**
** There is no limit on the number of studies to be analyzed, and the user
** can enter data in several modes simultaneously or enter all the studies
** in a single mode. In the modes not used, the data lines must be cleared,
** but not the corresponding command lines.
**
** If the input consists of means, the program assumes that Group 1 is the
** experimental or focus group and Group 2 is the control or comparison
** group.
**
** If the input consists of differences between group means or observed t
** values, they are entered as absolute values (DIF=|M1-M2| or T_OBS=|Tobs|)
** and the user specifies the direction of effect in a separate variable
** (DIRECT): +1 if the effect is in the expected direction (Group 1 mean
** greater than Group 2 mean) and -1 if the effect is reversed (Group 1 mean
** less than Group 2 mean).
**
** When the input consists of degrees of freedom, the syntax assumes equal
** Ns if DF is even, and N2=N1-1 if DF is odd.
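** For example, DF=34 yields N1=N2=18, whereas DF=33 yields N1=18 and
** N2=17.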
**
** When the data come from two contrasting ANOVA treatments, the user can
** enter them in modes 2 or 4 and set the pooled standard deviation
** (SD_POOL) equal to the square root of the original ANOVA MS Error.
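** For example, if the reported ANOVA MS Error is 6.25, enter
** SD_POOL=2.50.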
**
** By default, the measure of effect size is Hedges' correction to Cohen's
** d. If you want to use the d statistic without correction, change the
** default in the corresponding command line (see PART 2).
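** As implemented in PART 2, the correction is
** D_H = D*(1-(3/(4*(N1+N2)-9))).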
**
** The OUTPUT is organized in nine tables:
**
** Table 1 - User's data
**
** Table 2 - Program imputations
**
** Table 3 - Individual t tests and observed power
** - N1, N2, degrees of freedom (DF), difference between group means (DIF),
**   observed t (T_OBS), two-tailed probability (P_TWO), and one-tailed
**   probability (P_ONE);
** - Alpha (ALFA), harmonic N (N_HARM), noncentrality parameter (NCP), and
**   observed power (OPOWER).
**   [for the algorithm, see Borenstein et al., 2001]
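** As implemented in PART 2: N_HARM = 2*N1*N2/(N1+N2),
** NCP = |D|*SQRT(N_HARM)/SQRT(2), and OPOWER is the two-tailed power
** obtained from the noncentral t distribution with DF degrees of freedom
** at the critical value for ALFA.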
**
** Table 4 - Measures of effect size and nonoverlap
** Measures of effect size:
** - Cohen's d (D);
**   [Cohen, 1988, p. 20]
** - Hedges' correction (D_H);
**   [D_H = d, in Hedges & Olkin, 1985; D_H = d*, in Hunter & Schmidt,
**   1990; see Cortina & Nouri, 2000, p. 9];
** - r point biserial (R);
** - Squared r point biserial (R2);
** - Binomial Effect Size Display (BESD_LO and BESD_UP).
**   [see formulas in Rosenthal et al. 2000, pp. 8-19]
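** As implemented in PART 2: R = T_OBS/SQRT(T_OBS**2+DF), R2 = R**2,
** BESD_LO = .50-(R/2), and BESD_UP = .50+(R/2).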
**
** Measures of nonoverlap:
** - U1 (percent of nonoverlap between the two distributions);
** - U2 (the highest percent in Group 1 that exceeds the same lowest
**   percent in Group 2);
** - U3 (percentile standing = percentile of the Group 2 distribution
**   corresponding to the 50th percentile of Group 1 distribution);
**   [see formulas in Cohen, 1988, pp. 21-23]
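** As implemented in PART 2, with PHI denoting the standard normal CDF:
** U3 = PHI(D)*100, U2 = PHI(D/2)*100, and U1 = ((2*U2X-1)/U2X)*100,
** where U2X = PHI(|D|/2).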
**
** Table 5 - Non-weighted effect size - Descriptive statistics
** - Number of studies (NSTUDIES), Cohen's d (D), and Hedges' correction
**   (D_H) (minimum, maximum, mean, SEM, and SD).
**
** Table 6 - Fixed effects model
** - Weighted average effect size (EF_SIZE), VARIANCE, and standard error
**   (SE);
** - z test (Z), two-tailed probability (P_TWO), and one-tailed probability
**   (P_ONE);
** - Confidence level (CL), and lower (CI_LOWER) and upper (CI_UPPER)
**   confidence interval limits.
**   [see formulas in Shadish & Haddock, 1994, pp. 265-268]
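** As implemented in PART 2: for each study, V = (N1+N2)/(N1*N2) +
** D**2/(2*(N1+N2)) and W = 1/V; then EF_SIZE = SUM(W*D)/SUM(W),
** VARIANCE = 1/SUM(W), SE = SQRT(VARIANCE), and Z = |EF_SIZE|/SE.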
**
** Table 7 - Chi-square Test for homogeneity of effect size:
** - Q statistic, degrees of freedom (K), and two-tailed probability
**   (P_CHISQ)
**   [see formula in Shadish & Haddock, 1994, p. 266]
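** As implemented in PART 2: Q = SUM(W*D**2) - (SUM(W*D))**2/SUM(W),
** with K = number of studies - 1 degrees of freedom.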
**
** Table 8 - Random Variance Component
** - V0 [see formula in Lipsey & Wilson, 2001, p. 134].
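** As implemented in PART 2: V0 = (Q-K)/(SUM(W) - SUM(W**2)/SUM(W)).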
**
** Table 9 - Random effects model
** - Weighted average effect size (EF_SIZE), VARIANCE, and standard error
**   (SE);
** - z test (Z), two-tailed probability (P_TWO), and one-tailed probability
**   (P_ONE);
** - Confidence level (CL), and lower (CI_LOWER) and upper (CI_UPPER)
**   confidence interval limits.
**   [see formulas and procedures in Lipsey & Wilson, 2001, pp. 134-135]
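** As implemented in PART 2: each study's variance becomes V+V0, the
** weights are recomputed as W = 1/(V+V0), and the Table 6 formulas are
** then reapplied.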
**
** For calculating the observed power of individual studies, the syntax
** assumes alpha = 0.05. For calculating the confidence interval of weighted
** effect sizes, it assumes a confidence level of 95%. You can modify these
** values in the corresponding lines (see PART 2).
**
** After running the syntax, the data behind Tables 2, 3, and 4 remain in
** the SPSS active file, so they can be used for other meta-analytic
** procedures based on different effect size measures or exact
** probabilities (see the other syntaxes on this site).
**
** The example comprises 20 studies entered through all ten input data
** modes.

****************************************************************************

*** BEGINNING OF THE SYNTAX.

** PART 1: ENTERING SUMMARY DATA.

* Mode 1: Enter, row by row, Study No., N1, M1, SD1, N2, M2, SD2.
DATA LIST LIST /Study(F8.0) N1(F8.0) M1(F8.2) SD1(F8.2) N2(F8.0) M2(F8.2)
SD2(F8.2).
BEGIN DATA
1  17  7.46  1.98  16  6.23  2.45
2  15  5.34  2.14  15  4.47  2.51
END DATA.
SAVE OUTFILE=DATA1.

* Mode 2: Enter, row by row, Study No., N1, M1, N2, M2, SD_POOL.
DATA LIST LIST /Study(F8.0) N1(F8.0) M1(F8.2) N2(F8.0) M2(F8.2)
SD_POOL(F8.2).
BEGIN DATA
3  14  7.32  16  8.23  2.67
4  23  6.20  27  4.47  2.21
END DATA.
SAVE OUTFILE=DATA2.

* Mode 3: Enter, row by row, Study No., Direction of Effect, Difference,
* N1, SD1, N2, SD2.
DATA LIST LIST /Study(F8.0) Direct(F8.0) DIF(F8.2) N1(F8.0) SD1(F8.2)
N2(F8.0) SD2(F8.2).
BEGIN DATA
5  +1  1.04  10  3.04  11  2.98
6  -1  2.25  12  2.63  12  2.21
END DATA.
SAVE OUTFILE=DATA3.

* Mode 4: Enter, row by row, Study No., Direction of Effect, Difference, N1,
* N2, SD_POOL.
DATA LIST LIST /Study(F8.0) Direct(F8.0) DIF(F8.2) N1(F8.0) N2(F8.0)
SD_POOL(F8.2).
BEGIN DATA
7  -1  1.32  34  33  2.44
8  +1  1.25  20  20  3.09
END DATA.
SAVE OUTFILE=DATA4.

* Mode 5: Enter, row by row, Study No., DF, M1, SD1, M2, SD2.
DATA LIST LIST /Study(F8.0) DF(F8.0) M1(F8.2) SD1(F8.2) M2(F8.2) SD2(F8.2).
BEGIN DATA
9   34  7.46  1.69  6.33  2.98
10  33  5.34  2.94  5.46  2.31
END DATA.
SAVE OUTFILE=DATA5.

* Mode 6: Enter, row by row, Study No., DF, M1, M2, SD_POOL.
DATA LIST LIST /Study(F8.0) DF(F8.0) M1(F8.2) M2(F8.2) SD_POOL(F8.2).
BEGIN DATA
11  27  7.76  5.29  2.77
12  28  6.30  4.21  2.41
END DATA.
SAVE OUTFILE=DATA6.

* Mode 7: Enter, row by row, Study No., Direction of Effect, DF, Difference,
* SD1, SD2.
DATA LIST LIST /Study(F8.0) Direct(F8.0) DF(F8.0) DIF(F8.2) SD1(F8.2) SD2(F8.2).
BEGIN DATA
13  +1  40  3.07  1.77  2.87
14  -1  37  2.11  2.62  2.21
END DATA.
SAVE OUTFILE=DATA7.

* Mode 8: Enter, row by row, Study No., Direction of Effect, DF, Difference,
* SD_POOL.
DATA LIST LIST /Study(F8.0) Direct(F8.0) DF(F8.0) DIF(F8.2) SD_POOL(F8.2).
BEGIN DATA
15  -1  23  2.22  1.88
16  +1  34  3.17  1.94
END DATA.
SAVE OUTFILE=DATA8.

* Mode 9: Enter, row by row, Study No., Direction of Effect, N1, N2, T_OBS.
DATA LIST LIST /Study(F8.0) Direct(F8.0) N1(F8.0) N2(F8.0) T_OBS(F8.2).
BEGIN DATA
17  +1  20  20  4.74
18  -1  14  15  3.17
END DATA.
SAVE OUTFILE=DATA9.

* Mode 10: Enter, row by row, Study No., Direction of Effect, DF, T_OBS.
DATA LIST LIST /Study(F8.0) Direct(F8.0) DF(F8.0) T_OBS(F8.2).
BEGIN DATA
19  +1  54  5.46
20  -1  49  2.27
END DATA.
SAVE OUTFILE=DATA10.
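* Merge the ten mode files into a single working file.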
GET FILE=DATA1.
ADD FILES/FILE=*/FILE=DATA2/FILE=DATA3/FILE=DATA4/FILE=DATA5
/FILE=DATA6/FILE=DATA7/FILE=DATA8/FILE=DATA9/FILE=DATA10.
EXECUTE.


** PART 2: SETTING ALFA AND CONFIDENCE LEVEL, CHOOSING EFFECT SIZE MEASURE
** AND RUNNING META-ANALYSIS.

* Enter alpha for computing observed power (by default, ALFA = 0.05).
COMPUTE ALFA = 0.05.
EXECUTE.
SORT CASES BY STUDY(A).
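* Derive the direction of effect from the means when both are available.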
IF (M1>=M2) DIRECT=1.
IF (M1<M2) DIRECT=-1.
SUMMARIZE/TABLES=STUDY DIRECT N1 N2 DF M1 M2 DIF SD1 SD2 SD_POOL T_OBS
/FORMAT=VALIDLIST NOCASENUM TOTAL/TITLE="Table 1 - User's data"/CELLS=NONE.
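* Reconstruct N1 and N2 from DF (equal Ns when DF is even; N2=N1-1 when
* DF is odd), then impute DIF, the pooled SD, and T_OBS where missing.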
COMPUTE MOD_DF=MOD(DF,2).
IF (MOD_DF=0) N1=(DF/2)+1.
IF (MOD_DF=0) N2=N1.
IF (MOD_DF=1) N1=((DF+1)/2)+1.
IF (MOD_DF=1) N2=N1-1.
COMPUTE DF=(N1+N2)-2.
IF (M1<=0 OR M1>0) DIF=ABS(M1-M2).
COMPUTE SDX=(((N1-1)*(SD1**2))+((N2-1)*(SD2**2)))/(N1+N2-2).
IF (SD_POOL<=0 OR SD_POOL>0) SDX=SD_POOL**2.
IF (SDX<=0 OR SDX>0) T_OBS=DIF/SQRT(SDX*((1/N1)+(1/N2))).
COMPUTE SD_POOL=SQRT(SDX).
COMPUTE T_OBS=DIRECT*T_OBS.
COMPUTE DIF=DIRECT*DIF.
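* Individual t tests: two-tailed and one-tailed p values, Cohen's d,
* harmonic N, noncentrality parameter, and observed power (noncentral t).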
COMPUTE TABS=ABS(T_OBS).
COMPUTE P_TWO=(1-CDF.T(TABS,DF))*2.
COMPUTE P_ONE=1-CDF.T(TABS,DF).
COMPUTE D=T_OBS*SQRT((1/N1)+(1/N2)).
COMPUTE N_HARM=(2*N1*N2)/(N1+N2).
COMPUTE NCP=ABS((D*SQRT(N_HARM))/SQRT(2)).
COMPUTE T_ALPHA=IDF.T(1-ALFA/2,DF).
COMPUTE POWER1=1-NCDF.T(T_ALPHA,DF,NCP).
COMPUTE POWER2=1-NCDF.T(T_ALPHA,DF,-NCP).
COMPUTE OPOWER=POWER1+POWER2.
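* Effect size and nonoverlap measures: r point biserial, Hedges'
* correction, BESD, and Cohen's U1, U2, U3.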
COMPUTE R=T_OBS/SQRT((T_OBS**2)+DF).
COMPUTE R2=R**2.
COMPUTE D_H=D*(1-(3/(4*(N1+N2)-9))).
COMPUTE BESD_LO=.50-(R/2).
COMPUTE BESD_UP=.50+(R/2).
COMPUTE U3=CDF.NORMAL(D,0,1)*100.
COMPUTE U2=CDF.NORMAL((D/2),0,1)*100.
COMPUTE U2X=CDF.NORMAL(((ABS(D))/2),0,1).
COMPUTE U1=(2*U2X-1)/U2X*100.
FORMATS P_TWO P_ONE ALFA N_HARM NCP OPOWER D D_H R R2 BESD_LO BESD_UP(F8.4)
U1 U2 U3(F8.1).
SUMMARIZE/TABLES=STUDY DIRECT N1 N2 DF M1 M2 DIF SD1 SD2 SD_POOL T_OBS
/FORMAT=VALIDLIST NOCASENUM TOTAL/TITLE="Table 2 - Program imputations"
/CELLS=NONE.
SUMMARIZE/TABLES=STUDY DIRECT DIF DF T_OBS P_TWO P_ONE ALFA N_HARM NCP
OPOWER/FORMAT=VALIDLIST NOCASENUM TOTAL
/TITLE="TABLE 3  Individual T Tests and observed power"/CELLS=NONE.
SUMMARIZE/TABLES=STUDY DIRECT D D_H R R2 BESD_LO BESD_UP U1 U2 U3
/FORMAT=VALIDLIST NOCASENUM TOTAL
/TITLE="Table 4 - Measures of effect size and nonoverlap"/CELLS=NONE.
SUMMARIZE/TABLES=D D_H/FORMAT=NOLIST TOTAL/TITLE="Table 5 - Non-weighted "
+"effect size - Descriptive statistics: Cohen's d and Hedges' correction"
/CELLS=COUNT MIN MAX MEAN SEMEAN STDDEV.
SAVE OUTFILE=META_DATA.

* Choose the effect size measure (Cohen's d = 1; Hedges' correction = 2)
* (by default, ES = 2).
COMPUTE ES = 2.
IF (ES=1) D=D.
IF (ES=2) D=D_H.
EXECUTE.
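* Study-level variances and weights for the fixed effects model.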
COMPUTE V=((N1+N2)/(N1*N2))+((D**2)/(2*(N1+N2))).
COMPUTE W=1/V.
COMPUTE WD=W*D.
COMPUTE WD2=W*D**2.
COMPUTE W2=W**2.
COMPUTE X=1.
EXECUTE.
SAVE OUTFILE=FOUTX.
AGGREGATE/OUTFILE=*/BREAK=X/SUM_W=SUM(W)/SUM_WD=SUM(WD)
/SUM_WD2=SUM(WD2)/SUM_W2=SUM(W2)/NSTUDIES=N.
COMPUTE K=NSTUDIES-1.
COMPUTE EF_SIZE=SUM_WD/SUM_W.
COMPUTE VARIANCE=1/SUM_W.
COMPUTE SE=SQRT(1/SUM_W).
COMPUTE Z=ABS(EF_SIZE)/SE.
COMPUTE P_TWO=(1-CDF.NORMAL(Z,0,1))*2 .
COMPUTE P_ONE=1-CDF.NORMAL(Z,0,1).
EXECUTE.

* Enter the confidence level for the confidence interval (by default, CL = 95%).
COMPUTE CL = 95.
COMPUTE ZCL=IDF.NORMAL((1-(((100-CL)/100)/2)),0,1).
COMPUTE CI_LOWER=EF_SIZE-ZCL*SE.
COMPUTE CI_UPPER=EF_SIZE+ZCL*SE.
COMPUTE Q=SUM_WD2-SUM_WD**2/SUM_W.
COMPUTE P_CHISQ = 1-CDF.CHISQ(Q,K).
COMPUTE V0 = (Q-K)/(SUM_W-SUM_W2/SUM_W) .
EXECUTE.
SAVE OUTFILE=FOUTY/KEEP=V0 X.
FORMATS ALL(F8.4) VARIANCE SE(F8.5) NSTUDIES CL K(F8.0).
SUMMARIZE/TABLES=NSTUDIES EF_SIZE VARIANCE SE Z P_TWO P_ONE CL CI_LOWER
CI_UPPER/FORMAT=LIST NOCASENUM TOTAL /TITLE='Table 6 - Fixed effects model:'
+' Weighted average effect size, z test, and confidence interval'
/CELLS=NONE.
SUMMARIZE/TABLES=Q K P_CHISQ/FORMAT=LIST NOCASENUM TOTAL/TITLE=
'Table 7 - Chi-square test for homogeneity of effect size'/cells=none.
GET FILE=FOUTX.
MATCH FILES /FILE=*/TABLE=FOUTY/BY X.
EXECUTE.
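* Random effects model: add the random variance component V0 to each
* study's variance and recompute the weights.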
COMPUTE V=V+V0.
COMPUTE W=1/V.
COMPUTE WD=W*D.
COMPUTE WD2=W*D**2.
COMPUTE W2=W**2.
EXECUTE.
FORMATS V0(F8.3).
SUMMARIZE/TABLES=v0/FORMAT=NOLIST TOTAL/TITLE='Table 8 - Random variance'
+' component'/CELLS=MEAN.
AGGREGATE/OUTFILE=*/BREAK=X/SUM_W=SUM(W)/SUM_WD=SUM(WD)
/SUM_WD2=SUM(WD2)/SUM_W2=SUM(W2)/NSTUDIES=N.
COMPUTE K=NSTUDIES-1.
COMPUTE EF_SIZE=SUM_WD / SUM_W.
COMPUTE VARIANCE=1/SUM_W.
COMPUTE SE=SQRT(1/SUM_W).
COMPUTE Z=ABS(EF_SIZE)/SE.
COMPUTE P_TWO=(1-CDF.NORMAL(Z,0,1))*2 .
COMPUTE P_ONE=1-CDF.NORMAL(Z,0,1).
EXECUTE.

* Enter the confidence level for the confidence interval (by default, CL = 95%).
COMPUTE CL = 95.
COMPUTE ZCL=IDF.NORMAL((1-(((100-CL)/100)/2)),0,1).
COMPUTE CI_LOWER=EF_SIZE-ZCL*SE.
COMPUTE CI_UPPER=EF_SIZE+ZCL*SE.
FORMATS ALL(F8.4) VARIANCE SE(F8.5) NSTUDIES CL K(F8.0).
SUMMARIZE/TABLES=NSTUDIES EF_SIZE VARIANCE SE Z P_TWO P_ONE CL CI_LOWER
CI_UPPER/FORMAT=LIST NOCASENUM TOTAL /TITLE='Table 9 - Random effects '
+'model: Weighted average effect size, z test, and confidence interval'
/CELLS=NONE.
GET FILE=META_DATA/KEEP=STUDY DIRECT N1 N2 DF M1 M2 DIF SD1 SD2
SD_POOL T_OBS P_TWO P_ONE ALFA N_HARM NCP OPOWER D D_H R R2 BESD_LO
BESD_UP U1 U2 U3.

*** END OF THE SYNTAX.

****************************************************************************
** Note **
**
** Starting at the line
**
** COMPUTE W=1/V.
**
** with effect sizes (D) and variances (V) taken from the original sources,
** this syntax was tested against the data reported in Lipsey and Wilson
** (2001, p. 130, Table 7.1) and Shadish and Haddock (1994, p. 267,
** Table 18.2).
**
** Imputation procedures and individual t tests were checked in SPSS by
** comparing the results with outputs obtained from raw-data examples.
**
** Power calculations match those given by SamplePower (Borenstein et al.,
** 2001), and the measures of effect size and nonoverlap were checked
** against the tabulated values and examples given by Cohen (1988) and
** Rosenthal et al. (2000).
**
** Feel free to use and modify this syntax as you wish. If you want to
** cite it, the proper form is:
**
** Alferes, V. R. (2003). Meta-analysis: Fixed and random effects models
**    [SPSS Syntax File]. Retrieved [Date], from [URL]
****************************************************************************
** References **
**
** Borenstein, M., Rothstein, H., & Cohen, J. (2001). SamplePower 2.0
**    [Computer Manual]. Chicago: SPSS Inc.
** Cohen, J. (1988). Statistical power analysis for the behavioral
**    sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
** Cortina, J. M., & Nouri, H. (2000). Effect sizes for ANOVA designs.
**    Thousand Oaks, CA: Sage.
** Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis.
**    Orlando, FL: Academic Press.
** Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis:
**    Correcting error and bias in research findings. Newbury Park, CA:
**    Sage.
** Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand
**    Oaks, CA: Sage.
** Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and
**    effect sizes in behavioral research: A correlational approach.
**    Cambridge, UK: Cambridge University Press.
** Shadish, W. R., & Haddock, C. K. (1994). Combining estimates of effect
**    size. In H. Cooper and L. V. Hedges (Eds.), The handbook of research
**    synthesis (pp. 261-281). New York: Russell Sage Foundation.
***************************************************************************.



Reply from tmdxyz (2014-5-1 04:39:26):
Nice. Thanks! Learned a lot.

Bump.

Reply from faeyon (2014-5-6 20:34:30):
This complicated?
