Effect Sizes & Confidence Intervals
* Meta-analysis reports findings in terms of effect sizes. The effect size indicates how much change is evident across all studies and for subsets of studies.
* There are many different effect size metrics, but most fall into two main families:
o standardized mean difference (e.g., Cohen's d or Hedges' g), or
o correlation (e.g., Pearson's r)
* It is possible to convert one effect size into another, so each really just offers a differently scaled measure of the strength of an effect or a relationship.
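As an illustration of such conversions, the commonly used formulas relating d and r (assuming roughly equal group sizes; the function names here are just for this sketch) can be written as:

```python
import math

def r_from_d(d):
    """Convert a standardized mean difference d to a correlation r
    (standard approximation assuming equal group sizes)."""
    return d / math.sqrt(d**2 + 4)

def d_from_r(r):
    """Convert a correlation r back to a standardized mean difference d."""
    return 2 * r / math.sqrt(1 - r**2)
```

A round trip recovers the original value, e.g. `d_from_r(r_from_d(0.8))` returns 0.8, which is what makes the two metrics interchangeable rescalings of the same effect.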
* The standardized mean effect size is computed as the difference between group means divided by the standard deviation of the scores (usually the pooled standard deviation of the two groups).
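A minimal sketch of that computation (Cohen's d with a pooled standard deviation; function names are illustrative, not from any particular package):

```python
import math

def pooled_sd(s1, s2, n1, n2):
    """Pooled standard deviation of two groups (equal-variance assumption)."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def cohens_d(mean1, mean2, s1, s2, n1, n2):
    """Standardized mean difference: mean difference over pooled SD."""
    return (mean1 - mean2) / pooled_sd(s1, s2, n1, n2)
```

For example, with group means 4 and 3 and a common SD of 2 in groups of three, `cohens_d(4, 3, 2, 2, 3, 3)` gives 0.5, i.e. the groups differ by half a standard deviation.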
* In meta-analysis, effect sizes should also be reported with:
o the number of studies and the number of effects used to create the estimate.
o confidence intervals to help readers determine the consistency and reliability of the mean estimated effect size.
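The points above can be sketched with a simple fixed-effect (inverse-variance weighted) summary; this is one standard way to pool effects and attach a 95% confidence interval, though real meta-analyses often use random-effects models instead:

```python
import math

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect size with a 95% confidence
    interval (fixed-effect model; 1.96 is the normal critical value)."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))
    return mean, mean - 1.96 * se, mean + 1.96 * se
```

Reporting the number of studies and effects alongside the interval lets readers judge how much data the estimate rests on and how precise it is.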
* For more information about calculating effect sizes and confidence intervals, see:
o Effect size calculation - David Wilson
o Overview of different effect sizes
o Effect sizes calculation program
* Tests of statistical significance can also be conducted on the effect sizes.
* Different effect sizes are calculated for different constructs of interest, specified in advance by the researchers according to the questions of interest in the research literature.
* Rules of thumb and comparisons with field-specific benchmarks can be used to interpret effect sizes. According to an arbitrary but commonly used interpretation by Cohen (1988), a standardized mean effect size of 0 means no change, negative effect sizes mean a negative change, with .2 a small change, .5 a moderate change, and .8 a large change. Wolf (1986), on the other hand, suggests that .25 is educationally significant and .50 is clinically significant.
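Cohen's rough benchmarks can be sketched as a simple lookup on the magnitude of the effect (the "negligible" label below 0.2 is an added convenience, not part of Cohen's original scheme):

```python
def interpret_d(d):
    """Label the magnitude of a standardized mean effect size using
    Cohen's (1988) conventional benchmarks: .2 small, .5 moderate, .8 large."""
    size = abs(d)
    if size < 0.2:
        return "negligible"
    if size < 0.5:
        return "small"
    if size < 0.8:
        return "moderate"
    return "large"
```

Such labels are a starting point only; as noted above, field-specific benchmarks like Wolf's (1986) can lead to different judgments for the same numeric value.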