How to improve model fit
Given the complexity of structural equation modelling, it is not uncommon to find that the fit of a proposed model is poor. Allowing modification indices to drive the process is not good practice; however, some modifications can be made locally that can substantially improve results. It is good practice to assess the fit of each construct and its items individually to determine whether any items are particularly weak. Items with a low squared multiple correlation (R² less than .20) should be removed from the analysis, as this indicates very high levels of error. Following this, each construct should be modelled in conjunction with every other
construct in the model to determine whether discriminant validity has been achieved. The Phi (φ) value
between two constructs is akin to their covariance, therefore a Phi of 1.0 indicates that the two constructs are
measuring the same thing. One test which is useful to determine whether constructs are significantly
different is Bagozzi et al.'s (1991) discriminant validity test. This constructs a confidence interval around the parameter estimate: phi value ± 1.96 × standard error. If the resulting interval includes 1.0, discriminant validity has not been achieved and further inspection of item cross-loadings is needed. Items with high Lambda-Y modification indices are possible candidates for deletion, as they are likely to be causing the discriminant validity problem. Deleting such cross-loading items is likely to improve fit, and has the advantage that it is unlikely to have any major theoretical repercussions.
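As a minimal sketch of the Bagozzi et al. (1991) test described above, the decision rule can be expressed in a few lines of Python. The phi and standard-error values used below are purely illustrative, not drawn from any real analysis:

```python
def discriminant_validity(phi: float, se: float, z: float = 1.96) -> bool:
    """Bagozzi et al.'s (1991) test: build a confidence interval around the
    phi estimate (phi +/- z * SE). Discriminant validity holds only if the
    interval excludes 1.0 (and -1.0, for negatively related constructs)."""
    lower, upper = phi - z * se, phi + z * se
    return not (lower <= 1.0 <= upper) and not (lower <= -1.0 <= upper)

# Illustrative (hypothetical) values:
print(discriminant_validity(0.85, 0.05))  # True: interval (0.752, 0.948) excludes 1.0
print(discriminant_validity(0.95, 0.04))  # False: interval (0.872, 1.028) contains 1.0
```

A failed test (the second call) would prompt the inspection of item cross-loadings described above.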
A further way in which fit can be improved is through the correlation of error terms. A correlated error generally means that some other influence, not specified within the model, is causing the covariation. If a researcher decides to correlate error terms, there needs to be strong theoretical justification for such a move. Correlating within-factor errors is easier to justify than correlations across latent variables; in either case, it is essential that the statistical and substantive impact be clearly discussed. If researchers feel they can substantiate the decision, correlating error terms is acceptable, but it is a step that should be taken with caution.