Original poster: SPSSCHEN

[Recommended][Discussion] Factor Analysis Question

11
SPSSCHEN posted on 2006-4-30 10:46:00

Dr Veena,


I will address one of your queries below: is principal component analysis better or worse than confirmatory factor analysis?

Firstly, I think you are confusing the type of extraction, which is principal components analysis (PCA) versus principal axis factoring (PAF), with confirmatory versus exploratory factor analysis (the purpose of undertaking your factor analysis). When comparing principal components analysis with other extraction methods, which one is best depends on the purpose for which you are running the analysis. Factor analysis (PAF in SPSS) and PCA differ in the values they assign to the diagonal of the correlation matrix. While PCA assigns a 1 to each diagonal entry (in other words, it considers the total variance of the items), PAF uses only common variance and leaves out unexplained variance by assigning the squared multiple correlations to the diagonal. In other words, PAF extracts only the common (explained) variance, while PCA extracts both explained and unexplained variance.
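The diagonal treatment described above can be sketched numerically. This is a minimal illustration in Python with made-up data, not SPSS syntax; `smc` here is the squared multiple correlation of each variable on the others.

```python
# Sketch: how PCA and PAF treat the diagonal of the correlation matrix R.
# Data and variable names are illustrative, not from the original post.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] += 0.8 * X[:, 0]          # induce some common variance
X[:, 2] += 0.5 * X[:, 0]

R = np.corrcoef(X, rowvar=False)  # diagonal is all 1s: total variance (PCA)

# PAF: replace each diagonal entry with the squared multiple correlation
# (SMC) of that variable on the others, i.e. 1 - 1/diag(R^-1).
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
R_paf = R.copy()
np.fill_diagonal(R_paf, smc)      # only common variance stays on the diagonal

# Extraction is an eigendecomposition in both cases.
pca_eigvals = np.linalg.eigvalsh(R)[::-1]
paf_eigvals = np.linalg.eigvalsh(R_paf)[::-1]
print(pca_eigvals.sum())          # 5.0: the total variance of 5 variables
print(paf_eigvals.sum())          # smaller: unique variance has been left out
```

The two eigenvalue sums make the contrast concrete: the PCA matrix always sums to the number of variables, while the PAF matrix sums to only the common variance.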

While you can attempt to force the data into a set number of factors, you do not have the flexibility (unless I am mistaken here) of being able to assign items to a particular factor and then test how well they load. You have limited control over what you can and cannot do with traditional factor analysis.

I think that Amos, from SPSS, is much better suited to your needs than SPSS factor analysis.


12
SPSSCHEN posted on 2006-4-30 10:49:00
Greetings!!!

Just a few questions pertaining to the interpretation of the output for
discriminant analysis:

1. What is the utility of using both Wilks' lambda and F when assessing the
significance of the individual variables (IVs) and the stepwise results?

2. Generally, for the runs I have conducted, Wilks' lambda is quite large
(close to 1) even when significant, yet this would suggest a relatively
large amount of within-group variance. Why is that so?

3. The Wilks' lambda table, which assesses the significance of each function,
also includes a chi-square. How is the chi-square interpreted within the
context of discriminant analysis, given that it is usually used to assess
frequency counts?

4. How is the eigenvalue itself interpreted in discriminant analysis? In
factor analysis, one unit is assigned to each variable and is interpreted as
a standardized form of variance. How is this related to what I get in
discriminant analysis? It seems that the squared canonical correlation is
what is relevant in the eigenvalue table.

5. When interpreting the canonical correlation for a two-group discriminant
analysis, is it similar to a multiple correlation with one discriminant
function and multiple IVs?
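Questions 2 through 5 hinge on a few algebraic identities. The sketch below uses illustrative numbers (not output from any real run) to show how, for a single discriminant function, the eigenvalue, Wilks' lambda, and the squared canonical correlation fit together, and how the chi-square reported alongside Wilks' lambda is usually obtained via Bartlett's approximation.

```python
# For one discriminant function with eigenvalue lam:
#   Wilks' lambda          = 1 / (1 + lam)
#   canonical correlation^2 = lam / (1 + lam)
# so lambda = 1 - canonical_corr_sq: a "large" lambda near 1 simply means
# little between-group variance relative to within-group variance.
import math

lam = 0.25                              # a modest, illustrative eigenvalue
wilks_lambda = 1.0 / (1.0 + lam)        # 0.8
can_corr_sq = lam / (1.0 + lam)         # 0.2: 20% between-group variance

print(wilks_lambda + can_corr_sq)       # 1.0 for a single function

# The chi-square is Bartlett's large-sample approximation for testing
# lambda, not a frequency-count chi-square:
#   chi2 = -(n - 1 - (p + g)/2) * ln(lambda)
n, p, g = 100, 4, 2                     # cases, predictors, groups (made up)
chi2 = -(n - 1 - (p + g) / 2.0) * math.log(wilks_lambda)
print(round(chi2, 2))                   # 21.42
```

This also answers question 5: for two groups there is exactly one function, and its canonical correlation behaves like a multiple correlation between the grouping variable and the set of IVs.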

Thanks in advance for your assistance

Estelle Campenni, Ph.D.
Marywood University

13
SPSSCHEN posted on 2006-4-30 10:51:00

I am running a factor analysis in SPSS. Now, since some of the variables
I am interested in are dichotomous, I am trying to get around the
limitations of what sort of data is suitable for FA by creating a
Spearman's correlation matrix and using that as input for the FA procedure.

Can anyone tell me if that can be considered acceptable practice? After
all, if I'm working from correlations, and if the correlations are
nonparametric and therefore appropriate for the data, the type of variable
should not have an effect on the results, right?
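The workaround can be sketched outside SPSS. Below, simulated items (an assumption; not the poster's data) yield a Spearman correlation matrix, and a principal-axis-style eigendecomposition stands in for the extraction step SPSS would perform on a matrix supplied as input.

```python
# Sketch: build a Spearman correlation matrix from mixed dichotomous and
# continuous items and extract factors directly from that matrix.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
latent = rng.normal(size=300)
items = np.column_stack([
    (latent + rng.normal(size=300) > 0).astype(float),  # dichotomous item
    (latent + rng.normal(size=300) > 0).astype(float),  # dichotomous item
    latent + rng.normal(size=300),                      # continuous item
])

R, _ = spearmanr(items)            # 3x3 Spearman correlation matrix

# Extraction from the matrix itself: eigendecomposition, loadings as
# eigenvectors scaled by the square roots of the eigenvalues.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
print(loadings[:, 0])              # loadings on the first factor
```

The point of the sketch is that once a correlation matrix is the input, the extraction machinery is indifferent to how the correlations were computed; whether Spearman correlations are the right choice for dichotomous items is the substantive question the next replies address.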

Many thanks for your help, it is very much appreciated.

Nicola Knight

14
SPSSCHEN posted on 2006-4-30 10:52:00

Hi

For dichotomous variables, I'd rather use Kendall's tau correlation
coefficient, or even tetrachoric correlation coefficients.
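Both suggestions can be sketched for a single pair of dichotomous items. The example uses SciPy's `kendalltau` and, for the tetrachoric correlation, the classic cosine-pi approximation from the 2x2 table rather than the maximum-likelihood estimate; the data are made up.

```python
# Kendall's tau (tau-b) and an approximate tetrachoric correlation for
# two dichotomous items. Illustrative data, not from the thread.
import numpy as np
from scipy.stats import kendalltau

x = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
y = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])

tau, p = kendalltau(x, y)          # tau-b, which equals phi for 2x2 data
print(round(tau, 3))               # 0.6

# 2x2 table counts: a = both 0, d = both 1, b and c = disagreements.
a = np.sum((x == 0) & (y == 0))
b = np.sum((x == 0) & (y == 1))
c = np.sum((x == 1) & (y == 0))
d = np.sum((x == 1) & (y == 1))

# Cosine-pi approximation to the tetrachoric correlation.
r_tet = np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c))))
print(round(r_tet, 3))             # 0.809
```

As the output suggests, the tetrachoric estimate is typically larger than tau for the same table, since it estimates the correlation of the underlying continuous variables assumed to generate the dichotomies.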

Regards,
Marta
biostatistics@terra.es

15
SPSSCHEN posted on 2006-4-30 10:53:00
What comes to mind is homogeneity analysis, a version of factor analysis applied to categorical variables.

16
SPSSCHEN posted on 2006-4-30 10:54:00
Quoting SPSSCHEN's post of 2006-4-30 10:53:00:
What comes to mind is homogeneity analysis, a version of factor analysis applied to categorical variables.
Homogeneity analysis is in the Data Reduction menu under Optimal Scaling. Choosing the button "All variables multiple nominal" gives homogeneity analysis (called HOMALS in versions before 13 and Multiple Correspondence Analysis from 13 on). But if you want an ordinal scaling level for ordinal variables, choose the button "Some variables not multiple nominal" to obtain CATPCA.

Anita van der Kooij
Data Theory Group
Leiden University

17
SPSSCHEN posted on 2006-4-30 10:55:00

Thanks for all the replies. I guess I should have provided more details in order to make my need clear. Sorry I didn't last time.

I had to use two different scales with various subscales (behavioural dimensions) to cover all the variables I need to analyze my hypotheses; however, these scales have overlapping dimensions as well.
What I need to do is reduce all the scales to a reasonable number of factors/variables so that I can avoid collinearity problems in further steps of the analysis. The scales are Likert-style; some consist of more questions than others, and each scale has its own answer format (0-2, 1-3, 1-5). Therefore I have numeric intervals with different ranges.

I have other independent variables which might be correlated with subscales. I need to test them as well.

I have been trying factor analysis to reduce these subscales, but I am not sure whether it is meaningful to include all three scales in raw format without any normalization.

My variables do not lack independence, but those subscales have high correlations. I had to use these scales because I couldn't find a single scale that covers all the dimensions I need. So I feel obliged to reduce the number of variables derived from those subscales in order to obtain a more meaningful model for the analysis.

For example, two of the scales have a subscale to measure hyperactivity; one ranges from 0 to 10 and the other ranges from 3 to 12. I am looking for a statistically meaningful way to reach one numeric variable for hyperactivity and related items, if any. Do I need to normalize these subscales prior to entering Data Reduction, and if it is to be factor analysis, what combination do you suggest?
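One common approach to the range problem, sketched below with simulated scores (an assumption, not the poster's data), is to z-score each subscale so that differing ranges no longer matter. Note that if the factor analysis is run on the correlation matrix, as SPSS does by default, this standardization is already implicit.

```python
# Sketch: put two hyperactivity subscales with different score ranges on a
# common footing by z-scoring each one. Simulated data for illustration.
import numpy as np

rng = np.random.default_rng(2)
hyper_a = rng.integers(0, 11, size=150).astype(float)   # scored 0-10
hyper_b = rng.integers(3, 13, size=150).astype(float)   # scored 3-12

def zscore(v):
    """Standardize to mean 0, sample standard deviation 1."""
    return (v - v.mean()) / v.std(ddof=1)

scales = np.column_stack([zscore(hyper_a), zscore(hyper_b)])
print(scales.mean(axis=0).round(6))    # approximately [0, 0]
print(scales.std(axis=0, ddof=1))      # [1, 1]
```

After standardizing, the two subscales can be averaged (or factor-analyzed) to yield a single hyperactivity score without the 0-10 scale dominating the 3-12 one by sheer variance.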

The behaviours are expected to affect peer status, and including the subscales as-is would cause problems due to the situation explained above.

So this is the problem I am facing now.
