OP: wwert34rt
Views: 8144 | Replies: 4

[Q&A] How to do partial least squares (PLS) regression


OP (1st floor)
wwert34rt, posted 2014-05-02 17:45:11

I need to run a PLS (partial least squares) analysis. I found the PEW software easy to use, but without a licensed copy I cannot see the full output. Does anyone have a licensed copy of this software? Alternatively, a detailed explanation of how to do the analysis in another package would also be fine. Specifically, I have a dependent variable Y and independent variables X2, X3, X4, X5, X6, X7 and X8.
What I need from the PLS regression: the VIP values and the B coefficients, plus a table of the percentage of variance explained by each PLS component, so I can decide how many components to extract for the analysis. If anyone knows how to do this, please contact me and I will send you the data. Many thanks.



2nd floor
ReneeBK, posted 2014-05-03 01:09:02
For the SAS PLS procedure (PROC PLS), please read:

http://www.ats.ucla.edu/stat/sas/library/pls.pdf

3rd floor
ReneeBK, posted 2014-05-03 01:13:02
If you choose the R package pls, see

http://cran.r-project.org/web/packages/pls/index.html

## Example code from the pls package (see the mvr/plsr help page):
library(pls)
data(yarn)
## Default methods:
yarn.pcr <- pcr(density ~ NIR, 6, data = yarn, validation = "CV")
yarn.pls <- plsr(density ~ NIR, 6, data = yarn, validation = "CV")
yarn.cppls <- cppls(density ~ NIR, 6, data = yarn, validation = "CV")
## Alternative methods:
yarn.oscorespls <- mvr(density ~ NIR, 6, data = yarn, validation = "CV",
method = "oscorespls")
yarn.simpls <- mvr(density ~ NIR, 6, data = yarn, validation = "CV",
method = "simpls")
## Not run:
## Parallelised cross-validation, using transient cluster:
pls.options(parallel = 4) # use mclapply
pls.options(parallel = quote(makeCluster(4, type = "PSOCK"))) # use parLapply
## A new cluster is created and stopped for each cross-validation:
yarn.pls <- plsr(density ~ NIR, 6, data = yarn, validation = "CV")
yarn.pcr <- pcr(density ~ NIR, 6, data = yarn, validation = "CV")
## Parallelised cross-validation, using persistent cluster:
library(parallel)
## This creates the cluster:
pls.options(parallel = makeCluster(4, type = "PSOCK"))
## The cluster can be used several times:
yarn.pls <- plsr(density ~ NIR, 6, data = yarn, validation = "CV")
yarn.pcr <- pcr(density ~ NIR, 6, data = yarn, validation = "CV")
## The cluster should be stopped manually afterwards:
stopCluster(pls.options()$parallel)
## Parallelised cross-validation, using persistent MPI cluster:
## This requires the packages snow and Rmpi to be installed
library(parallel)
## This creates the cluster:
pls.options(parallel = makeCluster(4, type = "MPI"))
## The cluster can be used several times:
yarn.pls <- plsr(density ~ NIR, 6, data = yarn, validation = "CV")
yarn.pcr <- pcr(density ~ NIR, 6, data = yarn, validation = "CV")
## The cluster should be stopped manually afterwards:
stopCluster(pls.options()$parallel)
## It is good practice to call mpi.exit() or mpi.quit() afterwards:
mpi.exit()
## End(Not run)
## Multi-response models:
data(oliveoil)
sens.pcr <- pcr(sensory ~ chemical, ncomp = 4, scale = TRUE, data = oliveoil)
sens.pls <- plsr(sensory ~ chemical, ncomp = 4, scale = TRUE, data = oliveoil)
## Classification
# A classification example utilizing additional response information
# (Y.add) is found in the cppls.fit manual ('See also' above).
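
The listing above is the generic example that ships with the pls package. For the specific question in the opening post, a minimal sketch might look like the block below. It assumes the data are in a data frame called mydata with columns Y and X2 through X8 (variable names taken from the question); the VIP() helper is not part of the pls package, it implements the usual single-response VIP formula and requires a model fitted with the orthogonal-scores method.

library(pls)

## Fit a PLS regression of Y on X2..X8 with up to 5 components,
## leave-one-out cross-validation, and standardized predictors.
fit <- plsr(Y ~ X2 + X3 + X4 + X5 + X6 + X7 + X8, ncomp = 5,
            data = mydata, scale = TRUE, validation = "LOO",
            method = "oscorespls")

summary(fit)          # % variance explained in X and Y per component + CV error
explvar(fit)          # variance in X explained by each component
coef(fit, ncomp = 2)  # regression coefficients (B) using, say, 2 components

## VIP helper (assumption: single response, method = "oscorespls").
VIP <- function(object) {
  SS <- c(object$Yloadings)^2 * colSums(object$scores^2)   # Y variance per component
  Wnorm2 <- colSums(object$loading.weights^2)              # squared norms of the weights
  SSW <- sweep(object$loading.weights^2, 2, SS / Wnorm2, "*")
  sqrt(nrow(SSW) * apply(SSW, 1, cumsum) / cumsum(SS))
}
VIP(fit)              # row a = VIP values when a components are used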

4th floor
ReneeBK, posted 2014-05-03 01:14:11
For SPSS, see http://pic.dhe.ibm.com/infocenter/spssstat/v20r0m0/index.jsp?topic=%2Fcom.ibm.spss.statistics.help%2Fidh_idd_pls_variables.htm
Partial Least Squares Regression

The Partial Least Squares Regression procedure estimates partial least squares (PLS, also known as "projection to latent structure") regression models. PLS is a predictive technique that is an alternative to ordinary least squares (OLS) regression, canonical correlation, or structural equation modeling, and it is particularly useful when predictor variables are highly correlated or when the number of predictors exceeds the number of cases.

PLS combines features of principal components analysis and multiple regression. It first extracts a set of latent factors that explain as much of the covariance as possible between the independent and dependent variables. Then a regression step predicts values of the dependent variables using the decomposition of the independent variables.
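
In standard PLS notation (not taken from the IBM documentation itself, primes denoting transposes), those two steps amount to decomposing the centred data as X = T P' + E and Y = T Q' + F, where the columns of T hold the latent factor scores, and then regressing Y on T; collapsing the two steps gives implied coefficients B = W (P' W)^{-1} Q', with W the factor weights, which is what the regression parameter estimates table reports.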

Availability. PLS is an extension command that requires the IBM® SPSS® Statistics - Integration Plug-In for Python to be installed on the system where you plan to run PLS (see How to Get Integration Plug-Ins). The PLS Extension Module must be installed separately, and can be downloaded from http://www.ibm.com/developerworks/spssdevcentral.

Tables. Proportion of variance explained (by latent factor), latent factor weights, latent factor loadings, independent variable importance in projection (VIP), and regression parameter estimates (by dependent variable) are all produced by default.

Charts. Variable importance in projection (VIP), factor scores, factor weights for the first three latent factors, and distance to the model are all produced from the Options tab.



This procedure pastes PLS command syntax.

See PLS Algorithms for computational details for this procedure.




5th floor
wwert34rt, posted 2014-05-03 14:35:31
These are all in English; is there nothing in Chinese?
