Tag: machine

Related posts

Title | Board | Author | Posted | Replies / Views | Last post
Totally Corrective Boosting for Regularized Risk Minimization | Foreign Literature | 能者818 | 2022-3-8 | 0 / 229 | 能者818, 2022-3-8 17:01:40
Random Forest Estimation of Ordered Choice Models | Foreign Literature | mingdashike22 | 2022-3-8 | 0 / 599 | mingdashike22, 2022-3-8 15:13:20
A Hierarchy of Limitations in Machine Learning | Foreign Literature | kedemingshi | 2022-3-8 | 0 / 424 | kedemingshi, 2022-3-8 14:34:00
Machine Learning in Action, 2012 (attachment) | Python | gongyg1 | 2013-3-9 | 14 / 8244 | tianwk, 2019-7-27 20:54:47
Introduction to Machine Learning by Ethem Alpaydin (attachment) | Machine Learning | 0355lihao | 2009-11-1 | 16 / 10071 | acedownload, 2017-6-9 11:22:12
Recommended book: Gaussian Processes for Machine Learning (attachment) | Gauss | prestige | 2008-1-30 | 15 / 10003 | ydc129, 2016-12-17 16:10:03
(Reposted from Renren) An analytical framework for macroeconomic operation (English original: How the Economic Machine Works) (attachment) | Macroeconomics | haoyl123 | 2013-1-20 | 13 / 6947 | 小鱼87, 2016-11-25 13:10:00
[Download] Principles of Data Mining (Adaptive Computation and Machine Learning) (attachment) | Econometrics & Statistical Software | leusong | 2006-1-18 | 17 / 5434 | sacromento, 2016-11-18 06:06:15
[Download] Gaussian Processes for Machine Learning, Carl Edward Rasmussen, 2006 (attachment) | Econometrics & Statistical Software | kxjs2007 | 2010-6-6 | 11 / 5094 | e0g411k014z, 2016-10-21 21:53:00
Data mining series, part 3: Pattern Recognition and Machine Learning (attachment) | Data Analysis & Data Mining | lanfeng0924 | 2009-10-15 | 29 / 8292 | lkwokchu, 2015-6-11 13:12:50
An Introduction to Markov Chain Monte Carlo for Machine Learning (attachment) | Econometrics & Statistical Software | shelihuang | 2009-4-23 | 4 / 2531 | bbslover, 2014-12-18 22:18:02
[Original] Elsevier's Data Mining: Practical Machine Learning Tools and Techniques (attachment) | Data Analysis & Data Mining | lanfeng0924 | 2009-1-24 | 10 / 4966 | bbslover, 2014-12-18 22:16:20
[E-book] Data Mining: Practical Machine Learning Tools and Techniques (attachment) | Data Analysis & Data Mining | yiiiyaaa | 2009-7-18 | 9 / 2459 | Enthuse, 2014-9-18 12:01:54
[Recommended] A classic data mining textbook: Pratical Machine Learning Tools and Techniques with Java implem (attachment) | Econometrics & Statistical Software | bigfeetcrystal | 2009-5-4 | 8 / 4299 | lotuseaters, 2013-7-17 12:11:04
[Recommended] Good book: Data Mining: Practical Machine Learning Tools and Techniques, 2nd editio (attachment) | Econometrics & Statistical Software | yishirl | 2007-5-31 | 29 / 5438 | sky12999, 2011-11-28 17:53:58
China Machine Tools Market: a KPMG analysis report (attachment) | Finance (Theory) | gamexxp | 2006-9-20 | 0 / 1874 | gamexxp, 2011-10-26 00:34:05
[Download] Principles and Theory for Data Mining and Machine Learning (attachment) | Data Analysis & Data Mining | doc1005209137 | 2009-9-7 | 6 / 1962 | zhumengjin, 2009-12-23 22:21:39
[Download] Data Mining: Practical Machine Learning Tools and Techniques (attachment) | Data Analysis & Data Mining | zhuzhu83 | 2009-7-29 | 4 / 2066 | pql, 2009-11-28 09:55:13
[Download] Data Mining: Practical Machine Learning Tools and Techniques (attachment) | Data Analysis & Data Mining | zhuzhu83 | 2009-7-29 | 6 / 2141 | money999, 2009-10-19 22:05:33

Related blog entries

Batch Gradient Descent (BGD)
lww1993 2013-9-23 13:06
# We use batch gradient descent (BGD) on a small data set.
# Read the data
library(xlsx)
data1 <- read.xlsx(file = "D:/Alfred2013/Alfred_Data/dataOnGrain.xlsx",
                   sheetIndex = 1, startRow = 1, endRow = 20,
                   colIndex = 1:4, header = TRUE)
data1 <- scale(data1)
data1 <- data.frame(Density = data1[, 1], SDDensity = data1[, 2],
                    Moisture = data1[, 3], Speed = data1[, 4])
# Draw 12 of the 19 observations as the training set
rowIndex <- sample(x = 19, size = 12, replace = FALSE)
trainingSet <- data1[rowIndex, ]
trainingSet <- cbind(intercept = rep(1, times = 12), trainingSet)
# Model: Speed = beta0 + beta1*Density + beta2*SDDensity + beta3*Moisture
# betaHat   = (beta0, beta1, beta2, beta3)
# parameter = (1, Density, SDDensity, Moisture)
speed <- function(betaHat, parameter) {
  betaHat %*% parameter
}
# Now run BGD on the data
k <- 0
error <- 1
betaHat <- c(1, -1, 1, -1)
learningRate <- 0.001
while ((k <= 1e5) && (error > 1e-6)) {
  k <- k + 1
  error <- 0
  derivative <- rep(0, times = 4)
  for (j in 1:length(betaHat)) {
    # Batch step: the j-th gradient component sums over all 12 training rows
    derivative[j] <- sum((trainingSet$Speed -
                            speed(betaHat, t(trainingSet[, 1:4]))) *
                           trainingSet[, j])
    betaHat[j] <- betaHat[j] + learningRate * derivative[j]
  }
  # Accumulate the squared residuals
  for (i in 1:dim(trainingSet)[1]) {
    data4 <- trainingSet[i, ]  # the i-th training observation
    parameter <- c(1, data4$Density, data4$SDDensity, data4$Moisture)  # add the constant term
    error <- error + (data4$Speed - speed(betaHat, parameter))^2
  }
}
Note: the data and references are the same as in the SGD post below.
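For reference, the per-coefficient loop above collapses into a single matrix step. Here is a minimal vectorized sketch of the same batch update; since dataOnGrain.xlsx is a local file, the synthetic data, seed, and coefficient values below are placeholders, not the author's data:

# Vectorized BGD on synthetic stand-in data (hypothetical example)
set.seed(1)
n <- 19
X <- cbind(1, matrix(rnorm(n * 3), ncol = 3))       # intercept + 3 predictors
colnames(X) <- c("intercept", "Density", "SDDensity", "Moisture")
y <- as.vector(X %*% c(0, -1.5, 1.7, -0.1) + rnorm(n, sd = 0.1))

betaHat <- c(1, -1, 1, -1)
learningRate <- 0.001
for (k in 1:1e4) {
  residual <- y - as.vector(X %*% betaHat)          # all n residuals at once
  betaHat <- betaHat + learningRate * as.vector(t(X) %*% residual)  # one whole-gradient step
}
cbind(BGD = betaHat, lm = coef(lm(y ~ X[, -1])))    # the two columns should agree closely

Updating the whole vector from one shared residual also makes this a true simultaneous batch step, whereas the j-loop above refreshes each coefficient with residuals already affected by the earlier updates in the same sweep.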
Category: Programming | 13 views | 0 comments
A First Try at Stochastic Gradient Descent
lww1993 2013-9-23 11:04
# We use stochastic gradient descent (SGD) on a small data set.
# Read the data
library(xlsx)
data1 <- read.xlsx(file = "D:/Alfred2013/Alfred_Data/dataOnGrain.xlsx",
                   sheetIndex = 1, startRow = 1, endRow = 20,
                   colIndex = 1:4, header = TRUE)
data1 <- scale(data1)
# Draw 12 of the 19 observations as the training set
rowIndex <- sample(x = 19, size = 12, replace = FALSE)
trainingSet <- data1[rowIndex, ]
# Model: Speed = beta0 + beta1*Density + beta2*SDDensity + beta3*Moisture
# betaHat   = (beta0, beta1, beta2, beta3)
# parameter = (1, Density, SDDensity, Moisture)
speed <- function(betaHat, parameter) {
  betaHat %*% parameter
}
# Initial parameter values
betaHat <- c(1, -1, 1, -1)
betaHat2 <- rep(0, times = 4)
learningRate <- 0.01
# Run SGD on the data
k <- 0
error <- 1
while ((k <= 1e7) && (error > 1e-3)) {
  error <- 0
  for (i in 1:dim(trainingSet)[1]) {
    data2 <- trainingSet[i, ]  # the i-th training observation
    parameter <- c(1, data2["Density"], data2["SDDensity"], data2["Moisture"])  # add the constant term
    for (j in 1:length(betaHat)) {
      k <- k + 1
      error <- error + abs(data2["Speed"] - speed(betaHat, parameter))
      # Stochastic step: update the j-th coefficient from this single observation
      betaHat2[j] <- betaHat[j] + learningRate *
        (data2["Speed"] - speed(betaHat, parameter)) * parameter[j]
    }
    betaHat <- betaHat2
  }
}
betaHat
## This is rather slow. Scaling the data first is the key step; without it the
## iteration does not converge.
# betaHat
#  0.08485091 -1.40311144  1.62375133 -0.19552947
# For comparison, the lm() estimates are:
# Coefficients:
# (Intercept)      Density    SDDensity     Moisture
#  -6.588e-16   -1.473e+00    1.741e+00   -1.016e-01

Appendix:
1. Raw data
Density  SDDensity  Moisture  Speed
830      830        11.7      242
840      830        11.7      243
850      830        11.7      236
860      830        11.7      233
870      830        11.7      230
880      830        11.7      230
815      815        14.7      240
830      815        14.7      235
840      815        14.7      231
850      815        14.7      229
860      815        14.7      227
870      815        14.7      224
880      815        14.7      224
754      754        15        236
770      754        15        226
780      754        15        226
790      754        15        226
800      754        15        220
810      754        15        216
2. References
http://cs229.stanford.edu/notes/cs229-notes1.pdf
http://r.789695.n4.nabble.com/Stochastic-Gradient-Ascent-for-logistic-regression-td884272.html#a884273
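The rule iterated above is beta <- beta + alpha * (y_i - x_i'beta) * x_i, applied one observation at a time; the inner j-loop just computes that update componentwise. Below is a minimal whole-vector sketch on synthetic stand-in data (the seed, sample size, and true coefficients are placeholders, since the original spreadsheet is a local file):

# Per-observation SGD, whole coefficient vector updated at once (hypothetical example)
set.seed(1)
n <- 19
X <- cbind(1, scale(matrix(rnorm(n * 3), ncol = 3)))   # intercept + 3 standardized predictors
y <- as.vector(X %*% c(0, -1.5, 1.7, -0.1) + rnorm(n, sd = 0.1))

betaHat <- c(1, -1, 1, -1)
alpha <- 0.01
for (epoch in 1:2000) {
  for (i in sample(n)) {                 # visit the rows in random order each epoch
    r <- y[i] - sum(X[i, ] * betaHat)    # residual for this single observation
    betaHat <- betaHat + alpha * r * X[i, ]
  }
}
round(cbind(SGD = betaHat, lm = coef(lm(y ~ X[, -1]))), 3)  # should roughly agree

With a fixed step size, SGD only settles into a neighborhood of the least-squares solution, which is consistent with the gap between the SGD and lm() coefficients quoted above; shrinking alpha over time would tighten the match.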
Category: Programming | 12 views | 0 comments
