Thread starter: kychan

[Exclusive Release] [2018] Machine Learning: A Practical Approach on the Statistical Learning Theory


OP
kychan (student verified)  posted 2018-8-3 10:53:42

[2018] Machine Learning: A Practical Approach on the Statistical Learning Theory
Book title: Machine Learning: A Practical Approach on the Statistical Learning Theory
Author: Rodrigo Fernandes de Mello, Moacir Antonelli Ponti
Publisher: Springer
Pages: 368
Publishing date: Sep 2018
Language: English
Size: 11 MB
Format: PDF (text version)
ISBN: 978-3-319-94989-5
Edition: 1st




This book presents the Statistical Learning Theory in a detailed and easy-to-understand way, using practical examples, algorithms and source code. It can be used as a textbook in graduate or undergraduate courses, by self-learners, or as a reference on the main theoretical concepts of Machine Learning. Fundamental concepts of Linear Algebra and Optimization applied to Machine Learning are covered, along with source code in R, making the book as self-contained as possible.

It starts with an introduction to Machine Learning concepts and algorithms such as the Perceptron, the Multilayer Perceptron and Distance-Weighted Nearest Neighbors, with worked examples, in order to build the foundation the reader needs to understand the Bias-Variance Dilemma, the central point of the Statistical Learning Theory.
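To give a feel for the first of those algorithms, here is a minimal perceptron sketch in Python (the book's own examples are in R; the toy AND dataset and learning rate below are illustrative choices, not taken from the book):

```python
import numpy as np

# Tiny linearly separable toy set: logical AND with +/-1 labels.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, -1, -1, 1])

w = np.zeros(X.shape[1])  # weight vector
b = 0.0                   # bias
eta = 0.1                 # learning rate

for _ in range(100):  # epochs; converges quickly on separable data
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:  # misclassified (or on the boundary)
            w += eta * yi * xi      # classic perceptron update rule
            b += eta * yi
            errors += 1
    if errors == 0:
        break

pred = np.sign(X @ w + b)
print(pred)  # matches y once training has converged
```

Since the data is linearly separable, the perceptron convergence theorem guarantees the loop terminates with zero errors.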

Afterwards, all assumptions are introduced and the Statistical Learning Theory is formalized, allowing the practical study of different classification algorithms. The book then proceeds through concentration inequalities until arriving at the Generalization and Large-Margin bounds, which provide the main motivation for Support Vector Machines.
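As a taste of what those concentration inequalities buy you, the two-sided Hoeffding bound for a single fixed classifier with 0/1 loss states that P(|R_emp(f) - R(f)| > ε) ≤ 2·exp(-2nε²). A quick numeric sketch (the sample sizes and ε are arbitrary illustrative values, not from the book):

```python
import math

def hoeffding_bound(n, eps):
    """Two-sided Hoeffding bound on the probability that the empirical
    risk deviates from the expected risk by more than eps, for one
    fixed classifier under 0/1 loss."""
    return 2 * math.exp(-2 * n * eps ** 2)

# The bound shrinks exponentially in the sample size n:
for n in (100, 1000, 10000):
    print(n, hoeffding_bound(n, eps=0.05))
```

For n = 100 the bound is vacuous (greater than 1), while for n = 10000 the deviation probability is essentially zero; this is the kind of sample-size reasoning the book's generalization bounds build on.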

From there, all the optimization concepts needed to implement Support Vector Machines are introduced. As a next stage of development, the book closes with a discussion of SVM kernels, motivating the study of data spaces as a way to improve classification results.
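The kernel idea the final chapter covers can be made concrete in a few lines: the homogeneous degree-2 polynomial kernel k(x, z) = (x·z)² on R² equals an ordinary dot product in the explicit feature space φ(x) = (x₁², √2·x₁x₂, x₂²). A minimal check in Python (the vectors are arbitrary illustrative values):

```python
import numpy as np

def poly2_kernel(x, z):
    # Kernel evaluation: O(d) work in the original space.
    return float(np.dot(x, z)) ** 2

def phi(x):
    # Explicit degree-2 feature map for 2-D inputs.
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

lhs = poly2_kernel(x, z)             # (1*3 + 2*(-1))^2 = 1
rhs = float(np.dot(phi(x), phi(z)))  # same value via the feature space
print(lhs, rhs)  # both print 1.0
```

This equality is the "kernel trick": an SVM can work in the higher-dimensional feature space while only ever evaluating k(x, z) in the original space.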

Table of Contents
Foreword......Page 3
Contents......Page 4
Acronyms......Page 7
1.1 Machine Learning Definition......Page 8
1.2 Main Types of Learning......Page 11
1.3 Supervised Learning......Page 12
1.4 How a Supervised Algorithm Learns?......Page 26
1.5.1 The Perceptron......Page 35
1.5.2 Multilayer Perceptron......Page 59
1.6 Concluding Remarks......Page 79
References......Page 80
2.1 Motivation......Page 82
2.2 Basic Concepts......Page 83
2.2.1 Probability Densities and Joint Probabilities......Page 84
2.2.2 Identically and Independently Distributed Data......Page 89
2.2.3 Statistical Learning Theory Assumptions......Page 96
2.2.4 Expected Risk and Generalization......Page 97
2.2.5 Bounds for Generalization: A Practical Example......Page 99
2.2.6 Bayes Risk and Universal Consistency......Page 104
2.2.7 Consistency, Overfitting and Underfitting......Page 105
2.2.8 Bias of Classification Algorithms......Page 108
2.3 Empirical Risk Minimization Principle......Page 109
2.3.1 Consistency and the ERM Principle......Page 111
2.3.2 Restriction of the Space of Admissible Functions......Page 112
2.3.3 Ensuring Uniform Convergence in Practice......Page 115
2.4 Symmetrization Lemma and the Shattering Coefficient......Page 117
2.4.1 Shattering Coefficient as a Capacity Measure......Page 118
2.4.2 Making the ERM Principle Consistent for Infinite Functions......Page 120
2.5 Generalization Bounds......Page 122
2.6 The Vapnik-Chervonenkis Dimension......Page 125
2.6.1 Margin Bounds......Page 128
2.7 Computing the Shattering Coefficient......Page 129
2.8 Concluding Remarks......Page 133
References......Page 134
3.2 Distance-Weighted Nearest Neighbors......Page 136
3.3 Using the Chernoff Bound......Page 145
3.4 Using the Generalization Bound......Page 153
3.5 Using the SVM Generalization Bound......Page 156
3.6 Empirical Study of the Biases of Classification Algorithms......Page 164
3.8 List of Exercises......Page 167
References......Page 168
4.2.1 Basis......Page 169
4.2.2 Linear Transformation......Page 171
4.2.3 Inverses of Linear Transformations......Page 174
4.2.4 Dot Products......Page 176
4.2.5 Change of Basis and Orthonormal Basis......Page 178
4.2.6 Eigenvalues and Eigenvectors......Page 180
4.3 Using Basic Algebra to Build a Classification Algorithm......Page 185
4.4 Hyperplane-Based Classification: An Intuitive View......Page 196
4.5 Hyperplane-Based Classification: An Algebraic View......Page 204
4.5.1 Lagrange Multipliers......Page 208
4.5.2 Karush-Kuhn-Tucker Conditions......Page 212
4.6 Formulating the Hard-Margin SVM Optimization Problem......Page 217
4.7 Formulating the Soft-Margin SVM Optimization Problem......Page 225
4.9 List of Exercises......Page 231
References......Page 232
5.2 Introducing Optimization Problems......Page 233
5.3 Main Types of Optimization Problems......Page 234
5.4.1 Solving Through Graphing......Page 240
5.4.2.1 Using the Table and Rules......Page 247
5.4.2.2 Graphical Interpretation of Primal and Dual Forms......Page 251
5.4.2.3 Using Lagrange Multipliers......Page 255
5.4.3 Using an Algorithmic Approach to Solve Linear Problems......Page 259
5.4.4 On the KKT Conditions for Linear Problems......Page 269
5.4.4.1 Applying the Rules......Page 272
5.4.4.2 Graphical Interpretation of the KKT Conditions......Page 275
5.5 Convex Optimization Problems......Page 277
5.5.1 Interior Point Methods......Page 287
5.5.1.1 Primal-Dual IPM for Linear Problem......Page 288
5.5.2 IPM to Solve the SVM Optimization Problem......Page 303
5.5.3 Solving the SVM Optimization Problem Using Package LowRankQP......Page 317
5.6 Concluding Remarks......Page 328
References......Page 329
6 Brief Intro on Kernels......Page 331
6.1 Definitions, Typical Kernels and Examples......Page 332
6.1.1 The Polynomial Kernel......Page 333
6.1.2 The Radial Basis Function Kernel......Page 334
6.1.3 The Sigmoidal Kernel......Page 335
6.1.4 Practical Examples with Kernels......Page 336
6.2 Principal Component Analysis......Page 338
6.3 Kernel Principal Component Analysis......Page 342
6.4 Exploratory Data Analysis......Page 345
6.4.1 How Does the Data Space Affect the Kernel Selection?......Page 346
6.4.2 Kernels on a 3-Class Problem......Page 355
6.4.3 Studying the Data Spaces in an Empirical Fashion......Page 358
6.4.4 Additional Notes on Kernels......Page 361
6.5 SVM Kernel Trick......Page 362
6.6 A Quick Note on the Mercer's Theorem......Page 366
6.8 List of Exercises......Page 367
Reference......Page 368

== Reply to this thread for the free download ==

Content hidden in this post:

Machine Learning 2018.pdf (11.18 MB)


Disclaimer: this resource is provided for academic research and reference only; the poster assumes no legal liability. Please support the authors by purchasing a legitimate copy.





#2
Jealy (employment verified)  posted 2018-8-3 10:58:00

#3
karst  posted 2018-8-3 10:58:02
Good morning, mod. Haven't seen a post from you in a while.

#4
karst  posted 2018-8-3 10:58:23
Thanks for the resource, mod.

#5
西门高  posted 2018-8-3 11:00:35
Thanks, OP.

#6
kukenghuqian  posted 2018-8-3 11:02:36

#7
军旗飞扬  posted 2018-8-3 11:25:22
Thanks for sharing.

#8
deric_zhou  posted 2018-8-3 11:34:34
Thank you for sharing.

#9
uandi  posted 2018-8-3 12:50:31
thanks a lot

#10
追赶时光的人  posted 2018-8-3 12:55:28
Thanks for the resource, mod.
