Title: Fundamentals of Optimization Theory With Applications to Machine Learning, by Jean Gallier and Jocelyn Quaintance
Preface: In recent years, computer vision, robotics, machine learning, and data science have been some of the key areas contributing to major advances in technology. Anyone who looks at papers or books in the above areas will be baffled by a strange jargon involving exotic terms such as kernel PCA, ridge regression, lasso regression, support vector machines (SVM), Lagrange multipliers, KKT conditions, etc. Do support vector machines chase cattle to catch them with some kind of super lasso? No! But one will quickly discover that behind the jargon which always comes with a new field (perhaps to keep the outsiders out of the club) lies a lot of "classical" linear algebra and techniques from optimization theory.

And there comes the main challenge: in order to understand and use tools from machine learning, computer vision, and so on, one needs to have a firm background in linear algebra and optimization theory. To be honest, some probability theory and statistics should also be included, but we already have enough to contend with.

Many books on machine learning struggle with the above problem. How can one understand what the dual variables of a ridge regression problem are if one doesn't know about the Lagrangian duality framework? Similarly, how is it possible to discuss the dual formulation of SVM without a firm understanding of the Lagrangian framework? The easy way out is to sweep these difficulties under the rug. If one is just a consumer of the techniques we mentioned above, the cookbook recipe approach is probably adequate. But this approach doesn't work for someone who really wants to do serious research and make significant contributions. To do so, we believe that one must have a solid background in linear algebra and optimization theory. This is a problem because it means investing a great deal of time and energy studying these fields, but we believe that perseverance will be amply rewarded.

This second volume covers some elements of optimization theory and applications, especially to machine learning. This volume is divided into five parts:
(1) Preliminaries of Optimization Theory.
(2) Linear Optimization.
(3) Nonlinear Optimization.
(4) Applications to Machine Learning.
(5) An appendix consisting of two chapters.
Fundamentals of Optimization Theory With Applications to Machine Learning by Jea.pdf
(13.33 MB, requires: 20 forum coins)