Introduction to Transfer Learning, 2nd Edition (English version), by Jindong Wang and Yiqiang Chen
Preface:
Machine learning, as an important branch of artificial intelligence, is becoming increasingly popular. It makes it possible to learn from massive training data and experience and then apply the learned models to new problems. Transfer learning is an important machine learning paradigm that studies how to apply existing knowledge, models, and parameters to new problems.
In recent years, the algorithms, theories, and models of transfer learning have been extensively studied. Given the vast amount of literature, it is challenging for a beginning researcher to take the first step, let alone make a difference in this area. There is a growing need for a book that gradually introduces the essence of existing work in a learner-friendly manner.
In April 2018, we open-sourced the first version of this book on GitHub and called it Transfer Learning Tutorial. Accompanying the tutorial, we also open-sourced the most popular transfer learning GitHub repository, which contains tutorials, code, datasets, papers, applications, and many other materials. Our primary purpose is to let readers easily get into this area and learn it quickly. The open-source tutorial was much appreciated by readers, and the GitHub repository has received over 8.8K stars. You can find almost everything related to transfer learning at https://github.com/jindongwang/transferlearning.
In May 2021, we rewrote it and added much new content, which was then published as a Chinese textbook. That textbook is based on our experience of teaching a ubiquitous computing class at the University of Chinese Academy of Sciences, through which we gained a better understanding of how to prepare a book that can benefit everyone, especially new learners.
Now we take a step further and write this English version with much new content and a reorganized structure to help new learners who read in English. In this book, our main purpose is not to introduce a particular algorithm or a few papers but to introduce the basic concepts of transfer learning, its problems, general methods, extensions, and applications, progressing from shallow to deep. We paid a great deal of attention to ensuring that the book starts from a new learner's perspective so that it is much easier to get into the area, step by step. Additionally, this is a textbook, not a survey or an exhaustive review that must cover all the literature. We hope that this textbook helps interested readers quickly learn this area and, more importantly, use it in their own research or applications. Finally, we hope this book can be a friend that shares experience with readers and accelerates their success.
In 2020, Cambridge University Press published the first transfer learning book, by Qiang Yang's group, which gives a comprehensive overview of the area. Compared to that book, our work provides a more detailed introduction to the latest progress, with tutorial-style descriptions, hands-on code, and datasets, which enables easy and fluent learning for readers.
This book consists of three parts: Foundations, Modern Transfer Learning, and Applications of Transfer Learning.
Part I is Foundations, composed of Chaps. 1–7. Chapter 1 is an introduction that overviews the basic concepts of transfer learning, related research areas, problems, applications, and its necessity. Chapter 2 transitions from general machine learning to transfer learning and then introduces the fundamental problems in transfer learning. In Chap. 3, we unify the high-level idea behind most transfer learning algorithms; this chapter should be seen as the starting point for the rest of the chapters. Chapters 4–6 present two categories of methods: instance weighting methods in Chap. 4 and statistical and geometrical feature transformation methods in Chaps. 5 and 6. Then, Chap. 7 presents the theory, model evaluation, and model selection techniques for transfer learning.
Part II is Modern Transfer Learning, which is composed of Chaps. 8–14. Chapter 8 introduces the third major category of transfer learning methods: pre-training and fine-tuning, which belongs to model-based methods. Chapters 9 and 10 cover deep and adversarial transfer learning methods, which also fall under the three basic categories introduced earlier but include more algorithms, especially in deep learning. Chapter 11 introduces the generalization problems in transfer learning. Then, Chap. 12 discusses the safety and privacy issues in transfer learning. Chapter 13 introduces how to deal with complex environments in transfer learning. Then, Chap. 14 introduces low-resource learning, where labeled data are extremely rare or even inaccessible; specifically, we introduce semi-supervised learning, meta-learning, and self-supervised learning.
Part III is Applications of Transfer Learning, which consists of Chaps. 15–19. These chapters present code practice showing how to apply transfer learning to applications including the following: computer vision (Chap. 15), natural language processing (Chap. 16), speech recognition (Chap. 17), activity recognition (Chap. 18), and federated medical healthcare (Chap. 19). We show readers how transfer learning is adopted in different applications to address their different challenges. Chapter 20 is the last chapter of this book, and it presents several frontiers.
Additionally, we provide some useful materials in the appendix.
Of course, this book is not perfect, and we are aware of our own limitations. If you find any errors or have suggestions, please do not hesitate to contact us.
Beijing, China
March 2022
Jindong Wang Yiqiang Chen

