Original poster: beoy

[Book Introduction] Hands-On Meta Learning with Python: Meta learning using one-shot learning, MAML, Reptile, and Meta-SGD with TensorFlow



Explore a diverse set of meta-learning algorithms and techniques to enable human-like cognition for your machine learning models using various Python frameworks

Key Features
  • Understand the foundations of meta learning algorithms
  • Work through practical examples of various one-shot learning algorithms and their applications in TensorFlow
  • Master state-of-the-art meta learning algorithms such as MAML, Reptile, and Meta-SGD
Book Description

Meta learning is an exciting research trend in machine learning that enables a model to understand the learning process itself. Unlike other ML paradigms, meta learning lets you learn faster from small datasets.

Hands-On Meta Learning with Python starts by explaining the fundamentals of meta learning and helps you understand the concept of learning to learn. You will delve into various one-shot learning algorithms, such as siamese, prototypical, relation, and memory-augmented networks, by implementing them in TensorFlow and Keras. As you make your way through the book, you will dive into state-of-the-art meta learning algorithms such as MAML, Reptile, and CAML. You will then explore how to learn quickly with Meta-SGD and discover how you can perform unsupervised learning using meta learning with CACTUs. In the concluding chapters, you will work through recent trends in meta learning such as adversarial meta learning, task agnostic meta learning, and meta imitation learning.

By the end of this book, you will be familiar with state-of-the-art meta learning algorithms and will be able to enable human-like cognition in your machine learning models.
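As a quick taste of the one-shot learning chapters, here is a minimal prototypical-network sketch in plain NumPy. It is an illustration written for this post, not code from the book; the embedding dimension and episode sizes are arbitrary assumptions. A class prototype is the mean of the embedded support examples, and each query is assigned to its nearest prototype.

    import numpy as np

    def class_prototypes(support_emb, support_labels, n_classes):
        # A prototype is the mean of the embedded support examples of one class.
        return np.stack([support_emb[support_labels == c].mean(axis=0)
                         for c in range(n_classes)])

    def predict(query_emb, protos):
        # Assign each query to its nearest prototype (squared Euclidean distance).
        d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return d.argmin(axis=1)

    # Toy 3-way, 5-shot episode with made-up 16-dimensional embeddings.
    rng = np.random.default_rng(0)
    support = rng.normal(size=(15, 16))
    labels = np.repeat(np.arange(3), 5)
    print(predict(rng.normal(size=(4, 16)), class_prototypes(support, labels, 3)))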

What you will learn
  • Understand the basics of meta learning methods, algorithms, and types
  • Build voice and face recognition models using a siamese network
  • Learn the prototypical network along with its variants
  • Build relation networks and matching networks from scratch
  • Implement MAML and Reptile algorithms from scratch in Python (see the Reptile sketch after this list)
  • Work through imitation learning and adversarial meta learning
  • Explore task agnostic meta learning and deep meta learning
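
To give a concrete taste of the "from scratch" items above, here is a minimal Reptile sketch in plain NumPy, as referenced from the list. It is an illustration written for this post, not the book's code: the linear task family, learning rates, and loop counts are arbitrary assumptions. Each outer iteration adapts a copy of the weights to one sampled task with ordinary SGD, then moves the meta-initialization a small step toward the adapted weights.

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(2)              # meta-initialization: [slope, intercept]
    alpha, epsilon = 0.02, 0.1   # inner-loop learning rate, Reptile meta step size

    def sample_task():
        # Hypothetical task family for illustration: regress y = a*x + b
        # with task-specific a and b (the book uses sine-wave regression).
        a, b = rng.uniform(-2.0, 2.0, size=2)
        x = rng.uniform(-1.0, 1.0, size=20)
        return x, a * x + b

    for _ in range(2000):                # outer (meta) loop over sampled tasks
        x, y = sample_task()
        w_task = w.copy()
        for _ in range(5):               # inner loop: plain SGD on this one task
            err = w_task[0] * x + w_task[1] - y
            grad = 2.0 * np.array([(err * x).mean(), err.mean()])
            w_task -= alpha * grad
        w += epsilon * (w_task - w)      # Reptile: nudge init toward adapted weights

Chapter 7 of the book applies the same loop shape to sine-wave regression with a small neural network.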
Who this book is for

Hands-On Meta Learning with Python is for machine learning enthusiasts, AI researchers, and data scientists who want to explore meta learning as an advanced approach for training machine learning models. Working knowledge of machine learning concepts and Python programming is necessary.

Table of Contents
  • Introduction to Meta Learning
  • Face and Audio Recognition using Siamese Network
  • Prototypical Network and its variants
  • Building Matching and Relation Networks Using TensorFlow
  • Memory Augmented Networks
  • MAML and its variants
  • Meta-SGD and Reptile Algorithms
  • Gradient Agreement as an Optimization Objective
  • Recent Advancements and Next Steps




Reply #1 — yunnandlg, posted 2019-6-8 03:15:22
Table of contents
1. Introduction to Meta Learning
1.1. What is Meta Learning?
1.2. Meta Learning and Few-Shot
1.3. Types of Meta Learning
1.4. Learning to Learn Gradient Descent by Gradient Descent
1.5. Optimization As a Model for Few-Shot Learning
2. Face and Audio Recognition using Siamese Network
2.1. What are Siamese Networks?
2.2. Architecture of Siamese Networks
2.3. Applications of Siamese Networks
2.4. Face Recognition Using Siamese Networks
2.5. Audio Recognition Using Siamese Networks
3. Prototypical Network and its variants
3.1. Prototypical Network
3.2. Algorithm of Prototypical Network
3.3. Omniglot character set classification using prototypical network
3.4. Gaussian Prototypical Network
3.5. Algorithm
3.6. Semi-Prototypical Network
4. Relation and Matching Networks Using TensorFlow
4.1. Relation Networks
4.2. Relation Networks in One-Shot Learning
4.3. Relation Networks in Few-Shot Learning
4.4. Relation Networks in Zero-Shot Learning
4.5. Building Relation Networks Using TensorFlow
4.6. Matching Networks
4.7. Embedding Functions
4.8. Architecture of Matching Networks
4.9. Matching Networks in TensorFlow
5. Memory Augmented Networks
5.1. Neural Turing Machine
5.2. Reading and Writing in NTM
5.3. Addressing Mechanisms
5.4. Copy Task using NTM
5.5. Memory Augmented Neural Networks
5.6. Reading and Writing in MANN
5.7. Building MANN in TensorFlow
6. MAML and its variants
6.1. Model Agnostic Meta Learning
6.2. MAML Algorithm
6.3. MAML in Supervised Learning
6.4. MAML in Reinforcement Learning
6.5. Building MAML from Scratch
6.6. Adversarial Meta Learning
6.7. Building ADML from Scratch
6.8. CAML
6.9. CAML Algorithm
7. Meta-SGD and Reptile Algorithms
7.1. Meta-SGD
7.2. Meta-SGD in Supervised Learning
7.3. Meta-SGD in Reinforcement Learning
7.4. Building Meta-SGD from Scratch
7.5. Reptile
7.6. Reptile Algorithm
7.7. Sine Wave Regression Using Reptile
8. Gradient Agreement as an Optimization Objective
8.1. Gradient Agreement
8.2. Weight Calculation
8.3. Gradient Agreement Algorithm
8.4. Building Gradient Agreement with MAML from scratch
9. Recent Advancements and Next Steps
9.1. Task Agnostic Meta Learning
9.2. TAML Algorithm
9.3. Meta Imitation Learning
9.4. MIL Algorithm
9.5. CACTUs
9.6. Task Generation using CACTUs
9.7. Learning to Learn in the Concept Space
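
As a flavor of chapter 2 above, here is a minimal siamese-network skeleton in TensorFlow/Keras. It is an illustrative sketch, not the book's code: the embedding architecture, layer sizes, and margin are arbitrary assumptions. Both inputs pass through one shared embedding network, and a contrastive loss pulls matching pairs together while pushing mismatched pairs at least a margin apart.

    import tensorflow as tf

    def make_embedding():
        # Hypothetical small embedding network; the book's architectures differ.
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64),
        ])

    def contrastive_loss(y_true, dist, margin=1.0):
        # y_true = 1 for matching pairs, 0 for mismatched; dist = embedding distance.
        y_true = tf.cast(y_true, dist.dtype)
        return tf.reduce_mean(y_true * tf.square(dist) +
                              (1.0 - y_true) * tf.square(tf.maximum(margin - dist, 0.0)))

    embed = make_embedding()               # one network, shared by both branches
    a = tf.keras.Input(shape=(28, 28, 1))
    b = tf.keras.Input(shape=(28, 28, 1))
    dist = tf.keras.layers.Lambda(
        lambda t: tf.sqrt(tf.reduce_sum(tf.square(t[0] - t[1]), axis=1) + 1e-9)
    )([embed(a), embed(b)])
    model = tf.keras.Model([a, b], dist)
    model.compile(optimizer="adam", loss=contrastive_loss)

Training would feed pairs of images with a binary match label; the weight sharing is what lets the network compare classes it never saw during training.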

Reply #2 — yunnandlg, posted 2019-6-8 03:23:12
Hands-On-Meta-Learning-With-Python-master.zip (36.59 MB)

Reply #3 — yunnandlg, posted 2019-6-8 03:23:47
1 Legacy Papers
[1] Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5–9, 2003.
[2] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
[3] Kunikazu Kobayashi, Hiroyuki Mizoue, Takashi Kuremoto, and Masanao Obayashi. A meta-learning method based on temporal difference error. In International Conference on Neural Information Processing, pages 530–537. Springer, 2009.
[4] Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. In Learning to learn, pages 3–17. Springer, 1998.
[5] A Steven Younger, Sepp Hochreiter, and Peter R Conwell. Meta-learning with backpropagation. In Neural Networks, 2001. Proceedings. IJCNN’01. International Joint Conference on, volume 3. IEEE, 2001.
[6] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.
[7] Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008.
[8] Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, volume 172, pp. 2, 2011.
[9] Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006.
[10] Jürgen Schmidhuber. A neural network that embeds its own meta-levels. In IEEE International Conference on Neural Networks, pp. 407–412. IEEE, 1993.
[11] Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pp. 181–209. Springer, 1998.
[12] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
[13] Samy Bengio, Yoshua Bengio, and Jocelyn Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26–30, 1995.
[14] Rich Caruana. Learning many related tasks at the same time with backpropagation. Advances in Neural Information Processing Systems, pp. 657–664, 1995.
[15] Giraud-Carrier, Christophe, Vilalta, Ricardo, and Brazdil, Pavel. Introduction to the special issue on meta-learning. Machine learning, 54(3):187–193, 2004.
[16] Jankowski, Norbert, Duch, Włodzisław, and Grabczewski, Krzysztof. Meta-learning in computational intelligence, volume 358. Springer Science & Business Media, 2011.
[17] N. E. Cotter and P. R. Conwell. Fixed-weight networks can learn. In International Joint Conference on Neural Networks, pages 553–559, 1990.
[18] J. Schmidhuber. Evolutionary principles in self-referential learning; On learning how to learn: The meta-meta-... hook. PhD thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
[19] J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
[20] Jurgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Simple principles of metalearning. Technical report, SEE, 1996.
[21] Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 1998.
2 Recent Papers
[1] Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pp. 3981–3989, 2016
[2] Ba, Jimmy, Hinton, Geoffrey E, Mnih, Volodymyr, Leibo, Joel Z, and Ionescu, Catalin. Using fast weights to attend to the recent past. In Advances In Neural Information Processing Systems, pp. 4331–4339, 2016
[3] David Ha, Andrew Dai and Le, Quoc V. Hypernetworks. In ICLR 2017, 2017.
[4] Koch, Gregory. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
[5] Lake, Brenden M, Salakhutdinov, Ruslan R, and Tenenbaum, Josh. One-shot learning by inverting a compositional causal process. In Advances in neural information processing systems, pp. 2526–2534, 2013.
[6] Santoro, Adam, Bartunov, Sergey, Botvinick, Matthew, Wierstra, Daan, and Lillicrap, Timothy. Meta-learning with memory-augmented neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1842–1850, 2016.
[7] Vinyals, Oriol, Blundell, Charles, Lillicrap, Tim, Wierstra, Daan, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638, 2016.
[8] Kaiser, Lukasz, Nachum, Ofir, Roy, Aurko, and Bengio, Samy. Learning to remember rare events. In ICLR 2017, 2017.
[9] P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, D. Kumaran, and R. Hadsell. Learning to navigate in complex environments. Technical report, DeepMind, 2016.
[10] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. Technical report, submitted to ICLR 2017, 2016.
[11] Y. Duan, J. Schulman, X. Chen, P. Bartlett, I. Sutskever, and P. Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. Technical report, UC Berkeley and OpenAI, 2016.
[12] Li, Ke and Malik, Jitendra. Learning to optimize. International Conference on Learning Representations (ICLR), 2017.
[13] Edwards, Harrison and Storkey, Amos. Towards a neural statistician. International Conference on Learning Representations (ICLR), 2017.
[14] Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. International Conference on Learning Representations (ICLR), 2016.
[15] Ravi, Sachin and Larochelle, Hugo. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.
[16] Finn, C., Abbeel, P., & Levine, S. (2017). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv preprint arXiv:1703.03400.
[17] Chen, Y., Hoffman, M. W., Colmenarejo, S. G., Denil, M., Lillicrap, T. P., & de Freitas, N. (2016). Learning to Learn for Global Optimization of Black Box Functions. arXiv preprint arXiv:1611.03824.
[18] Munkhdalai T, Yu H. Meta Networks. arXiv preprint arXiv:1703.00837, 2017.
[19] Duan Y, Andrychowicz M, Stadie B, et al. One-Shot Imitation Learning. arXiv preprint arXiv:1703.07326, 2017.
[20] Woodward M, Finn C. Active One-shot Learning. arXiv preprint arXiv:1702.06559, 2017.
[21] Wichrowska O, Maheswaranathan N, Hoffman M W, et al. Learned Optimizers that Scale and Generalize. arXiv preprint arXiv:1703.04813, 2017.
[22] Hariharan, Bharath, and Ross Girshick. Low-shot visual object recognition. arXiv preprint arXiv:1606.02819, 2016.
[23] Wang J X, Kurth-Nelson Z, Tirumala D, et al. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
[24] Flood Sung, Zhang L, Xiang T, Hospedales T, et al. Learning to Learn: Meta-Critic Networks for Sample Efficient Learning. arXiv preprint arXiv:1706.09529, 2017.

Reply #4 — cometwx, posted 2019-6-9 20:33:51
Thanks for sharing!

Reply #5 — Nicolle, posted 2021-6-14 08:17:44
Note: this author has been banned or deleted; the content was automatically hidden.
