26. Experiments
1. Toy data
2. Few-shot classification
1. Overall results
2. Versatility
3. Comparison to standard and amortized VI (not covered today)
3. Shapenet view reconstruction
32. References
• R. Vilalta and Y. Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002.
• S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.
• S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. ICLR 2017.
• C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. ICML 2017.
• M. Garnelo, D. Rosenbaum, C. J. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. J. Rezende, and S. M. A. Eslami. Conditional neural processes. ICML 2018.
33. • Y. Kim, S. Wiseman, A. C. Miller, D. Sontag, and A. M. Rush. Semi-amortized variational autoencoders. ICML 2018.
• J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. NeurIPS 2017, pp. 4080-4090.
• S. M. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204-1210, 2018.
• M. Garnelo, J. Schwarz, D. Rosenbaum, F. Viola, D. J. Rezende, S. M. A. Eslami, and Y. W. Teh. Neural processes. ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models.
• Z. Chen, Y. Fu, Y. Zhang, and L. Sigal. Multi-level semantic feature augmentation for one-shot learning. arXiv:1804.05298, 2018. https://arxiv.org/abs/1804.05298