https://arxiv.org/pdf/1711.06025.pdf Learning to Compare: Relation Network for Few-Shot Learning
We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, an RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class, without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on four datasets demonstrate that this simple approach provides a unified and effective solution to both tasks.
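A minimal sketch of the relation-score idea (shapes and module sizes are illustrative, not the paper's exact architecture): embed query and support images with a shared encoder, concatenate each query embedding with each class's support embedding, and score the pair with a small learned relation module.

<code python>
import torch
import torch.nn as nn

class RelationScorer(nn.Module):
    """Toy relation module: scores (query, class) embedding pairs in [0, 1]."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, query_feats, support_feats):
        # query_feats: (Q, D) query embeddings; support_feats: (C, D), one per class
        Q, C = query_feats.size(0), support_feats.size(0)
        pairs = torch.cat([
            query_feats.unsqueeze(1).expand(Q, C, -1),
            support_feats.unsqueeze(0).expand(Q, C, -1),
        ], dim=-1)                          # (Q, C, 2D) concatenated pairs
        return self.mlp(pairs).squeeze(-1)  # (Q, C) relation scores
</code>

The paper trains these scores with a regression objective (matched pairs pushed toward 1, mismatched toward 0); at test time a query is assigned to the class with the highest relation score.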
https://arxiv.org/pdf/1711.04043.pdf Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning through the prism of inference on a partially observed graphical model, constructed from a collection of input images whose labels can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on ‘relational’ tasks.
https://arxiv.org/pdf/1603.05106.pdf One-Shot Generalization in Deep Generative Models
We develop machine learning systems with the capacity for one-shot generalization by developing new deep generative models: models that combine the representational power of deep learning with the inferential power of Bayesian reasoning. We develop a class of sequential generative models that are built on the principles of feedback and attention. These two characteristics lead to generative models that are among the state of the art in density estimation and image generation. We demonstrate the one-shot generalization ability of our models using three tasks: unconditional sampling, generating new exemplars of a given concept, and generating new exemplars of a family of concepts.
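The "feedback and attention" principles can be caricatured as a DRAW-style loop (every module below is a placeholder, not the paper's architecture): at each step the model conditions on the partial canvas generated so far (feedback), samples a fresh latent, and makes an additive write, where the full models use attentive readers and writers.

<code python>
import torch
import torch.nn as nn

class SequentialGenerator(nn.Module):
    """Toy sequential generative model: several additive writes to a canvas."""
    def __init__(self, img_dim=784, z_dim=32, hid=256, steps=8):
        super().__init__()
        self.img_dim, self.z_dim, self.hid, self.steps = img_dim, z_dim, hid, steps
        self.rnn = nn.GRUCell(img_dim + z_dim, hid)
        self.write = nn.Linear(hid, img_dim)   # stand-in for an attentive writer

    def forward(self, batch_size):
        canvas = torch.zeros(batch_size, self.img_dim)
        h = torch.zeros(batch_size, self.hid)
        for _ in range(self.steps):
            z = torch.randn(batch_size, self.z_dim)          # per-step latent
            h = self.rnn(torch.cat([canvas, z], dim=-1), h)  # feedback from canvas
            canvas = canvas + self.write(h)                  # additive write
        return torch.sigmoid(canvas)
</code>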
https://openreview.net/forum?id=r1wEFyWCW Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation.
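The sentence above names the two ingredients but not the wiring; a generic sketch of the attention half (all names here are hypothetical) attends over support-set features to produce a context vector that then conditions the autoregressive density model.

<code python>
import torch
import torch.nn.functional as F

def support_attention(query, support_keys, support_values):
    """Scaled dot-product attention over S support examples -> context vector.

    query: (B, D); support_keys, support_values: (S, D) support features.
    """
    scores = query @ support_keys.t() / support_keys.size(-1) ** 0.5  # (B, S)
    weights = F.softmax(scores, dim=-1)
    return weights @ support_values   # (B, D) context for the density model
</code>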
https://arxiv.org/abs/1803.00676v1 Meta-Learning for Semi-Supervised Few-Shot Classification
To address this semi-supervised paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes.
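One of the paper's variants refines prototypes with a soft k-means step over the unlabeled examples; a minimal sketch of that refinement (simplified to give each initial prototype unit mass, which is not the paper's exact weighting):

<code python>
import torch
import torch.nn.functional as F

def refine_prototypes(protos, unlabeled, steps=1):
    """Soft k-means refinement: fold unlabeled embeddings into class prototypes.

    protos: (C, D) initial per-class means of labeled support embeddings.
    unlabeled: (U, D) embeddings of the episode's unlabeled images.
    """
    for _ in range(steps):
        logits = -torch.cdist(unlabeled, protos) ** 2   # (U, C) negative sq. distances
        w = F.softmax(logits, dim=-1)                   # soft class assignments
        # weighted mean of the old prototype (unit mass) and assigned unlabeled points
        protos = (protos + w.t() @ unlabeled) / (1.0 + w.sum(dim=0, keepdim=True).t())
    return protos
</code>

Queries are then classified against the refined prototypes exactly as in plain Prototypical Networks.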
https://arxiv.org/pdf/1804.00222.pdf Learning Unsupervised Learning Rules
In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions, and even generalizes from image datasets to a text task.
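A cartoon of the "neuron-local update rule" constraint (the paper meta-learns a small network over local statistics; the fixed Hebbian form below is purely illustrative): each synapse's change depends only on its own pre- and post-synaptic activity, through a handful of meta-learned coefficients, so the same rule applies regardless of layer shape.

<code python>
import torch

def local_update(W, pre, post, meta, lr=0.01):
    """Parameterized Hebbian-style local rule for one weight matrix.

    W: (out, in) weights; pre: (B, in) inputs; post: (B, out) activations.
    meta: tensor of 3 meta-learned coefficients (illustrative parameterization).
    """
    B = pre.size(0)
    hebb = post.t() @ pre / B               # (out, in) pre/post correlation
    pre_term = pre.mean(0).unsqueeze(0)     # (1, in), broadcast over output rows
    post_term = post.mean(0).unsqueeze(1)   # (out, 1), broadcast over input cols
    dW = meta[0] * hebb + meta[1] * pre_term + meta[2] * post_term
    return W + lr * dW
</code>

The outer loop then trains the meta-coefficients so that representations produced by repeatedly applying the rule score well on the semi-supervised meta-objective.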
https://arxiv.org/abs/1804.07275v1 Deep Triplet Ranking Networks for One-Shot Recognition
https://arxiv.org/abs/1512.01192v2 Prototypical Priors: From Improving Classification to Zero-Shot Learning
https://arxiv.org/abs/1901.02199v1 FIGR: Few-shot Image Generation with Reptile
Our model successfully generates novel images on both MNIST and Omniglot with as few as 4 images from an unseen class. We further contribute FIGR-8, a new dataset for few-shot image generation, which contains 1,548,944 icons categorized into over 18,409 classes. Trained on FIGR-8, initial results show that our model can generalize to more advanced concepts (such as "bird" and "knife") from as few as 8 samples from a previously unseen class of images and as few as 10 training steps through those 8 images. https://github.com/OctThe16th/FIGR https://github.com/marcdemers/FIGR-8
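Reptile itself fits in a few lines; a generic sketch of the outer loop, with the inner generative training (e.g. some adversarial steps on the few images) abstracted into a placeholder inner_train:

<code python>
import copy
import torch

def reptile_step(model, sample_task, inner_train, outer_lr=0.1, inner_steps=10):
    """One Reptile meta-update: adapt a clone to a sampled few-shot task,
    then move the meta-parameters toward the adapted parameters."""
    task = sample_task()                    # e.g. a handful of images of one class
    adapted = copy.deepcopy(model)
    inner_train(adapted, task, steps=inner_steps)   # ordinary gradient steps on the task
    with torch.no_grad():
        for p, q in zip(model.parameters(), adapted.parameters()):
            p += outer_lr * (q - p)         # theta <- theta + eps * (theta' - theta)
</code>

After many such steps the meta-parameters sit at an initialization from which a few gradient steps on a new class already produce plausible samples, which matches the 10-step behavior the abstract reports.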