To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes.
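Concretely, the extension described above computes prototypes from the labeled support set and then refines them with soft-assigned unlabeled embeddings. The PyTorch sketch below illustrates a single such refinement step; the function name, tensor shapes, and one-step soft k-means formulation are assumptions made for this sketch, not the paper's code.

<code python>
import torch
import torch.nn.functional as F

def refine_prototypes(support_emb, support_labels, unlabeled_emb, num_classes):
    """One soft k-means refinement of class prototypes using unlabeled embeddings.

    support_emb:    (n_support, d)   embeddings of labeled support examples
    support_labels: (n_support,)     integer class labels in [0, num_classes)
    unlabeled_emb:  (n_unlabeled, d) embeddings of unlabeled examples
    """
    # Initial prototypes: per-class mean of the labeled support embeddings.
    one_hot = F.one_hot(support_labels, num_classes).float()               # (n_support, C)
    prototypes = one_hot.t() @ support_emb / one_hot.sum(0).unsqueeze(1)   # (C, d)

    # Soft-assign unlabeled points to prototypes by negative squared distance.
    dists = torch.cdist(unlabeled_emb, prototypes) ** 2                    # (n_unlabeled, C)
    soft_assign = F.softmax(-dists, dim=1)                                 # (n_unlabeled, C)

    # Refined prototypes: weighted mean over labeled and soft-assigned unlabeled points.
    numer = one_hot.t() @ support_emb + soft_assign.t() @ unlabeled_emb
    denom = one_hot.sum(0).unsqueeze(1) + soft_assign.sum(0).unsqueeze(1)
    return numer / denom
</code>

Classification then proceeds as in ordinary Prototypical Networks, by assigning each query embedding to the nearest refined prototype.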
  
https://arxiv.org/pdf/1804.00222.pdf Learning Unsupervised Learning Rules

In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.
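As a rough illustration of the structure being described, the sketch below pairs a weight update that sees only neuron-local quantities (presynaptic activity, postsynaptic activity, and the weight itself) with an outer loop that scores the resulting representation and searches over the rule's meta-parameters. The parametric Hebbian form, the random-search meta-optimizer, the nearest-class-mean readout, and every name here are simplifying assumptions; the paper parameterizes the rule with a neural network and meta-optimizes it with gradients through the unrolled inner loop.

<code python>
import numpy as np

rng = np.random.default_rng(0)

def local_update(W, pre, post, theta):
    """Neuron-local rule: each weight change depends only on its own value,
    its presynaptic activity and its postsynaptic activity."""
    # theta = coefficients for the Hebbian, presynaptic, postsynaptic and weight terms.
    dW = (theta[0] * np.outer(post, pre)
          + theta[1] * pre[None, :]
          + theta[2] * post[:, None]
          + theta[3] * W)
    return W + 0.01 * dW

def run_inner_loop(theta, X, steps=200, hidden=32):
    """Train a random one-layer network on unlabeled data with the rule; return features."""
    W = rng.normal(scale=0.1, size=(hidden, X.shape[1]))
    for _ in range(steps):
        x = X[rng.integers(len(X))]          # one unlabeled example
        h = np.tanh(W @ x)                   # postsynaptic activities
        W = local_update(W, x, h, theta)
    return np.tanh(X @ W.T)

def meta_objective(theta, X, y):
    """Score the representation with a cheap readout (a stand-in for the paper's
    semi-supervised classification meta-objective)."""
    H = run_inner_loop(theta, X)
    means = np.stack([H[y == c].mean(0) for c in np.unique(y)])
    preds = np.argmin(((H[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
    return (preds == y).mean()               # nearest-class-mean accuracy

# Outer loop: search over the rule's meta-parameters (random search for brevity).
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(int)                # toy labels used only by the meta-objective
best_theta, best_score = None, -np.inf
for _ in range(20):
    theta = rng.normal(scale=0.5, size=4)
    score = meta_objective(theta, X, y)
    if score > best_score:
        best_theta, best_score = theta, score
print("best meta-objective:", best_score)
</code>

Because the rule only ever sees quantities available at a single unit and its incoming weights, the same rule can be applied unchanged to networks of different widths, depths, or input permutations, which is the generalization property the abstract emphasizes.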
https://arxiv.org/abs/1804.07275v1 Deep Triplet Ranking Networks for One-Shot Recognition

https://arxiv.org/abs/1512.01192v2 Prototypical Priors: From Improving Classification to Zero-Shot Learning

https://arxiv.org/abs/1901.02199v1 FIGR: Few-shot Image Generation with Reptile

Our model successfully generates novel images on both MNIST and Omniglot with as few as 4 images from an unseen class. We further contribute FIGR-8, a new dataset for few-shot image generation, which contains 1,548,944 icons categorized in over 18,409 classes. Trained on FIGR-8, initial results show that our model can generalize to more advanced concepts (such as "bird" and "knife") from as few as 8 samples from a previously unseen class of images and as few as 10 training steps through those 8 images. https://github.com/OctThe16th/FIGR https://github.com/marcdemers/FIGR-8
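The Reptile meta-update that FIGR builds on is simple to state: adapt a copy of the model to one sampled task, then move the shared weights a fraction of the way toward the adapted weights. The PyTorch sketch below shows only that generic outer step; the task sampler, loss function, and FIGR's GAN-specific generator/discriminator details are left as placeholders, and all names here are assumptions.

<code python>
import copy
import torch

def reptile_step(model, sample_task, inner_steps=10, inner_lr=1e-3, meta_lr=0.1):
    """One Reptile meta-update on a torch.nn.Module.

    sample_task is a placeholder callable returning (loss_fn, batches) for one
    few-shot task, e.g. the handful of images from a single class.
    """
    loss_fn, batches = sample_task()

    # Inner loop: adapt a copy of the model to the sampled task.
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for batch in batches[:inner_steps]:
        opt.zero_grad()
        loss_fn(adapted, batch).backward()
        opt.step()

    # Outer update: theta <- theta + meta_lr * (theta_adapted - theta).
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
</code>

The inner loop here roughly corresponds to the "10 training steps through those 8 images" mentioned in the abstract: at generation time the meta-trained weights are adapted for a few steps on the unseen class before sampling.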