MAML is great, but it has many problems; we solve many of those problems, and as a result we learn most hyperparameters end to end, speed up training and inference, and set a new SOTA in few-shot learning.
  
https://arxiv.org/pdf/1810.02334.pdf UNSUPERVISED LEARNING VIA META-LEARNING

We construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering unsupervised representations, lead to good performance on a variety of downstream tasks. Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the representation learned by four prior unsupervised learning methods.

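A minimal sketch (not the paper's code) of the task-construction mechanism described above: cluster embeddings from any unsupervised encoder, treat cluster assignments as pseudo-labels, and sample N-way K-shot tasks from them for a downstream meta-learner. The function name, the embeddings/images inputs, and the k-means choice are illustrative assumptions.

<code python>
import numpy as np
from sklearn.cluster import KMeans

def build_unsupervised_tasks(embeddings, images, n_way=5, k_shot=1, k_query=5,
                             n_clusters=50, n_tasks=1000, seed=0):
    """Cluster unlabeled embeddings and sample N-way K-shot pseudo-tasks."""
    rng = np.random.default_rng(seed)
    # Pseudo-labels come from clustering unsupervised representations.
    pseudo_labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(embeddings)

    # Keep only clusters large enough to provide support and query examples.
    counts = np.bincount(pseudo_labels, minlength=n_clusters)
    usable = np.flatnonzero(counts >= k_shot + k_query)

    tasks = []
    for _ in range(n_tasks):
        classes = rng.choice(usable, size=n_way, replace=False)
        support, query = [], []
        for new_label, c in enumerate(classes):
            idx = rng.choice(np.flatnonzero(pseudo_labels == c),
                             size=k_shot + k_query, replace=False)
            support += [(images[i], new_label) for i in idx[:k_shot]]
            query += [(images[i], new_label) for i in idx[k_shot:]]
        tasks.append((support, query))  # hand these to any meta-learner, e.g. MAML
    return tasks
</code>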
https://arxiv.org/abs/1810.03548v1 Meta-Learning: A Survey

In this chapter, we provide an overview of the state of the art in this fascinating and continuously evolving field.

https://arxiv.org/pdf/1810.03642v1.pdf CAML: Fast Context Adaptation via Meta-Learning

Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson (submitted on 8 Oct 2018; latest version 12 Oct 2018, v2)

We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks.

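A minimal PyTorch sketch of that parameter split, under my own assumptions about the architecture and task format: only the context vector is updated in the task-specific inner loop, while the shared weights receive the meta-gradient.

<code python>
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextModel(nn.Module):
    """Shared network that takes a task-specific context vector as extra input."""
    def __init__(self, in_dim, ctx_dim=10, hidden=64, out_dim=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + ctx_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
        self.ctx_dim = ctx_dim

    def forward(self, x, ctx):
        # Concatenate the (adapted) context parameters to every input.
        return self.net(torch.cat([x, ctx.expand(x.size(0), -1)], dim=-1))

def adapt_context(model, x_s, y_s, inner_steps=5, inner_lr=0.1):
    """Inner loop: adapt only the context vector on a task's support set."""
    ctx = torch.zeros(1, model.ctx_dim, requires_grad=True)
    for _ in range(inner_steps):
        loss = F.cross_entropy(model(x_s, ctx), y_s)
        (grad,) = torch.autograd.grad(loss, ctx, create_graph=True)
        ctx = ctx - inner_lr * grad  # differentiable update, so the outer loop sees it
    return ctx

def meta_step(model, meta_opt, tasks):
    """Outer loop; `tasks` is assumed to yield (x_support, y_support, x_query, y_query)."""
    meta_opt.zero_grad()
    for x_s, y_s, x_q, y_q in tasks:
        ctx = adapt_context(model, x_s, y_s)
        F.cross_entropy(model(x_q, ctx), y_q).backward()  # gradients reach shared params
    meta_opt.step()
</code>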
https://arxiv.org/pdf/1810.08178.pdf Gradient Agreement as an Optimization Objective for Meta-Learning

Our approach is based on pushing the parameters of the model in a direction upon which the tasks agree more. If the gradients of a task agree with the parameter update vector, their inner product will be a large positive value. As a result, given a batch of tasks to be optimized for, we associate a positive (negative) weight with the loss function of a task if the inner product between its gradients and the average of the gradients of all tasks in the batch is a positive (negative) value.

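A minimal sketch, under my own assumptions about shapes and names, of that weighting rule: each task's loss is weighted in proportion to the inner product between its gradient and the batch-average gradient.

<code python>
import torch

def gradient_agreement_weights(task_grads):
    """task_grads: list of flattened per-task gradient vectors of equal length.

    Returns one weight per task, proportional to the inner product between the
    task gradient and the batch-average gradient (positive when they agree,
    negative when they conflict), normalised by the total absolute agreement.
    """
    G = torch.stack(task_grads)    # (num_tasks, num_params)
    g_avg = G.mean(dim=0)          # average update direction over the batch
    inner = G @ g_avg              # (num_tasks,) agreement scores
    return inner / inner.abs().sum().clamp_min(1e-8)

# Usage sketch: combine per-task losses with these weights before the meta-update.
# weights = gradient_agreement_weights(per_task_flat_grads)
# meta_loss = sum(w * loss for w, loss in zip(weights, per_task_losses))
</code>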
https://openreview.net/pdf?id=HkxStoC5F7 META-LEARNING PROBABILISTIC INFERENCE FOR PREDICTION

1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. 2) We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. VERSA substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training.

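The single-forward-pass idea can be sketched as follows (a simplified illustration, not the ML-PIP/VERSA code; feature extractor, heads, and shapes are assumptions): pool encoded support examples per class and directly output a Gaussian over that class's linear-classifier weights, so test-time adaptation needs no optimization.

<code python>
import torch
import torch.nn as nn

class AmortizedClassifier(nn.Module):
    """Maps a support set to a Gaussian over per-class classifier weights."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mu_head = nn.Linear(feat_dim, feat_dim)       # mean of class weights
        self.logvar_head = nn.Linear(feat_dim, feat_dim)   # log-variance of class weights

    def forward(self, support_feats, support_labels, query_feats, n_way, n_samples=10):
        # Pool support features per class (order-invariant over shots).
        pooled = torch.stack([support_feats[support_labels == c].mean(dim=0)
                              for c in range(n_way)])            # (n_way, feat_dim)
        mu, logvar = self.mu_head(pooled), self.logvar_head(pooled)
        # Sample several weight sets and average the predictive distribution.
        logits = []
        for _ in range(n_samples):
            w = mu + torch.randn_like(mu) * (0.5 * logvar).exp() # (n_way, feat_dim)
            logits.append(query_feats @ w.t())                   # (n_query, n_way)
        return torch.stack(logits).softmax(dim=-1).mean(dim=0)   # predictive probabilities
</code>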
https://arxiv.org/pdf/1810.06784.pdf PROMP: PROXIMAL META-POLICY SEARCH

This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights, we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm achieves efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample efficiency, wall-clock time, and asymptotic performance. Our code is available at github.com/jonasrothfuss/promp

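As one illustration of "controlling the statistical distance" between the policy that collected the data and the updated policy, a PPO-style clipped surrogate can be used; this is my own simplified stand-in, not the actual ProMP objective.

<code python>
import torch

def clipped_surrogate(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Proximal (clipped) policy objective, returned as a loss to minimise."""
    ratio = (log_probs_new - log_probs_old).exp()        # per-action likelihood ratio
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps)    # keep the update near the old policy
    return -torch.min(ratio * advantages, clipped * advantages).mean()
</code>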
https://pdfs.semanticscholar.org/0b00/3bb28f25627f715b0fd53b443fabfcf5a817.pdf?_ga=2.110922695.354576531.1543161615-2107301068.1536926320 Meta-Learning with Latent Embedding Optimization

The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks.

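A rough sketch of that decoupling, with an assumed encoder/decoder pair and support inputs assumed to already be feature embeddings: adaptation runs by gradient descent on a low-dimensional latent code, which a decoder maps back to the high-dimensional classifier parameters.

<code python>
import torch
import torch.nn.functional as F

def leo_adapt(encoder, decoder, x_support, y_support, inner_steps=5, inner_lr=1.0):
    """encoder: support set -> low-dimensional latent code z (assumption).
       decoder: z -> classifier weights of shape (n_way, feat_dim) (assumption).
       x_support is assumed to be pre-extracted features of shape (n, feat_dim)."""
    z = encoder(x_support, y_support)                 # initial task code
    for _ in range(inner_steps):
        w = decoder(z)                                # decode latent code to weights
        loss = F.cross_entropy(x_support @ w.t(), y_support)
        (grad_z,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - inner_lr * grad_z                     # adaptation happens in latent space
    return decoder(z)                                 # final task-specific parameters
</code>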
https://arxiv.org/pdf/1611.03537.pdf Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control

http://metalearning.ml/2018/slides/meta_learning_2018_Levine.pdf What’s Wrong with Meta-Learning

Meta-learning, or learning to learn, offers an appealing framework for training deep neural networks to adapt quickly and efficiently to new tasks. Indeed, the framework of meta-learning holds the promise of resolving the long-standing challenge of sample complexity in deep learning: by learning to learn efficiently, deep models can be meta-trained to adapt quickly to classify new image classes from a couple of examples, or learn new skills with reinforcement learning from just a few trials.

However, although the framework of meta-learning and few-shot learning is exceedingly appealing, it carries with it a number of major challenges. First, designing neural network models for meta-learning is quite difficult, since meta-learning models must be able to ingest entire datasets to adapt effectively. I will discuss how this challenge can be addressed by describing a model-agnostic meta-learning algorithm: a meta-learning algorithm that can use any model architecture, training that architecture to adapt efficiently via simple finetuning.

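For reference, the model-agnostic algorithm mentioned above can be condensed to the following bare-bones loop (my own sketch; the task format and loss are assumptions): adapt a copy of the parameters with a few gradient steps per task, then backpropagate the post-adaptation query loss into the shared initialisation.

<code python>
import torch
import torch.nn.functional as F

def maml_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=1):
    """One meta-update; `tasks` yields (x_support, y_support, x_query, y_query)."""
    meta_opt.zero_grad()
    for x_s, y_s, x_q, y_q in tasks:
        fast = dict(model.named_parameters())          # start from the shared init
        for _ in range(inner_steps):
            loss = F.cross_entropy(torch.func.functional_call(model, fast, (x_s,)), y_s)
            grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
            fast = {n: p - inner_lr * g for (n, p), g in zip(fast.items(), grads)}
        # Query loss after adaptation; its gradient reaches the initial parameters.
        F.cross_entropy(torch.func.functional_call(model, fast, (x_q,)), y_q).backward()
    meta_opt.step()
</code>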
The second challenge is that meta-learning trades off the challenge of algorithm design (by learning the algorithm) for the challenge of task design: the performance of meta-learning algorithms depends critically on the ability of the user to manually design large sets of diverse meta-training tasks. In practice, this often ends up being an enormous barrier to widespread adoption of meta-learning methods. I will describe our recent work on unsupervised meta-learning, where tasks are proposed automatically from unlabeled data, and discuss how unsupervised meta-learning can exceed the performance of standard unsupervised learning methods while removing the manual task design requirement inherent in standard meta-learning methods.