passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training.

https://arxiv.org/pdf/1810.06784.pdf PROMP: PROXIMAL META-POLICY SEARCH

This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights, we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm yields efficient and stable meta-learning. Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample efficiency, wall-clock time, and asymptotic performance. Our code is available at github.com/jonasrothfuss/promp

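The "controlling the statistical distance" step is, in spirit, a PPO-style clipped likelihood-ratio surrogate applied during meta-policy search. Below is a minimal sketch of such a clipped objective, assuming precomputed log-probabilities and advantages as tensors; it is illustrative only, not the authors' implementation, which applies this kind of trust-region control to both the pre-adaptation and the adapted policies.

<code python>
import torch

def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2):
    # Importance ratio between the current policy and the policy
    # that collected the data.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    # Clipping the ratio bounds the statistical distance between the
    # updated policy and the behavior policy ("proximal" updates).
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Pessimistic bound: the objective never rewards leaving the clip region.
    return torch.min(unclipped, clipped).mean()
</code>
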
https://pdfs.semanticscholar.org/0b00/3bb28f25627f715b0fd53b443fabfcf5a817.pdf Meta-Learning with Latent Embedding Optimization

The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks.

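A minimal sketch of the decoupling this describes, assuming a hypothetical encoder that maps a task's support features to a low-dimensional code and a decoder that maps that code to classifier weights; the paper's actual model is relational and probabilistic, so this only illustrates adapting in latent space rather than in parameter space.

<code python>
import torch
import torch.nn.functional as F

def leo_adapt(encoder, decoder, x_support, y_support, inner_lr=1.0, steps=5):
    # Encode the support set into a low-dimensional latent code z.
    z = encoder(x_support)
    for _ in range(steps):
        w = decoder(z)                  # decode code -> classifier weights
        loss = F.cross_entropy(x_support @ w, y_support)
        # Gradient steps are taken on z, not on the high-dimensional
        # weights; create_graph=True lets an outer meta-objective
        # backpropagate through the adaptation.
        (grad_z,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - inner_lr * grad_z
    return decoder(z)                   # task-specific weights
</code>
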
https://arxiv.org/pdf/1810.03642.pdf CAML: FAST CONTEXT ADAPTATION VIA META-LEARNING

https://arxiv.org/pdf/1611.03537.pdf Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control

http://metalearning.ml/2018/slides/meta_learning_2018_Levine.pdf What’s Wrong with Meta-Learning

Meta-learning, or learning to learn, offers an appealing framework for training deep neural networks to adapt quickly and efficiently to new tasks. Indeed, the framework of meta-learning holds the promise of resolving the long-standing challenge of sample complexity in deep learning: by learning to learn efficiently, deep models can be meta-trained to adapt quickly to classify new image classes from a couple of examples, or learn new skills with reinforcement learning from just a few trials.

However, although the framework of meta-learning and few-shot learning is exceedingly appealing, it carries with it a number of major challenges. First, designing neural network models for meta-learning is quite difficult, since meta-learning models must be able to ingest entire datasets to adapt effectively. I will discuss how this challenge can be addressed by describing a model-agnostic meta-learning algorithm: a meta-learning algorithm that can use any model architecture, training that architecture to adapt efficiently via simple finetuning.

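A minimal sketch of such a model-agnostic inner/outer loop in the spirit of MAML, assuming a hypothetical sample_task() that yields support and query tensors for a regression task; this is illustrative, not the speaker's code.

<code python>
import torch
import torch.nn.functional as F

def maml_meta_loss(model, sample_task, inner_lr=0.01, meta_batch=4):
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for _ in range(meta_batch):
        xs, ys, xq, yq = sample_task()
        # Inner loop: one finetuning step, computed functionally so the
        # outer loss can backpropagate through it (the source of MAML's
        # second derivatives).
        loss = F.mse_loss(torch.func.functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}
        # Outer objective: post-adaptation performance on held-out query data.
        meta_loss = meta_loss + F.mse_loss(
            torch.func.functional_call(model, adapted, (xq,)), yq)
    # Minimizing this over model.parameters() meta-trains an initialization
    # that adapts well after a single finetuning step, for any architecture.
    return meta_loss / meta_batch
</code>
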
The second challenge is that meta-learning trades off the challenge of algorithm design (by learning the algorithm) for the challenge of task design: the performance of meta-learning algorithms depends critically on the ability of the user to manually design large sets of diverse meta-training tasks. In practice, this often ends up being an enormous barrier to widespread adoption of meta-learning methods. I will describe our recent work on unsupervised meta-learning, where tasks are proposed automatically from unlabeled data, and discuss how unsupervised meta-learning can exceed the performance of standard unsupervised learning methods while removing the manual task design requirement inherent in standard meta-learning methods.
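
A minimal sketch of the automatic task-proposal idea, using k-means pseudo-labels over unsupervised embeddings to construct N-way, K-shot tasks; the clustering scheme and function names here are illustrative assumptions, not the talk's method.

<code python>
import numpy as np
from sklearn.cluster import KMeans

def propose_task(embeddings, n_way=5, k_shot=1, n_clusters=50, seed=0):
    # Cluster unlabeled embeddings; cluster ids act as pseudo-labels.
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(embeddings)
    rng = np.random.default_rng(seed)
    # Keep only clusters large enough to supply k_shot support examples.
    valid = [c for c in range(n_clusters) if np.sum(labels == c) >= k_shot]
    classes = rng.choice(valid, size=n_way, replace=False)
    # Sample k_shot support indices per pseudo-class.
    support = [rng.choice(np.flatnonzero(labels == c), size=k_shot, replace=False)
               for c in classes]
    return classes, np.stack(support)  # (n_way,) pseudo-labels, (n_way, k_shot) indices
</code>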