Dynamic evaluation methods continuously adapt the model parameters θg, learned at training time, to parts of a sequence during evaluation.

https://arxiv.org/abs/1706.04008v1 Recurrent Inference Machines for Solving Inverse Problems

Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural network (RNN), which allows for joint learning of model and inference parameters with back-propagation through time. In this framework, the RNN architecture is directly derived from a hand-chosen inference algorithm, effectively limiting its capabilities. We propose a learning framework, called Recurrent Inference Machines (RIM), in which we turn algorithm construction the other way round: given data and a task, train an RNN to learn an inference algorithm. Because RNNs are Turing complete [1, 2], they are capable of implementing any inference algorithm. The framework allows for an abstraction which removes the need for domain knowledge. We demonstrate in several image restoration experiments that this abstraction is effective, allowing us to achieve state-of-the-art performance on image denoising and super-resolution tasks and superior across-task generalization.

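The unrolled update loop can be sketched as follows. This is a minimal illustration assuming a Gaussian denoising observation model; the learned update network is stood in for by a small hand-initialized recurrent map (in the paper it is an RNN trained end-to-end with back-propagation through time), and the names `rim_step`, `rim_infer`, `W_s`, `W_g` are all illustrative, not from the paper's code.

```python
# Sketch of a Recurrent Inference Machine (RIM) iteration: the update
# network receives the current estimate x, the gradient of the data
# log-likelihood, and a hidden state s, and emits a refinement.
import numpy as np

rng = np.random.default_rng(0)

def likelihood_grad(x, y, sigma=1.0):
    """Gradient of log p(y | x) under Gaussian noise: (y - x) / sigma^2."""
    return (y - x) / sigma**2

def rim_step(x, s, y, W_s, W_g, eta=0.5, learned_scale=0.01):
    """One RIM iteration. learned_scale is kept small here so the
    untrained stand-in network does not dominate the gradient term."""
    g = likelihood_grad(x, y)
    s_next = np.tanh(W_s @ np.concatenate([x, g, s]))
    x_next = x + eta * g + learned_scale * (W_g @ s_next)
    return x_next, s_next

def rim_infer(y, n_steps=20, hidden=8):
    """Unroll the update for a fixed number of steps, starting from zero."""
    d = y.shape[0]
    W_s = rng.normal(scale=0.1, size=(hidden, 2 * d + hidden))
    W_g = rng.normal(scale=0.1, size=(d, hidden))
    x, s = np.zeros_like(y), np.zeros(hidden)
    for _ in range(n_steps):
        x, s = rim_step(x, s, y, W_s, W_g)
    return x

y = np.array([1.0, -0.5, 0.25])   # noisy observation (identity forward model)
x_hat = rim_infer(y)
```

Training would back-propagate a reconstruction loss through all `n_steps` unrolled iterations into `W_s` and `W_g`, which is what lets the RNN discover its own inference algorithm instead of mimicking a hand-chosen one.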
https://arxiv.org/abs/1802.04762v1 Deep Predictive Coding Network for Object Recognition

PCN reuses a single architecture to recursively run bottom-up and top-down processes, enabling an increasingly long cascade of non-linear transformations. For image classification, PCN refines its representation over time towards more accurate and definitive recognition.

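The recursive bottom-up/top-down refinement can be sketched with a single pair of shared weight matrices, reused on every cycle. This is an assumption-laden toy (the real PCN uses learned convolutional layers); `W_ff`, `W_fb`, and `pcn_refine` are hypothetical names for illustration.

```python
# Minimal predictive-coding-style refinement loop: each cycle sends the
# representation top-down as a prediction of the input, then uses the
# bottom-up prediction error to correct the representation, reusing the
# same weights every cycle (one shared architecture, run recursively).
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 6, 4
W_ff = rng.normal(scale=0.3, size=(d_hid, d_in))   # bottom-up weights
W_fb = rng.normal(scale=0.3, size=(d_in, d_hid))   # top-down weights

def pcn_refine(x, n_cycles=5, lr=0.1):
    """Refine the hidden representation of x over n_cycles recursions."""
    r = np.tanh(W_ff @ x)                    # initial feedforward pass
    for _ in range(n_cycles):
        pred = W_fb @ r                      # top-down prediction of input
        err = x - pred                       # bottom-up prediction error
        r = np.tanh(r + lr * (W_ff @ err))   # error-driven correction
    return r

x = rng.normal(size=d_in)
r = pcn_refine(x)
```

Each extra cycle adds another non-linear transformation of the same weights, which is the "increasingly longer cascade" the abstract refers to.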
https://github.com/nyu-dl/dl4mt-nonauto Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement

https://arxiv.org/abs/1803.11189v1 Iterative Visual Reasoning Beyond Convolutions

The framework consists of two core modules: a local module that uses spatial memory to store previous beliefs with parallel updates; and a global graph-reasoning module. Our graph module has three components: a) a knowledge graph where we represent classes as nodes and build edges to encode different types of semantic relationships between them; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Both the local module and the global module roll out iteratively and cross-feed predictions to each other to refine estimates. The final predictions are made by combining the best of both modules with an attention mechanism. We show strong performance over plain ConvNets, e.g. achieving an 8.4% absolute improvement on ADE measured by per-class average precision. Analysis also shows that the framework is resilient to missing regions for reasoning.

https://arxiv.org/abs/1805.08136v1 Meta-learning with differentiable closed-form solvers

In this work we propose to use these fast-convergent methods as the main adaptation mechanism for few-shot learning. The main idea is to teach a deep network to use standard machine learning tools, such as logistic regression, as part of its own internal model, enabling it to quickly adapt to novel tasks. This requires back-propagating errors through the solver steps. http://www.robots.ox.ac.uk/~luca/r2d2.html

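The closed-form-solver idea can be sketched with ridge regression, one of the solvers the paper considers: the task adaptation is a single linear solve, and because every operation is differentiable, query-set errors can be back-propagated through it. In this toy sketch the meta-learned embedding network is omitted (raw features stand in for learned embeddings), and `ridge_fit`, the fixed `lam`, and the toy task are illustrative assumptions.

```python
# Few-shot adaptation via a differentiable closed-form ridge solver:
# W = (X^T X + lam * I)^-1 X^T Y, fit on the support set only.
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge solution. Every step (matmul, solve) is
    differentiable, so gradients can flow through the solver into the
    features X produced by an upstream embedding network."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy 2-way task, 5 support examples per class in a 3-d feature space:
# class 0 centered at +1, class 1 centered at -1.
rng = np.random.default_rng(2)
X_support = np.vstack([rng.normal(+1.0, 0.3, size=(5, 3)),
                       rng.normal(-1.0, 0.3, size=(5, 3))])
Y_support = np.vstack([np.tile([1.0, 0.0], (5, 1)),
                       np.tile([0.0, 1.0], (5, 1))])

W = ridge_fit(X_support, Y_support)          # "adaptation" = one solve
query = rng.normal(+1.0, 0.3, size=(1, 3))   # query drawn from class 0
pred = int(np.argmax(query @ W))
```

Meta-training would compute a loss on query predictions like `query @ W` and back-propagate it through `ridge_fit` into the embedding network (and, in the paper, into hyperparameters such as the regularizer).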
https://www.disneyresearch.com/publication/iterative-amortized-inference/ Iterative Amortized Inference

https://github.com/joelouismarino/iterative_inference

https://openreview.net/forum?id=HygYqs0qKX

https://arxiv.org/abs/1706.04008 Recurrent Inference Machines for Solving Inverse Problems

We establish this framework by abandoning the traditional separation between model and inference. Instead, we propose to learn both components jointly without the need to define their explicit functional form. This paradigm shift enables us to bridge the gap between the fields of deep learning and inverse problems. A crucial and unique quality of RIMs is their ability to generalize across tasks without the need to retrain. We convincingly demonstrate this feature in our experiments, as well as state-of-the-art results on image denoising and super-resolution.

https://arxiv.org/pdf/1811.02486.pdf Concept Learning with Energy-Based Models

https://openreview.net/forum?id=rkxw-hAcFQ Generating Multi-Agent Trajectories using Programmatic Weak Supervision

We blend deep generative models with programmatic weak supervision to generate coordinated multi-agent trajectories of significantly higher quality than previous baselines.