PCN reuses a single architecture to recursively run bottom-up and top-down processes, enabling an increasingly long cascade of non-linear transformations. For image classification, PCN refines its representation over time towards more accurate and definitive recognition.
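
Below is a minimal sketch of this kind of recurrent bottom-up/top-down refinement, assuming a toy PCN-style cell with linear layers and a prediction-error-driven update; the layer sizes and module names are illustrative, not the paper's architecture.

<code python>
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Toy predictive-coding-style refiner: one shared architecture unrolled over time."""
    def __init__(self, in_dim, hid_dim, n_classes, steps=5):
        super().__init__()
        self.hid_dim = hid_dim
        self.bottom_up = nn.Linear(in_dim + hid_dim, hid_dim)  # feedforward update
        self.top_down = nn.Linear(hid_dim, in_dim)             # feedback prediction of the input
        self.readout = nn.Linear(hid_dim, n_classes)
        self.steps = steps

    def forward(self, x):
        h = x.new_zeros(x.size(0), self.hid_dim)
        for _ in range(self.steps):
            x_hat = self.top_down(h)        # top-down: predict the input
            err = x - x_hat                 # prediction error drives the bottom-up update
            h = torch.relu(self.bottom_up(torch.cat([err, h], dim=1)))
        return self.readout(h)              # recognition sharpens as steps increase

# e.g.: IterativeRefiner(784, 256, 10)(torch.randn(8, 784))
</code>

Because the same weights are reused at every step, running more steps lengthens the effective cascade of non-linear transformations without adding parameters.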

https://github.com/nyu-dl/dl4mt-nonauto Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement

https://arxiv.org/abs/1803.11189v1 Iterative Visual Reasoning Beyond Convolutions

The framework consists of two core modules: a local module that uses spatial memory to store previous beliefs with parallel updates; and a global graph-reasoning module. Our graph module has three components: a) a knowledge graph where we represent classes as nodes and build edges to encode different types of semantic relationships between them; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Both the local module and the global module roll out iteratively and cross-feed predictions to each other to refine estimates. The final predictions are made by combining the best of both modules with an attention mechanism. We show strong performance over plain ConvNets, e.g. achieving an 8.4% absolute improvement on ADE measured by per-class average precision. Analysis also shows that the framework is resilient to missing regions for reasoning.
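
A hedged sketch of this cross-feeding roll-out, where a single ToyModule stands in for both the spatial-memory local module and the graph-reasoning global module; every name and shape here is an illustrative placeholder, not the paper's code.

<code python>
import torch
import torch.nn as nn

class ToyModule(nn.Module):
    """Stand-in for either the local (spatial-memory) or global (graph) module."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.net = nn.Linear(feat_dim + n_classes, n_classes)

    def forward(self, feats, prior):
        # condition the update on the other module's previous prediction
        return self.net(torch.cat([feats, prior], dim=-1))

class IterativeReasoner(nn.Module):
    def __init__(self, feat_dim, n_classes, steps=3):
        super().__init__()
        self.local_mod = ToyModule(feat_dim, n_classes)
        self.global_mod = ToyModule(feat_dim, n_classes)
        self.attn = nn.Linear(2 * n_classes, 2)   # attention over the two streams
        self.steps = steps
        self.n_classes = n_classes

    def forward(self, region_feats):
        b = region_feats.size(0)
        local_pred = global_pred = region_feats.new_zeros(b, self.n_classes)
        for _ in range(self.steps):
            # roll out both modules, cross-feeding each other's estimates
            local_pred = self.local_mod(region_feats, prior=global_pred)
            global_pred = self.global_mod(region_feats, prior=local_pred)
        # final estimate: attention-weighted blend of both prediction streams
        w = torch.softmax(self.attn(torch.cat([local_pred, global_pred], -1)), -1)
        return w[..., :1] * local_pred + w[..., 1:] * global_pred

# e.g.: IterativeReasoner(feat_dim=128, n_classes=10)(torch.randn(4, 128))
</code>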

https://arxiv.org/abs/1805.08136v1 Meta-learning with differentiable closed-form solvers

In this work we propose to use these fast-converging methods as the main adaptation mechanism for few-shot learning. The main idea is to teach a deep network to use standard machine learning tools, such as logistic regression, as part of its own internal model, enabling it to quickly adapt to novel tasks. This requires back-propagating errors through the solver steps. http://www.robots.ox.ac.uk/~luca/r2d2.html
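
A sketch of one such differentiable closed-form solver, here ridge regression (the R2D2 variant from the linked page), chosen because its closed form reduces adaptation to a single matrix solve; the encoder and episode shapes in the usage comment are assumed placeholders.

<code python>
import torch

def ridge_head(support_feats, support_onehot, query_feats, lam=1.0):
    """Closed-form ridge regression W = (X^T X + lam*I)^{-1} X^T Y; fully differentiable."""
    X, Y = support_feats, support_onehot            # (n, d) embeddings, (n, c) one-hot labels
    A = X.t() @ X + lam * torch.eye(X.size(1), device=X.device)
    W = torch.linalg.solve(A, X.t() @ Y)            # the "solver step"; gradients flow through it
    return query_feats @ W                          # (m, c) query logits

# illustrative usage inside a meta-training episode ('encoder' is an assumed
# feature extractor; e.g. 5-way support set sx/sy and query set qx/qy):
# logits = ridge_head(encoder(sx), torch.eye(5)[sy], encoder(qx))
# loss = torch.nn.functional.cross_entropy(logits, qy)
# loss.backward()  # errors back-propagate through the closed-form solve
</code>

The key property is that torch.linalg.solve is differentiable, so the query-set loss back-propagates through the adaptation step into the shared feature extractor.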

https://www.disneyresearch.com/publication/iterative-amortized-inference/ Iterative Amortized Inference

https://github.com/joelouismarino/iterative_inference

https://openreview.net/forum?id=HygYqs0qKX

https://arxiv.org/abs/1706.04008 Recurrent Inference Machines for Solving Inverse Problems

We establish this framework by abandoning the traditional separation between model and inference. Instead, we propose to learn both components jointly without the need to define their explicit functional form. This paradigm shift enables us to bridge the gap between the fields of deep learning and inverse problems. A crucial and unique quality of RIMs is their ability to generalize across tasks without the need to retrain. We convincingly demonstrate this feature in our experiments as well as state-of-the-art results on image denoising and super-resolution.
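
A minimal sketch of a RIM-style update loop, assuming a known differentiable forward operator and a Gaussian data-fit term; the GRU cell and the update head are illustrative choices, not the paper's exact parameterization.

<code python>
import torch
import torch.nn as nn

class RIM(nn.Module):
    """Learned iterative inference: a recurrent cell replaces a hand-designed solver."""
    def __init__(self, dim, hid=64, steps=10):
        super().__init__()
        self.cell = nn.GRUCell(2 * dim, hid)  # consumes current estimate + likelihood gradient
        self.delta = nn.Linear(hid, dim)      # maps hidden state to an update of the estimate
        self.steps = steps

    def forward(self, y, forward_op):
        x = torch.zeros_like(y).requires_grad_(True)   # initial estimate
        h = y.new_zeros(y.size(0), self.cell.hidden_size)
        for _ in range(self.steps):
            # gradient of a Gaussian data-fit term under the known forward model
            nll = 0.5 * ((forward_op(x) - y) ** 2).sum()
            grad = torch.autograd.grad(nll, x, create_graph=True)[0]
            h = self.cell(torch.cat([x, grad], dim=-1), h)
            x = x + self.delta(h)                      # learned, not hand-designed, update
        return x

# e.g. denoising with an identity forward operator:
# rim = RIM(dim=32); x_hat = rim(torch.randn(4, 32), forward_op=lambda x: x)
</code>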

https://arxiv.org/pdf/1811.02486.pdf Concept Learning with Energy-Based Models

https://openreview.net/forum?id=rkxw-hAcFQ Generating Multi-Agent Trajectories using Programmatic Weak Supervision

We blend deep generative models with programmatic weak supervision to generate coordinated multi-agent trajectories of significantly higher quality than previous baselines.