https://arxiv.org/pdf/1707.09219.pdf Recurrent Ladder Networks

We propose a recurrent extension of the Ladder network [24], which is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling.
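
A rough sketch of the idea (my own illustration, not the paper's exact cell; the layer shapes and the RecurrentLadderLevel name are made up for the example): each level runs a bottom-up encoder pass and a top-down decoder pass at every time step, and the decoder state is fed back laterally at the next step, which is what gives the iterative inference and temporal modelling described above.

```python
# Hedged sketch of one level of a recurrent Ladder-style cell (not the paper's
# exact parameterisation): per time step, a bottom-up encode and a top-down
# decode, with the decoder state reused as a lateral/recurrent input next step.
import torch
import torch.nn as nn

class RecurrentLadderLevel(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encode = nn.Linear(in_dim + hidden_dim, hidden_dim)      # bottom-up
        self.decode = nn.Linear(hidden_dim + hidden_dim, hidden_dim)  # top-down + lateral

    def forward(self, x_t, top_down, prev_decoded):
        # Bottom-up: combine the current input with last step's decoder state.
        h = torch.tanh(self.encode(torch.cat([x_t, prev_decoded], dim=-1)))
        # Top-down: combine the encoder state with the signal from the level above.
        d = torch.tanh(self.decode(torch.cat([h, top_down], dim=-1)))
        return h, d  # h goes up the ladder, d is reused at the next time step

# Toy usage: unroll one level over a short sequence.
level = RecurrentLadderLevel(in_dim=8, hidden_dim=16)
x = torch.randn(5, 2, 8)          # (time, batch, features)
d = torch.zeros(2, 16)            # initial lateral/recurrent state
for t in range(x.size(0)):
    h, d = level(x[t], top_down=torch.zeros(2, 16), prev_decoded=d)
```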

https://arxiv.org/abs/1703.01560 LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation https://github.com/jwyang/lr-gan.pytorch

https://medium.com/towards-data-science/a-new-kind-of-deep-neural-networks-749bcde19108
A new kind of deep neural networks

http://www.fil.ion.ucl.ac.uk/spm/doc/papers/Reinforcement_Learning_or_Active_Inference.pdf
Reinforcement Learning or Active Inference?

In summary, the free-energy formulation dispenses with value functions and prescribes optimal trajectories in terms of prior expectations. Active inference ensures these trajectories are followed, even under random perturbations. In what sense are priors optimal? They are optimal in the sense that they restrict the states of an agent to a small part of state-space. In this formulation, rewards do not attract trajectories; rewards are just sensory states that are visited frequently. If we want to change the behaviour of an agent in a social or experimental setting, we simply induce new (empirical) priors by exposing the agent to a new environment. From the engineering perspective, the ensuing behaviour is remarkably robust to noise and limited only by the specification of the new (controlled) environment.
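
A toy numeric sketch of this formulation (my own illustration under simple Gaussian assumptions, not the paper's simulations): perception and action both descend the same precision-weighted prediction-error quantity, so the agent's states are driven toward the prior expectation even while the world is randomly perturbed.

```python
# 1-D agent whose prior says "sensations should be near 0". Perception updates
# an internal estimate mu; action pushes the world state toward what the prior
# expects; both follow gradients of the same free-energy-like quantity.
import random

mu_prior, sigma_s, sigma_p = 0.0, 1.0, 1.0   # prior expectation and variances (assumed)
state = 5.0      # true world state the agent can act on
mu = 0.0         # internal estimate of the state
lr = 0.1

for step in range(200):
    state += random.gauss(0.0, 0.05)          # random perturbation of the world
    s = state                                 # sensation (identity mapping here)
    # F = (s - mu)^2 / (2*sigma_s) + (mu - mu_prior)^2 / (2*sigma_p)
    eps_s = (s - mu) / sigma_s                # sensory prediction error
    eps_p = (mu - mu_prior) / sigma_p         # prior prediction error
    mu    -= lr * (-eps_s + eps_p)            # perception: descend dF/dmu
    state -= lr * eps_s                       # action: descend dF/da with ds/da = 1

print(round(state, 3))   # the state is drawn toward the prior expectation (~0)
```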

https://arxiv.org/pdf/1503.04187.pdf A Minimal Active Inference Agent

http://ac.els-cdn.com/S0149763416307096/1-s2.0-S0149763416307096-main.pdf?_tid=8d4190c6-79ef-11e7-aa20-00000aab0f27&acdnat=1501945731_88bccc8a50ed1b8a2d2e2970b9797ee8 Deep temporal models and active inference

http://www.fil.ion.ucl.ac.uk/~karl/Active%20inference%20and%20learning.pdf Active inference and learning

https://arxiv.org/pdf/1706.00885v3.pdf IDK Cascades: Fast Deep Learning by Learning not to Overthink

We introduce the “I Don't Know” (IDK) prediction cascades framework, a general framework for composing a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search based methods for constructing cascades as well as a new cost-aware objective within this framework. We evaluate these techniques on a range of both benchmark and real-world datasets and demonstrate that prediction cascades can reduce computation by 37%, resulting in up to 1.6x speedups in image classification tasks over state-of-the-art models without a loss in accuracy.
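
A hedged sketch of the cascade idea (illustrative only; the function names and thresholds below are assumptions, not the authors' API): cheap models answer when they are confident, otherwise they return "I don't know" and defer to the next, more expensive model, so most examples never pay for the big model.

```python
# Illustrative IDK-style prediction cascade: cheapest model first, the last
# model always answers.
IDK = object()   # sentinel meaning "pass the example to the next model"

def idk_wrap(model, threshold):
    """Wrap a model so it abstains when its confidence is below threshold."""
    def predict(x):
        label, confidence = model(x)          # model returns (label, confidence)
        return label if confidence >= threshold else IDK
    return predict

def cascade(models_with_thresholds, x):
    """Run models cheapest-first; fall back to the final model if all abstain."""
    *early, last = models_with_thresholds
    for model, threshold in early:
        y = idk_wrap(model, threshold)(x)
        if y is not IDK:
            return y
    return last[0](x)[0]

# Toy usage with stand-in "models" (the cheap one is unsure on large inputs).
cheap  = lambda x: ("small", 0.9) if x < 10 else ("big", 0.4)
costly = lambda x: ("big", 0.99)
print(cascade([(cheap, 0.8), (costly, 0.0)], 3))    # answered by the cheap model
print(cascade([(cheap, 0.8), (costly, 0.0)], 42))   # deferred to the costly model
```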