hierarchical_abstraction [2018/02/14 10:09]
hierarchical_abstraction [2018/11/18 18:57]
propose the information scaling law of ConvACs by making a reasonable assumption.
https://arxiv.org/abs/1712.00409 Deep Learning Scaling is Predictable, Empirically
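A minimal sketch of the kind of power-law learning curve the paper reports: generalization error falls as a power of training-set size, and the exponent can be recovered by linear regression in log-log space. The sizes, errors, and exponent below are synthetic, not figures from the paper.

```python
import numpy as np

# Synthetic learning-curve data: error eps(m) = a * m**b with b < 0,
# the empirical form the paper observes across domains (values illustrative).
sizes = np.array([1e3, 1e4, 1e5, 1e6])
errors = 2.0 * sizes ** -0.35            # ground-truth exponent -0.35

# Fit the exponent as the slope of a line in log-log space.
b, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
print(round(b, 2))                       # -> -0.35, the exponent recovered
```

Because the synthetic data is an exact power law, the regression recovers the exponent exactly; on real learning curves one would fit over the power-law region only.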
https://arxiv.org/pdf/1804.02808v1.pdf Latent Space Policies for Hierarchical Reinforcement Learning
First, each layer in the hierarchy can be trained with exactly the same algorithm. Second, by using an invertible mapping from latent variables to actions, each layer becomes invertible, which means that the higher layer can always perfectly invert any behavior of the lower layer. This makes it possible to train lower layers on heuristic shaping rewards, while higher layers can still optimize task-specific rewards with good asymptotic performance. Finally, our method has a natural interpretation as an iterative procedure for constructing graphical models that gradually simplify the task dynamics.
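The invertibility argument above can be sketched with a toy layer: if the latent-to-action map is bijective, the higher layer can always find a latent that makes the lower layer emit any desired action. Here a simple invertible affine map stands in for the conditional flows the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer" of the hierarchy: an invertible affine map from latent z to
# action a. Adding 2*I keeps W well-conditioned so it is invertible.
W = rng.normal(size=(2, 2)) + 2.0 * np.eye(2)
b = rng.normal(size=2)

def layer_forward(z):
    # Lower layer: latent in, action out.
    return W @ z + b

def layer_inverse(a):
    # Higher layer: because the map is bijective, it can recover the exact
    # latent that produces any desired lower-layer action.
    return np.linalg.solve(W, a - b)

desired_action = np.array([0.5, -1.0])
z = layer_inverse(desired_action)
assert np.allclose(layer_forward(z), desired_action)
```

The same round-trip property is what lets a higher layer "undo" a lower layer trained on shaping rewards and still optimize the task reward.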
https://openreview.net/forum?id=S1JHhv6TW Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions
We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not.
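A minimal sketch of the dilated convolutions this family of models is built from: stacking kernel-size-2 layers with dilations 1, 2, 4 doubles the receptive field per layer, so depth L covers 2**L inputs. Pure NumPy, illustrative only.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Causal 1-D dilated convolution with kernel size 2, zero-padded on the
    left: out[t] = w[0]*x[t-dilation] + w[1]*x[t]."""
    pad = np.concatenate([np.zeros(dilation), x])
    return w[0] * pad[:len(x)] + w[1] * pad[dilation:dilation + len(x)]

x = np.arange(8, dtype=float)
h = x
for d in (1, 2, 4):                       # dilations double layer by layer
    h = dilated_conv1d(h, np.array([1.0, 1.0]), d)

# With all-ones kernels, the last output sums every input the receptive
# field covers: 0 + 1 + ... + 7 = 28, confirming the field spans all 8 steps.
print(h[-1])                              # -> 28.0
```

The interconnections the paper analyzes would link intermediate activations of two such stacks; the sketch only shows the base architecture.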
https://arxiv.org/abs/1807.04640v1 Automatically Composing Representation Transformations as a Means for Generalization
https://arxiv.org/abs/1807.07560v1 Compositional GAN: Learning Conditional Image Composition
https://arxiv.org/pdf/1803.00590.pdf Hierarchical Imitation and Reinforcement Learning
We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration.
https://arxiv.org/pdf/1807.03748.pdf Representation Learning with Contrastive Predictive Coding