statistics and perform semantically meaningful cross-domain inference.

https://arxiv.org/abs/1809.04506 Combined Reinforcement Learning via Abstract Representations

In this paper we propose a new way of explicitly bridging both approaches via a shared low-dimensional learned encoding of the environment, meant to capture summarizing abstractions. We show that the modularity brought by this approach leads to good generalization while being computationally efficient, with planning happening in a smaller latent state space. In addition, this approach recovers a sufficient low-dimensional representation of the environment, which opens up new strategies for interpretable AI, exploration and transfer learning.

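The core idea of planning in a learned latent space can be illustrated with a toy numpy sketch. This is not the paper's architecture: the "encoder" here is a fixed random linear map and the per-action latent dynamics are hypothetical linear models standing in for learned ones. It only shows the mechanism the abstract describes: encode once, then search over action sequences entirely in the small latent space.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical stand-ins (assumptions, not the paper's learned models):
# a linear "encoder" from a 32-dim observation to a 2-dim latent, and
# one linear latent transition matrix per discrete action.
W_enc = rng.normal(size=(2, 32)) / np.sqrt(32)
W_trans = {a: np.eye(2) + 0.1 * rng.normal(size=(2, 2)) for a in range(3)}

def encode(obs):
    """Map a high-dim observation to the low-dim latent state."""
    return W_enc @ obs

def latent_step(z, action):
    """Predicted next latent state under `action` (learned in the paper)."""
    return W_trans[action] @ z

def plan(obs, goal_z, horizon=3):
    """Exhaustive rollout in the 2-dim latent space: return the action
    sequence whose predicted final latent lands closest to goal_z."""
    z0 = encode(obs)
    best_seq, best_dist = None, np.inf
    for seq in product(range(3), repeat=horizon):
        z = z0
        for a in seq:
            z = latent_step(z, a)
        d = np.linalg.norm(z - goal_z)
        if d < best_dist:
            best_seq, best_dist = seq, d
    return best_seq

obs = rng.normal(size=32)
goal = np.zeros(2)
best = plan(obs, goal)
```

Even this brute-force search is cheap because each rollout step is a 2x2 matrix product, which is the computational-efficiency point the abstract makes.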
https://openreview.net/forum?id=rJGgFjA9FQ Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

This paper presents methods to disentangle and interpret contextual effects that are encoded in a deep neural network.

https://arxiv.org/abs/1809.10083v1 Unsupervised Adversarial Invariance

We present a novel unsupervised invariance induction framework for neural networks that learns a split representation of data through competitive training between the prediction task and a reconstruction task coupled with disentanglement, without needing any labeled information about nuisance factors or domain knowledge. We describe an adversarial instantiation of this framework and provide analysis of its working. Our unsupervised model outperforms state-of-the-art methods, which are supervised, at inducing invariance to inherent nuisance factors, effectively using synthetic data augmentation to learn invariance, and domain adaptation. Our method can be applied to any prediction task, e.g., binary/multi-class classification or regression, without loss of generality.

Disentanglement is achieved between e1 and e2 in a novel way through two adversarial disentanglers: one that aims to predict e2 from e1, and another that does the inverse.

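The quantity those two disentanglers fight over is cross-predictability: how well each half of the split representation predicts the other. A minimal sketch, using least-squares linear predictors as a stand-in for the paper's learned adversarial disentanglers and random vectors as stand-ins for the encodings e1 and e2 (all of these are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy split representation: 64 samples, 8 dims per partition.
# In the real framework e1/e2 come from a trained encoder.
e1 = rng.normal(size=(64, 8))
e2 = rng.normal(size=(64, 8))

def predictability(X, Y):
    """R^2-style score of how well Y is linearly predictable from X.
    A least-squares fit plays the role of one adversarial disentangler;
    the encoder would be trained to keep this score low."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ W
    return 1.0 - resid.var() / Y.var()

# One disentangler tries to predict e2 from e1, the other does the
# inverse; low scores in both directions indicate disentanglement.
score_12 = predictability(e1, e2)
score_21 = predictability(e2, e1)
```

In the actual adversarial setup the disentanglers are networks trained to maximize these scores while the encoder is trained to minimize them; the linear fit above only makes the objective concrete.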
https://github.com/oripress/ContentDisentanglement https://openreview.net/forum?id=BylE1205Fm Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer

https://arxiv.org/abs/1804.00104v3 Learning Disentangled Joint Continuous and Discrete Representations

https://arxiv.org/abs/1811.12359 Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations