https://arxiv.org/abs/1804.03599 Understanding disentangling in β-VAE

We propose controlling the increase of the encoding capacity of the latent posterior during training, by allowing the average KL divergence with the prior to gradually increase from zero rather than using the fixed β-weighted KL term of the original β-VAE objective. We show that this promotes robust learning of disentangled representations, combined with better reconstruction fidelity, compared to the original formulation.
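A minimal PyTorch sketch of such a capacity-annealed objective, which replaces the fixed β·KL penalty with γ·|KL − C| and anneals the capacity target C up from zero. It assumes a diagonal-Gaussian posterior and a Bernoulli decoder; γ, C_max, and the linear schedule are illustrative choices, not values taken from the paper:

```python
import torch
import torch.nn.functional as F

def capacity_annealed_vae_loss(recon_x, x, mu, logvar, step,
                               gamma=1000.0, c_max=25.0, anneal_steps=100000):
    """Capacity-annealed beta-VAE objective (sketch in the spirit of the paper).

    gamma, c_max and anneal_steps are illustrative hyperparameters.
    """
    batch = x.size(0)
    # Reconstruction term; a Bernoulli decoder is assumed here.
    recon = F.binary_cross_entropy(recon_x, x, reduction='sum') / batch
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / batch
    # The capacity target C grows linearly from 0 to c_max during training,
    # letting the average KL (and hence the encoding capacity) increase
    # gradually instead of being squashed by a fixed beta * KL term.
    c = min(c_max, c_max * step / anneal_steps)
    return recon + gamma * (kl - c).abs()
```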

https://arxiv.org/abs/1804.08071v1 Decoupled Networks

We first reparametrize the inner product into a decoupled form and then generalize it to the decoupled convolution operator, which serves as the building block of our decoupled networks. We present several effective instances of the decoupled convolution operator; each is well motivated and has an intuitive geometric interpretation. Based on these decoupled operators, we further propose to learn the operator directly from data.
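Concretely, the inner product ⟨w, x⟩ = ‖w‖‖x‖ cos θ is decoupled into a magnitude function h(‖w‖, ‖x‖) times an angular function g(θ). Below is a minimal sketch of one such decoupled operator for a dense layer; the paper works with convolutions and proposes several choices of h and g, so the tanh-saturated magnitude here is just one illustrative instance:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledLinear(nn.Module):
    """Sketch of a decoupled operator for a dense layer.

    <w, x> = ||w|| ||x|| cos(theta) is reparametrized as
    h(||w||, ||x||) * g(theta). The choices of h and g below are
    illustrative; the paper presents several instances and also
    learns the operator from data.
    """
    def __init__(self, in_features, out_features, alpha=1.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.alpha = alpha  # scale of the magnitude function (assumed hyperparameter)

    def forward(self, x):
        x_norm = x.norm(dim=1, keepdim=True)   # ||x||, shape (B, 1)
        w_norm = self.weight.norm(dim=1)       # ||w||, shape (out,)
        # g(theta): cosine of the angle between x and each weight vector.
        cos_theta = F.linear(F.normalize(x, dim=1),
                             F.normalize(self.weight, dim=1))
        # h(||w||, ||x||): bounded magnitude, decoupled from the angle.
        h = self.alpha * torch.tanh(x_norm * w_norm)  # broadcasts to (B, out)
        return h * cos_theta
```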

https://arxiv.org/abs/1804.10469v1 Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders

https://blog.openai.com/glow/

https://arxiv.org/abs/1808.00948 Diverse Image-to-Image Translation via Disentangled Representations

https://arxiv.org/pdf/1808.06508.pdf Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
https://deepmind.com/blog/imagine-creating-new-visual-concepts-recombining-familiar-ones/

We introduce the Variational Autoencoder with Shared Embeddings (VASE). Based on the Minimum Description Length principle, VASE automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting. Our approach encourages the learnt representations to be disentangled, which imparts a number of desirable properties: VASE can deal sensibly with ambiguous inputs, it can enhance its own representations through imagination-based exploration, and most importantly, it exhibits semantically meaningful sharing of latents between different datasets. Compared to baselines with entangled representations, our approach is able to reason beyond surface-level statistics and perform semantically meaningful cross-domain inference.

https://arxiv.org/abs/1809.04506 Combined Reinforcement Learning via Abstract Representations

In this paper we propose a new way of explicitly bridging model-free and model-based approaches via a shared low-dimensional learned encoding of the environment, meant to capture summarizing abstractions. We show that the modularity brought by this approach leads to good generalization while being computationally efficient, with planning happening in a smaller latent state space. In addition, this approach recovers a sufficient low-dimensional representation of the environment, which opens up new strategies for interpretable AI, exploration, and transfer learning.
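A minimal sketch of such a shared abstract state: one encoder feeds both a model-free value head and a latent transition model used for planning. The module names, sizes, and residual transition form are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class AbstractStateAgent(nn.Module):
    """Shared low-dimensional encoding used by both a model-free head and
    a model-based latent transition model (illustrative sketch)."""
    def __init__(self, obs_dim, n_actions, latent_dim=3):
        super().__init__()
        # Shared encoder: observation -> abstract state z.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, latent_dim))
        # Model-free component: Q-values computed from z.
        self.q_head = nn.Linear(latent_dim, n_actions)
        # Model-based component: transition model in the small latent
        # space, where planning is computationally cheap.
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + n_actions, 32), nn.Tanh(),
            nn.Linear(32, latent_dim))

    def forward(self, obs, action_onehot):
        z = self.encoder(obs)
        q_values = self.q_head(z)
        # Residual prediction of the next abstract state.
        z_next = z + self.transition(torch.cat([z, action_onehot], dim=1))
        return z, q_values, z_next
```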

https://openreview.net/forum?id=rJGgFjA9FQ Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

This paper presents methods to disentangle and interpret contextual effects that are encoded in a deep neural network.

https://arxiv.org/abs/1809.10083v1 Unsupervised Adversarial Invariance

We present a novel unsupervised invariance induction framework for neural networks that learns a split representation of data through competitive training between the prediction task and a reconstruction task coupled with disentanglement, without needing any labeled information about nuisance factors or domain knowledge. We describe an adversarial instantiation of this framework and provide an analysis of how it works. Our unsupervised model outperforms state-of-the-art methods, which are supervised, at inducing invariance to inherent nuisance factors, effectively using synthetic data augmentation to learn invariance, and domain adaptation. Our method can be applied to any prediction task, e.g., binary/multi-class classification or regression, without loss of generality.

Disentanglement is achieved between e1 and e2 in a novel way through two adversarial disentanglers: one that aims to predict e2 from e1 and another that does the inverse.
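A minimal sketch of those two disentanglers: each tries to predict one half of the split representation from the other, and the encoder is trained against them (e.g. by alternating updates) so that e1 and e2 end up carrying no information about each other. The layer sizes and the MSE objective are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Disentangler(nn.Module):
    """Adversary that tries to predict one split of the representation
    from the other (illustrative sizes)."""
    def __init__(self, src_dim, tgt_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(src_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, tgt_dim))

    def forward(self, e_src):
        return self.net(e_src)

def disentangler_loss(e1, e2, d12, d21):
    """Loss the two adversaries minimize; the encoder is updated to
    defeat them, pushing the predictability of each split from the
    other toward zero.

    d12 predicts e2 from e1; d21 does the inverse. detach() keeps the
    adversaries' updates from flowing back into the encoder.
    """
    return (F.mse_loss(d12(e1.detach()), e2.detach()) +
            F.mse_loss(d21(e2.detach()), e1.detach()))
```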

https://github.com/oripress/ContentDisentanglement https://openreview.net/forum?id=BylE1205Fm Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer

https://arxiv.org/abs/1804.00104v3 Learning Disentangled Joint Continuous and Discrete Representations

https://arxiv.org/abs/1811.12359 Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

https://arxiv.org/pdf/1812.02230.pdf Towards a Definition of Disentangled Representations