Manifold Traversal

Aliases

Intent

Train the network so that it learns a continuous manifold that can be traversed as an aid to explaining its outputs.

Motivation

How can we traverse the input space in a way that explains the outputs?
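
As a concrete illustration, here is a minimal sketch of the traversal itself, assuming a trained generative model (e.g. a VAE or GAN) whose decode callable maps latent vectors back to inputs; the decode function and step count are illustrative assumptions, not part of any particular library:

    import numpy as np

    def traverse(decode, z_start, z_end, steps=8):
        """Decode evenly spaced points on the line between two latent codes."""
        ts = np.linspace(0.0, 1.0, steps)
        # Each decoded point is one frame of the traversal from start to end.
        return [decode((1.0 - t) * z_start + t * z_end) for t in ts]

Watching how the decoded outputs change along the path gives a visual account of what the learned representation encodes between the two endpoints.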

Sketch

This section provides alternative descriptions of the pattern in the form of an illustration or an alternative formal expression. By looking at the sketch, a reader may quickly understand the essence of the pattern.

Discussion

This is the main section of the pattern; it explains the pattern in greater detail, using the vocabulary we develop in the theory section of this book. We do not provide detailed proofs but rather reference their sources. This section expounds on how the motivation is addressed, and also raises additional questions that may be interesting topics for future research.

Known Uses

Here we review several projects or papers that have used this pattern.

Related Patterns

In this section we describe in a diagram how this pattern is conceptually related to other patterns. The relationships may be precise or fuzzy, so we provide further explanation of the nature of each relationship. We also describe other patterns that may not be conceptually related but work well in combination with this pattern.

Relationship to Canonical Patterns

Relationship to other Patterns

Further Reading

We provide here some additional external material that will help in exploring this pattern in more detail.

References

To aid the reader, we include the sources referenced in the text of this pattern.

https://en.wikipedia.org/wiki/Semi-supervised_learning

http://arxiv.org/pdf/1511.06381v2.pdf Manifold Regularized Deep Neural Networks Using Adversarial Examples

To address adversarial inputs, we propose manifold regularized networks (MRnet), which use a novel training objective function that minimizes the difference between the multi-layer embeddings of samples and those of their adversarial counterparts.
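
A hedged sketch of what such an objective might look like in PyTorch, using a single embedding and an FGSM-style perturbation rather than the paper's full multi-layer formulation; model.embed, epsilon, and lam are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def mrnet_loss(model, x, y, epsilon=0.1, lam=1.0):
        # Standard classification loss on clean inputs.
        x = x.clone().detach().requires_grad_(True)
        logits = model(x)
        ce = F.cross_entropy(logits, y)
        # FGSM-style adversarial perturbation of the input.
        grad, = torch.autograd.grad(ce, x, retain_graph=True)
        x_adv = (x + epsilon * grad.sign()).detach()
        # Penalize the distance between embeddings of clean and adversarial
        # inputs, pulling both onto the same region of the learned manifold.
        manifold = F.mse_loss(model.embed(x), model.embed(x_adv))
        return ce + lam * manifold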

http://arxiv.org/abs/1602.04938 “Why Should I Trust You?”: Explaining the Predictions of Any Classifier

https://homes.cs.washington.edu/~marcotcr/blog/lime/ LIME - Local Interpretable Model-Agnostic Explanations
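
The core loop of LIME is compact enough to sketch: perturb the instance, query the black box, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. The noise scale, kernel width, and binary predict_proba interface below are illustrative assumptions, not LIME's actual API:

    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_explain(predict_proba, x, n_samples=500, width=0.75):
        # Perturb the instance with Gaussian noise around x.
        X = x + np.random.normal(scale=0.1, size=(n_samples, x.shape[0]))
        y = predict_proba(X)[:, 1]           # black-box predictions
        # Weight samples by proximity to x (an exponential kernel).
        w = np.exp(-np.linalg.norm(X - x, axis=1) ** 2 / width ** 2)
        # Fit a weighted linear surrogate; its coefficients explain the
        # black box locally around x.
        surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
        return surrogate.coef_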

http://arxiv.org/abs/1605.09674v2 VIME: Variational Information Maximizing Exploration

This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
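
Schematically, VIME's reward modification reduces to adding an information-gain bonus to the extrinsic reward. The sketch below stands in for the paper's Bayesian-neural-network machinery with toy factorized Gaussian weight posteriors; eta and the posterior objects are illustrative stand-ins:

    import torch
    from torch.distributions import Normal, kl_divergence

    def augmented_reward(reward, post_old, post_new, eta=0.01):
        """Add the information gain from one transition as an intrinsic bonus."""
        # Information gain ~= KL between the dynamics model's weight posterior
        # after and before updating on the observed transition.
        info_gain = kl_divergence(post_new, post_old).sum()
        return reward + eta * info_gain.item()

    # Example: posteriors over a toy 3-parameter dynamics model.
    old = Normal(torch.zeros(3), torch.ones(3))
    new = Normal(torch.tensor([0.1, 0.0, -0.2]), torch.ones(3) * 0.9)
    print(augmented_reward(1.0, old, new))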

http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45404.pdf Generating Sentences from a Continuous Space

In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding.

This paper introduces the use of a variational autoencoder for natural language sentences. We present novel techniques that allow us to train our model successfully, and find that it can effectively impute missing words. We analyze the latent space learned by our model and find that it generates coherent and diverse sentences through purely continuous sampling, and provides interpretable homotopies that smoothly interpolate between sentences.

https://arxiv.org/abs/1611.05644 Inverting The Generator Of A Generative Adversarial Network

Generative adversarial networks (GANs) learn to synthesise new samples from a high-dimensional distribution by passing samples drawn from a latent space through a generative network. When the high-dimensional distribution describes images of a particular data set, the network should learn to generate visually similar image samples for latent variables that are close to each other in the latent space. For tasks such as image retrieval and image classification, it may be useful to exploit the arrangement of the latent space by projecting images into it, and using this as a representation for discriminative tasks. GANs often consist of multiple layers of non-linear computations, making them very difficult to invert. This paper introduces techniques for projecting image samples into the latent space using any pre-trained GAN, provided that the computational graph is available. We evaluate these techniques on both MNIST digits and Omniglot handwritten characters. In the case of MNIST digits, we show that projections into the latent space maintain information about the style and the identity of the digit. In the case of Omniglot characters, we show that even characters from alphabets that have not been seen during training may be projected well into the latent space; this suggests that this approach may have applications in one-shot learning.
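
The projection itself can be sketched as gradient descent on the latent code, assuming a differentiable PyTorch generator G; the latent size, step count, and learning rate here are illustrative:

    import torch

    def invert(G, x_target, latent_dim=100, steps=500, lr=0.05):
        """Find a latent code whose decoding approximates x_target."""
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Minimize reconstruction error between G(z) and the target image.
            loss = torch.nn.functional.mse_loss(G(z), x_target)
            loss.backward()
            opt.step()
        return z.detach()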

https://arxiv.org/pdf/1611.07429v1.pdf TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning

In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy. We propose to build a TreeView representation of the complex model via hierarchical partitioning of the feature space, which reveals the iterative rejection of unlikely class labels until the correct association is predicted.
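
A much-simplified sketch in the same spirit, fitting a shallow surrogate decision tree over hidden-layer features so each prediction reads as a sequence of feature-space splits; this is a surrogate-tree approximation, not the paper's exact hierarchical factorization:

    from sklearn.tree import DecisionTreeClassifier, export_text

    def treeview_surrogate(hidden_features, predictions, max_depth=4):
        # Partition the hidden-feature space so the tree mimics the network.
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(hidden_features, predictions)
        # A human-readable partitioning of the feature space.
        return export_text(tree)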

https://arxiv.org/abs/1609.04468 Sampling Generative Networks

We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model's prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Binary classification using attribute vectors is presented as a technique supporting quantitative analysis of the latent space. Most techniques are intended to be independent of model type and examples are shown on both Variational Autoencoders and Generative Adversarial Networks.
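
Spherical linear interpolation (slerp) is the paper's drop-in replacement for linear interpolation; a minimal sketch for two latent vectors:

    import numpy as np

    def slerp(z0, z1, t):
        """Interpolate along the great circle between z0 and z1."""
        omega = np.arccos(np.clip(
            np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)),
            -1.0, 1.0))
        if np.isclose(omega, 0.0):
            return (1.0 - t) * z0 + t * z1   # vectors nearly parallel
        return (np.sin((1.0 - t) * omega) * z0
                + np.sin(t * omega) * z1) / np.sin(omega)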

https://arxiv.org/pdf/1711.01970.pdf Optimal transport maps for distribution preserving operations on latent spaces of Generative Models

In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation.
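
The mismatch is easy to see numerically: for a standard Gaussian prior, the midpoint of two samples is distributed as N(0, I/2), so linearly interpolated points drift off the prior's typical set:

    import numpy as np

    rng = np.random.default_rng(0)
    z0, z1 = rng.standard_normal((2, 10000, 128))
    mid = 0.5 * (z0 + z1)
    # Norms of prior samples concentrate near sqrt(128) ~ 11.3 ...
    print(np.linalg.norm(z0, axis=1).mean())
    # ... while midpoints concentrate near sqrt(64) ~ 8.0.
    print(np.linalg.norm(mid, axis=1).mean())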

https://github.com/lmcinnes/umap UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
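
A minimal usage sketch of the library (the data matrix here is a placeholder):

    import numpy as np
    import umap

    X = np.random.rand(1000, 50)   # placeholder feature matrix
    # Fit a 2-D embedding that approximately preserves manifold structure.
    embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X)
    print(embedding.shape)          # (1000, 2)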