
**Name** Adversarial Training

**Intent**

Train a network against an adversary, pitting a generative network against a discriminative network, to achieve better generalization.

**Motivation**

How can we prevent adversarial observations from causing misclassification?

**Structure**

**Discussion**

This method involves two artificial neural networks. A generative network creates output from the random data it is fed. A second, discriminator network is trained to distinguish real data from adversarially generated data. The generator learns from the discriminator's responses and produces increasingly realistic output. The consequence of this configuration is that the generator and the discriminator are adversaries: the generator tries to fool the discriminator, while the discriminator tries to avoid being fooled.
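This equilibrium has a closed-form sketch: for fixed densities, the best possible discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)). The toy NumPy example below (with made-up Gaussian densities, not any trained model) shows that once the generator matches the data distribution, the discriminator can do no better than output 0.5 everywhere.

```python
import numpy as np

# Hypothetical 1-D illustration: when the densities are known in closed
# form, the discriminator that best separates real from generated data is
# D*(x) = p_data(x) / (p_data(x) + p_g(x)).

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def optimal_discriminator(x, mu_data, mu_gen, sigma=1.0):
    p_data = gaussian_pdf(x, mu_data, sigma)
    p_gen = gaussian_pdf(x, mu_gen, sigma)
    return p_data / (p_data + p_gen)

xs = np.linspace(-3, 3, 7)

# Early in training the generator is far from the data; D is confident.
early = optimal_discriminator(xs, mu_data=0.0, mu_gen=2.5)

# At convergence the generator matches the data distribution, and the
# best the discriminator can do is output 1/2 everywhere.
converged = optimal_discriminator(xs, mu_data=0.0, mu_gen=0.0)
print(converged)  # all values are 0.5
```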

The advantage of training a generative network against a competing adversarial network is that it has been shown to work better than most other generative models (e.g. the Variational Autoencoder). One explanation is that the objective function favors the generation of realistic data. Variational Autoencoders, in contrast, rely on priors that assume smoothness rather than realism.

A disadvantage of adversarial networks is that they are difficult to train. The objective function does not have a closed form, unlike most conventional objective functions. Adversarial training consists of finding a Nash equilibrium of a two-player non-cooperative game, and finding that equilibrium may require a lot of trial and error.
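The instability has a simple illustration: even in the toy two-player game min over x, max over y of x*y, whose unique Nash equilibrium is (0, 0), naive simultaneous gradient updates spiral away from the equilibrium rather than toward it. A minimal sketch (the step size and iteration count are arbitrary choices):

```python
import numpy as np

# Toy two-player game min_x max_y x*y. Simultaneous gradient
# descent/ascent -- the naive analogue of alternating GAN updates --
# moves away from the Nash equilibrium at (0, 0) instead of converging,
# illustrating why finding the equilibrium takes care.
x, y = 1.0, 1.0
lr = 0.1
radii = []
for _ in range(100):
    gx, gy = y, x                      # d(xy)/dx = y, d(xy)/dy = x
    x, y = x - lr * gx, y + lr * gy    # x descends, y ascends
    radii.append(np.hypot(x, y))

print(radii[0], radii[-1])  # the distance from the equilibrium grows
```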

**Known Uses**

**Related Patterns**

<Diagram>

**References**
https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation.

http://arxiv.org/abs/1511.06581v3 Dueling Network Architectures for Deep Reinforcement Learning

Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm

http://arxiv.org/pdf/1505.07818v4.pdf Domain-Adversarial Training of Neural Networks

Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.

http://arxiv.org/abs/1606.00704v1 Adversarially Learned Inference

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process.

https://openai.com/requests-for-research/#multiobjective-rl

https://arxiv.org/abs/1606.03498v1 Improved Techniques for Training GANs

Generative adversarial networks are a promising class of generative models that has so far been held back by unstable training and by the lack of a proper evaluation metric. This work presents partial solutions to both of these problems. We propose several techniques to stabilize training that allow us to train models that were previously untrainable. Moreover, our proposed evaluation metric (the Inception score) gives us a basis for comparing the quality of these models.

http://arxiv.org/abs/1511.06385 A Unified Gradient Regularization Family for Adversarial Examples

We develop a family of gradient regularization methods that effectively penalize the gradient of loss function w.r.t. inputs.
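A hedged sketch of this idea on a toy logistic-regression model (the weights and penalty coefficient below are made up, and the closed-form input gradient is specific to logistic loss):

```python
import numpy as np

# Input-gradient penalty sketch for p(y=1|x) = sigmoid(w.x + b): the
# regularizer adds the squared norm of d(loss)/d(input) to the training
# objective, discouraging predictions that change sharply under small
# input perturbations.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(w, b, x, y):
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    # For logistic loss, d(loss)/dx has the closed form (p - y) * w.
    grad_x = (p - y) * w
    return loss, grad_x

w = np.array([2.0, -1.0])     # made-up model weights
x = np.array([0.5, 0.3])      # made-up input
loss, grad_x = loss_and_input_grad(w, 0.0, x, y=1.0)

lam = 0.1                     # made-up regularization strength
penalty = lam * np.sum(grad_x ** 2)
regularized = loss + penalty
```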

http://arxiv.org/pdf/1606.07536v1.pdf Coupled Generative Adversarial Networks

It consists of a pair of generative adversarial networks, each responsible for generating images in one domain. We show that by enforcing a simple weight-sharing constraint, the CoGAN learns to generate pairs of corresponding images without existence of any pairs of corresponding images in the two domains in the training set. In other words, the CoGAN learns a joint distribution of images in the two domains from images drawn separately from the marginal distributions of the individual domains.

http://arxiv.org/pdf/1605.09782v3.pdf Adversarial Feature Learning

We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping.

http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/

http://staff.cs.manchester.ac.uk/~kechen/publication/chapter13.html Combining Competitive Learning Networks of Various Representations for Sequential Data Clustering

Sequential data clustering provides useful techniques for condensing and summarizing information conveyed in sequential data, which is demanded in various fields ranging from time series analysis to video clip understanding. In this chapter, we propose a novel approach to sequential data clustering by combining multiple competitive learning networks incorporated by various representations of sequential data and thus the clustering will be performed in the feature space. In our approach, competitive learning networks of a rival-penalized learning mechanism are employed for clustering analyses based on different sequential data representations individually while an optimal selection function is applied to find out a final consensus partition from multiple partition candidates yielded by applying alternative consensus functions to results of competitive learning on various representations. Thanks to its capability of the rival penalized learning rules in automatic model selection and the synergy of diverse partitions on various representations resulting from diversified initialization and stopping conditions, our ensemble learning approach yields favorite results especially in model selection, i.e. no assumption on the number of clusters underlying a given data set is needed prior to clustering analysis, which has been demonstrated in synthetic time series and motion trajectory clustering analysis tasks.

Compared to all other models I can think of:

In terms of actual results, they seem to produce better samples than other models. The GAN framework can train any kind of generator net (in theory; in practice, it’s pretty hard to use REINFORCE to train generator nets with discrete outputs). Most other frameworks require that the generator net has some particular functional form, like the output layer being Gaussian. Essentially all of the other frameworks require that the generator net put non-zero mass everywhere. GANs can learn models that generate points only on a thin manifold that goes near the data. There’s no need to design the model to obey any kind of factorization. Any generator net and any discriminator net will work. Compared to the PixelRNN, the runtime to generate a sample is smaller. GANs produce a sample in one shot, while PixelRNNs need to produce a sample one pixel at a time.

Compared to the VAE, there is no variational lower bound. If the discriminator net fits perfectly, then the generator net recovers the training distribution perfectly. In other words, GANs are asymptotically consistent, while the VAE has some bias.

Compared to deep Boltzmann machines, there is neither a variational lower bound, nor an intractable partition function. The samples are generated in one shot, instead of generated by repeatedly applying a Markov chain operator.

Compared to GSNs, the samples are generated in one shot, instead of generated by repeatedly applying a Markov chain operator.

Compared to NICE and Real NVP, there’s no restriction on the size of the latent code.

To be clear, I think a lot of these other methods are great, and they also have different advantages over GANs.

https://arxiv.org/abs/1605.07725 Virtual Adversarial Training for Semi-Supervised Text Classification

http://www.newyorker.com/news/john-cassidy/the-triumph-and-failure-of-john-nashs-game-theory

http://arxiv.org/abs/1609.05473 SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.

http://arxiv.org/abs/1609.04802 Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

In this paper, we present super-resolution generative adversarial network (SRGAN). To our knowledge, it is the first framework capable of recovering photo-realistic natural images from 4 times downsampling. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss function motivated by perceptual similarity instead of similarity in pixel space.

https://arxiv.org/abs/1609.08661v1 Task Specific Adversarial Cost Function

We propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.

Generative Adversarial Networks in the context of generative models: a random sample z is drawn from a prior distribution p_z and mapped by G to a sample in the model distribution space Q. Samples x, drawn from either the training data distribution P or the model distribution Q, are mapped by D to a prediction in (0, 1) of whether the sample came from the training data distribution.
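The sampling path described above can be sketched with made-up one-layer networks; none of the weights are trained here, and the point is only the shapes and ranges of the mappings z → G(z) → D(G(z)):

```python
import numpy as np

# Minimal sketch of the GAN sampling path: z is drawn from the prior
# p_z, mapped by G into the model distribution space Q, and D maps any
# sample to a value in (0, 1) interpreted as the probability it came
# from the data distribution P. Weights are random, not trained.
rng = np.random.default_rng(0)

def G(z, W):
    return np.tanh(W @ z)                     # generator: latent -> data space

def D(x, v):
    return 1.0 / (1.0 + np.exp(-(v @ x)))     # discriminator: data -> (0, 1)

z = rng.standard_normal(4)        # z ~ p_z (standard normal prior)
W = rng.standard_normal((3, 4))   # made-up generator weights
v = rng.standard_normal(3)        # made-up discriminator weights

fake = G(z, W)                    # a sample from the model distribution Q
score = D(fake, v)                # D's probability that the sample is real
```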

http://www.inference.vc/are-energy-based-gans-actually-energy-based

http://www.araya.org/archives/1183 Stability of Generative Adversarial Networks

https://arxiv.org/abs/1610.06918 Learning to Protect Communications with Adversarial Neural Cryptography

We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob. We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals.

https://arxiv.org/abs/1611.01673v2 Generative Multi-Adversarial Networks

We introduce the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher.

https://arxiv.org/abs/1611.01046 Learning to Pivot with Adversarial Networks

Robust inference is possible if it is based on a pivot – a quantity whose distribution is invariant to the unknown value of the (categorical or continuous) nuisance parameters that parametrize this family of generation processes. In this work, we introduce a flexible training procedure based on adversarial networks for enforcing the pivotal property on a predictive model. We derive theoretical results showing that the proposed algorithm tends towards a minimax solution corresponding to a predictive model that is both optimal and independent of the nuisance parameters (if that model exists) or for which one can tune the trade-off between power and robustness.

In terms of applications, the proposed solution can be used in any situation where the training data may not be representative of the real data the predictive model will be applied to in practice.

Architecture for the adversarial training of a binary classifier f against nuisance parameters Z. The adversary r models the distribution p(z|f(X; θf ) = s) of the nuisance parameters as observed only through the output f(X; θf ) of the classifier. By maximizing the antagonistic objective Lr(θf , θr) (as part of minimizing Lf (θf ) − λLr(θf , θr)), the classifier f forces p(z|f(X; θf ) = s) towards the prior p(z), which happens when f(X; θf ) is independent of the nuisance parameter Z and therefore pivotal.

https://arxiv.org/abs/1611.01236 Adversarial Machine Learning at Scale

Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model's parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to successfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a “label leaking” effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.
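The single-step attack the paper refers to can be sketched on a toy logistic model (weights, input, and epsilon below are made up): the adversarial example moves each input coordinate by epsilon in the sign of the loss gradient, which suffices to raise the loss.

```python
import numpy as np

# Fast-gradient-sign sketch on a toy logistic model: the adversarial
# example x_adv = x + epsilon * sign(d loss / d x) perturbs the input
# in the direction that increases the loss.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

w = np.array([1.5, -2.0, 0.5])   # made-up model weights
x = np.array([0.2, -0.1, 0.4])   # made-up clean input
y = 1.0

grad_x = (sigmoid(w @ x) - y) * w     # closed-form d(loss)/dx for logistic loss
x_adv = x + 0.25 * np.sign(grad_x)    # single-step adversarial example

clean_loss = logistic_loss(w, x, y)
adv_loss = logistic_loss(w, x_adv, y)
# adversarial training would now also minimize the loss on x_adv
```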

https://arxiv.org/pdf/1611.03852v2.pdf A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models

Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL) domains, typically for imitation learning from demonstrations. In these fields, learning the cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear to be a superficial one, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator’s density can be evaluated and is provided as an additional input to the discriminator. Interestingly, maximum entropy IRL is a special case of an energy-based model. We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs. By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.

https://arxiv.org/abs/1610.01945 Connecting Generative Adversarial Networks and Actor-Critic Methods

Information structure of GANs and AC methods. Empty circles represent models with a distinct loss function. Filled circles represent information from the environment. Diamonds represent fixed functions, both deterministic and stochastic. Solid lines represent the flow of information, while dotted lines represent the flow of gradients used by another model. Paths which are analogous between the two models are highlighted in red. The dependence of Q on future states and the dependence of future states on π are omitted for clarity.

Other machine learning problems which can be framed as multilevel optimization problems.

https://arxiv.org/abs/1609.03126 Energy-based Generative Adversarial Network

We introduce the “Energy-based Generative Adversarial Network” model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows the use of a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of the EBGAN framework using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.

https://arxiv.org/abs/1611.02163 Unrolled Generative Adversarial Networks

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.

https://arxiv.org/abs/1612.02780 Improved generator objectives for GANs

We present a framework to understand GAN training as alternating density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. We also derive a family of generator objectives that target arbitrary f-divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.

https://arxiv.org/pdf/1612.08354v1.pdf Image-Text Multi-Modal Representation Learning by Adversarial Backpropagation

Our model is end-to-end backpropagation, intuitive and easily extended to other multimodal learning work.

https://openreview.net/pdf?id=SyxeqhP9ll Calibrating Energy-based Generative Adversarial Networks

In this paper **we propose equipping Generative Adversarial Networks with the ability to produce direct energy estimates for samples**. Specifically, we develop a flexible adversarial training framework, and prove this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum. We derive the analytic form of the induced solution, and analyze its properties. In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques. Empirically, the experiment results closely match our theoretical analysis, verifying that the discriminator is able to recover the energy of data distribution.

https://openreview.net/pdf?id=B1ElR4cgg Adversarially Learned Inference

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.

https://arxiv.org/abs/1706.08500v1 GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium

https://arxiv.org/abs/1709.02538v1 CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning https://github.com/Bitadr/CurTAIL

https://arxiv.org/abs/1507.00677 Distributional Smoothing with Virtual Adversarial Training

VAT resembles adversarial training, but distinguishes itself in that it determines the adversarial direction from the model distribution alone, without using the label information, making it applicable to semi-supervised learning. The computational cost of VAT is relatively low. For neural networks, the approximated gradient of the LDS can be computed with no more than three pairs of forward and back propagations.
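A minimal sketch of the virtual adversarial direction for a toy logistic model (weights and hyperparameters below are made up; for this linear model the KL gradient with respect to the perturbation has a closed form, so the approximation step reduces to one line):

```python
import numpy as np

# Virtual adversarial direction sketch for p(y=1|x) = sigmoid(w.x).
# Unlike standard adversarial training, no label is used: the
# perturbation r is chosen to maximally change the model's OWN output
# distribution, approximated by one gradient step on the divergence
# KL(p(.|x) || p(.|x + r)) starting from a small random perturbation.
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -2.0, 0.5])    # made-up model weights
x = np.array([0.3, 0.1, -0.2])    # made-up input
p0 = sigmoid(w @ x)               # model's current prediction

xi, eps = 1e-6, 0.1               # made-up VAT hyperparameters
d = rng.standard_normal(3)
d /= np.linalg.norm(d)            # random unit starting direction

p_r = sigmoid(w @ (x + xi * d))
g = (p_r - p0) * w                # closed-form grad of the KL w.r.t. r
r_adv = eps * g / np.linalg.norm(g)

# For this linear model the virtual adversarial direction is parallel
# to +/- w, the direction in which the prediction changes fastest.
```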

https://arxiv.org/abs/1704.03976 Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning

https://arxiv.org/abs/1711.08534 Safer Classification by Synthesis

At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately “knows when it does not know,” and provides resilience to out of distribution examples while maintaining competitive performance for standard examples.

https://arxiv.org/abs/1805.10204 Adversarial examples from computational constraints

This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.