VAT resembles adversarial training, but distinguishes itself in that it determines the adversarial direction from the model distribution alone, without using label information, which makes it applicable to semi-supervised learning. The computational cost of VAT is relatively low: for neural networks, the approximated gradient of the LDS can be computed with no more than three pairs of forward and back propagations.

https://arxiv.org/abs/1704.03976 Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning
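
A minimal PyTorch sketch of the VAT regularizer as described above, not the authors' reference code: ''model'' is assumed to map a batch of inputs to logits, and the hyperparameter values (''xi'', ''eps'', a single power-iteration step) are illustrative defaults. With ''n_power=1'', the routine uses three model forward passes plus the accompanying backward passes, matching the "no more than three pairs" cost noted above.

<code python>
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Flatten all but the batch dimension and normalize each row to unit L2 norm.
    norm = d.flatten(1).norm(dim=1) + 1e-8
    return d / norm.view(-1, *([1] * (d.dim() - 1)))

def vat_loss(model, x, xi=1e-6, eps=8.0, n_power=1):
    # Clean model distribution p(y|x); no labels are used anywhere below,
    # which is what makes VAT applicable to unlabeled data.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)

    # Power iteration: start from random noise and refine it toward the
    # direction that most increases KL(p(y|x) || p(y|x + r)).
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        log_p_hat = F.log_softmax(model(x + xi * d), dim=1)
        adv_distance = F.kl_div(log_p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(adv_distance, d)[0]
        d = _l2_normalize(grad.detach())

    # LDS: divergence between the clean prediction and the prediction
    # at the virtual adversarial point x + eps * d.
    log_p_hat = F.log_softmax(model(x + eps * d), dim=1)
    return F.kl_div(log_p_hat, p, reduction="batchmean")
</code>

In semi-supervised training this would be added to the usual supervised loss, e.g. cross-entropy on the labeled batch plus ''alpha * vat_loss(model, x_unlabeled)'', where ''alpha'' is an assumed weighting hyperparameter.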

https://arxiv.org/abs/1711.08534 Safer Classification by Synthesis

At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately "knows when it does not know," and provides resilience to out of distribution examples while maintaining competitive performance for standard examples.
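
The decision rule reads directly as code. The sketch below is my own illustration of that rule, not the paper's implementation: ''generators'' is an assumed mapping from class label to a trained per-class generator, ''closest_generation'' crudely approximates "most similar generation" by latent sampling (the paper would instead optimize the latent code), and the rejection threshold ''tau'' is a hypothetical tuning parameter.

<code python>
import numpy as np

def closest_generation(generator, x, n_samples=512, latent_dim=64):
    # Stand-in for "query the generator for its most similar generation":
    # draw latent samples and keep the output nearest to x in L2 distance.
    # Optimizing the latent code directly would be a drop-in replacement.
    z = np.random.randn(n_samples, latent_dim)
    candidates = generator(z)                      # shape (n_samples, d)
    dists = np.linalg.norm(candidates - x.ravel(), axis=1)
    i = dists.argmin()
    return candidates[i], dists[i]

def classify_by_synthesis(generators, x, tau):
    # Ask every class's generator for its best reconstruction of x and pick
    # the class whose generation lands closest. If even the best generation
    # is farther than tau, the model "knows it does not know" and abstains.
    results = [(closest_generation(g, x)[1], label)
               for label, g in generators.items()]
    best_dist, best_label = min(results)
    return best_label if best_dist <= tau else None
</code>

The abstention branch is what provides the resilience to out-of-distribution examples claimed above: an input no generator can reconstruct well is rejected instead of being forced into a class.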

https://arxiv.org/abs/1805.10204 Adversarial examples from computational constraints

This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.

https://arxiv.org/abs/1811.11553v1 Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects

https://arxiv.org/abs/1809.07802v2 Playing the Game of Universal Adversarial Perturbations

https://arxiv.org/pdf/1811.04422.pdf An Optimal Control View of Adversarial Machine Learning