https://arxiv.org/pdf/1708.06046.pdf nuts-flow/ml : data pre-processing for deep learning
https://arxiv.org/pdf/1710.09412v1.pdf mixup: Beyond Empirical Risk Minimization
In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples.
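The convex-combination idea can be sketched in a few lines of NumPy. This is an illustrative version only: the Beta(alpha, alpha) mixing weight follows the paper, but the per-example weights, one-hot labels, and fixed seed are choices made here for the sketch.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0)):
    """Mix each example with a randomly chosen partner; labels mix identically.

    x: (n, d) float features, y: (n, k) one-hot labels.
    alpha parameterizes the Beta distribution the mixing weight is drawn from.
    """
    n = x.shape[0]
    lam = rng.beta(alpha, alpha, size=(n, 1))  # one mixing weight per example
    perm = rng.permutation(n)                  # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]      # convex combination of inputs
    y_mix = lam * y + (1 - lam) * y[perm]      # same combination of labels
    return x_mix, y_mix
```

Because the label mix uses the same weight as the input mix, targets stay valid probability vectors, which is what pushes the network toward linear behavior between training points.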
https://arxiv.org/pdf/1702.05538.pdf DATASET AUGMENTATION IN FEATURE SPACE
We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space.
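The interpolate/extrapolate transformations are simple to write down once feature vectors exist. A minimal sketch, assuming `z` already holds encoder outputs for same-class examples and pairing each vector with a random partner (the pairing strategy is an assumption here, not the paper's exact procedure):

```python
import numpy as np

def augment_features(z, lam=0.5, mode="extrapolate", rng=np.random.default_rng(0)):
    """Create new vectors from existing ones in a learned feature space.

    z: (n, d) feature vectors, all assumed to share a class label.
    'interpolate' moves toward a random partner; 'extrapolate' steps away.
    """
    partner = z[rng.permutation(z.shape[0])]  # random same-class partner
    if mode == "interpolate":
        return z + lam * (partner - z)        # point between the pair
    return z + lam * (z - partner)            # point beyond z, past the pair
```

The new vectors would then be decoded or fed directly to the downstream classifier, per the paper's pipeline.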
https://arxiv.org/abs/1711.04340v1 Data Augmentation Generative Adversarial Networks
The model, based on image-conditional Generative Adversarial Networks, takes data from a source domain and learns to generalise any given data item, generating other within-class data items.
https://arxiv.org/pdf/1802.00050.pdf Recursive Feature Generation for Knowledge-based Learning
The algorithm works by generating complex features induced from the available knowledge base. It does so by extracting recursive learning problems based on existing features and the knowledge base, which are then given as input to induction algorithms. The output of this process is a collection of classifiers that are then turned into features for the original induction problem.
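The final step above, turning auxiliary classifiers into features, can be illustrated with a toy NumPy sketch. This is a loose stand-in, not the paper's recursive algorithm: feature groups play the role of knowledge-base relations, and a nearest-centroid model replaces the induced classifiers.

```python
import numpy as np

def generate_features(X, y, groups):
    """Append each auxiliary classifier's per-class scores as new features.

    X: (n, d) data, y: (n,) integer labels.
    groups: lists of column indices, each defining one auxiliary subproblem.
    """
    new_cols = []
    for cols in groups:
        sub = X[:, cols]
        # Fit a toy classifier (class centroids) on this feature subset.
        centroids = np.stack([sub[y == c].mean(axis=0) for c in np.unique(y)])
        # Its per-class scores (negative distances) become new features.
        scores = -np.linalg.norm(sub[:, None, :] - centroids[None, :, :], axis=2)
        new_cols.append(scores)
    return np.hstack([X] + new_cols)
```

The enriched matrix is then handed to the original induction algorithm, which can now exploit the subproblem classifiers' outputs.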
https://arxiv.org/abs/1805.06962 Counterexample-Guided Data Augmentation
Counterexamples are misclassified examples that have important properties for retraining and improving the model. Key components of our framework include a counterexample generator, which produces data items that are misclassified by the model, and error tables, a novel data structure that stores information pertaining to misclassifications. Error tables can be used to explain the model's vulnerabilities and are used to efficiently generate counterexamples for augmentation. We show the efficacy of the proposed framework by comparing it to classical augmentation techniques on a case study of object detection in autonomous driving based on deep neural networks. https://github.com/dreossi/analyzeNN
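The core retraining loop can be sketched minimally: find misclassified candidates, fold them into the training set, refit, and repeat. This sketch omits the paper's error tables and real generator; the nearest-centroid model and the static candidate pool are simplifying assumptions.

```python
import numpy as np

def fit_centroids(X, y):
    """Toy model: one centroid per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def counterexample_augment(X, y, X_pool, y_pool, rounds=3):
    """Pull misclassified pool items (counterexamples) into training and refit."""
    for _ in range(rounds):
        model = fit_centroids(X, y)
        wrong = predict(model, X_pool) != y_pool   # counterexamples this round
        if not wrong.any():
            break                                   # no counterexamples left
        X = np.vstack([X, X_pool[wrong]])
        y = np.concatenate([y, y_pool[wrong]])
        X_pool, y_pool = X_pool[~wrong], y_pool[~wrong]
    return fit_centroids(X, y), X, y
```

In the paper, the pool is replaced by a generator guided by the error tables, so each round targets the model's current weaknesses rather than a fixed candidate set.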
https://arxiv.org/abs/1805.12018 Generalizing to Unseen Domains via Adversarial Data Augmentation
Only using training data from the source domain, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model. We show that our iterative scheme is an adaptive data augmentation method where we append adversarial examples at each iteration.
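Appending "hard" examples can be illustrated with a fixed linear-logistic model, where the loss gradient with respect to the input is analytic. This FGSM-style step is a crude stand-in for the paper's fictitious-target-domain examples; the linear model, sign step, and `eps` are assumptions of the sketch, and the paper alternates such augmentation with retraining.

```python
import numpy as np

def adversarial_examples(w, b, X, y, eps=0.1):
    """Step each input in the direction that increases its logistic loss.

    w: (d,) weights, b: scalar bias, X: (n, d) inputs, y: (n,) 0/1 labels.
    """
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_x = (p - y)[:, None] * w[None, :]  # d(logistic loss)/dx per example
    return X + eps * np.sign(grad_x)        # loss-increasing perturbation
```

The augmented set `np.vstack([X, adversarial_examples(w, b, X, y)])` would then be used to retrain the model before the next iteration.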