Name: Cross Validation
Intent
Motivation
Structure
<Diagram>
Discussion
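To make the mechanics of the pattern concrete, here is a minimal k-fold cross-validation sketch in plain Python. It is illustrative only, not taken from any of the referenced sources: the fold splitter, the `fit`/`score` callables, and the toy mean-predictor in the usage note are all assumptions of this sketch.

```python
# Minimal k-fold cross-validation sketch (illustrative).
# Split the index range into k roughly equal folds; for each fold,
# train on the other k-1 folds and score on the held-out fold,
# then average the per-fold scores.

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds of range(n)."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n))
        yield train_idx, test_idx
        start += size

def cross_val_score(fit, score, X, y, k=5):
    """Average the score of fit/score over k folds.

    fit(X_train, y_train) -> model; score(model, X_test, y_test) -> float.
    """
    scores = []
    for train_idx, test_idx in k_fold_indices(len(X), k):
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        scores.append(score(model,
                            [X[i] for i in test_idx],
                            [y[i] for i in test_idx]))
    return sum(scores) / len(scores)
```

In practice one would use a library implementation such as scikit-learn's `sklearn.model_selection.cross_val_score`, which additionally handles shuffling, stratification, and parallel evaluation.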
Known Uses
Related Patterns
<Diagram>
References
http://www.win-vector.com/blog/2015/09/a-deeper-theory-of-testing/
http://www.slideshare.net/erogol/performance-evaluation-tutorial-46936268
https://www.cyut.edu.tw/~hchorng/downdata/1st/SS11_Variance%20Reduction.pdf
https://en.wikipedia.org/wiki/Variance_reduction
https://en.wikipedia.org/wiki/Akaike_information_criterion
https://arxiv.org/abs/1703.03167v1 Cross-validation
https://arxiv.org/abs/1704.01664 The Relative Performance of Ensemble Methods with Deep Convolutional Neural Networks for Image Classification
In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as candidate algorithms.
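Two of the simpler combination rules named in that abstract can be sketched in a few lines; this is a generic illustration, not the paper's implementation, and the function names are my own.

```python
# Sketch of two simple ensemble rules: unweighted averaging of class
# probabilities, and majority voting on predicted labels.

def average_probs(prob_lists):
    """Unweighted averaging: mean the per-model class-probability vectors,
    then predict the class with the highest averaged probability."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

def majority_vote(labels):
    """Majority voting: predict the most frequent label among the models;
    ties are broken in favor of the smallest label."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return min(counts, key=lambda lab: (-counts[lab], lab))
```

Note that the two rules can disagree: three models predicting class probabilities of (0.6, 0.4), (0.2, 0.8), (0.3, 0.7) vote 2-to-1 for class 1 under both rules here, but a single very confident model can flip the averaged result while leaving the vote unchanged.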
https://www.oreilly.com/ideas/the-preoccupation-with-test-error-in-applied-machine-learning
https://research.googleblog.com/2015/08/the-reusable-holdout-preserving.html
The reusable holdout is only one instance of a broader methodology that is, perhaps surprisingly, based on differential privacy—a notion of privacy preservation in data analysis. At its core, differential privacy is a notion of stability requiring that any single sample should not influence the outcome of the analysis significantly.
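The core mechanism behind the reusable holdout (the "Thresholdout" algorithm of Dwork et al.) can be sketched as follows. This is a simplified illustration under assumed parameter choices, not the authors' reference code: answer a query from the training set whenever it agrees with the holdout set within a noisy threshold, and only touch the holdout (with added noise) when they disagree.

```python
import random

def thresholdout(train_val, holdout_val, threshold=0.04, sigma=0.01, rng=random):
    """One query of a Thresholdout-style mechanism (sketch).

    train_val / holdout_val: the query's empirical value on the training
    and holdout sets. If the two agree within a noise-perturbed threshold,
    answer with the training value (the holdout leaks nothing); otherwise
    answer with a noised holdout value.
    """
    eta = rng.gauss(0, 2 * sigma)           # noise on the threshold test
    if abs(train_val - holdout_val) > threshold + eta:
        return holdout_val + rng.gauss(0, sigma)  # noised holdout answer
    return train_val
```

The noise is what provides the differential-privacy-style stability the excerpt describes: no single adaptive query can pin down the holdout set, so it can be reused across many rounds of analysis.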
https://github.com/frankmcsherry/blog/blob/master/posts/2016-02-03.md
http://www.kdnuggets.com/2015/01/clever-methods-overfitting-avoid.html
http://andrewgelman.com/2017/07/15/what-is-overfitting-exactly/
https://arxiv.org/abs/1708.03309 Systematic Testing of Convolutional Neural Networks for Autonomous Driving
http://www.kdnuggets.com/2017/08/dataiku-predictive-model-holdout-cross-validation.html?es_p=4726597
http://memosisland.blogspot.de/2017/08/understanding-overfitting-inaccurate.html