ensembles and adversarial training for robustness to model misspecification and dataset shift.
https://arxiv.org/abs/1710.04759 Bayesian Hypernetworks
In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap i.i.d. sampling of q(θ). We demonstrate these qualitative advantages of Bayesian hypernets, which also achieve competitive performance on a suite of tasks that demonstrate the advantage of estimating model uncertainty, including active learning and anomaly detection.
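The key mechanism here is that a hypernetwork transforms simple noise into model parameters, so each posterior sample costs only one forward pass. A minimal sketch, assuming a hypothetical untrained affine hypernetwork and a tiny linear model (all names and dimensions are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hypernetwork: a fixed affine map from Gaussian noise to
# the weight vector of a tiny linear model. In the paper this map is a
# learned invertible network; here it is just a random placeholder.
NOISE_DIM, PARAM_DIM = 4, 3
W_h = rng.normal(size=(PARAM_DIM, NOISE_DIM))
b_h = rng.normal(size=PARAM_DIM)

def sample_theta():
    # One cheap i.i.d. draw from q(theta): push noise through the hypernet.
    eps = rng.normal(size=NOISE_DIM)
    return W_h @ eps + b_h

def predictive_stats(x, n_samples=500):
    # Monte Carlo over theta-samples: the mean is the prediction, the
    # spread across samples is a model-uncertainty estimate.
    preds = np.array([sample_theta() @ x for _ in range(n_samples)])
    return preds.mean(), preds.std()
```

Because every sample is independent, uncertainty estimates need no Markov chain or repeated dropout masks, just repeated hypernet forward passes.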
https://arxiv.org/abs/1805.11783 To Trust Or Not To Trust A Classifier
We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier's confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.
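The agreement idea can be sketched as a distance ratio: how far the test point is from the nearest training example of any *other* class versus the nearest example of the *predicted* class. This is a simplified, hypothetical version (the paper additionally filters the training set by density before computing distances):

```python
import numpy as np

def trust_score(X_train, y_train, x, predicted_label):
    # Simplified trust-score sketch: distance to the nearest training
    # point of a *different* class, divided by the distance to the
    # nearest training point of the *predicted* class. Scores well
    # above 1 suggest the prediction agrees with its neighborhood.
    dists = np.linalg.norm(X_train - x, axis=1)
    d_pred = dists[y_train == predicted_label].min()
    d_other = dists[y_train != predicted_label].min()
    return d_other / (d_pred + 1e-12)
```

A point deep inside its predicted class's region gets a high score; a point whose nearest neighbors belong to another class gets a score below 1, flagging a likely misclassification.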
https://arxiv.org/abs/1812.10687 Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty