In contrast to most methods for Bayesian deep learning, Bayesian hypernets can represent a complex multimodal approximate posterior with correlations between parameters, while enabling cheap i.i.d. sampling of q(θ). We demonstrate these qualitative advantages of Bayesian hypernets, which also achieve competitive performance on a suite of tasks that demonstrate the advantage of estimating model uncertainty, including active learning and anomaly detection.
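
As a rough illustration of the sampling side of this idea (a sketch only, not the authors' implementation: the architecture, layer sizes, and names below are assumptions, and the variational training objective is omitted), a hypernetwork maps a noise vector to a full set of primary-network weights, so drawing i.i.d. samples of θ from q(θ) is just one forward pass per noise draw:

```python
# Hedged sketch of a Bayesian hypernetwork's sampling step (not the paper's code).
import torch
import torch.nn as nn

class BayesianHypernet(nn.Module):
    def __init__(self, noise_dim, primary_param_count):
        super().__init__()
        self.noise_dim = noise_dim
        # hypothetical generator: maps one noise vector to one full weight
        # vector for the primary network
        self.generator = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, primary_param_count),
        )

    def sample_theta(self, n_samples=1):
        # cheap i.i.d. samples from q(theta): one noise draw per weight sample
        eps = torch.randn(n_samples, self.noise_dim)
        return self.generator(eps)

# usage: 10 i.i.d. weight samples for a primary net with 1000 parameters
hypernet = BayesianHypernet(noise_dim=32, primary_param_count=1000)
thetas = hypernet.sample_theta(10)  # shape (10, 1000)
```

Training such a hypernet as an approximate posterior additionally requires handling the entropy of q(θ) in the variational objective, which this sketch does not address.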

https://arxiv.org/abs/1805.11783 To Trust Or Not To Trust A Classifier

We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier's confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.
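
As a minimal sketch of the nearest-neighbor intuition (an illustration only; the paper's actual construction is more careful, e.g. it works with filtered high-density subsets of each class, which is omitted here), one can compare the test point's distance to its predicted class against its distance to the nearest other class:

```python
# Hedged sketch of a trust-style score (not the paper's implementation):
# ratio of distance to the nearest other class over distance to the
# predicted class; a higher ratio suggests the prediction is more trustworthy.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def simple_trust_score(X_train, y_train, x_test, predicted_label):
    dist_to = {}
    for label in np.unique(y_train):
        nn = NearestNeighbors(n_neighbors=1).fit(X_train[y_train == label])
        dists, _ = nn.kneighbors(x_test.reshape(1, -1))
        dist_to[label] = dists[0, 0]
    d_pred = dist_to[predicted_label]
    d_other = min(d for label, d in dist_to.items() if label != predicted_label)
    return d_other / (d_pred + 1e-12)
```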

https://arxiv.org/abs/1812.10687 Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty