We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier's confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.
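
A minimal sketch of how such a trust-score-like quantity could be computed, assuming a simplified definition (ratio of the distance from a test point to the nearest training point of any class other than the predicted one, over the distance to the nearest training point of the predicted class); the density-based filtering of the training set described in the paper is omitted here, and the function and parameter names are illustrative only:

```python
# Sketch: nearest-neighbor-based trust-score-like quantity (simplified).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_scores(X_train, y_train, X_test, y_pred):
    """Higher scores suggest the classifier's prediction agrees with nearby training data."""
    classes = np.unique(y_train)
    # One nearest-neighbor index per class over the training data.
    nn_by_class = {
        c: NearestNeighbors(n_neighbors=1).fit(X_train[y_train == c])
        for c in classes
    }
    # Distance from each test point to its closest training point of each class.
    dists = np.stack(
        [nn_by_class[c].kneighbors(X_test)[0][:, 0] for c in classes], axis=1
    )
    scores = np.empty(len(X_test))
    for i, pred in enumerate(y_pred):
        pred_idx = np.where(classes == pred)[0][0]
        d_pred = dists[i, pred_idx]                    # distance to predicted class
        d_other = np.delete(dists[i], pred_idx).min()  # distance to nearest other class
        scores[i] = d_other / (d_pred + 1e-12)         # small epsilon avoids division by zero
    return scores
```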

https://arxiv.org/abs/1812.10687 Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty