TY - JOUR
T1 - Generalized Outlier Exposure
T2 - Towards a trustworthy out-of-distribution detector without sacrificing accuracy
AU - Koo, Jiin
AU - Choi, Sungjoon
AU - Hwang, Sangheum
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/4/7
Y1 - 2024/4/7
N2 - Despite the remarkable performance of deep neural networks (DNNs), it is often challenging to employ DNNs in safety-critical applications due to their overconfident predictions, even on out-of-distribution (OoD) samples. This has motivated the task of OoD detection, and one such method, Outlier Exposure (OE), has demonstrated strong performance by leveraging OoD training samples. However, OE and its variants degrade in-distribution (ID) classification performance, an issue that remains unresolved. To this end, we propose Generalized OE (G-OE), which linearly mixes training data drawn from all given samples, including OoD, to produce reliable uncertainty estimates. G-OE also includes an effective filtering strategy to reduce the negative effect of OoD samples that are semantically similar to ID samples. We extensively evaluate G-OE on SC-OoD benchmarks: G-OE improves both OoD detection and ID classification performance compared to existing OE-based methods.
AB - Despite the remarkable performance of deep neural networks (DNNs), it is often challenging to employ DNNs in safety-critical applications due to their overconfident predictions, even on out-of-distribution (OoD) samples. This has motivated the task of OoD detection, and one such method, Outlier Exposure (OE), has demonstrated strong performance by leveraging OoD training samples. However, OE and its variants degrade in-distribution (ID) classification performance, an issue that remains unresolved. To this end, we propose Generalized OE (G-OE), which linearly mixes training data drawn from all given samples, including OoD, to produce reliable uncertainty estimates. G-OE also includes an effective filtering strategy to reduce the negative effect of OoD samples that are semantically similar to ID samples. We extensively evaluate G-OE on SC-OoD benchmarks: G-OE improves both OoD detection and ID classification performance compared to existing OE-based methods.
KW - Confidence
KW - Out-of-distribution detection
KW - Outlier Exposure
KW - Uncertainty
UR - https://www.scopus.com/pages/publications/85185459637
U2 - 10.1016/j.neucom.2024.127371
DO - 10.1016/j.neucom.2024.127371
M3 - Article
AN - SCOPUS:85185459637
SN - 0925-2312
VL - 577
JO - Neurocomputing
JF - Neurocomputing
M1 - 127371
ER -