TY - GEN
T1 - UCR-SSL
T2 - 2023 International Conference on Electronics, Information, and Communication, ICEIC 2023
AU - Lee, Seungil
AU - Kim, Hyun
AU - Chun, Dayoung
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Recently, semi-supervised learning methods have been actively developed to increase the performance of neural networks by using large amounts of unlabeled data. Among these techniques, pseudo-labeling methods have the advantage of low computational complexity, but they are vulnerable to missing annotations. To solve this problem, we propose a method called uncertainty-based consistency regularization (UCR). UCR models a detection head to obtain different outputs for input images and computes a feature map of each. Subsequently, these feature maps are matched with the original and filtered ground truth (GT) and are classified as positive and negative samples, respectively. In this process, missing samples are generated by the filtered GT; therefore, we use a specialized loss function designed to reduce the logit difference of the samples for robustness against missing annotations. We also use the uncertainty extracted through Gaussian modeling as a criterion for annotation filtering to train the network to focus on reliable results. In experiments with an SSD model on the Pascal VOC dataset, the proposed approach achieved an improvement of 0.7% in terms of mAP compared to a baseline method.
AB - Recently, semi-supervised learning methods have been actively developed to increase the performance of neural networks by using large amounts of unlabeled data. Among these techniques, pseudo-labeling methods have the advantage of low computational complexity, but they are vulnerable to missing annotations. To solve this problem, we propose a method called uncertainty-based consistency regularization (UCR). UCR models a detection head to obtain different outputs for input images and computes a feature map of each. Subsequently, these feature maps are matched with the original and filtered ground truth (GT) and are classified as positive and negative samples, respectively. In this process, missing samples are generated by the filtered GT; therefore, we use a specialized loss function designed to reduce the logit difference of the samples for robustness against missing annotations. We also use the uncertainty extracted through Gaussian modeling as a criterion for annotation filtering to train the network to focus on reliable results. In experiments with an SSD model on the Pascal VOC dataset, the proposed approach achieved an improvement of 0.7% in terms of mAP compared to a baseline method.
KW - Consistency Regularization
KW - Convolutional Neural Network
KW - Semi-Supervised Object Detection
KW - Uncertainty
UR - http://www.scopus.com/inward/record.url?scp=85150439210&partnerID=8YFLogxK
U2 - 10.1109/ICEIC57457.2023.10049938
DO - 10.1109/ICEIC57457.2023.10049938
M3 - Conference contribution
AN - SCOPUS:85150439210
T3 - 2023 International Conference on Electronics, Information, and Communication, ICEIC 2023
BT - 2023 International Conference on Electronics, Information, and Communication, ICEIC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 February 2023 through 8 February 2023
ER -