TY - GEN
T1 - Adapting Models to Scarce Target Data Without Source Samples
AU - Lee, Joon Ho
AU - Lee, Gyemin
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - When significant discrepancies exist between the data distributions of the source and target domains, source-trained models often perform suboptimally in the target domain. Unsupervised domain adaptation (UDA) effectively addresses this issue without requiring labels for the target data. More recent source-free UDA methods handle situations where the source data is inaccessible. However, UDA performance is substantially compromised when target-domain data is scarce. Despite the challenges of obtaining and storing large amounts of target data, this aspect of UDA has not been extensively investigated. Our study introduces a new method to alleviate performance degradation in source-free UDA under target data scarcity. The proposed method retains the architecture and pretrained parameters of the source model, thereby reducing the risk of overfitting. Instead, it adds fewer than 3.3% trainable parameters, comprising a set of convolution layers with non-linearity and a spatial attention network. Empirical assessments reveal that our approach achieves up to a 5.4% performance improvement over existing UDA methods with limited target data on the VisDA benchmark. Similar trends are evident on the Office-31 benchmark and in multi-source UDA experiments with the Office-Home benchmark across different target domains. Our method shows promising enhancement of the adapted model’s generalization. These findings highlight the efficacy of our method in improving UDA across diverse domain adaptation scenarios.
AB - When significant discrepancies exist between the data distributions of the source and target domains, source-trained models often perform suboptimally in the target domain. Unsupervised domain adaptation (UDA) effectively addresses this issue without requiring labels for the target data. More recent source-free UDA methods handle situations where the source data is inaccessible. However, UDA performance is substantially compromised when target-domain data is scarce. Despite the challenges of obtaining and storing large amounts of target data, this aspect of UDA has not been extensively investigated. Our study introduces a new method to alleviate performance degradation in source-free UDA under target data scarcity. The proposed method retains the architecture and pretrained parameters of the source model, thereby reducing the risk of overfitting. Instead, it adds fewer than 3.3% trainable parameters, comprising a set of convolution layers with non-linearity and a spatial attention network. Empirical assessments reveal that our approach achieves up to a 5.4% performance improvement over existing UDA methods with limited target data on the VisDA benchmark. Similar trends are evident on the Office-31 benchmark and in multi-source UDA experiments with the Office-Home benchmark across different target domains. Our method shows promising enhancement of the adapted model’s generalization. These findings highlight the efficacy of our method in improving UDA across diverse domain adaptation scenarios.
KW - Scarce Target Data
KW - Source-Free Domain Adaptation
KW - Unsupervised Domain Adaptation
UR - https://www.scopus.com/pages/publications/85212963931
U2 - 10.1007/978-981-96-0966-6_22
DO - 10.1007/978-981-96-0966-6_22
M3 - Conference contribution
AN - SCOPUS:85212963931
SN - 9789819609659
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 368
EP - 383
BT - Computer Vision – ACCV 2024 - 17th Asian Conference on Computer Vision, Proceedings
A2 - Cho, Minsu
A2 - Laptev, Ivan
A2 - Tran, Du
A2 - Yao, Angela
A2 - Zha, Hongbin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th Asian Conference on Computer Vision, ACCV 2024
Y2 - 8 December 2024 through 12 December 2024
ER -