Class-incremental Weakly Supervised Object Localization Using a Contrastive Language-image Pre-trained Model

Taehyung Lee, Byeongkeun Kang

Research output: Contribution to journal › Article › peer-review

Abstract

This paper addresses the task of class-incremental weakly supervised object localization, which aims to learn to localize objects of new classes using only image-level class labels while retaining knowledge of previously learned classes. Although this task is valuable in various real-world applications, learning object localization incrementally from image-level class labels alone makes it challenging to achieve robust and accurate results. To address this problem, we propose leveraging a contrastive language–image pre-trained model and a self-supervised learning model to generate dense pseudo-labels. We then train an object localization network using the generated pseudo-labels, ground-truth image-level class labels, knowledge distillation losses, an exemplar set, and feature drift compensation modules. To demonstrate the effectiveness of the proposed method, we compare its performance with that of previous state-of-the-art methods on the publicly available ImageNet-100 dataset.

Original language: English
Pages (from-to): 999-1005
Number of pages: 7
Journal: Journal of Institute of Control, Robotics and Systems
Volume: 31
Issue number: 9
DOIs
State: Published - 2025

Keywords

  • class-incremental learning
  • foundation models
  • weakly supervised object localization
