Joint Appearance and Motion Model With Temporal Transformer for Multiple Object Tracking

Hyunseop Kim, Hyo Jun Lee, Hanul Kim, Seong Gyun Jeong, Yeong Jun Koh

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

Multi-object tracking (MOT) in the real world poses several challenges, such as similar appearances, occlusion, and extreme articulated motion. In this paper, we propose a novel joint appearance and motion model that is robust to diverse motion and to objects with similar, uniform appearance. The proposed MOT method comprises a temporal transformer, a motion estimation module, and a ReID embedding module. The temporal transformer is designed to convey object-aware features to the ReID embedding and motion estimation modules. The ReID embedding module extracts ReID features of the detected objects, while the motion estimation module predicts the expected locations of previously tracked objects in the current frame. We also present a motion-guided association that effectively fuses the outputs of the appearance and motion modules. Experimental results demonstrate that the proposed MOT method outperforms state-of-the-art methods on the TAO and DanceTrack datasets, which contain objects with diverse motions and similar appearances. Furthermore, the proposed method provides stable performance on MOT17 and MOT20, which contain objects with simple and regular motion patterns.
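The motion-guided association described above can be illustrated with a minimal sketch: a cost matrix is built by fusing appearance (ReID embedding cosine similarity) and motion cues (IoU between motion-predicted track boxes and detections), and tracks are matched to detections by solving an assignment problem. The function names, the `motion_weight` parameter, and the linear fusion form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def motion_guided_association(track_embs, det_embs, pred_boxes, det_boxes,
                              motion_weight=0.5):
    """Match tracks to detections by fusing appearance and motion cues.

    track_embs, det_embs : (N, D) and (M, D) ReID embeddings.
    pred_boxes           : (N, 4) motion-predicted track locations.
    det_boxes            : (M, 4) detected boxes in the current frame.
    motion_weight        : illustrative fusion weight (an assumption).
    """
    # Cosine similarity between normalized ReID embeddings.
    a = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    b = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    app_sim = a @ b.T

    # Motion cue: overlap of predicted track boxes with detections.
    iou_mat = np.array([[iou(p, d) for d in det_boxes] for p in pred_boxes])

    # Linear fusion of the two cues; negate to turn similarity into cost.
    cost = -((1.0 - motion_weight) * app_sim + motion_weight * iou_mat)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```

For example, a track whose ReID embedding and predicted location both agree with one detection is assigned to it even if another detection has partial appearance similarity.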

Original language: English
Pages (from-to): 133792-133803
Number of pages: 12
Journal: IEEE Access
Volume: 11
DOIs
State: Published - 2023

Keywords

  • Multi-object tracking
  • online approach
  • tracking-by-detection
