Visual surveillance transformer

Choi Keonghun, Jong Eun Ha

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In unmanned visual surveillance, detection results for the same object can differ depending on the object's state and the configuration of the surrounding environment. An artificial intelligence for unmanned surveillance therefore needs to understand the environment in the image, the state of the objects within it, and the relationship between the two. To this end, this study presents a modified transformer structure that accepts a single 2D image directly as input, rather than splitting the image into fixed-size patches, and builds a background classification model on top of a segmentation model that applies this structure, so that the influence of neighboring pixels is taken into account.
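The abstract's core idea is to feed a whole 2D image into a transformer without first splitting it into fixed-size patches, so that neighboring-pixel relations are preserved at the input stage. The paper itself does not detail the input encoding, so the following is only a minimal sketch of one way such a patch-free input could be prepared: every pixel becomes a token, with a simple coordinate-based positional encoding (the projection and encoding choices here are illustrative assumptions, not the authors' design).

```python
import numpy as np

def image_to_pixel_tokens(image, d_model=8, seed=0):
    """Flatten an H x W image into an (H*W, d_model) token sequence.

    Each pixel intensity is linearly projected to d_model dimensions,
    then a simple sinusoidal-style encoding of the (row, col) position
    is added so spatial neighborhood information survives flattening.
    (Hypothetical encoding for illustration only.)
    """
    h, w = image.shape
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((1, d_model))     # per-pixel linear projection
    tokens = image.reshape(h * w, 1) @ proj      # (H*W, d_model)

    # positional encoding built from each pixel's (row, col) coordinates
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (H*W, 2)
    freqs = 10.0 ** np.arange(d_model // 2)
    pe = np.concatenate(
        [np.sin(pos[:, :1] / freqs), np.cos(pos[:, 1:] / freqs)],
        axis=1,
    )
    return tokens + pe

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
seq = image_to_pixel_tokens(img)
print(seq.shape)  # (16, 8)
```

The resulting sequence could then be passed to a standard transformer encoder; the point of the sketch is only that no patch grid is imposed before attention is applied.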

Original language: English
Title of host publication: 2021 21st International Conference on Control, Automation and Systems, ICCAS 2021
Publisher: IEEE Computer Society
Pages: 1516-1518
Number of pages: 3
ISBN (Electronic): 9788993215212
DOIs
State: Published - 2021
Event: 21st International Conference on Control, Automation and Systems, ICCAS 2021 - Jeju, Korea, Republic of
Duration: 12 Oct 2021 - 15 Oct 2021

Publication series

Name: International Conference on Control, Automation and Systems
Volume: 2021-October
ISSN (Print): 1598-7833


Keywords

  • Deep learning
  • Segmentation
  • Transformer
  • Visual surveillance
