Foreground objects segmentation using background image generated by VAE

Jae Yeul Kim, Jong Eun Ha

Research output: Contribution to journal › Article › peer-review

Abstract

In visual surveillance, the main goal is the robust detection of foreground objects under diverse environmental changes. Traditional algorithms usually obtain a background model image through statistical analysis and find foreground objects by comparing it with the current image. Recently, many deep learning-based visual surveillance algorithms have been proposed, and they show better performance than traditional algorithms. However, they usually perform well only when test images are similar to the training environments; retraining is required to obtain improved results in scenes that differ from those environments. In this paper, we aim at an improved deep learning-based visual surveillance algorithm that also performs well in new scenes. We use two types of input to a U-Net, which produces a foreground segmentation map as output. One type of input is a background model image generated by a VAE; the other is multiple original images. Also, we train the presented network on multiple scenes, whereas most conventional deep learning-based visual surveillance algorithms train a new network per scene. Experimental results on various open datasets show the feasibility of the presented algorithm.
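The abstract describes a network with two input types: a VAE-generated background model image and multiple original frames. A common way to combine such inputs for a U-Net is channel-wise concatenation; the sketch below illustrates that assembly step only. The image size, the number of frames, and the use of channel stacking are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def build_unet_input(background, frames):
    """Stack a background model image (e.g., generated by a VAE) with
    several current frames along the channel axis, forming a single
    multi-channel input tensor for a segmentation network.

    background: (H, W, 3) array; frames: list of (H, W, 3) arrays.
    This layout is an illustrative assumption, not the paper's spec.
    """
    return np.concatenate([background] + list(frames), axis=-1)

# Example: one background image plus 3 frames -> 12-channel input.
bg = np.zeros((240, 320, 3), dtype=np.float32)
frames = [np.zeros((240, 320, 3), dtype=np.float32) for _ in range(3)]
x = build_unet_input(bg, frames)
# x has shape (240, 320, 12): 3 background channels + 3 x 3 frame channels
```

The network then maps this stacked input to a per-pixel foreground segmentation map; because the background channels are regenerated per scene by the VAE, the same trained weights can be applied to scenes outside the training set.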

Original language: English
Pages (from-to): 964-970
Number of pages: 7
Journal: Journal of Institute of Control, Robotics and Systems
Volume: 26
Issue number: 11
State: Published - Nov 2020

Keywords

  • Deep learning
  • Foreground objects detection
  • Segmentation
  • VAE
  • Visual surveillance
