Unsupervised salient object matting

Jaehwan Kim, Jongyoul Park

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

2 Scopus citations

Abstract

In this paper, we present a new, easy-to-generate method capable of precisely matting salient objects in a large-scale image set in an unsupervised way. Our method extracts only the salient object, without the user-specified constraints or manual thresholding of the saliency-map that are essentially required in image matting and saliency-map-based segmentation, respectively. In order to provide a more balanced visual saliency as a response to both local features and global contrast, we propose a new, coupled saliency-map based on a linearly combined conspicuity map. We also introduce an adaptive tri-map, a refined segmented image of the coupled saliency-map, for more precise object extraction. The proposed method improves segmentation performance compared to image matting based on two existing saliency detection measures. Numerical experiments and visual comparisons on a large-scale real image set confirm the useful behavior of the proposed method.
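The two ingredients named in the abstract — a coupled saliency-map formed by linearly combining conspicuity maps, and an adaptive tri-map derived from it — can be sketched as follows. This is a minimal NumPy illustration of the general idea only; the weight `alpha` and the tri-map thresholds `low`/`high` are assumed placeholders, not the paper's actual coupling scheme or adaptive threshold selection.

```python
import numpy as np

def coupled_saliency(local_map, global_map, alpha=0.5):
    """Linearly combine a local-feature conspicuity map with a
    global-contrast saliency map, then rescale to [0, 1].
    `alpha` is an illustrative fixed weight; the paper's coupling
    is not reproduced here."""
    s = alpha * local_map + (1.0 - alpha) * global_map
    return (s - s.min()) / (s.max() - s.min() + 1e-8)

def adaptive_trimap(saliency, low=0.3, high=0.7):
    """Segment the coupled saliency map into a tri-map for matting:
    0 = definite background, 128 = unknown, 255 = definite foreground.
    Fixed thresholds stand in for the paper's adaptive refinement."""
    trimap = np.full(saliency.shape, 128, dtype=np.uint8)
    trimap[saliency < low] = 0
    trimap[saliency > high] = 255
    return trimap
```

The unknown (128) band is what a downstream alpha-matting solver would resolve, which is how the method avoids a hard manual threshold on the saliency map.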

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 752-763
Number of pages: 12
DOIs
State: Published - 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9386
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Keywords

  • Object segmentation
  • Saliency-map
  • Unsupervised matting
