Frame rate up conversion of 3D video by motion and depth fusion

Yeejin Lee, Zucheul Lee, Truong Q. Nguyen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

This paper presents a new frame-rate up-conversion method for the video-plus-depth representation. The proposed method improves video quality by using the additional depth information, which is exploited in three ways to improve the accuracy of motion estimation. First, the depth frame is added to the block-matching criterion to estimate the initial motion. Second, object boundary regions are detected by combining depth-value variation with a local depth-distribution adjustment technique. Finally, the motion vector field is refined adaptively by segmenting macroblocks into object regions. Any existing block-based motion-compensated frame interpolation method can adopt our approach to refine its motion vector field using depth frames. Experimental results verify that the proposed method outperforms conventional motion-compensated frame interpolation algorithms while preserving object structure.
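The first use of depth described above, adding the depth frame to the block-matching criterion, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the weighting factor `lam` and the full-search strategy are assumptions, since the abstract only states that the depth frame is added to the matching criterion.

```python
import numpy as np

def depth_aware_cost(curr_y, ref_y, curr_d, ref_d, top, left, dy, dx, bs=8, lam=0.5):
    """Block-matching cost fusing luma SAD with a depth SAD term.

    Hypothetical criterion: cost = SAD_luma + lam * SAD_depth, where lam
    (an assumed parameter) balances the depth frame's contribution.
    """
    cur = curr_y[top:top + bs, left:left + bs].astype(np.int32)
    ref = ref_y[top + dy:top + dy + bs, left + dx:left + dx + bs].astype(np.int32)
    cur_d = curr_d[top:top + bs, left:left + bs].astype(np.int32)
    ref_d = ref_d[top + dy:top + dy + bs, left + dx:left + dx + bs].astype(np.int32)
    return np.abs(cur - ref).sum() + lam * np.abs(cur_d - ref_d).sum()

def best_mv(curr_y, ref_y, curr_d, ref_d, top, left, search=4, bs=8, lam=0.5):
    """Full search over a (2*search+1)^2 window; returns the (dy, dx)
    minimizing the depth-aware cost for the block at (top, left)."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip candidates that fall outside the reference frame.
            if not (0 <= top + dy and top + dy + bs <= ref_y.shape[0]
                    and 0 <= left + dx and left + dx + bs <= ref_y.shape[1]):
                continue
            c = depth_aware_cost(curr_y, ref_y, curr_d, ref_d, top, left, dy, dx, bs, lam)
            if c < best_cost:
                best_cost, best = c, (dy, dx)
    return best
```

Because color and depth tend to disagree precisely at object boundaries, the depth term penalizes candidates that match texture but cross a depth discontinuity, which is the intuition behind fusing the two cues.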

Original language: English
Title of host publication: 2013 IEEE 11th IVMSP Workshop
Subtitle of host publication: 3D Image/Video Technologies and Applications, IVMSP 2013 - Proceedings
DOIs
State: Published - 2013
Event: 2013 IEEE 11th Workshop on 3D Image/Video Technologies and Applications, IVMSP 2013 - Seoul, Korea, Republic of
Duration: 10 Jun 2013 – 12 Jun 2013

Publication series

Name: 2013 IEEE 11th IVMSP Workshop: 3D Image/Video Technologies and Applications, IVMSP 2013 - Proceedings

Conference

Conference: 2013 IEEE 11th Workshop on 3D Image/Video Technologies and Applications, IVMSP 2013
Country/Territory: Korea, Republic of
City: Seoul
Period: 10/06/13 – 12/06/13

Keywords

  • 3D video
  • frame-rate up conversion (FRUC)
  • motion vector refinement
  • motion-compensated frame interpolation (MCFI)
  • video plus depth representation
