Multi-frame based homography estimation for video stitching in static camera environments

Keon Woo Park, Yoo Jeong Shim, Myeong Jin Lee, Heejune Ahn

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, a multi-frame based homography estimation method is proposed for video stitching in static camera environments. A homography that is robust against spatio-temporally induced noise is estimated on a per-interval basis, using feature points extracted during a predetermined time interval. The feature point with the largest blob response in each quantized location bin, termed a representative feature point, is used for matching between the pair of video sequences. After matching representative feature points from each camera, the homography for the interval is estimated by random sample consensus (RANSAC) on the matched representative feature points, with each point's chance of being sampled proportional to its number of occurrences in the interval. The performance of the proposed method is compared with that of the per-frame method by measuring alignment distortion and stitching scores for daytime and noisy video sequence pairs. The results show that the proposed method reduces alignment distortion in overlapping regions and improves the stitching score. The proposed method can be used for panoramic video stitching with static video cameras, as well as for panoramic image stitching with less alignment distortion.
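The two ideas in the abstract (keeping one representative feature point per quantized location bin by largest blob response, and drawing RANSAC minimal sets with probability proportional to each point's occurrence count over the interval) can be illustrated with a short sketch. The Python code below is not the authors' implementation; the bin size, inlier threshold, iteration count, and the assumption that keypoints are OpenCV KeyPoint objects (whose `response` field holds the blob response) are illustrative choices.

```python
# Minimal sketch, assuming OpenCV keypoints and pre-matched correspondences.
# Not the paper's code; parameters below are placeholders.
import numpy as np
import cv2

def representative_points(keypoints, bin_size=16):
    """Keep, per quantized location bin, the keypoint with the largest blob response."""
    best = {}
    for kp in keypoints:
        key = (int(kp.pt[0] // bin_size), int(kp.pt[1] // bin_size))
        if key not in best or kp.response > best[key].response:
            best[key] = kp
    return list(best.values())

def weighted_ransac_homography(src, dst, counts, iters=2000, thresh=3.0, rng=None):
    """RANSAC homography where minimal sets are sampled with probability
    proportional to each correspondence's occurrence count in the interval."""
    rng = rng or np.random.default_rng()
    src = np.asarray(src, np.float32)   # N x 2 points from camera A
    dst = np.asarray(dst, np.float32)   # N x 2 matched points from camera B
    probs = np.asarray(counts, np.float64)
    probs = probs / probs.sum()

    best_H, best_inliers = None, -1
    for _ in range(iters):
        # Occurrence-weighted draw of a minimal 4-point sample.
        idx = rng.choice(len(src), size=4, replace=False, p=probs)
        H = cv2.getPerspectiveTransform(src[idx], dst[idx])
        proj = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H).reshape(-1, 2)
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers:
            best_H, best_inliers = H, inliers.sum()
    return best_H, best_inliers
```

In this sketch, points that recur in many frames of the interval dominate the sampling, which is one way to realize the occurrence-proportional sampling described above; a production implementation would also re-estimate the homography from all inliers of the best model.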

Original language: English
Article number: 92
Journal: Sensors
Volume: 20
Issue number: 1
DOIs
State: Published - 1 Jan 2020

Keywords

  • Homography estimation
  • Multi-frame based homography
  • Representative feature point
  • Video alignment
  • Video stitching
