Where to Look: Visual Attention Estimation in Road Scene Video for Safe Driving

Research output: Contribution to journal › Article › peer-review

Abstract

This work addresses the task of locating the regions of a road scene that are more crucial for safe driving than others. Such predictions can improve the efficiency and safety of autonomous vehicles and robots, and can also assist human drivers when integrated into driver-assistance systems. To achieve robust and accurate attention prediction, we propose a multiscale color- and motion-based attention prediction network. The network consists of three components: the first processes multiscale color images, the second exploits multiscale motion information, and the third merges the outputs of the two streams. The network is thereby guided to exploit the movement of objects/people as well as the type/location of things/stuff. We demonstrate the effectiveness of the proposed system through experiments on a real driving dataset, and the results show that the proposed framework outperforms previous work.
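
The abstract describes a two-stream design: a color stream and a motion stream, each operating at multiple scales, followed by a fusion stage that produces the attention map. The paper's code is not reproduced here, so the sketch below is only an illustrative PyTorch rendering of that general structure; every class name, layer choice, channel count, and scale setting is an assumption, not the authors' implementation.

```python
# A minimal sketch of a two-stream, multiscale attention network, assuming a
# standard image-pyramid encoder per stream. All design choices here are
# hypothetical and not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleStream(nn.Module):
    """Encodes an input at several scales and concatenates per-scale features."""
    def __init__(self, in_channels, feat_channels=32, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            f = self.encoder(xs)
            # Upsample each scale back to the input resolution before fusing.
            feats.append(F.interpolate(
                f, size=x.shape[-2:], mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)

class TwoStreamAttentionNet(nn.Module):
    """Color stream + motion stream, merged into a one-channel attention map."""
    def __init__(self):
        super().__init__()
        self.color_stream = MultiscaleStream(in_channels=3)   # RGB frame
        self.motion_stream = MultiscaleStream(in_channels=2)  # e.g., optical flow
        fused = 32 * 3 * 2  # feat_channels * num_scales * two streams
        self.merge = nn.Sequential(
            nn.Conv2d(fused, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),  # per-pixel attention/saliency score
        )

    def forward(self, rgb, flow):
        f = torch.cat([self.color_stream(rgb), self.motion_stream(flow)], dim=1)
        return torch.sigmoid(self.merge(f))

# Usage: predict an attention map for one 256x256 frame and its motion field.
net = TwoStreamAttentionNet()
attn = net(torch.randn(1, 3, 256, 256), torch.randn(1, 2, 256, 256))
print(attn.shape)  # torch.Size([1, 1, 256, 256])
```

Keeping the two streams separate until a late fusion stage is one common way to let the network weigh appearance cues (type/location of things/stuff) against motion cues (movement of objects/people) independently; the actual fusion strategy used in the paper may differ.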

Original language: English
Pages (from-to): 105-111
Number of pages: 7
Journal: IEIE Transactions on Smart Processing and Computing
Volume: 11
Issue number: 2
State: Published - 2022

Keywords

  • Convolutional neural networks
  • Intelligent transportation system
  • Saliency estimation
  • Video-based
  • Visual attention estimation
