Adaptive NormalHedge for robust visual tracking

Shengping Zhang, Huiyu Zhou, Hongxun Yao, Yanhao Zhang, Kuanquan Wang, Jun Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose a novel visual tracking framework based on a decision-theoretic online learning algorithm, namely NormalHedge. To make NormalHedge more robust against noise, we propose an adaptive NormalHedge algorithm, which exploits the historical information of each expert to perform more accurate prediction than the standard NormalHedge. Technically, we use a set of weighted experts to predict the state of the target to be tracked over time. The weight of each expert is learned online by pushing the cumulative regret of the learner towards that of the expert. Our simulation experiments demonstrate the effectiveness of the proposed adaptive NormalHedge compared to the standard NormalHedge method. Furthermore, the experimental results on several challenging video sequences show that the proposed tracking method outperforms several state-of-the-art methods.
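The regret-driven weight update described in the abstract builds on the standard NormalHedge scheme (Chaudhuri, Freund and Hsu, 2009). Below is a minimal NumPy sketch of that standard update only, not the paper's adaptive variant; the function names normalhedge_weights and update_cumulative_regret are illustrative, and the adaptive use of each expert's historical information is not reproduced here.

```python
import numpy as np

def normalhedge_weights(cum_regret, tol=1e-10, max_iter=100):
    """Standard NormalHedge weights from cumulative regrets R_i:
    find c > 0 with mean(exp([R_i]_+^2 / (2c))) = e, then set
    p_i proportional to ([R_i]_+ / c) * exp([R_i]_+^2 / (2c))."""
    r_plus = np.maximum(np.asarray(cum_regret, dtype=float), 0.0)
    n = len(r_plus)
    if np.all(r_plus == 0.0):
        # No expert has outperformed the learner yet: fall back to uniform.
        return np.full(n, 1.0 / n)

    def avg_potential(c):
        return np.mean(np.exp(r_plus ** 2 / (2.0 * c)))

    # avg_potential is decreasing in c; c_hi = max(R_+)^2 / 2 already
    # gives a value <= e, so bracket the root and bisect.
    c_hi = np.max(r_plus) ** 2 / 2.0
    c_lo = c_hi
    while avg_potential(c_lo) < np.e:
        c_lo /= 2.0
    for _ in range(max_iter):
        c_mid = 0.5 * (c_lo + c_hi)
        if avg_potential(c_mid) > np.e:
            c_lo = c_mid
        else:
            c_hi = c_mid
        if c_hi - c_lo < tol * c_hi:
            break
    c = 0.5 * (c_lo + c_hi)

    w = (r_plus / c) * np.exp(r_plus ** 2 / (2.0 * c))
    return w / np.sum(w)

def update_cumulative_regret(cum_regret, weights, expert_losses):
    """One round of regret bookkeeping: the learner's loss is the
    weighted average of the expert losses, and each expert's
    instantaneous regret is (learner loss - expert loss)."""
    expert_losses = np.asarray(expert_losses, dtype=float)
    learner_loss = np.dot(weights, expert_losses)
    return np.asarray(cum_regret, dtype=float) + (learner_loss - expert_losses)
```

In a tracking setting, each expert would correspond to a candidate target state, its loss derived from an appearance-matching score; the weights computed above would then steer the state prediction at the next frame.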
Original language: English
Pages (from-to): 132-142
Number of pages: 11
Journal: Signal Processing
Volume: 110
Early online date: 27 Aug 2014
DOIs
Publication status: Published - May 2015
