Associated spatio-temporal capsule network for gait recognition

Aite Zhao, Junyu Dong, Jianbo Li, Lin Qi, Huiyu Zhou

Research output: Contribution to journal › Article › peer-review

Abstract

It is a challenging task to identify a person based on her/his gait patterns. State-of-the-art approaches rely on the analysis of temporal or spatial characteristics of gait, and gait recognition is usually performed on single-modality data (such as images, skeleton joint coordinates, or force signals). Evidence has shown that multi-modality data is more conducive to gait research. Therefore, we here establish an automated learning system, with an associated spatio-temporal capsule network (ASTCapsNet) trained on multi-sensor datasets, to analyze multimodal information for gait recognition. Specifically, we first design a low-level feature extractor and a high-level feature extractor for spatio-temporal feature extraction of gait, built on a novel recurrent memory unit and a relationship layer. Subsequently, a Bayesian model is employed to make the final class-label decision. Extensive experiments on several public datasets (normal and abnormal gait) validate the effectiveness of the proposed ASTCapsNet against several state-of-the-art methods.
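The abstract describes a pipeline of four stages: low-level (spatial) feature extraction per frame, high-level temporal aggregation, multi-sensor fusion, and a Bayesian class-label decision. The sketch below illustrates that flow only in schematic form; it is not the authors' implementation. All function names are hypothetical, the simple statistics and the plain temporal mean are stand-ins for the paper's capsule layers and recurrent memory unit, and the unit-variance Gaussian classifier is a generic stand-in for the Bayesian decision model.

```python
import numpy as np

def low_level(frames):
    """Per-frame spatial statistics (mean, std) as a hypothetical
    stand-in for the paper's low-level feature extractor."""
    # frames: (T, H, W) -> (T, 2)
    return np.stack([frames.mean(axis=(1, 2)), frames.std(axis=(1, 2))], axis=-1)

def high_level(per_frame):
    """Temporal aggregation as a stand-in for the recurrent memory
    unit and relationship layer (a plain mean, not the real unit)."""
    # (T, D) -> (D,)
    return per_frame.mean(axis=0)

def fuse(*modalities):
    """Concatenate per-modality descriptors (multi-sensor fusion)."""
    return np.concatenate(modalities)

def bayes_classify(x, class_means, priors):
    """Unit-variance Gaussian Bayes decision over subject labels:
    argmax of log prior + log likelihood."""
    log_post = [np.log(p) - 0.5 * np.sum((x - m) ** 2)
                for m, p in zip(class_means, priors)]
    return int(np.argmax(log_post))
```

A descriptor for a silhouette sequence would be `high_level(low_level(frames))`; descriptors from several sensors can be passed through `fuse` before `bayes_classify` picks the subject identity.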

Original language: English
Pages (from-to): 846-860
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Volume: 24
Publication status: Published - 19 Feb 2021
Externally published: Yes

Keywords

  • associated capsules
  • capsule network
  • gait recognition
  • multi-sensor
  • spatiotemporal

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
