Associated Spatio-Temporal Capsule Network for Gait Recognition

Aite Zhao, Junyu Dong, Jianbo Li, Lin Qi, Huiyu Zhou

Research output: Contribution to journal › Article › peer-review


It is a challenging task to identify a person based on his or her gait patterns. State-of-the-art approaches rely on the analysis of temporal or spatial characteristics of gait, and gait recognition is usually performed on single-modality data (such as images, skeleton joint coordinates, or force signals). Evidence has shown that multi-modality data is more conducive to gait research. We therefore establish an automated learning system, with an associated spatio-temporal capsule network (ASTCapsNet) trained on multi-sensor datasets, to analyze multimodal information for gait recognition. Specifically, we first design a low-level feature extractor and a high-level feature extractor for spatio-temporal feature extraction of gait, built on a novel recurrent memory unit and a relationship layer. Subsequently, a Bayesian model is employed for the decision-making of class labels. Extensive experiments on several public datasets (normal and abnormal gait) validate the effectiveness of the proposed ASTCapsNet against several state-of-the-art methods.
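The abstract's pipeline (low-level spatial features per frame, temporal aggregation into capsules, then a decision layer over class scores) can be illustrated with a minimal, untrained sketch. Everything here is a hypothetical stand-in: the per-frame statistics, the random projection replacing the paper's recurrent memory unit and relationship layer, and the capsule-length argmax replacing the Bayesian decision model are all assumptions for illustration, not the authors' actual ASTCapsNet.

```python
import numpy as np

rng = np.random.default_rng(0)

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: keeps a vector's direction but
    compresses its length into [0, 1), so length can act as a score."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def low_level(frames):
    """Hypothetical low-level extractor: per-frame spatial statistics.
    frames: (T, H, W) -> (T, 2) features (mean, std per frame)."""
    return np.stack([frames.mean(axis=(1, 2)), frames.std(axis=(1, 2))], axis=-1)

def high_level(per_frame, n_caps=4, cap_dim=8):
    """Hypothetical high-level extractor: temporal pooling plus a random
    (untrained) projection into capsule vectors, standing in for the
    recurrent memory unit and relationship layer."""
    pooled = per_frame.mean(axis=0)                 # (2,) temporal summary
    W = rng.standard_normal((n_caps, cap_dim, 2))   # one matrix per capsule
    return squash(W @ pooled)                       # (n_caps, cap_dim)

def classify(caps):
    """Capsule length as a per-class score; argmax stands in for the
    paper's Bayesian decision-making over class labels."""
    lengths = np.linalg.norm(caps, axis=-1)
    return int(np.argmax(lengths)), lengths

frames = rng.random((30, 16, 16))                   # toy 30-frame gait sequence
caps = high_level(low_level(frames))
label, scores = classify(caps)
```

In the real system the projections would be learned end-to-end and the decision layer would be the Bayesian model described in the paper; this sketch only shows how capsule lengths can serve as class scores.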

Original language: English
Journal: IEEE Transactions on Multimedia
Early online date: 19 Feb 2021
Publication status: Early online - 19 Feb 2021
Externally published: Yes

Bibliographical note

Publisher Copyright: Copyright 2021 Elsevier B.V., All rights reserved.


Keywords

  • associated capsules
  • capsule network
  • gait recognition
  • multi-sensor
  • spatio-temporal

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering


