Recurrent Convolutional Network for Video-based Person Re-Identification

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

313 Citations (Scopus)
3218 Downloads (Pure)


In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time-steps. The features from all time-steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer, and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical flow information in order to capture appearance and motion information, which is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.
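The pipeline described above can be sketched in a few lines. The sketch below is illustrative only: the feature and hidden dimensions, random stand-in weights, tanh recurrence, and mean pooling are assumptions for demonstration, not the trained components or hyperparameters from the paper (which learns the CNN, recurrent layer, and pooling jointly in a Siamese setup).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): per-frame CNN feature size,
# recurrent hidden size, and sequence length.
feat_dim, hidden_dim, seq_len = 8, 4, 5

# Stand-ins for learned weights; in the paper these are trained jointly
# with the convolutional network.
W_in = rng.standard_normal((hidden_dim, feat_dim)) * 0.1
W_rec = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1

def sequence_feature(frame_feats):
    """Recurrent layer over per-frame CNN features, then temporal pooling."""
    h = np.zeros(hidden_dim)
    states = []
    for x in frame_feats:                  # one time-step per video frame
        h = np.tanh(W_in @ x + W_rec @ h)  # information flows between steps
        states.append(h)
    return np.mean(states, axis=0)         # temporal pooling over all steps

# Siamese-style comparison: distance between two sequence-level features
# (stand-ins here for the per-frame CNN outputs of two video sequences).
seq_a = rng.standard_normal((seq_len, feat_dim))
seq_b = rng.standard_normal((seq_len, feat_dim))
fa, fb = sequence_feature(seq_a), sequence_feature(seq_b)
distance = np.linalg.norm(fa - fb)
```

At test time, such a distance between sequence features would rank gallery identities against a probe sequence.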
Original language: English
Title of host publication: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 10
Publication status: Published - 12 Dec 2016
Event: Conference on Computer Vision and Pattern Recognition (CVPR) 2016 - Caesar's Palace, Las Vegas, United States
Duration: 26 Jun 2016 - 01 Jul 2016


Conference: Conference on Computer Vision and Pattern Recognition (CVPR) 2016
Country: United States
City: Las Vegas


