TY - GEN
T1 - TransVLAD: multi-scale attention-based global descriptors for visual geo-localization
AU - Xu, Yifan
AU - Shamsolmoali, Pourya
AU - Granger, Eric
AU - Nicodeme, Claire
AU - Gardes, Laurent
AU - Yang, Jie
PY - 2023/2/6
Y1 - 2023/2/6
N2 - Visual geo-localization remains a challenging task due to variations in the appearance and perspective among captured images. This paper introduces an efficient TransVLAD module, which aggregates attention-based feature maps into a discriminative and compact global descriptor. Unlike existing methods that generate feature maps using only convolutional neural networks (CNNs), we propose a sparse transformer to encode global dependencies and compute attention-based feature maps, which effectively reduces the visual ambiguities that occur in large-scale geo-localization problems. A positional embedding mechanism is used to learn the corresponding geometric configurations between query and gallery images. A grouped VLAD layer is also introduced to reduce the number of parameters, and thus construct an efficient module. Finally, rather than only learning from the global descriptors of entire images, we propose a self-supervised learning method to further encode more information from multi-scale patches between the query and positive gallery images. Extensive experiments on three challenging large-scale datasets indicate that our model outperforms state-of-the-art models, and has lower computational complexity. The code is available at: https://github.com/wacv-23/TVLAD.
KW - Algorithms: Image recognition and understanding (object detection, categorization, segmentation)
KW - Machine learning architectures, formulations, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning)
U2 - 10.1109/WACV56688.2023.00286
DO - 10.1109/WACV56688.2023.00286
M3 - Conference contribution
SN - 9781665493468
T3 - IEEE/CVF Proceedings
SP - 2839
EP - 2848
BT - Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Y2 - 2 January 2023 through 7 January 2023
ER -