Abstract
Image semantic segmentation is a central topic in computer vision research. Indeed, a large number of applications require efficient segmentation, such as activity recognition, navigation, and
human body parsing. One important application is gesture recognition: the ability to understand human hand gestures by detecting and counting finger parts in a video stream or in still images. Thus,
accurate finger part segmentation yields more accurate gesture recognition. Consequently, in this paper, we
highlight two contributions as follows: First, we propose a data-driven deep learning pooling policy based on
feature map extraction at multiple scales (called FinSeg). A novel aggregation layer is introduced
in this model, in which the feature maps generated at each scale are weighted using a fully connected layer.
Second, given the lack of realistic labeled finger part datasets, we propose a labeled dataset for finger part
segmentation (the FingerParts dataset). To the best of our knowledge, the proposed dataset is the first attempt
to build a realistic dataset for finger part semantic segmentation. The experimental results show that the
proposed model yields an improvement of 5% compared to the standard FCN network.
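The aggregation idea described above can be sketched as follows. This is a minimal, hypothetical illustration only: the function name `scale_aggregation`, the global-average-pooled descriptors, the softmax over scale scores, and all shapes are assumptions for illustration, not the authors' exact FinSeg design.

```python
import numpy as np

def scale_aggregation(feature_maps, fc_weights, fc_bias):
    """Hypothetical sketch of an aggregation layer: each scale's
    feature map is scored by a fully connected layer, and the maps
    (assumed already resized to a common spatial size) are combined
    as a softmax-weighted sum. Illustrative, not the exact model."""
    # Global-average-pool each map into a per-scale descriptor: (S, C)
    descriptors = np.stack([fm.mean(axis=(0, 1)) for fm in feature_maps])
    # Fully connected layer produces one score per scale: (S,)
    scores = descriptors @ fc_weights + fc_bias
    # Softmax turns scores into aggregation weights that sum to 1
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Weighted sum of the same-sized feature maps
    fused = sum(wi * fm for wi, fm in zip(w, feature_maps))
    return fused, w

# Example: three 8x8 maps with 4 channels, fused into one map
rng = np.random.default_rng(0)
fmaps = [rng.standard_normal((8, 8, 4)) for _ in range(3)]
fused, weights = scale_aggregation(fmaps, rng.standard_normal(4), 0.0)
```

In a trained network the `fc_weights` would be learned jointly with the backbone, letting the data decide how much each scale contributes (the "data-driven pooling policy" the abstract refers to).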
Original language | English |
---|---|
Title of host publication | Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) |
Publisher | SciTePress |
Pages | 77-84 |
Number of pages | 8 |
Volume | 5 |
ISBN (Print) | 978-989-758-354-4 |
Publication status | Published - 25 Feb 2019 |