TY - JOUR
T1 - Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction
AU - Diaz-Chito, Katerine
AU - Martinez del Rincon, Jesus
AU - Hernandez-Sabate, Aura
AU - Rusiñol, Marçal
AU - Ferri, Francesc J.
PY - 2018/5
Y1 - 2018/5
N2 - This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the well-known Discriminative Common Vectors method using kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially well suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: the first based on the Kernel Trick and the second based on the Nonlinear Projection Trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel-based. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
AB - This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the well-known Discriminative Common Vectors method using kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially well suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: the first based on the Kernel Trick and the second based on the Nonlinear Projection Trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel-based. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
U2 - 10.1007/s10851-017-0771-z
DO - 10.1007/s10851-017-0771-z
M3 - Article
VL - 60
SP - 512
EP - 524
JO - Journal of Mathematical Imaging and Vision
JF - Journal of Mathematical Imaging and Vision
SN - 0924-9907
IS - 4
ER -