Cross-participant modelling based on joint or disjoint feature selection: an fMRI conceptual decoding study

Hiroyuki Akama, Brian Murphy, Miao Mei Lei, Massimo Poesio

Research output: Contribution to journal › Article › peer-review


Abstract

Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data. But they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, in which feature selection is carried out using a held-out subset of a target dataset before a linear classifier is trained on a source dataset. Single trials of functional MRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques (joint ranking feature selection (JRFS) and disjoint feature selection (DJFS)), classification performance during cross-session prediction improved greatly relative to feature selection on the source session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and when using groupwise training, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique or a minimal searchlight (one voxel in size) is appropriate, compared with larger searchlights.
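The cross-participant scheme the abstract describes can be sketched as follows: voxels are ranked by a univariate ANOVA F-test on a held-out subset of the target participant's data, and a regularized linear classifier is then trained on the source participant's data restricted to those voxels. This is a minimal illustrative sketch using synthetic data and scikit-learn; the data dimensions, the specific estimators, and all variable names are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

n_trials, n_voxels, n_informative = 120, 500, 40

def make_participant(seed):
    """Synthetic fMRI-like trials: two semantic classes, with the class
    signal carried by the same (shared) informative voxels in each
    simulated participant. Purely illustrative data."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n_trials)
    X = rng.normal(size=(n_trials, n_voxels))
    X[:, :n_informative] += y[:, None] * 1.0  # class-dependent shift
    return X, y

X_src, y_src = make_participant(1)  # source participant (training data)
X_tgt, y_tgt = make_participant(2)  # target participant (to be decoded)

# Voxel selection on a held-out half of the TARGET data only:
# a univariate ANOVA F-test, as in the abstract's selection strategy
half = n_trials // 2
selector = SelectKBest(f_classif, k=50).fit(X_tgt[:half], y_tgt[:half])

# Train a regularized (L2) linear classifier on the SOURCE data,
# restricted to the voxels chosen from the target hold-out set
clf = LogisticRegression(C=1.0).fit(selector.transform(X_src), y_src)

# Evaluate cross-participant prediction on the unseen target half
acc = clf.score(selector.transform(X_tgt[half:]), y_tgt[half:])
print(f"cross-participant accuracy: {acc:.2f}")
```

Because the held-out target trials used for selection are never used for evaluation, the reported accuracy is not inflated by the target-side feature selection; the classifier weights themselves come only from the source participant.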
Original language: English
Number of pages: 21
Journal: Applied Informatics
Volume: 1
Issue number: 1
DOIs
Publication status: Published - 02 Sep 2014
