Brain decoding - the process of inferring a person's momentary cognitive state from their brain activity - has enormous potential in the field of human-computer interaction. In this study, we propose a zero-shot EEG-to-image brain decoding approach that uses state-of-the-art EEG preprocessing and feature-selection methods, and that maps EEG activity onto biologically inspired computer-vision and linguistic models. We apply this approach to identify viewed images from recorded brain activity in a reliable and scalable way. We demonstrate competitive decoding accuracies across two EEG datasets, using a zero-shot learning framework that is more applicable to real-world image retrieval than traditional classification techniques.
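The core idea of such a zero-shot framework can be sketched as follows: learn a regression from EEG features into a shared embedding space (e.g. one derived from a vision or language model), then decode a test trial by retrieving the nearest candidate image embedding, even for image classes never seen during training. The sketch below is illustrative only, with synthetic data and assumed dimensions; it is not the authors' implementation. It uses closed-form ridge regression and cosine-similarity retrieval, both standard choices for this kind of pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (all sizes are illustrative assumptions):
# 50 image classes, each with a 16-dim "semantic" embedding, and simulated
# 32-dim EEG feature vectors that are a noisy linear function of the embedding.
n_classes, emb_dim, eeg_dim, n_trials = 50, 16, 32, 20
class_emb = rng.standard_normal((n_classes, emb_dim))
true_map = rng.standard_normal((emb_dim, eeg_dim))

def simulate_eeg(cls, noise=0.1):
    """Simulate n_trials EEG feature vectors for one image class."""
    return class_emb[cls] @ true_map + noise * rng.standard_normal((n_trials, eeg_dim))

# Train on the first 40 classes; hold out the last 10 for zero-shot testing.
train_cls, test_cls = range(40), range(40, 50)
X = np.vstack([simulate_eeg(c) for c in train_cls])                       # EEG features
Y = np.vstack([np.tile(class_emb[c], (n_trials, 1)) for c in train_cls])  # embedding targets

# Ridge regression (closed form): map EEG features into the embedding space.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(eeg_dim), X.T @ Y)

def decode(eeg_trial):
    """Predict an embedding, then retrieve the nearest class by cosine similarity."""
    pred = eeg_trial @ W
    sims = class_emb @ pred / (np.linalg.norm(class_emb, axis=1) * np.linalg.norm(pred))
    return int(np.argmax(sims))

# Zero-shot evaluation: trials from classes never seen during training.
correct = sum(decode(t) == c for c in test_cls for t in simulate_eeg(c))
accuracy = correct / (len(test_cls) * n_trials)
```

Because the retrieval step only compares predicted embeddings against a candidate set, new images can be added at test time without retraining, which is what makes the approach more scalable than a fixed-class classifier.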
| Media of output | bioRxiv preprint server |
| Publication status | Submitted - Jan 2019 |
| Publisher | Cold Spring Harbor Laboratory Press |