Abstract
Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed visual information was represented in low-frequency activity throughout the ventral visual pathway, and semantic information was represented in theta activity. Furthermore, directed connectivity showed visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics resulting in object-specific semantics.
| Original language | English |
|---|---|
| Pages (from-to) | 1590-1605 |
| Number of pages | 16 |
| Journal | Journal of Cognitive Neuroscience |
| Volume | 30 |
| Issue number | 11 |
| Early online date | 28 Sept 2018 |
| DOIs | |
| Publication status | Published - Nov 2018 |
ASJC Scopus subject areas
- Cognitive Neuroscience