Neuro-computational modelling of parallel incremental prediction and integration during speech comprehension

Hun Choi, Billi Randall, Barry Devereux, Lorraine K. Tyler

Research output: Contribution to conference › Poster › peer-review


Spoken sentence comprehension is a rapid, incremental process which involves anticipating upcoming words and integrating them into a developing representation. We used state-of-the-art computational models of verb subcategorisation information and semantic selectional preferences to explore the dynamic neurocomputational processes involved in the incremental integration of the semantic and syntactic properties of words in sentences. Our models measured prediction: information about the syntactic and semantic properties of subsequent input, given the preceding context; and integration: the difficulty of integrating this subsequent input, given these predictions. We aimed to determine how quickly syntactic and semantic information is reflected in the dynamics of neural activity, and to distinguish whether information relevant to processing the subsequent input is activated early, or becomes active only when needed to facilitate integration. In a combined EEG/MEG (EMEG) study, participants listened to 200 sentences which varied in the complement structure following the subject and verb (e.g. "The student (subject NP) designed (verb) the experiment (complement)"). To model verb syntactic preferences, we used VALEX, a corpus-derived database providing syntactic frames for 6,397 English verbs (Korhonen et al., 2006). For the semantic preference model, we used the Latent Dirichlet Allocation (LDA) approach to topic modelling (O'Seaghdha & Korhonen, 2014), combining the topic distributions associated with the direct-object continuations produced in a pre-test. We also modelled syntactic and semantic prediction error as the difference between the actual continuation and the prior belief reflected in the syntactic and semantic prediction distributions. Representational similarity analysis (RSA; Kriegeskorte et al., 2008) related our computational models to the spatio-temporal dynamics of source-space signals in the language network.
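The modelling logic described above (a prior predictive distribution over continuations, a prediction-error score for the observed continuation, and an RSA comparison of model and neural dissimilarity structure) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the study's actual pipeline: the frame labels, probabilities, and simulated neural dissimilarities are invented for the example, and prediction error is operationalised here as surprisal rather than the authors' exact measure.

```python
# Hypothetical sketch of prediction-error modelling and RSA.
# All values below are toy examples, not data from the study.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def surprisal(prior, observed_idx):
    """One possible prediction-error measure: -log P(observed | context)."""
    return -np.log(prior[observed_idx])

# Toy prior over four subcategorisation frames for some verb,
# e.g. [NP complement, PP complement, S complement, no complement].
prior = np.array([0.6, 0.25, 0.1, 0.05])
err_expected = surprisal(prior, 0)    # highly predicted continuation -> low error
err_unexpected = surprisal(prior, 3)  # unlikely continuation -> high error

# RSA: build a model RDM from per-sentence prediction-error values and
# correlate it with a neural RDM (here just a noisy copy, as a stand-in
# for dissimilarities computed from source-space EMEG responses).
n_sentences = 20
model_values = rng.random(n_sentences)          # e.g. prediction error per sentence
model_rdm = pdist(model_values[:, None])        # condensed pairwise dissimilarities
neural_rdm = model_rdm + rng.normal(0.0, 0.1, model_rdm.shape)
rho, p = spearmanr(model_rdm, neural_rdm)       # rank correlation of the two RDMs
```

In an actual spatio-temporal RSA, the model-to-neural correlation would be computed repeatedly within a searchlight over cortical locations and sliding time windows, which is what allows effects to be localised to regions such as left BA45 and to specific latencies.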
Consistent with claims that syntactic processing involves a left-lateralised fronto-temporal system (Tyler & Marslen-Wilson, 2008; Hagoort, 2013), verb subcategorisation information activated left fronto-temporal areas from 200 ms after verb onset. We found a significant subcategorisation prediction-error effect in left BA45 150 ms after the onset of the verb's complement, reflecting the difficulty of syntactic integration. Activation of the semantic preferences of verbs occurred remarkably early in bilateral inferior frontal areas, soon after verb onset and before the verb's complement structure had been determined. These early frontal effects may show how the subject NP constrains the verb, such that prediction of object nouns can begin before the verb is fully identified; alternatively, the verb may be identified sooner given the context of the subject NP. Hence, this frontal activation may be related to the complexity of activating lexical semantics by pre-activating a direct-object frame. Finally, semantic prediction-error effects for the complement noun, reflecting the difficulty of integrating the noun, occurred in the left posterior middle and inferior temporal gyri around 300 ms after complement-noun onset. These results show that the left-lateralised syntactic and bilateral semantic networks in the brain rapidly activate relevant syntactic and semantic information, flexibly pre-activating likely verb complements (i.e. the direct object) and incrementally integrating the syntax and semantics of the complement for faster and more accurate comprehension.
Original language: English
Number of pages: 1
Publication status: Published - 08 Nov 2017
Event: Society for the Neurobiology of Language Annual Conference - Baltimore, United States
Duration: 08 Nov 2017 – 10 Nov 2017


Conference: Society for the Neurobiology of Language Annual Conference
Abbreviated title: SNL 2017
Country/Territory: United States


