Context and prediction in spoken word recognition: Early left frontotemporal effects of lexical uncertainty and semantic constraint

Ana Klimovich-Smith, Barry Devereux, Billi Randall, William D. Marslen-Wilson, Lorraine K. Tyler

Research output: Contribution to conference › Poster › peer-review


Processing spoken words in isolation activates multiple word candidates consistent with the early sensory input. Competition between these candidates continues until only one remains consistent with the bottom-up auditory input (the uniqueness point, UP), at which point the word is recognised (Marslen-Wilson & Welsh, 1978; Marslen-Wilson, 1987). Recent research shows that these processes of activation and competition involve multiple, primarily left-lateralised, regions in inferior frontal, temporal and parietal cortex (Kocagoncu et al., 2016), and confirms the importance of the UP in marking a shift from competitive processes to identification of the unique target word. In everyday speech, however, words are rarely heard and processed without a prior semantic and syntactic context. While many studies have shown that the presence of a constraining context facilitates word recognition, the neuro-computational mechanisms underlying these effects are not clear, and models differ in the claims they make about the influence of contextual constraints. Some models claim that prior constraints do not directly affect the processing of upcoming speech (Marslen-Wilson, 1987), whereas others (Friston & Frith, 2015) claim that a prior context generates contextually constrained predictions about the properties of upcoming words, thus reducing uncertainty about the incoming bottom-up input. In the present study, using a combination of combined electro- and magnetoencephalography (EMEG) and Representational Similarity Analysis (RSA), we were able to decode specific predictions generated by the semantic context, and their spatiotemporal neural coordinates. Participants listened to two-word English phrases ('yellow banana'; 'peeled banana') that varied in the degree of semantic constraint that the first word exerted on the second. In a gating pre-test, participants produced guesses about word 2 (W2) after hearing only word 1 (W1).
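One way to make the notion of lexical uncertainty after W1 concrete is Shannon entropy over the distribution of gating guesses: a constraining first word concentrates guesses on few candidates (low entropy), while an unconstraining one spreads them out (high entropy). The sketch below is purely illustrative; the function name and toy data are our own and do not reproduce the study's actual Lexical Competition measure.

```python
from collections import Counter
from math import log2

def lexical_uncertainty(gating_guesses):
    """Shannon entropy (in bits) over the cohort of W2 guesses
    produced after hearing only W1.

    A hypothetical stand-in for a lexical-uncertainty measure,
    not the computation used in the study.
    """
    counts = Counter(gating_guesses)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A constraining W1 (e.g. 'peeled') concentrates guesses:
print(lexical_uncertainty(["banana"] * 8 + ["potato"] * 2))   # ~0.72 bits

# An unconstraining W1 (e.g. 'yellow') spreads them out:
print(lexical_uncertainty(["banana", "car", "sun", "taxi"]))  # 2.0 bits
```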
We used a corpus-based Distributional Memory (DM) database (Baroni & Lenci, 2010) to derive two computational models capturing different properties of the gating responses. One model, Lexical Competition, captured the degree of uncertainty about the lexical identity of W2 on the basis of the word candidates generated by hearing W1. The second model, Semantic Blend, captured the semantic content of participants' guesses. We derived Representational Dissimilarity Matrices (RDMs) from each model and tested these against source-localised activity estimates using multivariate RSA, to determine which areas within an extended bilateral fronto-temporo-parietal language network encoded these aspects of the processing of the two-word phrases. We found a strikingly early effect of Lexical Competition in left BA45 (LBA45), which began 50 ms before W1 offset and persisted until 10 ms into W2, with a later effect at 75 to 90 ms after W2 onset. Semantic Blend effects, encoding semantically constrained predictions, emerged later in the left middle temporal gyrus (LMTG), at 100 to 160 ms after W2 onset and just before the UP of W2. These results suggest that listeners are sensitive to the overall lexical uncertainty of upcoming words in advance of hearing any sensory input (early BA45 effects), while the semantic properties of contextual constraints only become computationally relevant once minimal sensory input associated with W2 has been heard (later MTG effects).
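The core RSA logic can be sketched in a few lines: build a model RDM from pairwise dissimilarities between condition vectors, then compare its upper triangle against a data RDM with a rank (Spearman) correlation. The sketch below is a self-contained illustration of that general technique, not the study's analysis pipeline; the function names, the cosine-distance choice, and the toy vectors are all our own assumptions.

```python
def rdm(vectors):
    """Representational dissimilarity matrix: pairwise cosine
    dissimilarity (1 - cosine similarity) between condition vectors.
    Cosine distance is one common choice; other metrics are possible."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = sum(x * x for x in u) ** 0.5
        nv = sum(y * y for y in v) ** 0.5
        return dot / (nu * nv)
    n = len(vectors)
    return [[1.0 - cos(vectors[i], vectors[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(m):
    """Flatten the above-diagonal entries (RDMs are symmetric)."""
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def spearman(a, b):
    """Spearman rank correlation (no tie correction) between two
    flattened RDM upper triangles -- the usual second-order
    comparison between a model RDM and a data RDM."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0] * len(x)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Toy example: a "model" RDM from hypothetical semantic vectors and a
# "data" RDM from hypothetical neural patterns for the same conditions.
model = rdm([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
data = rdm([[0.7, 0.2], [0.6, 0.3], [0.2, 0.8]])
print(spearman(upper_triangle(model), upper_triangle(data)))
```

In practice the data RDM would be recomputed in each searchlight region and time window, yielding the spatiotemporal maps of model fit that the abstract reports.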
Original language: English
Publication status: Published - 08 Nov 2017
Event: Society for the Neurobiology of Language Annual Conference - Baltimore, United States
Duration: 08 Nov 2017 - 10 Nov 2017


Conference: Society for the Neurobiology of Language Annual Conference
Abbreviated title: SNL 2017
Country/Territory: United States


