Representations of specific acoustic patterns in the auditory cortex and hippocampus

Sukhbinder Kumar, Heidi M Bonnici, Sundeep Teki, Trevor R Agus, Daniel Pressnitzer, Eleanor A Maguire, Timothy D Griffiths

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be 'decoded' from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
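    The multi-voxel pattern analysis referred to above can be illustrated, in outline, with a minimal sketch: a classifier is trained on voxel activity patterns from a region of interest and tested on held-out scanning runs, with above-chance accuracy taken as evidence that the region distinguishes the learnt patterns. The linear support-vector classifier, the leave-one-run-out cross-validation, and all array names and sizes below are assumptions for illustration only, not the authors' pipeline.

```python
# Minimal illustrative sketch of multi-voxel pattern analysis (MVPA):
# train a classifier to tell apart the three learnt acoustic patterns from
# voxel activity within a region of interest (e.g. planum temporale).
# The linear SVM, leave-one-run-out cross-validation, and all variable
# names and sizes are illustrative assumptions, not the paper's pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels, n_runs = 90, 200, 6
voxel_patterns = rng.standard_normal((n_trials, n_voxels))      # one row per trial
pattern_labels = rng.integers(0, 3, size=n_trials)              # which learnt pattern (0, 1, 2)
run_labels = np.repeat(np.arange(n_runs), n_trials // n_runs)   # scanning run of each trial

# Cross-validate across runs so training and test data never share a run.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, voxel_patterns, pattern_labels,
                         groups=run_labels, cv=LeaveOneGroupOut())

# Above-chance mean accuracy (chance = 1/3) would suggest the region's
# activity carries information about which specific pattern was heard.
print("decoding accuracy per fold:", scores)
print("mean accuracy (chance = 1/3):", scores.mean())
```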
    Original language: English
    Journal: Proceedings of the Royal Society of London. Series B, Biological Sciences
    Volume: 281
    Issue number: 1791
    DOIs
    Publication status: Published - 22 Sep 2014

    Bibliographical note

    This paper has two joint-first authors, of which I am not one. However, the research was made possible by my contribution, which was to design and construct the auditory stimuli according to strict criteria.

    The experiment was designed to replicate my previous experiment (Agus et al., Neuron, 2010) in an MRI scanner. Because of the scanner noise, it was not possible to use the white-noise stimuli of the original paper. We therefore needed another complex, highly random sound for which listeners could perform a "repeated/not repeated" task, reporting whether the second half of the sound was identical to the first half or not.

    While piloting the experiments published in this paper, I tested whether the original 2010 result could be replicated using these stimuli, and whether it was robust to background noise recorded from the MRI scanner.

    Although the stimuli are simply formed from pure-tone pips (and therefore seemingly trivial), note that:
    (a) whereas concatenating sounds normally produces a "click" or other artefact, the joins between segments of the tone cloud are smooth; there is no statistical difference between segments that could help distinguish repeating from non-repeating tone clouds.
    (b) varying the parameters of these tone clouds produces a continuum of sounds from trivially sparse (random melodies) to extremely dense (noise), which allowed the experimenter to make the task easier or more difficult as required (see the illustrative sketch after this list).
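
    The points above can be illustrated with a minimal Python sketch of how such a tone cloud and its "repeated" variant might be generated. The sample rate, pip duration, frequency range, density, and segment durations below are illustrative assumptions, not the parameters used in the study, and the generation scheme is only a sketch of the idea rather than the exact stimulus algorithm.

```python
# Illustrative sketch of a "tone cloud" stimulus: many short pure-tone pips
# at random times and frequencies. In a "repeated" trial the second half of
# the sound is an exact copy of the first half. All parameter values here
# (sample rate, pip duration, density, frequency range) are assumptions.
import numpy as np

FS = 44100  # sample rate (Hz); an assumed value

def tone_pip(freq, dur=0.03, fs=FS):
    """A short pure-tone pip under a Hann envelope, so each pip starts and ends at zero."""
    t = np.arange(int(dur * fs)) / fs
    return np.hanning(t.size) * np.sin(2 * np.pi * freq * t)

def tone_cloud(dur=0.5, pips_per_sec=60, fs=FS, rng=None):
    """One half of a stimulus: pips at random times and log-uniform random frequencies.
    Raising pips_per_sec moves the sound from sparse "random melody" towards dense "noise"."""
    rng = rng if rng is not None else np.random.default_rng()
    cloud = np.zeros(int(dur * fs))
    for _ in range(rng.poisson(pips_per_sec * dur)):
        pip = tone_pip(freq=2.0 ** rng.uniform(np.log2(300.0), np.log2(4800.0)))
        start = rng.integers(0, cloud.size - pip.size)
        cloud[start:start + pip.size] += pip  # overlapping pips simply sum
    return cloud

def trial(repeated, rng=None):
    """A full trial: either the same cloud played twice ("repeated") or two
    statistically identical but independent clouds ("not repeated").
    Because every pip is windowed, the join at the midpoint is click-free."""
    rng = rng if rng is not None else np.random.default_rng()
    first = tone_cloud(rng=rng)
    second = first if repeated else tone_cloud(rng=rng)
    return np.concatenate([first, second])

stimulus = trial(repeated=True, rng=np.random.default_rng(1))
```

    Varying pips_per_sec in this sketch corresponds to moving along the sparse-to-dense continuum described in point (b).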

