DESCRIPTION (from applicant's abstract): This project aims to understand how complex sound sequences, such as human speech sounds, are represented in the auditory cortex. It extends previous studies of the cortical representation of complex sounds by focusing on the dynamic and transient aspects of the response to acoustic transitions, and on how these dynamics change with behavioral training. Monkeys will listen to human speech syllable pairs, and the cortical activity evoked by interleaved sequences of contrasting syllables will be compared to that evoked by sequences of a single repeated syllable. The cortex adapts to rapid sequences of a single, repeated sound, but responds well to sequences of contrasting sounds. Behavioral training is expected to specifically enhance the cortical response to sequences of the trained syllables, with greater behavioral discrimination expected to correspond to a more distinct cortical representation. Understanding how behavioral training changes cortical representations of specific stimuli is an essential first step toward a scientific foundation for the practice of rehabilitation. Traditionally, plasticity of central sensory representations has been assessed by comparing high-resolution brain activity maps. This project establishes a new, more efficient approach: measuring the interaction of the neuronal activity evoked by specific sound pairs played in sequence. The signatures of this interaction are the overall evoked activity level and the timing of the evoked activity. Changes in these signatures can be detected by non-invasive imaging methods, such as functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). This research thereby points the way toward assessment of cortical representations in patients, and toward individually tailored rehabilitation programs.