The long-term goal of this proposal is to further our understanding of the perceptual consequences of peripheral auditory processing and of efferent, or top-down, effects in normal and impaired hearing. The project has three main aims. Aim 1 is to provide behavioral and physiological estimates of the hypothesized effects of the medial olivocochlear (MOC) efferent system on basilar-membrane (BM) gain and compression in humans, and to determine their consequences for temporal processing within individual frequency channels using forward masking. These measures will be undertaken in both normal-hearing listeners and listeners with sensorineural hearing loss.

Aim 2 extends the investigation from within-channel to across-channel temporal processing to study the effects of peripheral filtering and efferent control on the spectro-temporal processing of complex sounds. The working hypothesis is that changes in BM filtering with hearing loss lead to changes in the relative latencies along the BM, which are not fully compensated by neural plasticity. Across-channel temporal synchrony plays a crucial role in our ability to perceptually segregate competing sounds in complex environments, so a disruption of perceived synchrony may be particularly detrimental in noisy and challenging acoustic backgrounds. Experiments under this aim will measure the temporal perception of sounds across frequency, in terms of temporal resolution and acuity as well as subjective judgments of synchrony, to test the hypothesis that across-frequency temporal perception changes with hearing loss.

Aim 3 will take the basic psychophysical knowledge gained in Aims 1 and 2 and apply it to speech perception in noisy backgrounds. Speech intelligibility of sentences will be measured in steady noise, fluctuating noise, and single-talker interference in conditions where systematic delays are imposed across frequency.
Speech perception in quiet has been shown to be remarkably robust to temporal asynchronies across frequency. Our working hypothesis is that temporal asynchronies, whether imposed through signal processing or as a result of hearing loss, will produce much more serious deficits in speech intelligibility in noisy situations, where perceptual segregation of the target from the masker is required. The results will further our understanding of how peripheral auditory processing is affected by efferent stimulation, and how changes in peripheral processing due to hearing loss affect our perception of sound and our ability to communicate in challenging acoustic environments. Understanding how peripheral auditory processing influences within- and across-channel temporal perception may lead to novel signal-processing algorithms for hearing aids that compensate for the across-channel temporal changes induced by hearing loss.
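The across-frequency delay manipulation described under Aim 3 can be sketched in a minimal form: split the signal into adjacent frequency bands and delay each band by a different amount before recombining. This is an illustrative implementation only, not the proposal's actual stimulus-generation procedure; the function name, band edges, and delay values below are hypothetical, and the bandpass filterbank (Butterworth filters via SciPy) is one of many possible choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def impose_cross_frequency_delays(signal, fs, band_edges, delays_ms):
    """Split `signal` (1-D array) into adjacent frequency bands and
    delay each band by a channel-specific amount before summing.

    band_edges : list of (low_hz, high_hz) tuples, one per band
    delays_ms  : list of per-band delays in milliseconds
    """
    out = np.zeros(len(signal))
    for (lo, hi), d_ms in zip(band_edges, delays_ms):
        # 4th-order Butterworth bandpass, zero-phase filtered
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        # Simple circular shift stands in for a true delay line here
        shift = int(round(d_ms * 1e-3 * fs))
        out += np.roll(band, shift)
    return out

# Hypothetical usage: delay the high band by 8 ms relative to the low band
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t)
y = impose_cross_frequency_delays(
    x, fs, band_edges=[(200, 1000), (1000, 4000)], delays_ms=[0.0, 8.0])
```

In an actual experiment the delays would be chosen to mimic or exaggerate the latency differences along the BM that the proposal hypothesizes are altered by hearing loss.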