Abstract

The goal of this work is to clarify the role of spatial factors in the difficulty experienced by listeners with hearing impairment (HI) in understanding the speech of one particular talker in a mixture of talkers. Because HI listeners perform poorly when competing talkers are spatially separated, it is often assumed that an inability to exploit spatial cues is at the root of the problem. However, direct evidence for spatial deficits in HI listeners is lacking, as are clear links between spatial hearing and speech intelligibility. The experiments proposed here will carefully unpack and characterize the mechanisms affecting listeners with hearing loss in spatialized speech mixtures.

The experiments proposed under Aim 1 will focus on the impact of reduced audibility and determine the extent to which this basic deficit limits access to spatial information in addition to speech information. A stimulus-processing approach that isolates clean "glimpses" of the target sound in speech mixtures will enable intelligibility to be assessed in the absence of any explicit spatial task. Controlled manipulation of the spatial cues available in the stimuli will reveal the extent to which acoustic head shadow contributes to the glimpses. Experiments under this aim will also explore the conditions under which access to high frequencies in speech is critical, and relate this knowledge to the benefits obtained from high-frequency amplification in hearing aids.

The experiments proposed under Aim 2 are designed to clarify the role of binaural temporal fine structure (TFS) in speech mixtures. The working hypothesis is that, although the amplitude envelopes of speech convey the primary information needed for high intelligibility, TFS carries auxiliary cues to sound-source location that enable segregation of competing sounds.
This hypothesis will be tested by comparing the effect of disrupting TFS on speech intelligibility for mixtures in which segregation is difficult versus relatively automatic. Parallel experiments will measure discrimination of the spatial location of speech sounds under the same conditions, in order to directly explore the link between spatial hearing and speech-source segregation. Experiments in HI listeners will reveal whether these listeners exhibit an impairment specifically involving binaural TFS, an issue that remains unresolved in the literature. A critical aspect of this work will be to carefully consider the relationship between "acoustic" TFS (extracted using signal-processing techniques) and neural TFS (as coded in the auditory nerve) in the context of speech signals.

The experiments proposed under Aim 3 will examine a relatively unexplored but potentially crucial aspect of listening in speech mixtures: the rapid improvement in intelligibility over time that occurs when the listening environment is stable (e.g., when the talker of interest stays fixed at one location). A new speech task will be created that is optimized to observe this build-up, and the availability of different cues will be manipulated to determine the basis of the build-up in listeners with normal hearing. Performance will also be examined in HI listeners to determine whether they experience a similar build-up, and results will be related on an individual basis to the specific deficits identified in Aims 1 and 2.