For many organisms sound localization is a matter of survival. For humans, such a claim would be an overstatement; nevertheless, there can be no denying the importance of sound localization in human life. It subserves our ability to segregate target and competing messages in noisy environments, an essential component of normal communication. When that ability is degraded, as with many types of hearing loss, communication is significantly disrupted. The long-term goal of this project is a complete understanding of the mechanisms and processes of human sound localization. Extensive research suggests that the apparent position of a sound is determined by three primary acoustical cues: interaural time differences, interaural level differences, and pinna filtering effects. There are circumstances in which one or more of these cues may be ambiguous or unreliable, such as when listening to a narrowband or unfamiliar sound. In these cases listeners must either use additional cues or weight the primary cues differently. A major focus of this project is the extraction and use of such additional cues and the alternative weighting strategies. Specifically, the project will investigate the use of localization cues derived from small head movements and the role of a listener's knowledge of, or expectations about, specific stimulus characteristics. In the proposed experiments, human listeners will estimate the apparent positions of sounds presented either in a free sound field or in the virtual free field produced by delivering computer-synthesized sound over headphones. In the virtual free field, head position is monitored with a magnetic tracker so that the stimuli can be modified appropriately in real time. A listener's knowledge of or expectations about the stimuli are manipulated by changing the between-stimulus and within-stimulus variance of the stimulus spectrum.
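The interaural time difference cue mentioned above can be made concrete with a standard textbook approximation: for a rigid spherical head, Woodworth's formula estimates the ITD at azimuth theta as (a / c)(sin theta + theta). The following is a minimal sketch, assuming a conventional nominal head radius of 8.75 cm and a speed of sound of 343 m/s; these values are illustrative defaults, not parameters taken from this project.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (in seconds) for a
    rigid spherical head, via Woodworth's formula:
        ITD = (a / c) * (sin(theta) + theta)
    where theta is the source azimuth in radians (0 = straight ahead,
    positive toward one ear), a is the head radius, and c is the
    speed of sound. Valid for azimuths up to about 90 degrees."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

# A source directly ahead produces no ITD; a source at 90 degrees
# produces the maximum ITD, on the order of 650 microseconds.
itd_front = woodworth_itd(0.0)
itd_side = woodworth_itd(90.0)
```

For a narrowband sound, ITDs like these become ambiguous whenever the delay exceeds half the waveform period, which is one circumstance in which listeners must fall back on other cues.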
Additional empirical studies will investigate the stimulus features that govern the separability of individual sources in auditory space. In these experiments, listeners will indicate the apparent positions of two independent sources in a virtual free field. Stimulus parameters such as amplitude envelope coherence, onset-offset asynchrony, and spectral overlap will be manipulated to assess their importance for successful source segregation. In addition to the empirical work, the project will evaluate a neural network model of sound localization. The model is a network with a single hidden layer and a single output layer, based on a radial basis function architecture. Neural network models are attractive because training and learning, which are thought to be important in human sound localization, are explicitly incorporated in the models.
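As a rough illustration of the kind of architecture described above, a radial basis function network with one Gaussian hidden layer and a linear output layer can be sketched in plain Python. The one-dimensional toy target, the center locations, the basis width, and the learning rate below are all hypothetical stand-ins chosen for illustration; the project's actual model maps acoustical cue vectors to source positions.

```python
import math
import random

def rbf(x, center, width):
    """Gaussian radial basis activation for a single hidden unit."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Hidden layer: five Gaussian units with fixed, evenly spaced centers.
centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
width = 0.5

def predict(x, weights):
    """Linear output layer: a weighted sum of the hidden activations."""
    return sum(w * rbf(x, c, width) for w, c in zip(weights, centers))

# Toy training set: a smooth stand-in mapping on [-1, 1].
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100)]
samples = [(x, math.sin(math.pi * x)) for x in xs]

# Train only the output weights by stochastic gradient descent;
# this "learning" step is the feature that makes such models
# attractive as accounts of localization.
weights = [0.0] * len(centers)
lr = 0.1
for _ in range(500):
    for x, target in samples:
        err = predict(x, weights) - target
        for i, c in enumerate(centers):
            weights[i] -= lr * err * rbf(x, c, width)

mse = sum((predict(x, weights) - t) ** 2 for x, t in samples) / len(samples)
```

Because the hidden centers are fixed, only the linear output weights need to be learned, which keeps training simple and fast; this separation of a fixed basis from a trainable readout is characteristic of the radial basis function architecture.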