The goal of the proposed project is to characterize the neural network interactions that mediate auditory-visual integration of speech in a noisy environment. Difficulty understanding speech under degraded and reverberant conditions is perhaps the most frequent audiological complaint among the hearing impaired, a population now estimated at 28 million Americans. Depression, loneliness, and social anxiety are common among those who suffer this reduced ability to communicate with friends, family, and co-workers (Knutson, 1990). Beyond its practical implications, crossmodal speech perception also serves as a paradigm for the brain's ability to combine diverse sources of information into a unified percept. Integration across sensory modalities allows us to detect and discriminate stimuli faster and more accurately than any one system alone, especially when the stimuli are degraded. The brain therefore uses expectations from prior experience along with complementary information from the different senses to form coherent perceptual objects. This prior knowledge, known as top-down influence, must be combined with the raw stimuli, or bottom-up influence, just as auditory information is combined with visual information. The negotiation of these top-down/bottom-up and crossmodal interactions may be mediated by networks of areas in the superior temporal sulcus, intraparietal sulcus, and prefrontal cortex. A whole-brain technique such as functional magnetic resonance imaging (fMRI) is required to assess neural activity in these widespread regions simultaneously. A network-analytic approach, including structural equation modeling and partial least squares, is essential for addressing the crossmodal and top-down/bottom-up interactions among these regions during speech perception in a noisy environment.
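To make the network-analytic approach concrete, the sketch below illustrates, on simulated data, the two tools named above: partial least squares (here via scikit-learn's PLSRegression) relating a multi-region activity pattern to a noise-level regressor, and a minimal path-model estimate of directed influences among regions, standing in for structural equation modeling. This is an illustration only, not the proposal's analysis pipeline; the region names, the noise regressor, and the assumed path layout (prefrontal to intraparietal to superior temporal) are assumptions made for the example.

```python
"""Illustrative sketch only: simulated ROI time series, PLS, and a
simple OLS-based path-model estimate standing in for SEM. Region names
and path structure are assumptions, not the proposal's actual model."""
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_scans = 240  # hypothetical number of fMRI volumes

# Simulated condition regressor: auditory noise level on each scan.
noise_level = rng.uniform(0.0, 1.0, size=n_scans)

# Simulated ROI time series with built-in dependencies: the superior
# temporal region (sts) receives top-down drive from prefrontal (pfc)
# and intraparietal (ips) regions and is degraded by noise.
pfc = rng.normal(size=n_scans)
ips = 0.4 * pfc + rng.normal(size=n_scans)
sts = 0.5 * ips + 0.3 * pfc - 0.6 * noise_level + rng.normal(size=n_scans)

roi = np.column_stack([pfc, ips, sts])  # scans x regions

# Partial least squares: relate the multi-region pattern to noise level.
pls = PLSRegression(n_components=1)
pls.fit(roi, noise_level)
print("PLS region weights (pfc, ips, sts):", pls.x_weights_[:, 0])

# Minimal path-model (SEM-like) sketch: estimate directed path
# coefficients for an assumed anatomy pfc -> ips -> sts, pfc -> sts,
# noise -> sts, using ordinary least squares per endogenous region.
def ols(y, X):
    X = np.column_stack([np.ones_like(y), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

print("path pfc -> ips:", ols(ips, pfc[:, None]))
print("paths (ips, pfc, noise) -> sts:",
      ols(sts, np.column_stack([ips, pfc, noise_level])))
```

In an actual analysis, the path structure would be constrained by known anatomical connectivity and fit with dedicated structural equation modeling software, and the PLS step would operate on voxel- or region-level patterns across subjects and conditions; the per-region OLS fits here only convey the idea of estimating directed path strengths within a hypothesized network.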