Although human language requires multi-sensory processing of information, much of the research on children's language development focuses on the auditory-only speech signal, despite the fact that the speaking face provides substantial information that observers use when processing language. Although behavioral evidence is beginning to emerge about the degree to which preverbal infants can coordinate complex visual and auditory information, there is relatively little neurophysiological data to inform our understanding of how this coordination process develops and how it influences the neural underpinnings of language processing at different developmental time points and in different infant populations. The proposed experiments will test the hypothesis that, while infants may be predisposed to process auditory speech in the left temporal region, this processing is influenced by environmental experience, such as increasingly extensive exposure to visual speech or to more than one language. Our investigation will examine the roles of visual and auditory speech both separately and in coordination (audiovisual speech) in order to understand how these distinct sources of perceptual information facilitate the development of language processing abilities in preverbal infants. First, we propose to use near-infrared spectroscopy to test the influence of isolated visual and auditory speech on patterns of neural activity in the bilateral temporal cortices of 9-month-old infants, and to compare that activity to the activity observed in response to coordinated audiovisual speech (Aim 1). We will then compare the neural activity elicited by these three speech conditions across three age groups (6-, 9-, and 12-month-olds) to track the developmental trajectory of the coordination process (Aim 2).
Finally, we will compare the bilateral processing patterns of monolingual (English-exposed) infants with those of age-matched bilingual (Spanish/English-exposed) infants (Aim 3). This would be the first study to demonstrate the privileged nature of audiovisual speech in early language processing, as reflected by a more robust neurovascular response in the left temporal region, relative to the right, for audiovisual speech than when auditory or visual speech is presented in isolation. We expect to find that this effect is experientially based, such that infants undergo a measurable tuning process specific to their amount of prior exposure to coordinated speech. Findings from the studies outlined here will help us better understand how the auditory and visual systems interact to influence early language development, as well as the normal time course of perceptual tuning to coordinated speech in one's native language(s).