Historically, research concerning children with mild to severe hearing loss (childHL) has been descriptive in nature, informing us about their abilities to recognize and produce spoken language. In the last grant cycle, we broke with tradition and pioneered a Multimodal Picture Word Task to study children's phonological processing. Results showed that childHL demonstrate a disproportionately large influence of visual speech during audiovisual (AV) speech processing compared with children with normal hearing (childNH), and that childHL who have poorer articulation skills and slower responses demonstrate an unusual response to distracters that contain the schwa vowel: although these distracters were designed to facilitate phonological processing, they produced an interference effect for this subset of children. An unexpected finding for both participant groups was a U-shaped developmental function, in which the youngest and oldest children were more sensitive to visual speech information than children of intermediate ages. In this grant cycle, our inter-university team of experts will implement a novel variant of the classic McGurk and Phoneme Restoration paradigms to account for the differences in AV speech processing between childHL and childNH. One hundred seventy-two participants between the ages of 4 and 14 years will complete a comprehensive battery of demographic tests and will then perform the new McGurk- and Phoneme Restoration-like tasks with stimuli that assess their sensitivity to AV speech (both meaningful and non-meaningful), as well as other tasks that assess speech and nonspeech processing for dynamic and static AV stimuli. Sixty of the children will also participate in a longitudinal study, repeating the test sessions for three consecutive years.
The results will clarify how hearing loss and normal maturation affect AV processing for various speech and nonspeech forms, as well as the link between childHL's speech perception and speech production abilities.