The talking face is frequently characterized as a natural but ancillary source of speech information during face-to-face conversation. For individuals with profound hearing impairments, however, auditory speech information is typically the ancillary source. Many prelingually deaf adults, who rely on visual speech information for communication, demonstrate enhanced visual speech perception, and some hearing adults are also capable visual speech perceivers. Nevertheless, little is known about visual speech perception (lipreading). An important characteristic of the visual speech signal is that it conveys less phonetic information than the acoustic speech signal, yet it affords adequate phonetic information to recognize a high percentage of words.

Our previous research suggests that the perceptual similarity structure of visual phonetic information is isomorphic with the physical similarity structure of visual stimuli. We hypothesize that expert lipreading results from sensitivity to small phonetic differences in visual speech stimuli, and that expert deaf lipreaders are more sensitive than hearing lipreaders to the phonetic information that renders phonemes perceptually dissimilar. This hypothesis will be tested in discrimination experiments designed to exploit the visual perceptual similarity of natural spoken syllables. Because natural speech stimuli are extremely complex, we will also use our new visual speech synthesizer to investigate whether the physical data we used to characterize the speech stimulus space adequately represent the information in natural visual speech stimuli.

Finally, our long-term goal is to determine where and when in the cortical hierarchy visual speech stimuli are processed for their phonetic information. To approach this goal, an electrophysiology study will use what is learned about visual speech dissimilarity to localize visual phonetic processing temporally and spatially in cortex.
Relevance to public health: The number of individuals with hearing loss increases as the population ages. Vision is a major source of speech input whenever auditory perception is impaired. Understanding the mechanisms that enhance visual speech perception could lead to more effective and efficient clinical applications, such as evaluating speech perception and providing remediation for individuals with hearing impairment. Our visual speech synthesizer will also be useful for training deaf children in vocabulary development and lipreading.