The existence of a distinct system for speech processing is controversial. The general aim of this proposal is to search for evidence of a specialized cortical phonetic module, distinct from the general auditory system, using a combination of advanced brain imaging methods: simultaneous functional Magnetic Resonance Imaging (fMRI) and Event-Related Potential (ERP) recordings. Three stimulus types will be used to separate phonetic from acoustic processing: (a) sounds that can be heard as speech or as non-speech, depending on the subject's expectation about the nature of the sound; (b) sounds varying in the phonetic context in which the key cue for phonetic perception is presented; and (c) speech sounds varying gradually in acoustic coherence and therefore in degree of phonetic content.

An oddball paradigm will be used to evoke specific event-related auditory cortical processes. The ERP responses elicited by oddball detection have been shown to depend on the type of auditory analysis being performed and can therefore serve as a marker of acoustic as opposed to phonetic processing. Simultaneous fMRI/ERP recording is a new and technically challenging method: it requires recording the EEG with non-ferromagnetic equipment, without interfering with the RF field or the MRI acquisition, and extracting the small ERP signal from the heightened background noise of the magnetic environment. High-density electrophysiological recordings, analyzed with current density mapping and spatiotemporal dipole estimation constrained by the fMRI data, will provide both high spatial and high temporal resolution.

Implementation of simultaneous fMRI/ERP recording should have a broad impact on methods used in speech and hearing research. Revealing the spatiotemporal organization of speech and non-speech perception is important for understanding the dysfunction underlying specific language impairments.
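The core analysis step of the oddball paradigm, averaging stimulus-locked epochs and contrasting deviant against standard responses, can be illustrated with a minimal sketch. This is not the proposal's analysis pipeline; it uses synthetic single-channel data, and the sampling rate, epoch window, and event timings are arbitrary assumptions for illustration.

```python
# Minimal sketch of oddball ERP averaging on synthetic data.
# Sampling rate, epoch window, and event timing are assumptions, not
# parameters from the proposal.
import numpy as np

def extract_epochs(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
    """Cut fixed-length epochs around stimulus onsets (in samples),
    baseline-correcting each epoch on its pre-stimulus interval."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in onsets:
        if s - pre >= 0 and s + post <= len(eeg):
            ep = eeg[s - pre : s + post].copy()
            ep -= ep[:pre].mean()  # baseline correction
            epochs.append(ep)
    return np.array(epochs)

fs = 250                                 # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, 60 * fs)      # 60 s of synthetic EEG

# Oddball design: frequent "standard" events, rare "deviant" events.
onsets = np.arange(fs, 55 * fs, fs)      # one event per second
deviants = onsets[::8]                   # every 8th event is a deviant
standards = np.setdiff1d(onsets, deviants)

std_erp = extract_epochs(eeg, standards, fs).mean(axis=0)
dev_erp = extract_epochs(eeg, deviants, fs).mean(axis=0)
difference_wave = dev_erp - std_erp      # deviance (mismatch-style) marker
```

Averaging across trials suppresses activity not time-locked to the stimulus, and the deviant-minus-standard difference wave isolates the deviance-related response used here as the marker of the type of auditory analysis being performed.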