A program of research is proposed to systematically investigate the perceptual and cognitive processes by which novel speech stimuli (including tactile, Cued Speech, and auditory) are learned, identified, remembered, and integrated. It is hypothesized that lexical processes mediate the relationship between bottom-up perceptual constraints and higher-level cognitive/linguistic processes. In Study 1, Computational Modeling of the Lexicon, psychophysical constraints for a set of tactile, auditory, and visual speech conditions will be estimated using nonsense syllable identifications obtained with a theoretically motivated set of stimuli. Two different tactile vocoders and a fundamental frequency (F0) device will be studied, as well as analogous auditory vocoder and F0 signals, and Cued Speech. A computational method will be used to examine effects of phonetic similarity/dissimilarity on the information available to address the lexicon, and a validation experiment will be conducted. In Study 2, Perceptual Learning, perceptual sensitivity following identification training will be investigated in several experiments involving word stimuli with calibrated similarity. A main question is whether word identification training results in increased perceptual sensitivity. In Study 3, Representation in Long-term Memory, the hypothesis will be tested that modality and surface attributes of crossmodal stimuli are preserved in memory. A continuous recognition memory task will be employed in a series of experiments. If the hypothesis is confirmed, an implication is that perceptual encoding does not result simply in an abstract phonological code. In Study 4, Crossmodal Integration with Words, on-line methods will be used to study word recognition. The traditional methods of naming and lexical decision will be used to investigate audiovisual word identification, and results will be compared with those from a new phoneme monitoring task.
The phoneme monitoring task will then be applied to investigate differences between audiovisual conditions, and differences between Cued Speech and visual-tactile speech. Subjects in the proposed experiments will be adults with normal hearing and English as a first language, and adults with profound hearing losses who learned Cued Speech during the period of language acquisition. Overall, the proposed research departs from our previous work, which adopted pragmatic/engineering/psychophysical methods for developing tactile speech aids. Since it is known that speech perception can be affected by tactile speech and Cued Speech stimuli, we now propose to turn our attention to research on spoken language processing with these novel speech signals. The work will contribute to ameliorating the communication problems of individuals with profound hearing losses.