The importance of the auditory system for speech perceptual processing is well known. Recent findings that somatosensory inputs modulate auditory perceptual processing suggest that the somatosensory system may also be an important contributor to speech perception. The overall objective of the proposed research is to identify how somatosensory information is encoded in speech perception. A computer-controlled robotic device will probe the contribution of somatosensation from the facial skin during speech perceptual processing and speech motor learning. The first aim is to test the hypothesis that somatosensory inputs during speech motor learning structure the perceptual processing of speech sounds. The second aim is to demonstrate that the somatosensory influences on speech perception observed as a result of motor learning (Aim 1) are equivalent to those that occur when the same somatosensory stimuli are delivered directly during speech perceptual processing. The results of these hypothesis-driven studies have the potential to inform theories of speech perception and speech development and to identify new avenues for speech learning and rehabilitation.