We have developed a wearable speech-analyzing tactile aid for the profoundly deaf: a system that translates sound into touch patterns on the skin. The system is lightweight, simple to operate, and suitable for day-long use. The acoustic spectrum is divided by frequency into a linear multi-channel tactile presentation, delivered by a belt of electrotactile stimulators worn across the abdomen. Each speech sound generates a characteristic tactile pattern which observers can learn to identify. We have reported studies showing (1) discriminability of the major phonemic features of speech via the tactile display; (2) identification of words and sentences in connected discourse; (3) preliminary responses of profoundly deaf children and adults; and (4) an experimental comparison of the electrotactile vocoder with other tactile aids in a lipreading task.

We propose to measure, quantitatively and comprehensively, the speech features actually transmitted, the ability of observers to process this information, and the development of perceptual strategies for interpreting the tactile display as a dynamic communication channel. These data will permit us to optimize the acoustic preprocessing and analysis that drive the tactile display.

On the assumption that these tactile speech patterns must be learned like a language, we are developing a training program based on the principles of second-language acquisition and developmental linguistics. An objective measurement technique allows us to assess the amount and accuracy of verbal material transmitted via the tactile channel per unit of time, and to compare our system with other sensory aids and with other communication methods, such as lipreading. We are studying how this measure is affected by factors such as vocabulary size, amount of training, semantic complexity, and background noise.
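The frequency-to-channel mapping at the heart of the vocoder can be illustrated in software. The sketch below is a hypothetical digital analogue of the analog front end, not the actual hardware design: the band edges, the 40 dB dynamic range, and the eight stimulation levels are illustrative assumptions. Each analysis frame of the microphone signal is reduced to one energy value per frequency band, and each band's log energy is quantized to a drive level for the corresponding stimulator on the belt.

```python
import numpy as np

def band_energies(frame, fs, edges):
    """Spectral energy in each frequency band for one analysis frame.

    `edges` lists band boundaries in Hz; band i spans edges[i]..edges[i+1],
    so N+1 edges yield N tactile channels (one per stimulator).
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def tactile_pattern(frame, fs, edges, levels=8):
    """Quantize each band's log energy to a stimulator drive level 0..levels-1.

    Levels are relative to the loudest band in the frame, clipped to an
    assumed 40 dB dynamic range.
    """
    energies = band_energies(frame, fs, edges)
    db = 10 * np.log10(energies + 1e-12)
    db = np.clip(db - db.max(), -40.0, 0.0)
    return np.round((db + 40.0) / 40.0 * (levels - 1)).astype(int)
```

For example, a pure tone excites only the channel whose band contains its frequency; a vowel, with several formants, lights up several channels at once, which is the spatial pattern the wearer learns to identify.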