The major goal of this project is to develop a transform of the acoustic speech signal into a tactile display that permits speech perception at real-time rates. A secondary goal is to develop a rapidly comprehensible phonemic tactile code. Our approach to both goals centers on tactile displays that allow the perceiver to organize linguistically (and/or articulatorily) relevant tactile perceptual structures of varying sizes. Ideally, the larger structures are perceived directly, yet contain smaller structures that can also be perceptually identified.

Computer analysis of the speech signal is used to derive the parameters that control the tactile display. The computer detects those signals in the speech stream that carry important information about speech production but would elude the perceptual capacity of the skin, even if they were transformed into physical energies within the skin's range of sensitivity. The computer thus identifies important information-bearing elements of speech but does not assign them to linguistic categories (as is attempted in automatic speech recognition). These derived signals are presented in a tactually salient manner to the human perceiver, who decides on the linguistic categorization.
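As a minimal sketch of how such an analysis stage might drive a display, the fragment below frames a waveform, takes the short-time spectrum of each frame, and maps band energies to drive levels for a row of vibrators. The band-energy scheme, the function name, and parameters such as the number of tactors and the frame length are illustrative assumptions, not the project's actual transform.

```python
import numpy as np

def speech_to_tactile_params(signal, sr, n_tactors=8, frame_ms=20):
    """Map a speech waveform to per-frame drive levels for a row of
    n_tactors vibrators (hypothetical scheme: one frequency band
    per tactor, log band energy as the drive level)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    levels = np.zeros((n_frames, n_tactors))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Windowed short-time spectrum of this frame.
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        # Split the spectrum into n_tactors contiguous bands and take
        # the log energy of each as the level for the matching tactor.
        bands = np.array_split(spectrum, n_tactors)
        levels[i] = [np.log1p(np.sum(b ** 2)) for b in bands]
    # Normalize to 0..1 so the levels can be scaled into the skin's
    # dynamic range by the display hardware.
    if levels.max() > 0:
        levels /= levels.max()
    return levels
```

Note that a stage like this only extracts display-control parameters; consistent with the approach described above, no linguistic labels are assigned, and categorization is left to the human perceiver.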