Recent efforts to develop artificial hearing systems for the deaf have focused on tactual vocoders, devices that transduce acoustic energy into vibratory or electrocutaneous signals applied to the skin. Current tactual vocoders, however, rest largely on untested assumptions about optimal filter configurations for speech processing. This study will examine perception of critical features of the speech code while carefully controlling device variables (through software filters and computer-driven displays) and systematically varying filter configurations. Using psychophysical techniques, the research will characterize tactual perception of speech features from signals processed through filter banks with linear, logarithmic, and composite spacings, for both 32- and 16-channel displays. The studies will: (1) determine discrimination and identification of speech contrasts along various speech-simulating continua for each filter configuration; (2) provide an empirical basis for the design of tactual vocoders with optimal filter configurations; and (3) lay the foundation for a miniaturized tactual vocoder design to be implemented in succeeding phases of the work. This work is directed toward the development and ultimate manufacture of a portable, wearable, cosmetically acceptable artificial hearing prosthesis for the deaf.
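The contrast between linear and logarithmic channel spacing can be made concrete with a short sketch. The band edges (100-8000 Hz), channel counts, and the particular reading of "composite" spacing (linear below a crossover frequency, logarithmic above it) are illustrative assumptions here, not parameters taken from the study.

```python
def channel_edges(f_lo, f_hi, n_channels, spacing):
    """Return n_channels + 1 band-edge frequencies (Hz) between f_lo and f_hi.

    spacing: "linear" gives equal Hz per channel;
             "log" gives an equal frequency ratio per channel.
    """
    if spacing == "linear":
        step = (f_hi - f_lo) / n_channels
        return [f_lo + i * step for i in range(n_channels + 1)]
    if spacing == "log":
        ratio = (f_hi / f_lo) ** (1.0 / n_channels)
        return [f_lo * ratio ** i for i in range(n_channels + 1)]
    raise ValueError(f"unknown spacing: {spacing}")

def composite_edges(f_lo, f_cross, f_hi, n_low, n_high):
    """One plausible 'composite' bank: linear spacing up to a crossover
    frequency, logarithmic spacing above it (an assumption, not the
    study's definition)."""
    low = channel_edges(f_lo, f_cross, n_low, "linear")
    high = channel_edges(f_cross, f_hi, n_high, "log")
    return low + high[1:]  # drop the duplicated crossover edge

# Example: a 16-channel logarithmic bank spanning the speech band.
edges16 = channel_edges(100.0, 8000.0, 16, "log")
```

A logarithmic bank concentrates narrow channels at low frequencies, roughly mirroring the ear's frequency resolution, while a linear bank divides the band into equal-width channels; the study's comparison across these spacings is what would reveal which allocation best preserves speech contrasts on the skin.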