The long-range goal of this proposal is to develop practical listening systems (speech-to-text systems) for hearing-impaired persons. Although automatic speech recognition (ASR) technology has made steady progress in recent years, existing large-vocabulary systems often require extensive retraining whenever the acoustic channel is altered: the noise level changes, the speaker's room or position changes, or the signal conduit changes (telephone vs. room speech). The PI has developed a novel nonlinear signal-processing method that can be used in the "front end" of any ASR system to increase its channel independence. The new method reduces the speech signal's channel dependence far more effectively than the commonly used cepstral mean normalization (CMN). Furthermore, its performance is comparable to that of CMN combined with spectral subtraction, even though it does not require the noise measurements on which spectral subtraction depends.

The proposal's first aim is to make specific technical changes that will significantly improve the method's performance. The second aim is to test the following hypothesis: incorporating the new channel-normalization technology into the front end of a typical ASR system will reduce its word error rate.
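For readers unfamiliar with the baseline mentioned above, the following is a minimal sketch of cepstral mean normalization, the standard technique the proposed method is compared against. A stationary convolutional channel (e.g., a fixed microphone or telephone line) appears as an additive constant in the cepstral domain, so subtracting each utterance's mean cepstrum removes it. The data here are synthetic and purely illustrative; the function name is our own, not part of the PI's method.

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the per-utterance mean from a (frames, coeffs) matrix
    of cepstral coefficients. A time-invariant channel filter adds a
    constant vector in the cepstral domain, so this subtraction
    cancels it."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Illustrative data: 200 frames of 13 cepstral coefficients,
# distorted by a fixed (stationary) channel offset.
rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 13))
channel_offset = rng.normal(size=(1, 13))
observed = clean + channel_offset

normalized = cepstral_mean_normalization(observed)
```

Because the channel offset is constant over the utterance, CMN maps the distorted and clean signals to the same normalized features; it cannot, however, compensate for additive noise, which is why CMN is often paired with spectral subtraction, a step that requires explicit noise measurements.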