Despite significant advances in hearing-aid technology, several problems for hearing-impaired listeners remain unsolved by current signal-processing schemes. This project will develop novel, physiologically based signal-processing strategies to address two major problems faced by hearing-aid users: difficulty listening in noisy environments and loudness distortion, which limits hearing-impaired listeners' dynamic range of comfortable listening levels. Our recent physiological studies have suggested neural encoding and processing mechanisms for masked detection of signals in background noise and for level coding. These studies have resulted in quantitative models that successfully predict the performance of human listeners on psychophysical tasks related to masked detection and level discrimination. In this project, we will take the basic concepts behind these models and convert them into signal-processing algorithms. This effort not only provides additional tests of our models, but also an opportunity to apply ideas suggested by our physiological and modeling studies to real problems for hearing-impaired listeners. Two strategies will be explored. (1) Noise reduction based on a neural model for masking. We will use our masked-detection model to identify signals in the presence of background noise. Frequency bands dominated by signal energy will be amplified, and the remaining bands will be attenuated. The demonstrated success of our model in detecting signals in fluctuating noise is an important aspect of this approach. (2) Compensation of perceived loudness based on a neural model for level coding. We will introduce into the signal the level-dependent cross-frequency phase differences that are created in the healthy cochlea, taking advantage of nonlinear filters that simulate auditory-nerve tuning.
The goal is to increase the comfortable range of levels and to improve speech recognition by providing these nonlinear cues to the impaired ear.
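The band-weighting idea behind strategy (1) can be illustrated with a minimal spectral-gating sketch. Note the stand-in: a plain per-bin SNR test against a noise-floor estimate replaces the masked-detection model itself, and all parameter names and values (frame length, thresholds, gains) are illustrative, not taken from the proposal.

```python
import numpy as np

def band_gain_noise_reduction(x, fs, frame_len=256, noise_frames=8,
                              snr_thresh_db=8.0, boost_db=6.0, cut_db=-12.0):
    """Toy per-band gain control: boost frequency bins whose energy exceeds
    the estimated noise floor by snr_thresh_db; attenuate the rest.

    The noise floor is estimated from the first `noise_frames` frames,
    assumed to contain background noise only. This SNR test is a simplified
    stand-in for the proposal's masked-detection model."""
    hop = frame_len // 2
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    # short-time spectra of windowed frames
    spec = np.stack([np.fft.rfft(win * x[i * hop:i * hop + frame_len])
                     for i in range(n_frames)])
    noise_floor = np.mean(np.abs(spec[:noise_frames]) ** 2, axis=0) + 1e-12
    snr_db = 10 * np.log10(np.abs(spec) ** 2 / noise_floor)
    # binary decision per bin: signal-dominated bins up, the rest down
    gain_db = np.where(snr_db > snr_thresh_db, boost_db, cut_db)
    spec *= 10 ** (gain_db / 20)
    # overlap-add resynthesis
    y = np.zeros(len(x))
    for i in range(n_frames):
        y[i * hop:i * hop + frame_len] += win * np.fft.irfft(spec[i], frame_len)
    return y
```

In a real system the hard boost/cut decision would be replaced by the detection model's output, and the gains would be smoothed over time to avoid audible artifacts.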
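Strategy (2) can likewise be caricatured as a phase-only transformation whose size depends on input level. This sketch applies a single level-dependent phase gradient across frequency rather than the proposal's nonlinear auditory-nerve filter simulations; the level-to-depth mapping, `ref_db`, and `max_shift_rad` are all invented for illustration.

```python
import numpy as np

def add_level_dependent_phase(x, max_shift_rad=np.pi / 2, ref_db=-40.0):
    """Toy illustration of level-dependent cross-frequency phase: impose a
    phase gradient across frequency whose depth grows with overall signal
    level, loosely mimicking level-dependent cochlear phase shifts.

    Phase-only: the magnitude spectrum is left untouched."""
    level_db = 10 * np.log10(np.mean(x ** 2) + 1e-12)
    # map levels in [ref_db, 0] dB onto a shift depth in [0, 1]
    depth = np.clip((level_db - ref_db) / -ref_db, 0.0, 1.0)
    X = np.fft.rfft(x)
    f = np.linspace(0.0, 1.0, len(X))   # normalized frequency axis
    phi = depth * max_shift_rad * f     # larger phase shift at higher freqs
    return np.fft.irfft(X * np.exp(1j * phi), len(x))
```

Because only phase is altered, the spectral magnitude (and thus the conventional loudness cues) is preserved; the added cross-frequency phase structure is the new cue delivered to the impaired ear.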