Human listeners depend on the sense of hearing to communicate effectively in everyday social situations. We rely heavily on our ability to selectively attend to a single voice in a noisy background and to follow transitions between talkers during conversation. Yet this task is quite complex, and accomplishing it successfully depends on the integrity of processing at a number of physiological sites spanning the auditory periphery to the brain. It is well known that hearing loss may adversely affect a listener's ability to perceptually segregate one talker in the midst of other talkers and to understand that talker's spoken message (i.e., the cocktail party problem; see Middlebrooks et al., 2017, for a series of recent reviews). The most common remedy for sensorineural hearing loss (SNHL) is a hearing aid, or a pair of aids, that can boost sounds to audible levels while preserving comfortable loudness and may improve signal-to-noise ratio for certain classes of sounds via noise reduction. However, even when listeners with SNHL wear hearing aids, they often still experience extreme difficulty perceptually navigating the auditory scene, severely limiting their ability to communicate effectively. One reason is that, from an acoustic perspective, the designation of a particular sound source as target versus masker is arbitrary because it depends on the current (and changeable) internal state of the observer. Thus, the distinction between a target talker to be attended and a masker talker to be ignored can only be made by the listener and may change from moment to moment. Although the amplification of sounds by hearing aids provides the best (often the only) option for improving communication for listeners with SNHL, current hearing aids inherently fail to solve the source selection problem because they amplify target and masker sounds indiscriminately, without the ability to distinguish which source the listener has chosen as the target.
Thus, the challenge is to devise a hearing aid that focuses only on those sounds the listener chooses to attend to and suppresses competing sounds, responding to the wishes of the listener immediately, accurately, and effectively. During the past award period, our work has demonstrated that acoustic beamforming implemented by a head-worn microphone array can provide a significant advantage for listeners with SNHL in solving the cocktail party problem. Furthermore, we have found that steering the beam of amplification can be accomplished quickly and effectively by sensing eye gaze with an eye tracker and directing the acoustic look direction (ALD) of the beam accordingly. The present application requests support to continue work on this visually guided hearing aid (VGHA) and to further examine the scientific premise upon which it is based. The overall goals are to better understand how top-down control of selective amplification assists listeners with SNHL in typical social situations, to advance our understanding of auditory and auditory-visual selective attention, and to extend the potential benefits of the VGHA to new populations of listeners (users of bilateral cochlear implants and persons with aphasia) who typically experience great difficulty understanding speech in complex, multiple-talker communication situations.