When hearing-impaired listeners are properly aided with a hearing aid (HA) or cochlear implant (CI), they are often able to comfortably maintain a conversation in quiet environments. However, in group settings, such as a large family dinner, a restaurant, or any other environment where multiple people are talking simultaneously, hearing-impaired listeners have great difficulty participating in conversations and frequently withdraw or avoid the situation altogether. It would therefore be highly beneficial to implement an algorithm in HAs or CIs that removes background talkers ("babble") from the signal, reducing listening effort for the hearing-impaired listener and allowing them to converse as if they were in a quiet environment. Although HAs and CIs frequently incorporate noise reduction algorithms, these algorithms are not effective when the background is babble. Removing babble requires segregating speech from speech, so the spectral properties of the signal and the noise are extremely similar.

Despite these challenges, we developed a highly effective algorithm, named SEDA, to remove background babble. A prototype of SEDA was implemented on an iPhone and evaluated with 10 CI users. SEDA improved understanding of speech with background talkers at all signal-to-noise ratios (SNRs) tested; on average, word understanding in babble improved by 31 percentage points. By contrast, state-of-the-art noise reduction systems for CIs provide little to no benefit for understanding speech in babble noise. CI manufacturers have shown great enthusiasm about this successful proof of concept. Nevertheless, before commercialization, CI manufacturers want reductions in the computational power required by the algorithm. Because CI processors minimize computational processing in order to maximize battery life, it is important to minimize the additional computations required by SEDA.
When SEDA is used as a front-end to a CI processing strategy (as in our iPhone prototype), redundancy in the required calculations results in increased computation and latency. Specifically, SEDA decomposes the input signal into multiple channels, removes the background babble, and then reassembles the channels into a single waveform. This waveform is then fed into a CI, which again decomposes the signal into multiple channels. Integrating SEDA into the CI signal processing chain will save computation because the signal would only need to be decomposed once and would never need to be reassembled. Additionally, although SEDA is highly successful on typical speech-in-noise tests, CI manufacturers emphasized the importance of evaluating SEDA in more realistic environments. Two specific aims will address the requirements for commercialization identified by the CI manufacturers: reducing the computational requirements by integrating SEDA into a sound processing algorithm, and evaluating SEDA in realistic environments.
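The redundancy described above can be illustrated with a toy linear filterbank. The bandsplitting, the per-channel babble suppression, and the channel counts below are all placeholders (SEDA's actual filterbank and denoising stage are not specified here); the point is only that the front-end configuration performs an extra reassemble/re-decompose round trip, while the integrated configuration hands the processed channels directly to the CI back end:

```python
import numpy as np

def decompose(signal, n_channels):
    """Toy analysis filterbank: split the FFT spectrum into contiguous
    bands and return one time-domain waveform per band. Stand-in for
    the real filterbanks in SEDA and the CI strategy."""
    spectrum = np.fft.rfft(signal)
    bands = np.array_split(spectrum, n_channels)
    channels, start = [], 0
    for band in bands:
        padded = np.zeros_like(spectrum)
        padded[start:start + len(band)] = band
        channels.append(np.fft.irfft(padded, n=len(signal)))
        start += len(band)
    return channels

def reassemble(channels):
    """Sum the band waveforms back into a single waveform."""
    return np.sum(channels, axis=0)

def suppress_babble(channels):
    """Placeholder for SEDA's per-channel babble removal (identity here)."""
    return channels

signal = np.random.default_rng(0).standard_normal(1024)

# Front-end configuration: SEDA decomposes, denoises, and reassembles,
# then the CI strategy decomposes the waveform all over again.
front_end = decompose(reassemble(suppress_babble(decompose(signal, 8))), 8)

# Integrated configuration: decompose once, denoise, and pass the
# channels straight to the CI back end -- no reassembly, no second split.
integrated = suppress_babble(decompose(signal, 8))

# Both configurations deliver numerically equivalent channels,
# but the integrated one skips one synthesis and one analysis pass.
assert np.allclose(front_end, integrated, atol=1e-8)
```

Because this toy pipeline is linear, the two configurations produce the same channels; the savings come entirely from eliminating the redundant reassembly and re-decomposition, which is the computational argument for integrating SEDA into the CI strategy.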