Abstract

When using a cochlear implant (CI) in one ear, children experience difficulties hearing speech in noise and localizing sounds in rooms. This is not surprising, as studies in normal-hearing listeners have demonstrated that speech understanding in noise and sound localization can be very poor when binaural hearing is artificially disrupted. Inability to segregate speech from noise and to localize sounds can have a profound impact on a child's ability to learn in environments such as classrooms, where multiple sounds typically arrive from various directions and each child faces the challenge of segregating stimuli of interest from background noise. There has been a sweeping shift in clinical treatment, whereby bilateral CIs have become the standard of care, and 70% of bilateral CI users are children under 10 years of age. The vast majority of these children perform significantly worse than their normal-hearing, age-matched peers on sound localization and understanding of speech in noise. Using research processors that synchronize the CIs in the two ears, specific aims will investigate mechanisms of sound localization (Aim 1) and speech-in-noise segregation (Aim 2). In addition, we will evaluate whether peripheral measures of neural spread of excitation can serve as a useful tool for predicting binaural processing, in order to ultimately develop objective clinical tools for bilateral fitting of CIs (Aim 3). Studies will be conducted in two groups of bilaterally implanted children: those who were congenitally deaf and implanted during infancy, and those who were born with hearing and acquired deafness during early childhood. In addition, children with normal hearing will be studied using binaural CI simulations, in order to understand the extent to which differences between CI users and normal-hearing children are accounted for by the signal degradation known to occur with CIs.
Performance limits imposed by the CI simulations, relative to intact acoustic cues, will provide a benchmark for the best performance to be expected from the CI populations. Ultimately, this work will lead to the development of better speech processors for children who are deaf and, ideally, better outcomes in their ability to hear speech in noise, localize sounds, and learn in complex noisy environments.