Understanding speech is a remarkable ability: listeners must extract asynchronously distributed phonological features from a transient and noisy signal that unfolds at speeds not under their control. Despite the ubiquitous success of spoken language processing in the general population, 5% of first graders enter school with some type of speech sound disorder that cannot be accounted for by hearing impairment. In addition, even once the language system has been successfully acquired, it remains susceptible to insult from injury or stroke, leaving approximately 1 million adults in the U.S. with some form of aphasia. One of the biggest challenges speech understanding must overcome is that speakers differ in how exactly they realize the same intended sound, so that one speaker's 'peach' might be physically indistinguishable from another speaker's 'beach'. Research suggests that we overcome this challenge by rapidly adapting to speaker-specific pronunciations. A more complete understanding of the perceptual and computational capacities underlying these adaptation processes is essential to advancing our understanding of both typical and atypical language acquisition and processing. The aim of the proposed project is to develop and test an explicit computational model of the cognitive processes that underlie our ability to rapidly adjust to speaker-specific pronunciations ('accents'). To this end, we investigate how listeners integrate information about speakers' accents while listening to their speech. We propose that listeners weigh previous experience with other speakers against the percepts received from the current speaker. We investigate adaptation to a single speaker, adaptation to multiple speakers, and generalization from previously encountered speakers to novel speakers. The computational framework we pursue also predicts that alternative explanations for unexpected pronunciations (such as chewing on a piece of gum), if present, can block adaptation effects.
We investigate and model such 'explaining away' of unusual pronunciations. To collect large amounts of adaptation data in a fraction of the time previously required for experiments on speech perception, we have developed a web-based experimental interface that allows us to reach hundreds of thousands of participants.
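The proposed weighting of prior experience against percepts from the current speaker can be illustrated with a minimal sketch. The code below is not from the proposal; it assumes, for illustration only, that the listener's belief about a single speaker-specific phonetic cue (here, mean voice onset time for /p/, with invented parameter values) is tracked with a conjugate normal-normal Bayesian update, so that each noisy percept pulls the belief away from the cross-speaker prior in proportion to their relative precisions:

```python
# Illustrative sketch (assumptions, not the proposal's actual model):
# a normal prior over a speaker's mean cue value, updated by noisy percepts.

def update_belief(prior_mean, prior_var, percept, percept_var):
    """One conjugate normal-normal update: weigh prior experience
    against a single percept by their respective precisions."""
    posterior_precision = 1.0 / prior_var + 1.0 / percept_var
    posterior_var = 1.0 / posterior_precision
    posterior_mean = posterior_var * (prior_mean / prior_var
                                      + percept / percept_var)
    return posterior_mean, posterior_var

# Hypothetical prior from experience with previous speakers:
# mean voice onset time for /p/ around 60 ms, with some uncertainty.
mean, var = 60.0, 100.0

# Percepts from a novel speaker who produces unusually short VOTs.
for vot in [40.0, 42.0, 38.0, 41.0]:
    mean, var = update_belief(mean, var, vot, percept_var=25.0)

# As evidence accumulates, the belief shifts toward the new speaker's
# pronunciations and the listener's uncertainty (variance) shrinks.
print(mean, var)
```

On this toy account, adaptation is gradual: early percepts are heavily discounted toward the cross-speaker prior, while later ones dominate as speaker-specific evidence accumulates.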