Human listeners can learn to recognize unfamiliar phonetic contrasts in the course of learning to understand foreign languages, learning to use a cochlear implant, and learning to better understand an unfamiliar talker or computer speech synthesizer. However, little is yet known about the cognitive mechanisms that underlie such perceptual learning of speech sounds. One frequently used conceptualization of phonetic learning is based on the redistribution of selective attention among acoustic features in the speech signal. Allocating more attention to cues that are highly diagnostic, and withdrawing attention from unreliable or misleading cues, should improve categorization accuracy and response time. In this way, training may also serve to develop new, more diagnostic perceptual features, improving the effectiveness of limited cognitive resources such as working memory. Four experiments will be conducted using a combination of speeded classification with conflicting-cue stimuli and Garner interference tasks to investigate: (A) how identification training can induce listeners to ignore a previously attended cue (Experiment 1) and to attend to a previously unattended cue (Experiment 2); and (B) the degree and manner in which these two kinds of changes in the distribution of selective attention each affect the allocation of working memory resources (Experiments 3 and 4). This research will provide new basic data for understanding the cognitive mechanisms that underlie human speech recognition. The results will also have implications for more clinically oriented studies investigating the re-acquisition of speech perception following cochlear implantation; the role of attention and working memory factors in specific language impairment (SLI), dyslexia, and normal aging; and the relationship between mechanisms of first and second language learning.