This project will investigate the representations and processing mechanisms responsible for integrating stimulus information with linguistic knowledge in speech perception. It will concentrate on one form of linguistic knowledge--knowledge of lexical status, that is, which sequences of phones are words--and on one measure of the processing of auditory stimulus information: phonetic categorization. The starting point for this research is the finding that subjects are more likely to report phonetic categorizations that form words than ones that do not (Ganong, 1980). For example, a subject presented with a stimulus that is acoustically ambiguous between "dash" and "tash" is biased to hear the first segment as [d], because "dash" is a word and "tash" is not.

In Part I of this project, a number of standard speech perception paradigms (such as discrimination, selective adaptation, semantic priming, and lexical decision) will be used to determine the stage of perceptual processing at which this lexical effect arises. Of particular interest will be whether the lexical effect reflects sophisticated guessing or a direct influence of lexical knowledge on the interpretation of auditory information.

Part II will investigate the internal representation of words used in speech perception by asking what properties of words trigger the lexical effect. Potentially relevant characteristics include word frequency, morphemic structure, frequency as a sequence of phones in the language, and lexical status itself. These variables will be tested by comparing phonetic categorization along acoustic continua between stimuli drawn from these populations and nonwords with categorization along control continua between nonwords.
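The lexical effect described above is commonly analyzed as a shift in the psychometric function relating position on the acoustic continuum to the proportion of word-consistent responses. The sketch below is a minimal illustration of that idea, not part of the proposed experiments: it models categorization along a hypothetical 7-step "dash"-"tash" continuum as a logistic function whose category boundary is shifted toward the word endpoint by a bias term. The function name, step range, and all parameter values (boundary, slope, bias magnitude) are assumptions chosen for illustration, not fitted to data.

```python
import math

def categorization_prob(step, boundary=3.5, slope=1.5, lexical_bias=0.0):
    """Probability of reporting the word-forming /d/ category at a given
    step along a hypothetical 7-step "dash"-"tash" continuum.

    The psychometric function is a logistic curve; `lexical_bias` shifts
    the category boundary toward the word-forming endpoint. All parameter
    values here are illustrative, not estimates from real listeners.
    """
    return 1.0 / (1.0 + math.exp(slope * (step - (boundary + lexical_bias))))

# Clear endpoints are categorized the same way on both continua; the
# lexical bias matters most for ambiguous tokens near the boundary.
for step in range(1, 8):
    p_word = categorization_prob(step, lexical_bias=0.8)  # "dash"-"tash"
    p_ctrl = categorization_prob(step)                    # nonword control
    print(step, round(p_word, 2), round(p_ctrl, 2))
```

On this toy model, the word continuum and the control continuum agree at the unambiguous endpoints but diverge in the middle, which is the signature pattern the proposed continuum-comparison method would look for.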