The ability to comprehend and produce language stands as a defining characteristic of human cognition. It is this ability that enables the transfer of knowledge and culture within human society. A proper characterization of the human capacity for language is required for the development of interventions to remediate individuals who have failed to achieve, or who have lost competence in, the full range of language behaviors (e.g., effective interpersonal communication, reading, and writing). Cognitive psychologists have made great strides in understanding the functional and neural mechanisms underlying the use of spoken language, and these findings have led to a wide range of effective educational and clinical programs for improving language behaviors. However, equivalent knowledge in the domain of signed languages is lacking. The long-term objective of this research is to develop a comprehensive neurocognitive model of sign language processing derived from behavioral and functional brain-imaging studies. Such a model would have practical educational value: it would guide the development of effective strategies and programs targeted toward improving specific language behaviors in deaf individuals from a variety of language backgrounds. It would also benefit basic science by specifying how a non-speech-based human communication system interfaces with sensory, motor, and perceptual cognitive systems, and by explaining how sensory deprivation and early language experience impact the development of neural systems. Finally, the model would benefit cognitive scientists interested in models of the functional neural specialization underlying human language by providing an explicit account of how language may arise from the codification of manual-gestural human actions.
The development of a neurocognitive model of sign language processing will require knowledge from several fronts, as sign language processing lies at the intersection of the linguistic, motor, and visual processing domains. We seek to identify the representations and processes that underlie sign language recognition and perception, and to understand how these are similar to, or differ from, the representations and processes used in the service of spoken language, human action/motor processing, and visual object/action processing. The current application builds upon findings from our previous grant and continues to ask basic questions concerning the influence of form-based "phonological" properties on the lexical recognition and production of American Sign Language. In addition, our previous findings force us to further consider the neural and functional relationships between sign processing and human action processing, and the neural relationships between language and manual-gestural abilities. Finally, we examine how visual perceptual and attentional factors may mediate sign language recognition. The proposed tests can be conducted with native and non-native deaf signers as well as with signing and sign-naive hearing subjects, allowing us to determine the degree to which the specific effects observed reflect linguistic experience, auditory deprivation, or more general, language-independent perceptual processes.