Sign language is important to the health of deaf individuals who rely on this mode of communication to access medical, judicial, and other critical information. A priority area for NIDCD is the investigation of the acquisition, processing, and neural underpinnings of languages within the visual-manual modality. This project investigates two modality-specific properties of sign languages (iconicity and the interface between perception and production) in order to address questions of theoretical importance for psycholinguistic theories of language processing and for the functional neuroanatomy of human language.

Aim 1 of the project is to determine the impact of lexical iconicity on language processing and its neural underpinnings. Spoken languages do not exhibit ubiquitous conceptually motivated form-meaning mappings, and therefore this phenomenon is best examined through the study of signed languages. New evidence indicates that iconicity plays a role in the organization of sign phonology, morphological patterns, and semantics. This project uses a new model of iconicity (Structure-Mapping) to test predictions about a) the governing principles and patterns of iconicity, b) how iconicity affects form-based decisions, c) cross-linguistic differences in lexical iconicity and image generation, and d) the role of alignable differences in making comparisons involving iconic signs and referent objects. The project utilizes Event-Related Potentials (ERPs) to assess the neural response to iconic signs, as well as to identify neurophysiological correlates of lexical access in American Sign Language.

Aim 2 of the project is to determine how language production and perception are integrated for visual-manual languages.
For speech, mostly unseen articulators give rise to an acoustic signal that is perceived by both the speaker and the comprehender, whereas for sign the articulators are fully observable, yet the visual signal is perceived only by the comprehender (signers do not watch their hands while signing). These modality differences impact how sensory-motor information is integrated and the role of sensory feedback in determining articulatory targets. This project investigates the nature of internal models for sign production through novel behavioral methods (e.g., close shadowing of oneself vs. another signer; use of visual imagery vs. covert articulation in a sign-learning paradigm), as well as through both ERP and fMRI techniques. Neuroimaging methods are used to test predictions of a dual stream model of sign language processing vs. a direct matching model of action recognition. Overall, the project aims to enhance our understanding of the neurobiology of visual-manual language, which will provide a translational foundation for treating injury to the language system, for employing iconic signs/gestures in therapy, and for diagnosing language impairments in deaf individuals.