Our laboratory studies the relationship between what is observed in functional neuroimaging studies and the underlying neural dynamics. To do this, we use large-scale computer models of neuronal dynamics that perform either a visual or an auditory object-matching task similar to those designed for PET/fMRI/MEG studies; a review of both models can be found in Horwitz & Husain (2007). We also develop computational methods for fMRI and MEG data that allow us to investigate functional brain networks in normal human subjects and in patients with sensory and cognitive processing disorders. Furthermore, we perform neuroimaging experiments to understand the neural basis of auditory and language processing.

Two papers explored the interaction between auditory and visual representations in the brain. Such interactions play a major role in language and communication, particularly in tasks such as naming and reading, in which one must associate particular auditory objects with particular visual objects (or vice versa) in long-term memory. We investigated several aspects of auditory-visual representation interactions using magnetoencephalography (MEG) and a paired-associates memory task. Subjects first learned to associate, on a one-to-one basis, a set of abstract visual objects with a set of abstract auditory objects, a set of visual objects with a second set of visual objects, and a set of auditory objects with a second set of auditory objects. During scanning, each trial of the task consisted of the presentation of a stimulus, a delay period, and a target period during which an auditory and a visual stimulus were presented simultaneously; the subject indicated by button press whether one of these two stimuli was the associate of the initial stimulus. Pillai et al. (2013) utilized the high temporal resolution of MEG to determine whether well-learned crossmodal paired associates produce activation within the associated sensory modality, even in the absence of explicit sensory input in that modality. We found that even during the initial stimulus presentation there was indeed activation within the crossmodal sensory areas, prior to any sensory input in the crossmodal modality. These findings support theories positing that modality-specific regions of cortex are involved in the storage and retrieval of sensory-specific items from long-term memory. Using the same dataset, we also investigated another brain area purported to play a major role in crossmodal associations, the posterior superior temporal sulcus/gyrus (pSTS) (Gilbert et al., 2013). We found that the region we identified in pSTS seemed to respond to dynamic auditory stimuli. We hypothesize that this pSTS region is involved in aspects of the retrieval of stored dynamic auditory stimuli, rather than in multimodal integration per se. These data will be employed in constructing a large-scale neural model of the auditory-visual paired-associates task.

In another study (Husain et al., 2012), we investigated gestural processing. Emblems are meaningful, culturally specific hand gestures that are analogous to words. In this functional magnetic resonance imaging (fMRI) study, we contrasted the processing of emblematic gestures with that of meaningless gestures by prelingually Deaf and hearing participants. Deaf participants, who used American Sign Language, activated bilateral auditory processing and associative areas in the temporal cortex to a greater extent than the hearing participants while processing both types of gestures relative to rest. For the same contrast, the hearing non-signers activated a diverse set of regions, including those implicated in the mirror neuron system, such as the premotor cortex and the inferior parietal lobule.
Further, when contrasting the processing of meaningful with meaningless gestures (both relative to rest), the Deaf participants, but not the hearing participants, showed a greater response in the left angular and supramarginal gyri, regions that play important roles in linguistic processing. These results suggest that whereas the signers interpreted emblems as comparable to words, the non-signers treated emblems as similar to pictorial descriptions of the world and engaged the mirror neuron system.

Analysis of directionally specific interactions (effective connectivity) between brain regions in fMRI data has proliferated. Our section has developed a method for evaluating fMRI effective connectivity based on switching linear dynamic systems (SLDS), a signal processing approach that represents the behavior of a nonlinear dynamical system by switching over time among a set of linear dynamical models (Smith et al., 2010). SLDS has many of the advantages of other popular effective connectivity methods (e.g., it deals explicitly with the temporal nature of fMRI time series) but also overcomes some of their limitations: it incorporates an effective means of measuring overall model adequacy, and it includes a procedure that can identify important regions left out of incomplete models. In the past year, we demonstrated using simulated data that SLDS outperforms several other effective connectivity methods in terms of parameter estimation accuracy (Smith et al., 2013).

Our laboratory has also performed studies to elucidate the neural basis of speech production and its disorders.
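The core idea behind SLDS, a system that evolves under one of several linear dynamics matrices while a discrete Markov state selects which matrix is active at each time step, can be illustrated with a minimal generative sketch. This is not the model of Smith et al. (2010); the two "regimes", the coupling matrices, and all parameter values below are illustrative assumptions only.

```python
# Minimal generative sketch of a switching linear dynamic system (SLDS).
# Hypothetical example: two latent "regions" whose coupling switches
# between two regimes according to a discrete Markov chain.
import numpy as np

rng = np.random.default_rng(0)

# One linear dynamics matrix per discrete regime (values are made up).
A = [np.array([[0.9, 0.1],
               [0.0, 0.8]]),          # regime 0: weak directed coupling
     np.array([[0.7, -0.4],
               [0.5,  0.7]])]         # regime 1: stronger interaction

P = np.array([[0.95, 0.05],           # Markov transition probabilities
              [0.10, 0.90]])          # between the two regimes

T = 200
x = np.zeros((T, 2))                  # continuous latent state (2 regions)
z = np.zeros(T, dtype=int)            # discrete switching state
x[0] = rng.normal(size=2)

for t in range(1, T):
    z[t] = rng.choice(2, p=P[z[t - 1]])                    # switch regime
    x[t] = A[z[t]] @ x[t - 1] + 0.1 * rng.normal(size=2)   # linear dynamics

y = x + 0.05 * rng.normal(size=x.shape)  # noisy observations (cf. BOLD)
print(y.shape)   # (200, 2)
```

Fitting such a model to data would invert this generative process, estimating the per-regime matrices and the switching sequence; the estimated matrices are what carry the effective-connectivity interpretation.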
We investigated the extent of endogenous dopamine release in the striatum and its influence on the organization of functional striatal speech networks during production of meaningful English sentences, using positron emission tomography (PET) with the dopamine D2/D3 receptor radioligand 11C-raclopride in combination with fMRI functional connectivity analysis and diffusion tensor imaging (DTI) tractography of striatal structural connectivity. Applying these three techniques to the same subjects permitted us to examine the extent of dopaminergic modulatory influences on striatal network organization. Our paper (Simonyan et al., 2013) presented the first demonstration of striatal dopaminergic transmission during normal speech production in healthy humans. We found that during sentence production endogenous dopamine was released in the ventromedial portion of the dorsal striatum, in both its associative and sensorimotor functional divisions. In the associative striatum, speech-induced dopamine release was significantly related to neural activity and influenced the left-hemispheric lateralization of striatal functional networks; in contrast, there were no significant effects of endogenous dopamine release on the lateralization of striatal structural networks. Our study provided the first evidence for the neurochemical underpinnings of the hemispheric dominance of human speech and language control.