This project focuses on understanding how the brain constructs networks of interacting regions (i.e., neural networks) to perform cognitive tasks, especially those associated with audition and language, and how these networks are altered in brain disorders. These issues are addressed by combining computational neuroscience techniques with functional neuroimaging data obtained using positron emission tomography (PET) or functional magnetic resonance imaging (fMRI). The network analysis methods allow us to evaluate how brain operations differ between tasks, and between normal and patient populations. This research will allow us to ascertain which networks are dysfunctional and what role neural plasticity plays in enabling compensatory behavior.

We have begun delineating the functional networks involved in the production of spontaneous narrative speech in normal subjects, using PET measurements of regional cerebral blood flow (rCBF), an index of neural activity. We determined the functional connectivity (evaluated as the correlation between rCBF values in different brain areas) of left-hemisphere (LH) brain regions. Our results demonstrate that in normal subjects LH perisylvian regions interact strongly with one another during language production, but not during a task requiring muscle movements and vocalizations similar to those of speech. In patients who stutter, many of these strong functional linkages appear to be absent, suggesting that LH language-production networks are abnormal during stuttering.

One part of the left inferior frontal gyrus important for language function is Broca's area, defined as cytoarchitectonic areas 44 and 45 in the system devised by Brodmann. We used a probabilistic data set of cytoarchitectonically defined Brodmann areas (BA) 44 and 45, which enables us to state, for any given voxel in the stereotactic space of the Talairach atlas, the probability that the voxel lies in BA 44 or BA 45.
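As a rough illustration of the connectivity measure described above — functional connectivity as the correlation of rCBF between pairs of regions across scans — the following is a minimal sketch. The region names and rCBF values are invented for illustration and do not come from the actual study.

```python
import numpy as np

# Hypothetical rCBF values: one row per PET scan, one column per
# left-hemisphere region of interest (region labels are illustrative).
regions = ["IFG", "STG", "SMA", "M1"]
rng = np.random.default_rng(0)
rcbf = rng.normal(loc=50.0, scale=5.0, size=(12, len(regions)))

# Functional connectivity: pairwise Pearson correlation of rCBF
# across scans (rowvar=False treats columns as variables/regions).
fc = np.corrcoef(rcbf, rowvar=False)

for i, a in enumerate(regions):
    for j in range(i + 1, len(regions)):
        print(f"r({a}, {regions[j]}) = {fc[i, j]:+.2f}")
```

In an analysis of this kind, strong positive off-diagonal entries between perisylvian regions would be read as strong functional linkage during the task.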
We applied this probabilistic atlas to PET data acquired during language production tasks (generation of narrative speech) from adults whose parents were deaf and who were fluent in both speech and American Sign Language (ASL). Narrative production was performed in separate PET scans using speech and sign. We found similar activations of the central parts of both Brodmann areas by both speech and sign, suggesting that despite the different modalities in which speech and sign are expressed, the same neural substrates in Broca's region are engaged.

Another major focus of our laboratory is understanding the relationship between what is observed in functional neuroimaging studies and the underlying neural dynamics. To this end, we previously constructed a large-scale computer model of neuronal dynamics that performs a visual object-matching task similar to those designed for PET/fMRI studies. The model is composed of elements corresponding to neuronal assemblies in cerebral cortex; the different element types are based on neuronal response types identified by electrophysiological recordings from monkeys as they perform similar tasks. The model includes an "active" memory network comprising the occipitotemporal visual pathway and a frontal circuit, and it performs a match-to-sample task in which a response is made if the second stimulus matches the first. A PET/fMRI study is simulated by presenting pairs of stimuli to an area of the model that represents the lateral geniculate nucleus. Simulated PET data are computed, as the model performs the tasks, by integrating synaptic activity within the different areas over appropriate time intervals. For the visual model, we obtained simulated PET data similar to those found in actual delayed match-to-sample PET studies, as well as the correct neuronal dynamics in each brain region.
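The core of the simulated-PET computation described above can be sketched as follows. This is a toy version, not the model itself: the region names, activity traces, and time constants are invented, and only the integration step — summing absolute synaptic activity in each region over the scan interval — reflects the method described.

```python
import numpy as np

dt = 0.005                      # integration time step (s), illustrative
t = np.arange(0.0, 60.0, dt)    # one simulated 60-s scan interval

# Hypothetical synaptic-activity traces for two model regions.
activity = {
    "V1-like": 1.0 + 0.5 * np.sin(2 * np.pi * 0.2 * t),
    "IT-like": 0.8 + 0.4 * np.sin(2 * np.pi * 0.1 * t),
}

# Simulated PET value per region: time integral of absolute
# synaptic activity over the scan interval.
simulated_pet = {
    name: float(np.sum(np.abs(a)) * dt)
    for name, a in activity.items()
}
print(simulated_pet)
```

The resulting per-region values play the role of regional PET counts and can be compared across task conditions, as the simulated and empirical data are compared in the study.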
In the last year, we have expanded the model so that it can also simulate auditory processing, allowing us to investigate the neural basis of auditory object processing in the cerebral cortex. We developed a large-scale, neurobiologically realistic network model of auditory pattern recognition that relates the neuronal dynamics of cortical processing of frequency-modulated (FM) sweeps to functional neuroimaging data obtained using PET and fMRI. The areas included in the model extend from primary auditory cortex to prefrontal cortex. The electrical activities of the model's neuronal units were constrained to agree with data from the neurophysiological literature on the perception of FM sweeps. We also conducted an fMRI experiment using stimuli and tasks similar to those used in our simulations. The integrated synaptic activity of the model's neuronal units was used to determine simulated hemodynamic measures, which generally agreed with the experimentally observed fMRI data in the brain regions corresponding to the modules of the model. Our results demonstrate that the model exhibits the salient features of both electrophysiological neuronal activity and fMRI values in agreement with empirically observed data. These findings support our hypotheses concerning how auditory objects are processed by primate neocortex.

Environmentally relevant auditory stimuli are often composed of long-duration tonal patterns (e.g., multisyllabic words, short sentences, melodies). Manipulating such patterns requires working memory to temporarily store the segments of the pattern and integrate them into a percept. To understand the neural basis of how this is accomplished, we extended the model of auditory recognition of short-duration tonal patterns described above by adding a memory buffer and a gating module.
The memory buffer increased the storage capacity of prefrontal cortex (PFC) in the model; the gating module distributed the segments of the input pattern to separate locations of the memory buffer in an orderly fashion, allowing a subsequent comparison of the stored segments against the segments of a second pattern. Current simulations show that the extended model performs both match and mismatch trials with sequences of long-duration tonal patterns in a delayed match-to-sample (DMS) task. The model can also simulate fMRI data by representing the fMRI signal in each module as proportional to the time-integrated synaptic activity. Candidate brain areas for the new modules of the extended model include prefrontal cortex, posterior parietal cortex, and the basal ganglia; the correspondence will be determined by comparing simulated and experimental fMRI data.
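The buffer-and-gating scheme above can be sketched at a purely functional level: a gating step routes successive segments of the sample pattern into ordered buffer slots, and the probe pattern is then compared slot by slot, as in a DMS trial. This is a schematic of the logic only, not the neuronal implementation; the segment labels are invented.

```python
def store(pattern, n_slots):
    """Gate each segment of the sample pattern into its own buffer slot,
    in order of arrival; unused slots remain empty (None)."""
    buffer = [None] * n_slots
    for slot, segment in zip(range(n_slots), pattern):
        buffer[slot] = segment
    return buffer

def compare(buffer, probe):
    """Match decision: the stored segments, in order, must equal the
    segments of the second (probe) pattern."""
    stored = [s for s in buffer if s is not None]
    return stored == list(probe)

sample = ["up-sweep", "down-sweep", "up-sweep"]   # hypothetical FM segments
buffer = store(sample, n_slots=4)
print(compare(buffer, ["up-sweep", "down-sweep", "up-sweep"]))  # match -> True
print(compare(buffer, ["up-sweep", "up-sweep", "up-sweep"]))    # mismatch -> False
```

In the actual model the "slots" are populations of units in the added modules and the match decision emerges from their dynamics; the sketch only shows the orderly routing and slot-wise comparison the text describes.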