This project focuses on understanding how the brain constructs networks of interacting regions (i.e., neural networks) to perform cognitive tasks, especially those associated with audition and language, and how these networks are altered in brain disorders. These issues are addressed by combining computational neuroscience techniques with functional neuroimaging data obtained using positron emission tomography (PET), functional magnetic resonance imaging (fMRI) (reviewed in Horwitz & Glabus, Int. Rev. Neurobiol., 2005, and Horwitz & Husain, Handbook of Brain Connectivity, in press), or magnetoencephalography (MEG). The network analysis methods allow us to evaluate how brain operations differ between tasks and between normal and patient populations. This research allows us to ascertain which networks are dysfunctional and what role neural plasticity plays in enabling compensatory behavior.

One study used fMRI to investigate auditory categorization (Husain et al., Human Brain Mapping, 2006). We compared the brain responses to a category discrimination task (CAT) with those to an auditory discrimination task (AUD) using identical sets of sounds. The stimuli differed along a speech-nonspeech dimension and along a fast-slow temporal-dynamics dimension. Comparing the activation patterns for CAT relative to AUD, we found that a core group of regions beyond the auditory cortices, including the inferior and middle frontal gyri, dorsomedial frontal gyrus, and intraparietal sulcus, was preferentially activated for both familiar (speech) and novel categories. These regions have been shown by others to play a role in working memory.
Processing the temporal aspects of the stimuli had a greater impact on the left lateralization of the categorization network than did other factors, particularly in the inferior frontal gyrus, suggesting that there is no inherent left-hemisphere advantage in the categorical processing of speech stimuli, or in the categorization task itself. We repeated this study using MEG (Luo et al., NeuroImage, 2005), which permits finer temporal (but poorer spatial) resolution than fMRI. Using an induced wavelet transform method, we found in auditory cortex, for both the AUD and CAT conditions, enhanced alpha-band (8-13 Hz) activation during the delay period for all stimulus types. A clear difference between the AUD and CAT conditions was observed for the nonspeech stimuli in auditory areas and for both speech and nonspeech stimuli in frontal areas. The results suggest that alpha-band activation in auditory areas is related to both working memory and categorization for new nonspeech stimuli. The fact that the dissociation between speech and nonspeech occurred in auditory areas, but not frontal areas, points to different categorization mechanisms and networks for newly learned (nonspeech) and natural (speech) categories. Furthermore, a functional connectivity analysis of the fMRI data, which determined how these brain regions interacted with one another, showed that the strengths of the connections between auditory and frontal cortex differed for the processing of speech and nonspeech sounds, suggesting that this difference underlies the results seen in the MEG study (Husain et al., Neuroreport, 2006).

Another major focus of our laboratory is understanding the relationship between what is observed in functional neuroimaging studies and the underlying neural dynamics.
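The idea of comparing interregional connection strengths across conditions can be illustrated with a minimal, correlation-based sketch. This is not the analysis pipeline of the cited study; the function name, the toy time series, and the coupling weights are all hypothetical, and real functional connectivity analyses involve preprocessing and statistical testing omitted here.

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two regional fMRI time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Toy time series standing in for an auditory and a frontal region
# (illustrative only; the published analysis used measured fMRI data).
rng = np.random.default_rng(0)
auditory = rng.standard_normal(100)
frontal_speech = 0.8 * auditory + 0.2 * rng.standard_normal(100)     # strong coupling
frontal_nonspeech = 0.2 * auditory + 0.8 * rng.standard_normal(100)  # weak coupling

r_speech = functional_connectivity(auditory, frontal_speech)
r_nonspeech = functional_connectivity(auditory, frontal_nonspeech)
# Condition-dependent difference in coupling strength:
assert r_speech > r_nonspeech
```

A condition-by-condition difference in such correlations is the kind of effect summarized above as differing auditory-frontal connection strengths for speech versus nonspeech sounds.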
To do this, we had previously constructed a large-scale computer model of neuronal dynamics that performs a visual object-matching task similar to those designed for PET/fMRI studies (reviewed in Horwitz & Glabus, Int. Rev. Neurobiol., 2005). We extended the model so that it could also simulate auditory processing, allowing us to investigate the neural basis of auditory object processing in the cerebral cortex (reviewed in Husain & Horwitz, J. Physiol-Paris, in press). This model relates the neuronal dynamics of cortical auditory processing of spectrotemporal patterns to fMRI data.

Environmentally relevant auditory stimuli are often composed of long-duration tonal patterns (e.g., multisyllabic words, short sentences, melodies). Manipulating such patterns requires working memory to temporarily store the segments of the pattern and integrate them into a percept. To understand the neural basis of how this is accomplished, we extended the model of auditory recognition of short-duration tonal patterns described above by adding a memory buffer and a gating module. The memory buffer increased the storage capacity; the gating module distributed the segments of the input pattern to separate locations of the memory buffer in an orderly fashion, allowing the stored segments to be compared against the segments of a second pattern. Current simulations show that the extended model performs match and mismatch judgments on sequences of long-duration tonal patterns.
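The gating-and-buffer scheme can be sketched abstractly. The actual model implements these modules as populations of interacting neurons with biologically realistic dynamics; the sketch below replaces all of that with plain data structures, and every name in it is illustrative rather than taken from the model.

```python
def gate_into_buffer(segments, buffer_size):
    """Gating module (schematic): route successive pattern segments
    into separate slots of a memory buffer, in order of arrival."""
    buffer = [None] * buffer_size
    for slot, seg in enumerate(segments[:buffer_size]):
        buffer[slot] = seg
    return buffer

def match_patterns(stored, probe):
    """Compare a second pattern segment-by-segment against the
    buffered segments of the first; match only if every slot agrees."""
    return len(stored) == len(probe) and all(
        s == p for s, p in zip(stored, probe)
    )

# Toy "tonal patterns" represented as sequences of segment labels.
first = ["rise", "fall", "steady"]
buf = gate_into_buffer(first, buffer_size=3)
is_match = match_patterns(buf, ["rise", "fall", "steady"])    # match
is_mismatch = match_patterns(buf, ["rise", "steady", "fall"])  # mismatch
```

The point of the sketch is the division of labor: the gating step imposes an order on storage, which is what makes the later slot-by-slot comparison against a second pattern possible.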
We conducted an fMRI experiment using the same stimuli as in the simulations and found areas in prefrontal cortex that are likely candidates for the new modules of the extended model.

Viewing cognitive functions as mediated by networks has begun to play a central role in interpreting neuroscientific data, and studies evaluating interregional functional and effective connectivity have become staples of the neuroimaging literature. As an example, in Mottaghy et al. (NeuroImage, 2006), we examined the interactions of the brain regions involved in mediating intrinsic alertness. We found that the anterior cingulate functions as the central coordinating structure for the right-hemisphere network of intrinsic alertness and that it is modulated mainly by prefrontal and parietal cortex.

The neurobiological substrates of functional and effective connectivity are, however, uncertain. We used our biologically realistic neural models to investigate how neurobiological parameters affect the interregional effective connectivity between fMRI time series as evaluated by Dynamic Causal Modeling (DCM) (Lee et al., NeuroImage, 2006). DCM was applied to data generated by a model implementing a visual delayed match-to-sample task, with the aim of testing the validity of DCM's inferences about connectivity structure and task-dependent modulatory effects in a system whose connectivity is known. This approach revealed strong evidence for those models with correctly specified anatomical connectivity.
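DCM adjudicates between competing connectivity models by comparing their (log) model evidences. A minimal sketch of that comparison step follows; the log-evidence values are invented for illustration, and estimating them from data (the hard part of DCM) is entirely omitted.

```python
import math

def bayes_factor(log_evidence_a, log_evidence_b):
    """Bayes factor for model A over model B, from log model evidences."""
    return math.exp(log_evidence_a - log_evidence_b)

# Hypothetical log evidences for a correctly vs. an incorrectly
# specified anatomical connectivity structure.
log_ev_correct = -120.0
log_ev_wrong = -127.5

bf = bayes_factor(log_ev_correct, log_ev_wrong)
# A log-evidence difference of about 3 or more (Bayes factor > ~20)
# is conventionally read as strong evidence for model A.
```

"Strong evidence for the correctly specified models" in the study corresponds to comparisons of this kind favoring the model whose anatomical connections match those built into the simulated system.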