Our laboratory studies the relationship between what is observed in functional neuroimaging studies and the underlying neural dynamics. To do this, we use large-scale computer models of neuronal dynamics that perform either a visual or an auditory object-matching task similar to those designed for PET/fMRI/MEG studies. A review of both models can be found in Horwitz & Husain (2007). We also develop computational methods for fMRI and MEG data that allow us to investigate functional brain networks in normal human subjects and in patients with sensory and cognitive processing disorders.

Network analysis can be quite complicated, and interpreting the results of such analyses therefore requires great care. In the last year we published several articles that attempted to elucidate various aspects of brain network analysis. In Banerjee and Horwitz (2013), we showed that network analysis has much in common with the kind of probability theory employed in quantum physics. In Horwitz et al. (2013) and Horwitz (in press), we pointed out that large-scale neural modeling can be utilized to help interpret brain network analyses in normal subjects and in patients with brain disorders.

We also perform neuroimaging experiments to understand the neural basis of auditory and language processing. In Smith et al. (2013), we presented a novel longitudinal fMRI paradigm to identify the shared and unique brain regions underlying non-semantic abstract audio-visual (AV) memory versus naming. Participants were trained to associate novel AV stimulus pairs containing hidden linguistic content. Half of the pairs were distorted images of animals paired with sine-wave speech versions of their names. Images and sounds were distorted in such a way that their linguistic content became easily recognizable only after participants were made aware of its existence.
Memory for the pairings was tested by presenting an AV pair and asking subjects to verify whether the two stimuli formed a learned pairing. After memory testing, the hidden linguistic content was revealed, and participants were tested again, but this time they could perform the task by naming the picture. We found substantial overlap among the regions involved in recognition of non-linguistic sensory memory. Contrasts between sessions identified the left angular gyrus and middle temporal gyrus as key additional players in the naming network. Left inferior frontal regions participated in both naming and non-linguistic AV memory, suggesting that, contrary to previous proposals, this region supports AV memory independent of phonological content. Functional connectivity of the angular gyrus with the left inferior frontal gyrus and left middle temporal gyrus increased when the AV task was performed as naming. Our results are consistent with the hypothesis that, at this level of spatial resolution, the regions that facilitate non-linguistic AV associations are a subset of those that facilitate naming, although reorganized into distinct networks.

Our laboratory has also performed studies to elucidate the neural basis of speech production and its disorders. Last year, we published a paper (Simonyan et al., 2013) that investigated the extent of endogenous dopamine release in the striatum and its influence on the organization of functional striatal speech networks during production of meaningful English sentences, using a combination of positron emission tomography (PET) with the dopamine D2/D3 receptor radioligand [11C]raclopride and fMRI functional and structural connectivity analyses (striatal structural connectivity was measured using diffusion tensor imaging (DTI) tractography).
Our paper presented the first demonstration of striatal dopaminergic transmission during normal speech production in healthy humans and provided the first evidence for the neurochemical underpinnings of the hemispheric dominance of human speech and language control. During the current year, we published a commentary that strongly supported the continued use of PET to investigate normal brain functioning (Horwitz and Simonyan, 2014).

We also have examined how the brain processes complex sounds, specifically harmonics. Many speech sounds and animal vocalizations contain components consisting of a fundamental frequency (F0) and higher harmonics. In this study (Kikuchi et al., 2014), we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations (coos). The latter consisted of complex-tone segments in which the F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were 10 to 15 ms longer in LB than in A1, as might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was 10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB, and their interval distributions in both areas peaked at the perfect fifth (interval ratio of 3:2). In A1, however, these peaks appeared only in modest numbers and only during a late response period, whereas in LB harmonically related intervals (the perfect fifth and the octave) were commonly present during both early and late response periods.
Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB.
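The harmonic relationships referred to above follow directly from the structure of a complex tone: its components are integer multiples of F0, so adjacent harmonics stand in fixed frequency ratios, with the first and second harmonics an octave apart (2:1) and the second and third a perfect fifth apart (3:2). A minimal sketch, assuming an illustrative F0 of 200 Hz (not a value from the study):

```python
from fractions import Fraction

def harmonic_series(f0, n_harmonics=4):
    """Return the first n_harmonics component frequencies of a complex tone."""
    return [f0 * k for k in range(1, n_harmonics + 1)]

# Illustrative fundamental; any F0 yields the same interval ratios.
freqs = harmonic_series(200.0)  # [200.0, 400.0, 600.0, 800.0]

octave = Fraction(int(freqs[1]), int(freqs[0]))         # 2nd vs 1st harmonic
perfect_fifth = Fraction(int(freqs[2]), int(freqs[1]))  # 3rd vs 2nd harmonic

print(octave)         # 2:1 ratio
print(perfect_fifth)  # 3:2 ratio
```

Because the ratios are independent of F0, a neuron tuned to harmonically related intervals would respond consistently across pitch-shifted versions of the same call, which is what the pitch-shifted coo stimuli probed.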