Our laboratory studies the relationship between what is observed in functional neuroimaging studies and the underlying neural dynamics. To do this, we use large-scale computer models of neuronal dynamics that perform either a visual or an auditory object-matching task similar to those designed for PET/fMRI/MEG studies. A review of both models can be found in Horwitz et al. (Phil. Trans. Roy. Soc. B, 2005). Recent efforts have used large-scale, biologically realistic neural models to help understand the neural basis for the patterns of activity observed in both resting-state and task-related functional neuroimaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling (LSNM) in a whole-brain (connectome) framework (see Ulloa and Horwitz, Front. Neuroinformatics, 2016). We used this framework to study the effect of task activity on non-task-related parts of the brain (Ulloa & Horwitz, bioRxiv, 2018). Establishing a connection between intrinsic and task-evoked brain activity is critical because it would provide a way to map task-related brain regions in patients unable to comply with such tasks. A crucial question in this realm is the extent to which execution of a cognitive task affects the intrinsic activity of brain regions not involved in the task. We used our LSNM framework to embed a computational model of visual short-term memory into an empirically derived connectome and simulated a neuroimaging study in which ten subjects performed passive fixation (PF), passive viewing (PV), and delayed match-to-sample (DMS) tasks. From the simulated BOLD fMRI time series we calculated functional connectivity (FC) matrices, which we then used to compute several graph-theoretical measures.
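The FC-and-graph-metric step described above can be sketched in a few lines. This is a minimal illustration, not the published pipeline: the function names, the Pearson-correlation definition of FC, and the binarization threshold are all assumptions for the sketch; actual studies typically explore a range of connection densities and a larger set of graph measures.

```python
import numpy as np

def functional_connectivity(bold):
    """Pearson-correlation FC matrix from a (regions x timepoints) BOLD array."""
    return np.corrcoef(bold)

def simple_graph_measures(fc, threshold=0.3):
    """Two basic node-wise graph measures: binary degree (after
    thresholding |correlation|) and weighted strength.
    The threshold value here is purely illustrative."""
    np.fill_diagonal(fc, 0.0)              # ignore self-connections
    adjacency = np.abs(fc) > threshold     # binarize by |correlation|
    degree = adjacency.sum(axis=1)
    strength = np.abs(fc).sum(axis=1)
    return degree, strength

# Toy example: 5 regions, 200 timepoints of synthetic "BOLD" data
rng = np.random.default_rng(0)
bold = rng.standard_normal((5, 200))
fc = functional_connectivity(bold)
degree, strength = simple_graph_measures(fc)
```

Comparing such measures across the PF, PV, and DMS conditions is then a matter of computing them per condition and contrasting the resulting values.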
After determining that the simulated graph-theoretical measures were largely consistent with experiments, we quantified the differences between the graph metrics of the PF condition and those of the PV and DMS conditions. Another project extended our working memory model to allow the inclusion of distractor stimuli (Liu et al., 2017), which enabled us to implement multiple working memory tasks with the same model and to produce neuronal patterns that match experimental findings. We also generated fMRI BOLD time series from these simulations. Furthermore, we noticed that when the model memorized a list of objects, the first and last items in the sequence were recalled best, which may point to the neural mechanism behind this important psychological phenomenon (the primacy and recency effects). Both the auditory and visual models were incorporated into this framework, enabling us to investigate auditory-visual interactions (Liu et al., in preparation). Invasive electrophysiological and neuroanatomical studies in nonhuman mammalian experimental preparations have helped elucidate the laminar (layer) dependence of neural computations and interregional connections. Noninvasive functional neuroimaging can, in principle, resolve cortical laminae and thus provide insight into human neural computations and interregional connections. However, human neuroimaging data are noisy and difficult to interpret; biologically realistic simulations can aid experimental interpretation by relating the neuroimaging data to simulated neural activity. We (Corbitt et al., Neuroimage, 2018) illustrated the potential of laminar neuroimaging by upgrading our existing large-scale, multiregional neural model that simulates a visual delayed match-to-sample task. The new laminar-based neural unit incorporates spiny stellate, pyramidal, and inhibitory neural populations, which are divided among supragranular, granular, and infragranular laminae.
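The step of generating BOLD time series from simulated neural activity can be sketched as a convolution with a hemodynamic response function (HRF). This is a simplified stand-in, assuming a canonical double-gamma HRF with SPM-like parameters (peak near 6 s, undershoot near 16 s); the models described here and below use more elaborate hemodynamics, including draining-vein effects.

```python
import numpy as np
from math import gamma as gamma_fn

def double_gamma_hrf(dt=0.1, duration=30.0):
    """Canonical double-gamma HRF sampled at resolution dt (seconds).
    Parameter values are illustrative, SPM-like defaults."""
    t = np.arange(0.0, duration, dt)
    def gpdf(t, shape, scale):
        # Gamma probability density, written out to avoid extra dependencies
        return (t ** (shape - 1) * np.exp(-t / scale)) / (gamma_fn(shape) * scale ** shape)
    hrf = gpdf(t, 6, 1.0) - gpdf(t, 16, 1.0) / 6.0   # response minus undershoot
    return hrf / hrf.sum()                            # normalize to unit area

def neural_to_bold(neural, dt=0.1):
    """Convolve a simulated neural (synaptic) activity trace with the HRF
    to obtain a BOLD-like time series of the same length."""
    hrf = double_gamma_hrf(dt)
    return np.convolve(neural, hrf)[: len(neural)]
```

A brief impulse of neural activity fed through `neural_to_bold` yields the familiar delayed, dispersed BOLD response several seconds later.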
We simulated neural activity, which is translated into local field potential-like data used to simulate conventional and laminar fMRI activity. The hemodynamic model that we employed is a modified version of one due to Heinzle et al. (Neuroimage, 2016) that incorporates the effects of draining veins. We showed that the laminar version of the model replicates the findings of the existing model while revealing finer structure in fMRI activity and functional connectivity. We illustrated differences between task and control conditions in the fMRI signal and demonstrated differences in interregional laminar functional connectivity that reflected the underlying connectivity scheme. In another study, we examined how the brain processes complex sounds, specifically harmonics. Many speech sounds and animal vocalizations contain components consisting of a fundamental frequency (F0) and higher harmonics. Animals and humans rapidly detect such specific features of sounds, but the time course of the underlying neural decision processes is largely unknown. To address this, we (Banerjee et al., eNeuro, 2018) computed neuronal response latencies from simultaneously recorded spike trains and local field potentials (LFPs) along the first two stages of cortical sound processing, primary auditory cortex (A1) and lateral belt (LB), of awake, behaving macaques. Two types of response latencies were measured for spike trains as well as LFPs: 1) onset latency, time-locked to the onset of the external auditory stimulus, and 2) discrimination latency, the time taken from stimulus onset to neuronal discrimination between different stimulus categories. Trial-by-trial LFP onset latencies always preceded spike onset latencies. In A1, simple sounds, such as pure tones, yielded shorter spike onset latencies than complex sounds, such as monkey vocalizations (coos, in which F0 was matched to a corresponding pure-tone stimulus).
This trend was reversed in LB, indicating a hierarchical functional organization of auditory cortex in the macaque. LFP discrimination latencies in A1 were always shorter than those in LB, reflecting the serial arrival of stimulus-specific information in these areas. Thus, chronometry on spike-LFP signals revealed some of the effective neural circuitry underlying complex sound discrimination. The auditory ventral stream, the neocortical auditory pattern-recognition pathway, has been proposed to operate as a hierarchical feature network, in which elemental features are hierarchically recombined into increasingly complex sensory representations. To probe the operation of this network, we constructed auditory word-form stimuli that contained equivalent lower-order features (phonemes) but varied in their regularity with respect to the natural statistics of embedded higher-order feature combinations (di-, tri-, and tetraphones). To observe neural sensitivity to phoneme-sequence probabilities (phonotactics), we presented these stimuli to healthy human subjects in a functional MRI (fMRI) scanner (Experiment 1) and to temporal lobe epilepsy patients implanted with intracranial electroencephalography (iEEG) arrays (Experiment 2). Preliminary analyses of the fMRI data found increased signal in anterior-lateral planum temporale (PT) in response to irregular higher-order feature statistics. Preliminary analyses of the iEEG data similarly found an increased high-gamma power response in mid superior temporal gyrus (STG). Together, our findings indicate that the auditory ventral stream encodes sequence event probabilities extracted from the long-term natural statistics of the heard environment. The results support feedback-inclusive models, in which expectancy error is processed early in the ventral stream, at the transition from anterior-lateral PT to mid-STG (DeWitt et al., in preparation).
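The notion of phonotactic regularity used above can be made concrete with an n-gram sketch: a word-form's diphone probability under bigram statistics estimated from a corpus. This is an illustrative model only, assuming add-alpha smoothing; it is not the stimulus-construction procedure of the study, and the tri- and tetraphone cases follow the same pattern with longer n-grams.

```python
from collections import Counter
from math import log

def diphone_logprob(word, corpus, alpha=1.0):
    """Mean log-probability of a word's diphones (adjacent phoneme pairs)
    under add-alpha-smoothed bigram statistics estimated from `corpus`.
    Words are phoneme tuples; lower values indicate less regular
    phonotactics."""
    unigrams, bigrams = Counter(), Counter()
    for w in corpus:
        unigrams.update(w)
        bigrams.update(zip(w, w[1:]))
    vocab = len(unigrams)
    pairs = list(zip(word, word[1:]))
    logp = sum(
        log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab))
        for a, b in pairs
    )
    return logp / len(pairs)

# Toy corpus of phoneme-tuple "words" (hypothetical example)
corpus = [("k", "a", "t"), ("k", "a", "p"), ("t", "a", "k")]
regular = diphone_logprob(("k", "a", "t"), corpus)    # all diphones attested
irregular = diphone_logprob(("t", "k", "a"), corpus)  # ("t","k") unattested
```

Stimuli with matched phonemes but different such scores dissociate lower-order from higher-order feature statistics, which is the contrast the two experiments exploit.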