In this competing renewal, we propose a coordinated series of multimodal experiments to study the properties of a putative brain network that rapidly detects animate agents on the basis of their body form and/or motion and infers an agent's goals and intentions from an analysis of its motion and identity. These competencies are essential for successful social processing. Because dysfunction of social processing is a core deficit in neurodevelopmental and psychiatric disorders as diverse as autism spectrum disorder, Williams syndrome, social anxiety disorder, and schizophrenia, we believe that elucidating the properties of this network may lead to a deeper understanding of these disorders and advance strategies for treatment. We will use a multimodal approach consisting of functional MRI, EEG/ERP, and direct cortical stimulation to characterize the location, timing, and covariation of neural activity in the widespread brain regions presumed to comprise this network. Advanced directed-connectivity and decoding analyses will be used to trace the flow of information among the network's nodes. These studies will benefit from our opportunity to stimulate and record directly from subdural electrodes implanted in the brains of patients in the Yale Epilepsy Surgery Program.

This proposal comprises four specific aims. In the first aim, we will investigate the timing and directed connectivity of VOTC processing of faces and body forms. In the second aim, we will investigate the timing and directed connectivity of LOTC processing underlying animacy detection and intention attribution. In the third aim, we will investigate changes in directed connectivity between the pSTS and FG as a function of the form and motion presented by an animate agent. In the fourth aim, we will investigate access to semantic information about animate agents in the VATL.