This past year we have continued to focus on a major part of long-term memory, termed semantic memory, that is composed of general information, such as facts, ideas, and the meaning of objects and words. We are particularly interested in characterizing the neural substrate mediating object and word meaning and its role in object perception. We are also interested in understanding how abstract knowledge, such as information about social interactions, is represented. Our studies have shown that information about salient properties of an object - such as what it looks like, how it moves, and how it is used - is stored in the sensory and motor systems active when objects are perceived and manipulated. As a result, objects belonging to different categories, such as animate entities (people, animals) and manmade manipulable objects (tools, utensils), are represented in partially distinct neural circuits composed of discrete processing nodes located in multiple cortical areas. These distributed circuits also underpin our ability to understand more abstract events, such as social and mechanical interactions. We have a long-standing interest in the representation of animate entities, both human and non-human. Although it is well documented that the human amygdala responds strongly to viewing human faces, especially those depicting negative emotions, little is known about the extent to which the amygdala also responds to other animate entities - as well as to inanimate objects - and how that response is modulated by the object's perceived affective valence and arousal value. We addressed these issues using fMRI by having subjects perform a repetition detection task on photographs of negative, neutral, and positive faces, animals, and manipulable objects equated for emotional valence and arousal level.
Both the left and right amygdala responded more to animate entities than to manipulable objects, especially for negative stimuli (fearful faces and threatening animals versus weapons) and for neutral stimuli (faces with neutral expressions and neutral animals versus tools). Thus, in the absence of contextual cues, the human amygdala responds to threat associated with some object categories (animate things) but not others (weapons). Although viewing weapons failed to activate the amygdala, it did elicit an enhanced response in dorsal stream regions linked to object action. Accordingly, our findings suggest two circuits underpinning an automatic response to threatening stimuli: an amygdala-based circuit for responding to animate entities, and a cortex-based circuit for responding to manmade, manipulable objects. To evaluate the neural circuits underpinning object processing in greater detail, we developed an fMRI procedure for identifying, in an unbiased manner, different types of object representations throughout the brain. These studies revealed a hierarchically organized processing stream characterized by a progression from perceptual to conceptual representations. Most importantly, we were able to identify brain regions in frontal cortex that were either sharply or broadly tuned to objects. These two types of neural tuning, in turn, underpin two central properties of cognitive systems in general: the ability to make fine distinctions between related concepts or ideas, and the ability to generalize from one concept or idea to another. The conclusions about the nature of neural tuning in the study described above were based on an analysis of a neural memory phenomenon known as repetition suppression: the finding that the magnitude of a neural response decreases when the same stimulus, or a related stimulus, is repeated. This neural phenomenon is also typically associated with improved performance.
For example, subjects name a picture of an object faster the second time it is presented than the first. The neural mechanism underpinning this behavioral facilitation is not well understood: how could a weaker neural response result in improved performance? One possibility is that, although the neural responses are weaker, they become more organized, or synchronized, with repetition, thereby leading to more efficient processing. We obtained evidence supporting this possibility using MEG, a brain recording technique that provides information on the timescale of milliseconds. Subjects were shown pictures of objects and asked to name each one. As predicted, we found that the temporal coordination of neural signals increased in regions known to show repetition suppression. With each repetition of the pictures, neural signals became weaker but more synchronized, while naming times became faster, suggesting a neural mechanism for a powerful form of learning.