The goal of this research is to understand how we see what we see: how does the brain analyze the light falling on the retina to encode a world full of objects, people, and things? During the past year we have focused on 1) the relationship between bottom-up (sensory-driven) and top-down (internally driven) processing in the brain, focusing on the retrieval of information from memory, and 2) the perception of complex visual stimuli, focusing most recently on visual scenes.

1) Bottom-up versus top-down processing (Protocol 93-M-0170, NCT00001360)

Our visual perception is the product of an interaction between bottom-up sensory information and top-down, internally generated signals that guide interpretation of the input and reflect our prior knowledge and intent. We have previously demonstrated that during visual mental imagery, the patterns of response observed in visual cortex can be used to decode what object a person is imagining. Further, those patterns are similar to those observed during perception of the same objects. Over the past year we have been investigating whether such recapitulation of visual activity in cortex depends on whether retrieval is from short- or long-term memory, and on the involvement of the hippocampus. We presented participants with images of everyday objects (e.g., helmet, lamp, couch) and asked them to remember those items as vividly as possible. After a delay of either 30 minutes (short-term) or one day (long-term), we measured brain activity using functional MRI as participants were cued to imagine specific objects they had learned. First, replicating our prior work, we found that after short delays we could decode from activity in visual cortex the specific object participants were imagining. The same was true after a delay of one day, suggesting that the recapitulation of activity is not specific to retrieval from short-term memory.
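The report does not specify the decoding method, but the kind of pattern-decoding analysis described above can be sketched as a simple correlation-based classifier with leave-one-run-out cross-validation. The object labels, voxel counts, run structure, and noise level below are illustrative assumptions, not the actual study design.

```python
import numpy as np

# Illustrative sketch (not the study's actual pipeline): each "object"
# has a stable underlying voxel pattern; each scan run adds noise.
rng = np.random.default_rng(0)
objects = ["helmet", "lamp", "couch"]   # assumed labels, from the example stimuli
n_voxels, n_runs = 50, 6                # assumed dimensions

templates = {obj: rng.normal(size=n_voxels) for obj in objects}
data = {(obj, run): templates[obj] + 0.8 * rng.normal(size=n_voxels)
        for obj in objects for run in range(n_runs)}

def decode_accuracy(data, objects, n_runs):
    """Hold out one run at a time; classify its patterns by correlation
    with each object's mean pattern from the remaining runs."""
    correct = total = 0
    for test_run in range(n_runs):
        centroids = {
            obj: np.mean([data[(obj, r)] for r in range(n_runs)
                          if r != test_run], axis=0)
            for obj in objects
        }
        for obj in objects:
            test = data[(obj, test_run)]
            corrs = {o: np.corrcoef(test, c)[0, 1]
                     for o, c in centroids.items()}
            correct += max(corrs, key=corrs.get) == obj
            total += 1
    return correct / total

acc = decode_accuracy(data, objects, n_runs)
print(f"decoding accuracy: {acc:.2f}  (chance = {1/len(objects):.2f})")
```

Decoding is considered successful when cross-validated accuracy reliably exceeds chance (here, one in three); the same logic applies whether the patterns come from visual cortex or hippocampus.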
Second, in the hippocampus, we found that the patterns of response could also be used to decode the object being retrieved, but only after one day and not after a short delay. This latter result suggests that the representation of specific information in the hippocampus depends on a time-consuming process.

2) Perception of real-world scenes (Protocol 93-M-0170, NCT00001360)

Real-world scenes are incredibly complex and heterogeneous, yet we are able to identify and categorize them effortlessly. While prior studies have identified several brain regions that appear to be specialized for scene processing, the precise roles of these different regions remain unclear. Building on a general framework for visual processing that we proposed recently, we have been investigating the extent to which the two major scene-selective regions, in lateral and ventral occipitotemporal cortex, show different retinotopic properties reflecting a large-scale architecture of visual cortex (Silson et al., 2015). By presenting fragments of scenes at specific portions of the visual field, we have demonstrated that the lateral scene-selective region is biased toward the lower visual field, while the ventral scene-selective region is biased toward the upper visual field. These results highlight the importance of elevation as an organizing principle in high-level visual cortex and suggest that these two scene regions may reflect separate category-selective representations of distinct portions of the visual field rather than separate stages in a serial, hierarchical pathway.

Elucidating how the brain enables us to recognize objects, scenes, faces, and bodies provides important insights into the nature of our internal representations of the world around us. Understanding these representations is vital in trying to determine the underlying deficits in many mental health and neurological disorders.