It is now possible to design computers that perform millions of calculations per second, carry out useful tasks, and recognize objects. Despite these advances, however, computers fall short of emulating brain function in the domain of flexibility. One of the remarkable aspects of the brain is that it interprets stimuli and organizes its actions in a highly situation-dependent and flexible manner. With regard to visual stimuli, the brain is able to learn the structure and significance of a large number of stimulus categories. For some categories, such as faces, its performance is utterly remarkable: we can readily discriminate between and recognize thousands of different individuals based on very subtle differences in the face components and their geometrical configuration. This is all the more impressive given that each time we see a given face, its image on our retina differs from the last time. Two different individuals seen from the same distance and under the same lighting conditions may cast retinal images that are more similar to each other, at least at a coarse level, than two images of the same individual seen under different conditions. Nonetheless, we are able to fluidly and effortlessly recognize people, objects, landmarks, and scenes based on a single glance. How does this ability come about? One answer to this question relates to the manner in which complex visual stimuli are encoded in the brain. This topic has been a central focus of our research. In the past year, we have published papers related to the neural representation of stimuli in object-encoding regions V4 and TE of the visual cortex. We have previously shown that individual faces are systematically encoded based on their distinctiveness relative to an average face, so-called norm-based encoding. In other words, the brain encodes a given face according to how its structure differs from that of a mean, prototypical face.
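To make the idea of norm-based encoding concrete, the following is a minimal sketch in Python. The two-dimensional feature space, the face vectors, and all numbers are hypothetical illustrations, not data from the studies described: the point is only that each face is represented by its deviation from the population average.

```python
import math

# Toy "face space": each face is a vector of facial measurements
# (here, two hypothetical features such as eye spacing and nose length).
faces = {
    "A": (1.2, 0.9),
    "B": (0.8, 1.1),
    "C": (1.0, 1.0),
}

# The norm is the average face across the known population.
norm = tuple(sum(f[i] for f in faces.values()) / len(faces) for i in range(2))

def identity_code(face):
    """Norm-based encoding: a face is represented by its deviation from
    the norm -- a direction (which features differ from the average) and
    a magnitude (how distinctive the face is)."""
    diff = tuple(face[i] - norm[i] for i in range(2))
    distinctiveness = math.hypot(*diff)
    return diff, distinctiveness
```

Under this scheme the prototypical face itself (face "C", which equals the norm) carries zero distinctiveness, while more unusual faces lie farther from the norm and are, on this account, easier to tell apart.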
We first provided evidence for this means of encoding by conducting human behavioral experiments involving visual adaptation. In those experiments, the presentation of one face for a few seconds altered the way a subsequently presented face was perceived. The misperceptions closely matched the expectations of norm-based encoding. This was strengthened by more recent neurophysiological recordings in nonhuman primates showing that neurons in the inferotemporal cortex adjust their firing rate based on the relative difference of a face from the average of many faces. Both lines of research point to the conclusion that the brain encodes face identity systematically, and relative to a prototypical average. A second important feature of face perception is the ability to learn and remember new faces, a process that undoubtedly involves changes in the brain. Unlike some skills (e.g. language acquisition), our capacity to learn new faces remains strong into adulthood. This implies that the neural machinery underlying face recognition remains, in a sense, plastic. How does experience modify neural responses? During the past year we have begun to approach this problem by monitoring the tuning functions of individual neurons over periods of days and weeks. While recording from single cells is a routine process, monitoring them for extended periods of time poses enormous challenges. We have overcome these challenges by developing, with the help of an outside collaborator, a novel inertialess microelectrode bundle array, which maintains close proximity to individual neurons by moving with the small movements of the brain. The advantage of this approach is that, by recording the responses of isolated neurons over many days, the effects of visual learning on neural selectivity can be assessed. In the laboratory, nonhuman primates are presently being trained to learn new categories of stimuli, including novel human and simian faces, as neural responses are monitored.
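The adaptation aftereffect described above can be sketched with a toy simulation. This is a hedged illustration, not the authors' model: the shift rule, the two-dimensional face space, and all numbers are assumptions chosen only to show why norm-based encoding predicts the observed misperceptions.

```python
# Toy simulation of the face-adaptation aftereffect under norm-based
# encoding (illustrative assumptions throughout; not data from the study).
norm = [0.0, 0.0]      # average face in a hypothetical 2-D face space
adaptor = [1.0, 0.5]   # face viewed for a few seconds before the test face

def adapt(norm, adaptor, strength=0.3):
    """Assume adaptation transiently shifts the norm toward the adapting
    face; `strength` is an arbitrary illustrative parameter."""
    return [n + strength * (a - n) for n, a in zip(norm, adaptor)]

def perceived(face, norm):
    """Under norm-based encoding, a face is perceived according to its
    offset from the current norm."""
    return [f - n for f, n in zip(face, norm)]

shifted = adapt(norm, adaptor)
# After adaptation, the physically average face no longer sits at the
# norm: it appears biased away from the adaptor, toward its "anti-face".
bias = perceived([0.0, 0.0], shifted)
```

In this sketch, the same physical stimulus is perceived differently before and after adaptation, which is the signature the behavioral experiments measured and the single-neuron recordings later corroborated.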
The results from this study will shed light on how changes in the selectivity of visual neurons enable us to learn new stimuli.