Learning is the sine qua non of survival for all but the simplest creatures. Learning, of course, involves changes in the brain, with the most obvious sites being the synapses that facilitate communication between neurons. One can thus study learning at the microscopic level, examining for example the mechanisms by which synapses are strengthened and weakened, or at the level of the system as a whole. Plasticity is the foundation of brain development, driving an obsession with learning that preoccupies virtually every moment of an infant's life. Learning must also occur in the adult, in a way that is more limited than in the infant but in many ways still quite rich. For example, upon meeting somebody for the first time, one learns not only their basic facial features but also how they use their face to express themselves. With time, this information is used to form a deeper understanding of the person: their affect, their sensitivities, and their relationship to the observer. This is a central part of adult life, and we would be severely impaired without the ability to remember such things. In holding onto such information, the brain must ultimately modify synaptic strengths, with some of the modification probably taking place in what we would casually call high-level visual cortex. However, it is very difficult to even begin to understand how this pattern of learning is expressed across different cortical areas. Millions or billions of synapses may be affected during the learning of a face. Where are these changes taking place, and how is it that changing one set of synaptic weights does not severely disrupt other previously learned items? In the past year, we have used a new microwire bundle array to systematically follow the activity of neurons over much longer time scales than previously possible.
In a methods paper (McMahon et al., J Neurophysiol (2014)), we described the basic structure of the microwire array and outlined some of its experimental advantages. One such advantage is the capacity to examine the responses of single neurons longitudinally: not only over the few hours of a session, but also between sessions, and even across weeks and months. In a paper published last year, we measured the responses of multiple neurons within fMRI-defined face patches (McMahon et al., Proc Natl Acad Sci (2014)). We presented a large number of static stimuli in order to establish the selectivity of individual neurons, a fingerprint of sorts. Then we returned on subsequent days to the same electrodes (which remained permanently implanted in position) and found that the neurons were still there and that they maintained a virtually identical fingerprint of stimulus responses. In fact, over periods of months, and even exceeding one year, neurons in the recorded area maintained their precise pattern of stimulus response selectivity. This finding is unexpected because it suggests that neurons in a region of the brain ostensibly dedicated to faces do not update or adjust their activity with natural experience. In another paper (McMahon et al., J Neurosci (2015)), we used these arrays to track responses to more complex visual stimuli, in the form of socially rich videos. Being able to study individual neurons for several weeks led us to the surprising conclusion that a population of neurons that are all nominally face cells (in that they respond more to faces than to other visual objects) is almost completely decorrelated in its responses during viewing of naturalistic stimuli. In other words, a single categorical label, while useful in some contexts, falls short of explaining the role of individual neurons during natural vision.
In another project, presented at the Society for Neuroscience meeting last year (Jones et al., SFN Abstr (2014)) and presently in manuscript form, we used these longitudinal electrodes to demonstrate that a particular type of identity selectivity, called norm-based tuning, emerges immediately upon first viewing of a stimulus set and cannot be ascribed to neural adaptation within an individual session. A second subproject, also concerned with high-level social representation in the primate brain, involves the development of the marmoset monkey as a model for visual neuroscience. The marmoset is an up-and-coming animal model that has gained momentum recently owing to its many experimental advantages, a topic summarized in two of our recent reviews (Mitchell et al., Neurosci Res (2015); Belmonte et al., Neuron (2015)). In developing this model in collaboration with Dr. Afonso Silva, we have recently published two papers outlining the basic visual fMRI responses in the marmoset (Hung et al., NeuroImage (2015)) and the face-selective system in marmoset cortex (Hung et al., J Neurosci (2015)). The latter study used a combination of fMRI and electrocorticography to demonstrate for the first time that the basic network of face-responsive cortex is present in this species. Current work in the lab, to be presented at this year's Society for Neuroscience meeting (Hung et al., SFN Abstr (2015); Day Cooney et al., SFN Abstr (2015)), aims to understand the correspondence between the face patches observed in the marmoset and those previously identified in the macaque and human. The broad potential for studying faces and higher-level social behavior in the marmoset offers a new perspective on primate social neuroscience, one that can inform our understanding of the failures in social perception that often accompany neuropsychiatric disorders.