There has been a great deal of progress in understanding how complex visual objects, in particular human faces, are processed by the cortex. At the same time, sophisticated neural network models have been developed that perform many of the same tasks as these cortical areas. The aim of this proposal is to extend these simplifying models toward an understanding of the extent to which facial expression processing is "universal" versus experientially mediated. In particular, we focus on elucidating the following issues through modeling: (1) understanding cultural variation in expression recognition, contrasting the influence of other-race effects with that of cultural display rules; (2) understanding how we become face "experts": why is the same region of the fusiform gyrus recruited both for faces and for other visual categories in which we have expertise? (3) understanding the dynamics of facial expertise: how are eye movements planned for efficient feature extraction? In each case, we have developed or will develop a neurocomputational model of the process. Cultural and other-race experience will be modeled through the composition of the internal representations and the training signals of our model. Facial expertise is modeled as a combination of different task requirements (varying the level of categorization required) and length of training. Eye movement modeling will be based on novel and traditional methods for extracting the informative locations on the face for each task. We will develop a theoretical criterion for saccade targets on the face based upon the mutual information between feature values and the categories required by the task. We will also perform behavioral experiments to test the predictions of our model. The project will shed light on the way faces are represented and processed by the brain, and should give insights into the problems underlying deficits in face processing such as prosopagnosia.
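The saccade-target criterion described above can be sketched in code: rank candidate fixation locations on the face by the mutual information between the (discretized) feature value observed at each location and the task's category labels. This is a minimal illustrative sketch, not the proposal's actual implementation; the function names, the plug-in MI estimator, and the toy expression-recognition data below are all assumptions introduced for illustration.

```python
import math
from collections import Counter

def mutual_information(features, labels):
    """Plug-in estimate of I(F; C) in bits from paired samples of a
    discretized feature value F and a category label C."""
    n = len(features)
    pf = Counter(features)            # marginal counts of feature values
    pc = Counter(labels)              # marginal counts of categories
    pfc = Counter(zip(features, labels))  # joint counts
    mi = 0.0
    for (f, c), n_fc in pfc.items():
        # p(f,c) / (p(f) p(c)) simplifies to n_fc * n / (n_f * n_c)
        mi += (n_fc / n) * math.log2(n_fc * n / (pf[f] * pc[c]))
    return mi

def rank_fixation_targets(location_features, labels):
    """Order candidate face locations by how informative the feature
    observed there is about the task category (most informative first)."""
    scores = {loc: mutual_information(vals, labels)
              for loc, vals in location_features.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

On toy data where eye openness perfectly predicts the expression while the mouth feature is constant, the criterion ranks the eye region first, as expected: an informative location carries up to H(C) bits about the category, an uninformative one zero.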