Gravity plays a critical role in shaping our experience of the world, influencing both sensory perception and motor planning at fundamental levels. Understanding how vestibular information, which signals the orientation of the self relative to gravity, can be used to create a stable gravity-centered representation of the visual scene is thus important for understanding perception and action. Surprisingly little is known about where and how the brain may use a vestibular estimate of gravity to transform visual signals first encoded in eye-centered (retinal) coordinates into the gravity-centered representation we perceive. The proposed experiments aim to close this knowledge gap. Two lines of research point to two probable loci. The first is the caudal intraparietal area (CIP), which is known to encode a high-level visual representation of object orientation. The second is the visual posterior sylvian area (VPS), which is known to respond to both vestibular and visual stimulation, and which clinical reports suggest may be involved in creating a gravity-centered visual representation. I hypothesize that the transformation occurs progressively, beginning with an egocentric representation in V3A (CIP's main visual input) and culminating in a primarily gravity-centered representation: V3A (egocentric) → CIP → VPS (mostly gravity-centered). It is thus expected that V3A represents object orientation in strictly egocentric (head- and/or eye-centered) coordinates, and that the computations implementing the transformation occur at the level of CIP and/or VPS. In Aim 1, the visual orientation selectivity of single neurons will be recorded with the monkey in multiple spatial orientations (rolled left/right ear down). This experiment dissociates egocentric (eye/head) from gravity-centered representations, allowing the reference frame in which single neurons encode object orientation to be determined.
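The dissociation logic of Aim 1 can be illustrated with a minimal analysis sketch (all function names, tuning models, and the 10° tolerance are illustrative assumptions, not the proposal's actual analysis): if a neuron's preferred orientation, measured in eye coordinates, is unchanged when the animal is rolled, the neuron is egocentric; if the measured preference shifts by the roll angle, the neuron is gravity-centered.

```python
import numpy as np

def preferred_orientation(orientations_deg, responses):
    """Preferred orientation (deg, period 180) via the doubled-angle vector average."""
    ang = np.deg2rad(2.0 * np.asarray(orientations_deg))
    v = np.sum(np.asarray(responses) * np.exp(1j * ang))
    return (np.rad2deg(np.angle(v)) / 2.0) % 180.0

def classify_frame(pref_upright_deg, pref_rolled_deg, roll_deg, tol=10.0):
    """Classify a neuron by how its preferred orientation (measured in eye
    coordinates) shifts when the animal is rolled by roll_deg."""
    shift = (pref_rolled_deg - pref_upright_deg + 90.0) % 180.0 - 90.0
    if abs(shift) < tol:
        return "egocentric"        # tuning moves with the eye/head: no measured shift
    if abs(abs(shift) - abs(roll_deg)) < tol:
        return "gravity-centered"  # tuning stays anchored to the world
    return "intermediate"
```

Partial shifts (between zero and the roll angle) would indicate an intermediate reference frame, consistent with a progressive transformation across areas.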
Even if the transformation to a gravity-centered representation is incomplete at the level of single cells in CIP and/or VPS, it is hypothesized that population activity in these areas can represent object orientation relative to gravity. This will be tested using neural network modeling and the framework of probabilistic population codes to develop a neural theory of how a gravity-centered representation of object orientation is achieved. In Aim 2, the role of the vestibular system in implementing this transformation will be tested directly by performing a bilateral labyrinthectomy and repeating the experiments from Aim 1. Since electrical stimulation of vestibular afferents can change perceived visual object orientation, the elimination of vestibular signals is expected to largely, if not completely, abolish gravity's effects on visual responses. Any residual effect will be attributed to proprioceptive signals (not vision, since no visual cues to gravity will be present). After the lesion, the effect of gravity on visual responses may increase with time, which would suggest a re-learning period in which the role of proprioceptive signals grows. This research is important for understanding vestibular-visual interactions and for establishing novel directions for both basic and clinical research.
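The population-level idea can be sketched in the probabilistic-population-code spirit: decode eye-centered orientation from noisy population activity, then combine it with a vestibular roll estimate to obtain a gravity-centered orientation. This is a toy illustration under stated assumptions (von-Mises-like tuning, independent Poisson spiking, uniformly tiling preferred orientations, and arbitrary parameter values), not the proposed model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_population(stim_deg, prefs_deg, gain=20.0, kappa=2.0):
    # independent Poisson spike counts with von-Mises-like orientation tuning
    rates = gain * np.exp(kappa * (np.cos(np.deg2rad(2.0 * (prefs_deg - stim_deg))) - 1.0))
    return rng.poisson(rates)

def ml_decode(counts, prefs_deg):
    # with uniformly tiling preferences, the Poisson log likelihood is maximized
    # by the population-vector angle (doubled-angle trick, period 180 deg)
    ang = np.deg2rad(2.0 * prefs_deg)
    v = np.sum(counts * np.exp(1j * ang))
    return (np.rad2deg(np.angle(v)) / 2.0) % 180.0

prefs = np.arange(0.0, 180.0, 4.5)   # 40 neurons tiling orientation
theta_eye = 20.0                     # bar orientation on the retina (deg)
roll = 30.0                          # body roll from the vestibular estimate (deg)
counts = poisson_population(theta_eye, prefs)
theta_world = (ml_decode(counts, prefs) + roll) % 180.0   # gravity-centered estimate
```

In this toy scheme the vestibular roll signal simply offsets the decoded eye-centered orientation; the proposed modeling would instead ask how such a combination could be implemented by network dynamics and read out from CIP/VPS population activity.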