The aim of this proposal is to understand high-level percepts derived from visual motion. Two phenomena that have attracted considerable interest are self-motion perception and structure-from-motion (SFM) perception. We propose experiments aimed at understanding the neural circuits responsible for these percepts. Subjects can perceive their direction of self-motion from the "optic flow" produced on their retinas during translation through the environment. An important problem for the visual system is to recover translation-based motion cues when smooth gaze movements add laminar motion to the retinal image. During the last grant period we found that both extra-retinal and retinal cues produce shifts in the tuning curves of MSTd neurons tuned to heading direction. In the first aim of the current proposal we plan to extend these findings with three new lines of research, examining compensation for translation, the effects of 3D cues, and the coordinate frame used by MSTd to represent heading direction. The second aim is to study the neural circuits responsible for SFM perception. Observers can perceive the 3D shape of objects from relative motion cues alone. Such displays are bistable, a feature that has allowed us to examine the neural correlates of SFM perception. Based on work in the last grant period, we proposed a two-stage model: in the first stage, motion signals are measured in V1; in the second stage, surfaces are represented from these motion signals by a circuit within MT. In the current proposal we plan to test this model by examining its temporal dynamics. These studies are designed to advance our knowledge of how the brain processes visual information, and will help us understand the neurological deficits that accompany brain disease.