Current models of human motion perception use short-range motion sensors loosely based on the 1961 Reichardt model of insect vision. Human motion processing is more complex than that of insects, however: rather than using the output of the short-range detectors directly, a second stage of motion processing combines many short-range responses along the trajectory of a moving object, improving speed estimates. The proposed psychophysical experiments will reveal the stages by which this improved perception is achieved by answering the following questions.
1. Are the second-stage units tuned to temporal frequency or speed?
2. Are there many different types of second-stage motion units, or are they all similar?
3. Are the second-stage units selective for spatial frequency or target size?
4. What is the spatial receptive field size of a second-stage unit? Over what time interval does it combine speed estimates? What is the combined spatio-temporal receptive field size and shape?
5. What is the "wiring diagram" of the second stage? How many short-range responses are used, and how are they combined?
6. How does second-stage motion processing change in the periphery? Do the second-stage units exist only in the fovea?
These experiments may also prove useful in guiding electrophysiological studies of motion processing and in explaining the motion deficits observed in amblyopia and stroke patients.
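The short-range sensors referenced above are correlation-type detectors: the Reichardt model multiplies the signal from one receptor with a delayed signal from a neighboring receptor, and subtracts the output of the mirror-symmetric subunit so that opposite motion directions yield opposite signs. A minimal numerical sketch of that scheme follows; the function name, signals, and discrete delay line are illustrative assumptions, not part of the proposal.

```python
import numpy as np

def reichardt_detector(left, right, delay=1):
    """Minimal Reichardt correlator sketch: each subunit multiplies a
    delayed copy of one receptor's signal with the neighbor's undelayed
    signal; the opponent stage subtracts the two subunits. A positive
    mean output indicates left-to-right motion; negative, the reverse."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # Simple discrete delay line: shift in time, zero the wrapped samples.
    dl = np.roll(left, delay)
    dr = np.roll(right, delay)
    dl[:delay] = 0.0
    dr[:delay] = 0.0
    return dl * right - dr * left

# A stimulus drifting left-to-right: the right receptor sees the same
# waveform one time step later than the left receptor.
t = np.arange(50)
stim = np.sin(2 * np.pi * t / 10)
left_signal = stim
right_signal = np.roll(stim, 1)
response = reichardt_detector(left_signal, right_signal, delay=1)
print(np.mean(response) > 0)  # net positive for rightward motion
```

Swapping the two inputs reverses the sign of the output, which is the opponent property that makes the detector direction-selective.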