DESCRIPTION (provided by applicant): Although the visual world appears continuous and stable, visual information is actually sampled from the environment roughly three times per second. Observers must therefore process the contents of brief glimpses to form representations that can support effective behavior. Brief stimulus presentations have been crucial for illuminating the early stages involved in constructing scene representations. Very little is known, however, about the time required to extract information about observer-to-object distances. There is a pressing need to understand these issues, because many factors, including real-world situations, visual impairment, normal aging, and neurological disorders, can constrain the time available for extracting and processing visual information. Many people therefore risk the consequences of poor object localization due to insufficient viewing time (e.g., falling, or colliding with objects when walking or driving). These consequences can be dire: the annual cost of falling, for example, is predicted to reach $54.9 billion within the next 10 years. There is a critical lack of knowledge about how insufficient viewing time affects localization in distance; this gap impedes identification of at-risk populations and slows development of evidence-based remediation plans. Our investigation will remove these critical barriers by quantifying the impact of insufficient viewing time on localization. The project's health relatedness thus derives from its ability to illuminate possible precursors to driving collisions and falls. Our long-term objectives are to characterize the time course of distance perception and to determine the psychological and neural mechanisms that govern it. We will address these issues using a novel, custom-built apparatus capable of providing very brief glimpses (e.g., 10 ms) of a real, 3D environment, followed by a masking image.
After briefly glimpsing the environment, observers will use various methods (e.g., verbal report, blind walking) to indicate the egocentric distance of objects seen during the glimpse. This method allows us to study the factors that shape the early stages of distance perception. The experiments in this proposal will test our overarching hypothesis that both the stimulus-driven and top-down factors governing early distance perception mechanisms are organized to confer a processing advantage for targets on the ground. Our specific aims are to (1) determine the visual requirements for extracting distance information from brief glimpses, focusing particularly on the powerful angular declination (height in the visual field) cue; (2) determine the top-down influences on the extraction of distance information from brief glimpses, focusing on perceptual and cognitive biases related to the ground plane; and (3) confirm that our results are not crucially dependent on one particular environment, but instead are fundamental and broadly applicable to a variety of environments.