The long-term goal of this research is to understand the visual mechanisms that enable human observers to perceive the shapes of objects. These mechanisms are central to normal visual functioning, and understanding them would illuminate the principles and limitations of normal vision, as well as the deficiencies underlying visual deficits, especially the visual agnosias.

The specific aim of these experiments is to investigate human observers' ability to perceive the shapes of objects from the shading information in retinal images. This shading information is extremely ambiguous, because any given 2D image could have been generated by any of an infinite number of 3D arrangements of light sources and surfaces. Nevertheless, human observers rapidly arrive at generally accurate interpretations of such images, which implies that the human visual system incorporates accurate assumptions, at least implicitly, about which 3D scenes are more or less likely to occur in the real world. The hypothesis tested here is that the visual system uses a statistical model of natural illumination and a statistical model of local surface shape and reflectance to arrive at the most likely 3D interpretation of a 2D image.

This research project has four components. (1) Using a multidirectional photometer, we will measure the illumination incident from all directions at many randomly selected locations, and use these measurements to construct a statistical model of natural illumination. (2) Using 3D digitizing scanners, we will digitize the shapes and reflectance patterns of many natural objects, and use small surface patches extracted from these data to construct a statistical model of local surface shape and reflectance. (3) We will develop a Bayesian model observer that uses the measured illumination and surface statistics to infer the most likely 3D shape of objects depicted in 2D images.
(4) We will compare the performance of human observers and of the Bayesian model observer on a number of shape perception tasks. If the human and Bayesian observers find the same tasks easy or difficult, we will conclude that human performance is limited by the same factor as the Bayesian observer's performance, namely the extent to which the illumination and surface statistics in the various tasks match the statistics measured in the natural world. Otherwise, human observers must use constraints other than the measured natural statistics to arrive at accurate 3D interpretations of 2D images.
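The core inference step of a Bayesian model observer of this kind can be sketched in a few lines: for each candidate 3D interpretation of an image, marginalize the image likelihood over illuminations drawn from an illumination prior, weight by a shape prior, and pick the interpretation with the highest posterior. The sketch below is purely illustrative and is not the project's actual implementation: the two-pixel "shapes" (fields of unit surface normals), the Lambertian rendering model, the Gaussian pixel-noise likelihood, and the lit-from-above illumination prior are all assumptions standing in for the measured natural-scene statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(normals, albedo, light):
    # Illustrative Lambertian shading: intensity = albedo * max(0, n . l)
    return albedo * np.clip(normals @ light, 0.0, None)

def posterior_over_shapes(image, shapes, albedo, shape_prior,
                          light_samples, noise_sigma=0.05):
    """For each candidate shape, average a Gaussian image likelihood over
    illumination samples (the illumination prior), then apply the shape
    prior and normalize -- a Monte Carlo approximation of the posterior."""
    post = np.zeros(len(shapes))
    for i, normals in enumerate(shapes):
        like = 0.0
        for light in light_samples:
            pred = render(normals, albedo, light)
            like += np.exp(-np.sum((image - pred) ** 2) / (2 * noise_sigma ** 2))
        post[i] = shape_prior[i] * like / len(light_samples)
    return post / post.sum()

# Toy rival interpretations of one shaded patch: a surface tilted one way
# versus its flipped-normal (convex/concave) counterpart.
n = np.array([[0.0, 0.3, 0.954], [0.0, -0.3, 0.954]])   # two-pixel patch
shapes = [n, n * np.array([1.0, -1.0, 1.0])]            # flipped-normal rival
albedo = 1.0
shape_prior = np.array([0.5, 0.5])

# Assumed illumination prior: light directions clustered above the viewer.
angles = rng.normal(0.6, 0.2, 50)
light_samples = [np.array([0.0, np.sin(t), np.cos(t)]) for t in angles]

# Generate an image from the first shape under a typical light, then infer.
true_light = np.array([0.0, np.sin(0.6), np.cos(0.6)])
image = render(shapes[0], albedo, true_light)
p = posterior_over_shapes(image, shapes, albedo, shape_prior, light_samples)
best = int(np.argmax(p))
```

Under the lit-from-above prior, the observer resolves the convex/concave ambiguity in favor of the interpretation consistent with overhead lighting, which is the qualitative behavior the project's human/model comparisons are designed to probe.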