The goal of the proposed research is to understand how three-dimensional (3D) visual information, both about target objects and about the moving hand, is used to plan and control goal-directed hand movements. The research focuses on which depth cues the visuomotor system uses for planning and online control and on how it integrates those cues. Several theoretical considerations inform the work. First, different motor behaviors admit different solutions to the problem of mapping visual information onto motor behavior. We therefore ask whether the brain uses a common visual representation of objects or relies on different task-specific strategies for planning and controlling different components of hand movements, such as hand transport and hand rotation (e.g., during grasping). We will measure how subjects weight binocular disparity and texture/figural cues about object layout in a scene when controlling both components of hand movements. As a probe into the modularity of visuomotor computations, we will use haptic feedback to adapt subjects' cue weights in one task and measure transfer of the adaptation between tasks. Second, the relative contribution of different cues to motor control depends on both the reliability of the information each cue provides and the time course with which the brain processes it. We will study how changes in cue reliability affect how the brain uses the information provided by depth cues for both planning and online control. We will also measure the time course with which binocular disparity and texture/figural cues contribute to motor control, using a perturbation technique developed in the previous funding period.
To derive a deeper understanding of how cue uncertainty and timing constraints interact to determine human visuomotor performance, we will supplement the experimental studies with computational work that applies methods from optimal filtering (optimal statistical estimation over time) and optimal control.
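The reliability-weighted cue combination underlying these aims follows the standard maximum-likelihood rule, in which each cue is weighted in inverse proportion to its variance. The sketch below is illustrative only, not the proposal's model, and the slant estimates and variances in the usage example are hypothetical:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue's weight is proportional to its inverse variance, so more
    reliable cues dominate; the combined estimate has lower variance
    than any single cue alone.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    inv_var = 1.0 / variances
    weights = inv_var / inv_var.sum()           # normalized inverse-variance weights
    combined = float(np.sum(weights * estimates))
    combined_var = 1.0 / inv_var.sum()          # always <= min(variances)
    return combined, combined_var, weights

# Hypothetical surface-slant estimates (in degrees) from a disparity cue
# and a texture cue, with the disparity cue assumed more reliable:
slant, var, w = combine_cues([30.0, 34.0], [4.0, 12.0])
# The disparity cue (variance 4) receives weight 0.75, texture 0.25,
# and the combined variance (3.0) is below either cue's variance.
```

On this view, a change in cue reliability (e.g., degrading the texture information) should shift the weights toward the more reliable cue, which is the manipulation the experiments exploit.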