Approximately eight million Americans are blind or have low vision, defined as difficulty reading common newsprint with corrective lenses (McNeil, 2001). The loss of vision can have serious repercussions on simple tasks such as cooking, reading a magazine, paying a cashier, or navigating. For many activities, there are either visual aids (e.g., magnifiers, Braille, a long cane, or a guide dog) or strategies (e.g., folding money) that can be used to compensate for the vision loss. However, there is currently no widely accepted system for helping someone with low vision with the problem of wayfinding. Wayfinding refers to the process of navigating from one location within a large-scale space (such as a building or a city) to another, unobservable, location. It differs from the problem of obstacle avoidance, in which a long cane or guide dog can be used to navigate around a local obstacle. The goal of the current research proposal is to develop a low-vision navigation aid that guides and localizes a user within an unfamiliar indoor environment (e.g., an office building or a hospital) on the way to their goal. The proposed low-vision navigation aid is based upon an existing robot navigation algorithm that uses partially observable Markov decision processes (POMDPs; Kaelbling, Cassandra, & Kurien, 1996; Kaelbling, Littman, & Cassandra, 1998; Cassandra, Kaelbling, & Littman, 1994; Stankiewicz, Legge, & Schlicht, 2001). The proposed navigation aid is composed of a laser range finder, a POMDP algorithm implemented on a computer, and a digital map of the building. Fundamentally, the model will use measurements taken with the laser range finder from the user's position to the nearest wall in the direction that the user is facing. Using this measurement, the POMDP model will reference the map and compute the locations at which the observation (i.e., the distance measurement) could have been taken.
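The localization step described above is, at its core, a Bayesian belief update: each candidate state on the digital map is reweighted by how well it explains the measured wall distance. The following is a minimal sketch of that idea; the map entries, state names, and Gaussian noise parameter are illustrative assumptions, not part of the proposed system.

```python
import math

# Hypothetical digital map: expected distance (meters) to the nearest wall
# for each (location, heading) state. Names and values are illustrative only.
EXPECTED_DIST = {
    ("hall_A", 0): 12.0,
    ("hall_A", 90): 1.5,
    ("hall_B", 0): 12.0,
    ("hall_B", 90): 4.0,
    ("lobby", 0): 3.0,
    ("lobby", 90): 3.0,
}

def observation_likelihood(measured, expected, sigma=0.3):
    """Assumed Gaussian noise model for the laser range finder."""
    return math.exp(-0.5 * ((measured - expected) / sigma) ** 2)

def update_belief(belief, measured, sigma=0.3):
    """Bayes update: reweight each candidate state by how well it
    explains the measured wall distance, then renormalize."""
    new_belief = {
        state: p * observation_likelihood(measured, EXPECTED_DIST[state], sigma)
        for state, p in belief.items()
    }
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Uniform prior over all candidate (location, heading) states.
prior = {s: 1.0 / len(EXPECTED_DIST) for s in EXPECTED_DIST}
posterior = update_belief(prior, measured=12.1)
# States whose expected wall distance is near 12 m now carry most
# of the probability mass.
```

A single measurement rarely identifies the user's state uniquely (here, two hallway states explain a 12 m reading equally well), which is why the system maintains a belief over many locations rather than a single estimate.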
Given this collection of locations, the POMDP algorithm computes the optimal action (e.g., rotate by 90°) that will get the user to their goal location using the minimum number of instructions on average. This process is repeated (i.e., generate an instruction, execute the action, update the spatial uncertainty, and compute the optimal action) until the user reaches their destination. The model is designed to deal with the noise that will be inherent in the measurements taken and the actions generated by a low-vision user. Unlike its predecessors, such as Talking Signs, Verbal Landmarks, and Talking Lights, the current system does not use any beacon technology for guidance. Thus, the proposed system requires very little infrastructure investment, and we anticipate that it will be a low-cost, robust navigation system for low-vision, blind, and, potentially, normally sighted users in unfamiliar buildings.
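One common way to choose an action under spatial uncertainty, consistent with the loop described above, is a QMDP-style approximation: weight each action's expected cost (remaining instructions to the goal) by the current belief and pick the cheapest on average. This is a sketch under assumed state names and cost values, not the proposal's exact solution method, since exact POMDP policies are computed differently.

```python
# Hypothetical Q-values: expected number of remaining instructions to
# reach the goal from state s after taking action a. Values are invented
# for illustration.
Q = {
    ("hall_A", "forward"): 2.0,
    ("hall_A", "rotate_90"): 5.0,
    ("hall_B", "forward"): 4.0,
    ("hall_B", "rotate_90"): 3.0,
}

ACTIONS = ["forward", "rotate_90"]

def select_action(belief, q_values, actions):
    """QMDP-style approximation: average each action's cost over the
    current belief about where the user is, and pick the cheapest."""
    def expected_cost(action):
        return sum(p * q_values[(s, action)] for s, p in belief.items())
    return min(actions, key=expected_cost)

# If the user is probably in hall_A, moving forward is cheapest on
# average; if the belief mass shifts to hall_B, rotating wins instead.
action = select_action({"hall_A": 0.8, "hall_B": 0.2}, Q, ACTIONS)
```

The appeal of this framing for a navigation aid is that the same machinery handles both noisy sensing and noisy execution: a misheard instruction or inaccurate turn simply spreads the belief, and subsequent measurements concentrate it again.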