Integrating sensory information from a variety of sources to produce motor commands is fundamental to human behavior. Impairments in multisensory and sensorimotor integration affect numerous aspects of human health, ranging from social interaction and communication to movement through complex environments. For example, directing gaze to the location of a sound is a complex information-processing task requiring the conversion of auditory input signals into motor commands to move the eyes; this process is impaired in patients with hemispatial neglect. Here, we propose a joint computational and experimental approach to illuminate this problem. Specifically, we will investigate how information about sound location is encoded in the spike trains of neurons in the primate superior colliculus (SC), in comparison to structures that provide input to or receive output from the SC, such as auditory cortex, intraparietal cortex, and the paramedian pontine reticular formation. We will characterize the reference frame (Aim 1) and coding format (Aim 2) of the representation of stimulus location. Of particular interest is the relationship between visual and auditory representations: we will quantitatively evaluate the similarity between these representations. In Aim 3, we will consider how these signals are read out into commands to move the eyes. This theoretical and experimental aim will test algorithms that could produce accurate motor commands to both visual and auditory targets despite differences in how these signals are encoded at earlier stages. Together, these aims will enhance our understanding of neural processing from sensory input to motor output. The issues of multisensory and sensorimotor integration investigated here bear on a variety of neurological disorders, including those arising from stroke and other brain lesions. A better understanding of the transformation from sensory input to motor response will aid in identifying the pathophysiological substrates of neurological disorders with impaired sensorimotor integration.
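As a concrete illustration of the reference-frame analysis proposed in Aim 1, the sketch below estimates how far a neuron's auditory tuning curve shifts when the eyes move: a shift comparable to the change in eye position suggests an eye-centered frame, whereas no shift suggests a head-centered frame. This is a minimal sketch on simulated data; the function name, speaker spacing, and shift-ratio metric are illustrative assumptions, not the proposal's actual methods.

```python
# Minimal sketch (simulated data; names and metric are illustrative assumptions)
# of an Aim 1-style reference-frame analysis: cross-correlate a neuron's
# auditory tuning curves measured at two fixation positions and express the
# best-aligning shift as a fraction of the eye displacement.
import numpy as np

def tuning_shift_ratio(rates_fix1, rates_fix2, speaker_locs_deg, eye_shift_deg):
    """Shift ratio near 0 suggests a head-centered frame, near 1 eye-centered.

    rates_fix1, rates_fix2 : mean firing rates at each speaker location,
        recorded at two fixation positions separated by eye_shift_deg degrees.
    """
    step = speaker_locs_deg[1] - speaker_locs_deg[0]    # speaker spacing (deg)
    lags = np.arange(-(len(rates_fix1) - 1), len(rates_fix1))
    # Correlation between curve 2 and curve 1 shifted by each candidate lag.
    corr = [np.corrcoef(np.roll(rates_fix1, k), rates_fix2)[0, 1] for k in lags]
    best_shift_deg = lags[int(np.argmax(corr))] * step
    return best_shift_deg / eye_shift_deg

# Simulated example: speakers every 6 deg; fixations 12 deg apart; the second
# tuning curve is the first one displaced by the full eye shift (eye-centered).
locs = np.arange(-24, 30, 6).astype(float)
curve_fix1 = np.exp(-locs**2 / 200.0)
curve_fix2 = np.exp(-(locs - 12.0)**2 / 200.0)
print(tuning_shift_ratio(curve_fix1, curve_fix2, locs, eye_shift_deg=12.0))  # ~1.0
```

In practice, such a metric would be computed per neuron from recorded spike counts and compared across SC and its input and output structures, with intermediate values indicating hybrid reference frames.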