DESCRIPTION (modified from abstract): Human sensory-neural information processing begins with inputs at the sense modalities. Once these inputs are encoded perceptually, they may of course be recoded linguistically. Any interactive processes across modalities may then be assumed to take place in at least two regions of the information stream, viz., at the sensory level or at the cognitive level. The present proposal deals with the study of interactions among visual, auditory, and tactual signals; in particular, the interest of the present research is in a set of phenomena that can be called congruence effects. These occur when the subject is asked to identify a target stimulus in a particular modality while an irrelevant stimulus is simultaneously presented in another modality. When the irrelevant stimulus agrees with, matches in some dimension, or is congruent with the relevant one, research has shown that both identification time and accuracy improve in comparison with the condition in which the irrelevant stimulus is incongruent with the target. A simple example of the phenomenon: when subjects are asked to identify a high-pitched tone, they do so more rapidly and accurately when it is accompanied by a visual stimulus that is high in visual space than when it is accompanied by a spatially low one. By the same token, a low-pitched tone is processed more efficiently when accompanied by a spatially low visual stimulus than by a high one. The principal investigator has already demonstrated the existence of the congruence effect in a number of studies, and now proposes to extend this area of research with a series of experiments that test whether the effect, either as interference with or as facilitation of processing, lies more in the realm of elementary sensory events or in later events involving linguistic recoding.
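The congruence-effect paradigm described above can be sketched in code. The following is a minimal illustration, not the principal investigator's actual design: each fabricated trial pairs a target tone (high or low pitch) with an irrelevant visual stimulus (high or low spatial position), and the analysis compares mean reaction time and accuracy between congruent and incongruent trials. All trial data and names here are hypothetical.

```python
# Hypothetical congruence-effect analysis (illustrative data only).
from statistics import mean

# (tone_pitch, visual_position, reaction_time_ms, correct) -- fabricated trials
trials = [
    ("high", "high", 412, True),   # congruent
    ("high", "low",  498, True),   # incongruent
    ("low",  "low",  405, True),   # congruent
    ("low",  "high", 510, False),  # incongruent
    ("high", "high", 430, True),   # congruent
    ("low",  "high", 475, True),   # incongruent
]

def is_congruent(trial):
    pitch, position, _, _ = trial
    return pitch == position  # e.g., a high tone with a spatially high stimulus

def summarize(trials, congruent):
    subset = [t for t in trials if is_congruent(t) == congruent]
    return {
        "mean_rt_ms": mean(t[2] for t in subset),
        "accuracy": mean(1.0 if t[3] else 0.0 for t in subset),
    }

congruent_stats = summarize(trials, True)
incongruent_stats = summarize(trials, False)

# The congruence effect predicts faster, more accurate responses on
# congruent trials; this holds for the fabricated data above.
assert congruent_stats["mean_rt_ms"] < incongruent_stats["mean_rt_ms"]
assert congruent_stats["accuracy"] >= incongruent_stats["accuracy"]
```

In a real experiment the per-condition means would of course be compared with an inferential test across many trials and subjects; the sketch only shows the structure of the comparison.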
It is his plan to examine the effects of reducing the possibility of interactions at what he calls the semantic level by, for example, taking advantage of the bilateral asymmetry of the auditory system for linguistic input. Because of the well-known advantage of the right ear over the left (in most subjects) for processing language, it should be possible to demonstrate whether congruence of visual events having semantic value with auditory ones will produce an improvement in processing. In addition, the principal investigator will repeat some of the proposed experiments on a separate group of subjects who report synesthetic experiences. These are persons who, for example, experience sounds not only auditorily but also visually, having vivid color experiences in addition to the auditory perceptions. Because these persons exhibit a rather fixed relation between the two modal perceptions, e.g., a specific hue is always perceived when a particular sound pattern occurs, the congruent and incongruent stimuli must be tailored to each subject's personal cross-modal pairs; otherwise, congruence/incongruence effects will not be detected. Because the frequency of synesthesia in the general population is rather low, the expectation is that only a few such subjects will be found in the three-year period devoted to the grant. These and similar manipulations of the relations between the sensory dimensions and their semantic valences provide the basis for testing a small set of elementary quantitative models of the interaction process. The results of this work may well prove of significant use in the development of sensory aids that use more than one modality for communication (e.g., visual-tactual or auditory-kinesthetic combinations), in addition to advancing our knowledge of the structural relations among cognitive events.
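The per-subject tailoring of stimuli for synesthetic subjects can be illustrated with a short sketch. The sound-to-hue mapping below is fabricated for illustration; in practice each mapping would be elicited from the individual subject, since the proposal notes that a fixed but idiosyncratic pairing holds for each person.

```python
# Illustrative tailoring of congruent/incongruent stimulus pairs to one
# synesthete's reported cross-modal associations (fabricated mapping).
import random

subject_mapping = {         # sound pattern -> hue this subject reports
    "tone_220hz": "green",
    "tone_440hz": "red",
    "tone_880hz": "blue",
}

def make_trial(sound, congruent, mapping, rng=random):
    """Pair a sound with its reported hue (congruent) or with a hue
    drawn from the subject's other pairs (incongruent)."""
    if congruent:
        hue = mapping[sound]
    else:
        other_hues = [h for s, h in mapping.items() if s != sound]
        hue = rng.choice(other_hues)
    return {"sound": sound, "hue": hue, "congruent": congruent}

trial = make_trial("tone_440hz", congruent=True, mapping=subject_mapping)
# For this subject the congruent hue for tone_440hz is "red"; a different
# synesthete's mapping would yield a different "congruent" hue for the
# same tone, which is why stimuli cannot be shared across subjects.
```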