Understanding and enhancing visual search performance in complex scenes

Cancer screening saves lives (e.g., National Lung Screening Trial Research Team, 2011). Every day, radiologists face difficult, time-consuming visual search tasks such as those in mammography and lung cancer screening. Signs of cancer, e.g., lung nodules or subtle abnormalities in the breast, are often very hard to find against heterogeneous backgrounds and may be obscured by overlapping tissue. Missing these signs can result in misdiagnoses with life-or-death consequences. It is therefore of key interest to identify and tackle the problems posed by these crucial search tasks.

The proposed studies aim to improve search performance in cancer screening in two ways. First, by enhancing the visibility of lung nodules embedded in 3D volumetric datasets. Radiologists usually search for lung nodules by scrolling through stacks of chest CT slices. Nodules are roughly spherical features that span a few slices in a CT stack. Anecdotally, experts report that the way lung nodules 'pop' in and out of the changing view is a signal to their presence. We propose an innovative approach using saliency algorithms that harness this signal to enhance chest CTs in a way that directs attention to crucial regions of a scene.

Second, we aim to improve search performance by understanding and utilizing non-selective, 'gist'-like processing of mammograms. Recent work has shown that expert radiologists can detect a global signal in mammograms that allows for above-chance categorization of normal and abnormal breasts after very brief exposures to the stimulus. We will first train novices to become experts on tasks that closely approximate the medical ones. As expertise develops, we will use electroencephalography (EEG) to investigate two neural correlates that might evolve over the course of training, namely the P300 and the N2pc.
In other settings, these measures can signal attentional selection either across (P300) or within (N2pc) briefly presented images. We will adapt a new method that exploits machine learning for real-time decoding of brain signals: neural signatures elicited in response to a sequence of images are used to rank those images by their implicit 'interest' to the viewer. We hypothesize that this information can be fed back to the observer/radiologist as a source of information suggesting, for example, that an image or region deserves more scrutiny. The main goals of this proposal are therefore to understand the guidance of attention in complex visual search tasks and to apply this knowledge to improve clinically relevant search tasks such as cancer screening.
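To make the decoding-and-ranking idea concrete: a minimal sketch of the principle, not the proposal's actual method, is to score each EEG epoch by a linear discriminant trained on labeled target/non-target responses and then sort the images by that score. The function name, feature layout, and the nearest-class-mean discriminant below are all illustrative assumptions.

```python
import numpy as np

def rank_images_by_neural_interest(train_epochs, train_labels, test_epochs):
    """Rank images by how 'target-like' their evoked EEG epochs look.

    A crude stand-in for a real-time decoder: project each epoch onto
    the target-minus-nontarget mean direction. Epochs are arrays of
    shape (n_trials, n_features); labels are 1 (target) or 0.
    """
    train_epochs = np.asarray(train_epochs, dtype=float)
    train_labels = np.asarray(train_labels)
    target_mean = train_epochs[train_labels == 1].mean(axis=0)
    nontarget_mean = train_epochs[train_labels == 0].mean(axis=0)
    # Linear discriminant direction between the two response classes.
    w = target_mean - nontarget_mean
    scores = np.asarray(test_epochs, dtype=float) @ w
    order = np.argsort(scores)[::-1]  # most 'interesting' image first
    return order, scores
```

In practice such scores could prioritize images (or image regions) for the radiologist's second look, which is the feedback loop the proposal envisions.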
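The slice-to-slice 'pop' signal described for the first aim could, in its simplest form, be approximated as temporal contrast between adjacent CT slices. The sketch below is a toy illustration of that idea only; the function name and the plain frame-differencing scheme are assumptions, not the saliency algorithm the proposal will develop.

```python
import numpy as np

def slice_pop_saliency(volume):
    """Crude saliency for a CT stack shaped (slices, H, W): highlight
    voxels whose intensity changes sharply between neighboring slices,
    mimicking the way nodules 'pop' in and out while scrolling."""
    volume = np.asarray(volume, dtype=np.float32)
    # Absolute intensity change between each slice and the next.
    diff = np.abs(np.diff(volume, axis=0))
    # Repeat the last map so the output has one map per slice.
    saliency = np.concatenate([diff, diff[-1:]], axis=0)
    # Normalize each slice's map to [0, 1] for overlaying on the CT.
    maxima = saliency.reshape(len(saliency), -1).max(axis=1)
    maxima[maxima == 0] = 1.0  # avoid dividing flat slices by zero
    return saliency / maxima[:, None, None]
```

A real enhancement would of course need spatial filtering tuned to roughly spherical, few-slice features rather than raw differencing, but the sketch shows where the scrolling-induced signal lives in the data.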