Observer performance studies have become an established methodology for evaluating diagnostic imaging devices, and are often required to substantiate claims made in publications and to regulatory agencies such as the FDA. These studies are typically evaluated with receiver operating characteristic (ROC) analysis and are designed to generalize to the populations of readers and cases, with the area under the ROC curve (AUC) as the typical endpoint. However, AUC neglects the fact that most diagnostic tasks concentrate on a limited region of the ROC curve. In screening mammography, for example, false-positive rates above 20%, which account for most of the curve, are simply not realistic in practice. In this proposal we seek to develop an alternative figure of merit for evaluating observer performance studies based on the concept of expected utility (EU). EU incorporates diagnostic outcomes into the performance measure, and thereby confines the analysis to the most relevant region of the ROC curve. Preliminary evidence suggests that EU may have greater statistical power than AUC for detecting modality differences in fully crossed experiments, in which all readers score all images in all imaging modalities. We also seek to extend EU analysis to localization and free-response ROC paradigms, where it has not previously been used other than as a theoretical device. The three specific aims we propose reflect these goals of development and dissemination: (Aim 1) to investigate optimal experimental design for EU in ROC studies by identifying the parameters that yield the greatest statistical power for the EU endpoint; (Aim 2) to compare EU to AUC in localization and free-response assessment paradigms, to determine whether EU offers any experimental-design benefit in those methods; and (Aim 3) to develop distributable software that allows investigators in the field to apply EU to their own data and assessments.
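To make the contrast between the two endpoints concrete, the sketch below compares AUC with a relative-utility form of EU on an illustrative binormal ROC curve. This is a minimal illustration, not the proposal's methodology: the binormal parameters, the function names, and the trade-off parameter `beta` (which folds disease prevalence and the utilities of the four diagnostic outcomes into a single relative cost of false positives) are all assumed values chosen for demonstration.

```python
import numpy as np
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def auc(fpf, tpf):
    """Trapezoidal area under the ROC curve (fpf must be increasing)."""
    fpf, tpf = np.asarray(fpf), np.asarray(tpf)
    return float(0.5 * np.sum((fpf[1:] - fpf[:-1]) * (tpf[1:] + tpf[:-1])))

def expected_utility(fpf, tpf, beta=0.9):
    """Relative-utility endpoint: the best achievable value of
    TPF - beta * FPF over the curve's operating points.  Unlike AUC,
    which integrates over the whole curve, this is driven by a single
    task-relevant operating region.  beta = 0.9 is illustrative only."""
    return float(np.max(np.asarray(tpf) - beta * np.asarray(fpf)))

# Illustrative binormal ROC curve (separation a = 1.5, slope b = 1.0)
a, b = 1.5, 1.0
t = np.linspace(5.0, -5.0, 501)                  # decision-threshold sweep
fpf = np.array([phi(-x) for x in t])             # false positive fraction
tpf = np.array([phi(a - b * x) for x in t])      # true positive fraction

print("AUC:", round(auc(fpf, tpf), 3))
print("EU :", round(expected_utility(fpf, tpf), 3))
```

With a large `beta` (costly false positives), EU is determined entirely by the low-FPF end of the curve, which is the screening-relevant region; AUC, by contrast, credits performance everywhere, including operating points no reader would use.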