In previous years, we re-wrote the OME analysis system to make it more robust and efficient (1), added an internationalization layer that includes a complete Spanish translation of both the user interface and the underlying semantic framework, and contributed to collaborations that have further refined OME's user interface and underlying infrastructure.

The bulk of this year's work has been the further development and validation of the WND-CHARM pattern-recognition algorithm in biological imaging assays. WND-CHARM is a generalized pattern-recognition algorithm that can be applied to any type of image. Previously, we validated the accuracy and generality of this approach on the standard benchmark suites used by the machine-vision community, including face recognition, object recognition, and other non-biological classification problems, and we validated it in biological imaging assays using several types of microscopy.

Most work in pattern recognition, both within and outside of biology, involves classification: sorting unknown images into one of several classes that the computer has been trained to recognize from control images. While this qualitative approach is useful because it is objective and more consistent than manual scoring, most assays in biology rely on quantitative measurements. This year, we therefore focused on investigating several techniques for quantifying pattern-recognition results and validating them against known biological relationships. In one validation dataset, we trained a classifier on the Kellgren-Lawrence (KL) osteoarthritis grading system, then used it to interpolate morphological change in human knee X-rays and investigate the degree of morphological change between the discrete KL grades.

A significant advance in the usability of WND-CHARM came from the development of a specification for pattern-recognition experiments.
This specification is both human- and machine-readable and presents a concise summary of how the classifier is trained. It will serve as the intermediate output of a graphical user interface being developed this year.

Pattern-recognition algorithms for image-based assays consume tremendous computing resources, potentially orders of magnitude more than is common in genomics or proteomics. We have begun building an experimental system for peer-to-peer distributed computing, similar in nature to distributed file-sharing networks. Most distributed-computing infrastructure relies on a large centralized server either to perform the computations or to act as a host for the distributed nodes, and installing and configuring these centralized servers demands significant information-technology skill. In contrast, each biological laboratory potentially contains several individuals with projects that could benefit from image-based pattern recognition; this ratio of projects to servers is much closer to a peer-to-peer topology than to a centralized computing facility. We plan to reuse most of the software from the successful BOINC project (used for Folding@home, SETI@home, etc.), while developing our own mini-server that will run alongside the BOINC client. This will allow any biologist to harness local unused or idle CPU resources for their own imaging projects without having to configure centralized distributed-computing servers.
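One way to quantify classifier output along the lines described above is to collapse the per-class similarity scores an image receives into a single continuous value. A minimal sketch, assuming a classifier that reports one normalized similarity per trained class (the function name and example numbers are illustrative, not WND-CHARM's actual interface):

```python
def interpolated_grade(similarities, grades):
    """Collapse per-class classifier similarities into a continuous score.

    similarities: one similarity value per trained class
    grades: the numeric label of each class (e.g. KL grades 0-4)
    Returns the similarity-weighted mean grade, a continuous value
    that can fall between the discrete training grades.
    """
    total = sum(similarities)
    return sum(s * g for s, g in zip(similarities, grades)) / total

# An image scored almost equally against KL grades 1 and 2
# lands near 1.5 on the continuum:
print(interpolated_grade([0.05, 0.45, 0.45, 0.05], [0, 1, 2, 3]))
```

An image whose appearance falls between two discrete grades thus receives a score between them, giving a quantitative measure of morphological change rather than a forced discrete label.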
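The peer-to-peer scheme rests on a simple pattern: a queue of independent per-image tasks handed out to whatever CPUs are idle. A minimal local sketch of that pattern, with a hypothetical toy feature computation standing in for real feature extraction (this illustrates the pattern and does not use the BOINC machinery):

```python
from multiprocessing import Pool

def compute_features(task):
    """Stand-in for an expensive per-image feature computation.

    Real WND-CHARM feature extraction is far costlier per image,
    which is what motivates distributing the work.
    """
    image_id, pixels = task
    mean_intensity = sum(pixels) / len(pixels)
    return image_id, mean_intensity

if __name__ == "__main__":
    # Independent tasks, one per image; a mini-server would hand the
    # same task list to peer machines instead of local worker processes.
    tasks = [(i, list(range(i, i + 4))) for i in range(8)]
    with Pool() as pool:
        results = dict(pool.map(compute_features, tasks))
    print(sorted(results))  # one result per image, order-independent
```

Because the tasks share no state, the same dispatch works whether the workers are local processes or idle machines elsewhere in the laboratory, which is the property the planned mini-server exploits.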