The advent of genome-scale techniques has brought a profound change to biological research: the generation of large datasets is now commonplace in almost every area of biomedical research. Such endeavors require sophisticated computational tools to transform data into knowledge and to present that knowledge in a human-comprehensible form. These tools involve languages and concepts that are not part of a biologist's mainstream curriculum, or even of most biologists' mindsets. Some of this work is best performed with software packages imported from academic or commercial sources, which in turn requires computational expertise to implement them and to understand their key operational variables; in other cases, context-specific computational strategies or algorithms that match the experimental situation must be developed. It is thus essential for an institution such as Joslin to support the activity of its laboratories and core facilities by providing access to such computational expertise. In practice, the Bioinformatics Core will provide the tools and expertise for the analysis of large datasets. In particular, it aims to support computational analyses of complex data by providing software tools, help, and training; to provide and maintain an infrastructure for high-end computing; and to coordinate the storage and interchange of data generated in Joslin laboratories by maintaining a common "data warehouse" that enhances the informative value of individual projects and enables inter-laboratory meta-analyses.