Sigma molecular dynamics code: The original objectives of this R&D project were to develop molecular dynamics code with three major enhancements: (i) good performance on parallel computers, (ii) fast code for long-range electrostatic forces, and (iii) code for long time-step integration. A highly efficient parallel code for molecular dynamics on shared-memory machines was developed in the second year of this project. We plan to adapt this parallel code for use on the newly available parallel machine, the SGI Power Challenge. During the current grant year, major progress has been made on enhancements (ii) and (iii) in the R&D projects directed by John Board and Tamar Schlick, respectively. Also, the Sigma code has been (iv) integrated with the vmd visualization code and extended with features allowing steered dynamics in a third R&D subproject, and in another project we propose (v) to integrate code for continuum dielectric forces with the dynamics program. We are now preparing to integrate all five features into a single version of the Sigma code. As a first step, the code of the Sigma program has been thoroughly revised to make it easier to introduce major program modifications, producing SigmaX2.0. The SigmaX2.0 code has been placed under revision control with the CVS system, which is designed to allow development of new features by more than one programmer at a time.

namd molecular dynamics code: We have continued our collaboration with the group of Klaus Schulten at the University of Illinois (UIUC) to develop code for the parallel program namd, designed for use on distributed-memory machines. In the past year, experience with "version-1" namd code, both at UIUC and at UNC, has led the group at UIUC to redesign the inner workings of the program.
Once this phase is complete (expected in August 96), we will be provided with an interface for extending the code to specify and apply penalty functions and to perform potential-of-mean-force calculations; code for these features had already been partially completed when the decision to redesign namd was made.

Parallel, scalable molecular dynamics code: With the decommissioning of the KSR in August of 1995, our vehicle for the experimental implementation of parallel algorithms for molecular dynamics disappeared. New hardware, in the form of a 6-processor SGI, is on the way. This year has been an interim period during which we have rethought our efforts in parallel MD algorithms, with the following conclusions. (1) Our problem is to achieve a maximum number of integration steps per unit time, i.e., with relatively small amounts of work per processor in each integration timestep. For interactive MD, this becomes the problem of rapidly simulating small molecules using medium-scale (modest) parallelism. For larger MD simulations done offline, it becomes the problem of making effective use of a large number of processors (massive parallelism). (2) We should target small systems using the modestly parallel 6-processor SGI we are assembling. We should extrapolate our efforts to massive parallelism through interaction with NCSA, with its much larger machines of the same type. (3) Analytical modeling leads us to believe that for most modestly parallel machines, load balancing will be the critical problem for the space of problems described above. (4) The program Sigma, developed by us, takes an explicit approach to load balancing which we believe will be successful with small systems or very large numbers of processors.
(5) The program namd, under development by Schulten and collaborators with our participation, takes an alternative approach to load balancing which we believe will not be as successful with small systems or very large numbers of processors. Our plans are to continue analytical modeling and to generate parallel SigmaX2.0 using (a) pairlist-length spatial decomposition and (b) running-time adaptive decomposition.

Hye-Chung Kum and Lei Wang, under the supervision of Professors Prins and Nyland, have built a model of MD simulation using algorithms and six data sets from actual MD simulations. They used a standard parallel computation model (the Bulk Synchronous Parallel model) as a foundation for exploring six different techniques for distributing the work of parallel MD simulations. The goal of this modeling effort was to provide a framework in which new ideas for achieving higher performance can be tested without having to understand and modify a full-featured MD simulator. The conclusions so far suggest that balancing the work requires a decomposition based on the non-bonded interactions themselves (sorted by the position of one of the atoms), rather than coarser decompositions based on the positions of atoms (as in SigmaX) or on regions of space (patches, as in NAMD). This model will continue to be enhanced to serve as a guide for the development of efficient parallel code in SigmaX2.0 (along with other MD simulators).
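The distinction drawn above, between decomposing over atoms and decomposing over the non-bonded interactions themselves, can be illustrated with a small sketch. The toy system, the cutoff, the processor count, and all names below are illustrative assumptions, not the actual Kum/Wang model; the imbalance measure (maximum work over mean work) reflects the BSP view that the cost of a superstep is set by the busiest processor.

```python
# Illustrative sketch (not the actual Kum/Wang model): compare the load
# imbalance of an atom-based decomposition with a decomposition over the
# non-bonded pairlist itself, for one toy non-bonded force evaluation.

import random

random.seed(0)

N_ATOMS = 200
N_PROCS = 6        # the modestly parallel 6-processor SGI discussed above
CUTOFF = 0.2       # toy cutoff radius in a unit box

# Non-uniform (clustered) atom positions in the unit box.
atoms = [(random.random() ** 2, random.random() ** 2, random.random() ** 2)
         for _ in range(N_ATOMS)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Half pairlist: every non-bonded pair (i, j), i < j, within the cutoff.
pairs = [(i, j) for i in range(N_ATOMS) for j in range(i + 1, N_ATOMS)
         if dist2(atoms[i], atoms[j]) < CUTOFF ** 2]

def imbalance(work_per_proc):
    # BSP-style superstep cost is set by the busiest processor, so report
    # max work / mean work as the load-imbalance factor.
    mean = sum(work_per_proc) / len(work_per_proc)
    return max(work_per_proc) / mean if mean else 1.0

# Strategy 1: atom-based decomposition -- each atom, with the half-pairlist
# of partners j > i that it "owns", is assigned to a processor by index.
# The triangular pairlist alone makes this uneven: low-index atoms own far
# more interactions than high-index atoms.
work_atom = [0] * N_PROCS
for i, j in pairs:
    work_atom[i * N_PROCS // N_ATOMS] += 1

# Strategy 2: pairlist-based decomposition -- sort the interactions by the
# position of the first atom, then split the pairlist itself into equal
# contiguous chunks, so work (not atoms) is what gets balanced.
pairs_sorted = sorted(pairs, key=lambda p: atoms[p[0]])
work_pair = [0] * N_PROCS
for k in range(len(pairs_sorted)):
    work_pair[k * N_PROCS // len(pairs_sorted)] += 1

print("atom-based imbalance:     %.2f" % imbalance(work_atom))
print("pairlist-based imbalance: %.2f" % imbalance(work_pair))
```

Because the pairlist chunks differ in size by at most one interaction, the pairlist-based imbalance stays essentially at 1.0, while the atom-based assignment inherits the uneven per-atom interaction counts; this is the effect that favors decomposing on sorted non-bonded interactions over coarser atom- or region-based decompositions.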