Migration of CHARMM to GPUs

Graphical processing units (GPUs) are modern processors capable of executing thousands of threads concurrently. In this work we have redesigned the CHARMM codebase from a heterogeneous CPU-GPU architecture to a GPU-only architecture. This design avoids communicating forces and coordinates between host and device memory at every step of the simulation. Because coordinates are available only in device memory, most features are being reimplemented for the GPU. Several features of CHARMM have been implemented or optimized to utilize the underlying processor threads efficiently. Another important aspect of the new implementation is its focus on modularity, which supports easy extension and adheres to current software development best practices.

Psi4

Our quantum mechanical development efforts in the Psi4 package have improved its QM/MM capabilities, providing an open-source solution for use with the CHARMM code. Through vastly improved integral screening and parallelization, we have removed the major performance bottlenecks previously present when electrostatically embedding QM regions inside large MM domains. We have also implemented analytic DFT Hessians, enabling many types of analysis. Ongoing developments include a multi-precision GPU algorithm for determining circular dichroism and constrained density-fitting approaches that ensure charge neutrality, which is crucial for periodic systems.

P21 periodic boundary condition in CHARMM

The eighth-shell method has previously been shown to be among the most efficient schemes for parallelizing molecular dynamics simulations over large numbers of nodes. However, it supports only the P1 periodic boundary condition (PBC) and cannot handle reflection or rotational symmetry.
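For orientation, the P21 space group relates a point to its periodic image through a two-fold screw operation. The sketch below is a minimal illustration of that operation, not CHARMM code; it assumes an orthorhombic cell with coordinates measured from the cell center, the screw axis along y, and the membrane normal along z (conventions the text does not specify):

```python
# Hedged sketch of the P21 image operation (x, y, z) -> (-x, y + Ly/2, -z),
# followed by wrapping into the box. The sign flip in z is what relates the
# two leaflets of a bilayer across the cell boundary.

def wrap(u, box_length):
    """Wrap a centered coordinate into [-L/2, L/2)."""
    return (u + box_length / 2.0) % box_length - box_length / 2.0

def p21_image(coord, box):
    """Map a point in the asymmetric unit to its P21 image."""
    x, y, z = coord
    lx, ly, lz = box
    return (wrap(-x, lx), wrap(y + ly / 2.0, ly), wrap(-z, lz))
```

Applying the operation twice amounts to a full translation by Ly, which wraps back to the starting point; the z inversion is what lets a lipid that leaves one leaflet re-enter the opposite leaflet in the neighboring image.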
In this work we developed the extended eighth-shell (EES) method, which simulates only the asymmetric unit and communicates coordinates and forces with the periodic images corresponding to the P21 PBC. The P21 boundary condition has applications in lipid bilayer simulations, as it allows lipids to move from one leaflet to the other, thus balancing the chemical potential difference between the two leaflets.

Development of the Action-CSA Method in CHARMM

Finding a reaction pathway that connects two well-defined end states is a challenging and important problem in molecular simulation. Even though molecular dynamics simulations on special-purpose hardware such as GPUs can now routinely reach microsecond time scales, brute-force pathway searches remain inefficient. Recently, our lab developed the Action-CSA search method, in which multiple pathways connecting two end states are found via global optimization of the Onsager-Machlup action by the conformational space annealing (CSA) algorithm. Although this method was successfully used to find pathways in a variety of systems, the initial implementation in CHARMM was unsustainable and not easily distributed to the wider MD community. The Action-CSA method is now being rewritten in the latest version of CHARMM with the aim of making it robust, simple to use, and applicable to a wider variety of problems. We aim to integrate this code into the next major release of CHARMM.

Development of CPPTRAJ Analysis Software

CPPTRAJ is a molecular dynamics (MD) trajectory analysis program that is widely used by the MD community. It can process data from a variety of MD software packages, including Amber, CHARMM, NAMD, and Gromacs. CPPTRAJ is under continual development to improve its utility for the MD community.
Some recent improvements to CPPTRAJ include 1) the ability to treat long-range Lennard-Jones interactions via the particle mesh Ewald method; 2) a robust method for calculating lipid order parameters from CHARMM or Amber simulations; 3) the ability to process more simulation data from CHARMM, including additional coordinate formats, replica exchange data, and energies; and 4) the ability to process constant-pH simulation data from Amber.

Implementation of improved CHARMM force field support into OpenMM

The OpenMM molecular simulation code is optimized for GPU-accelerated molecular dynamics simulations. It provides great flexibility by allowing user-defined potential energy functions and integrators, enabling rapid exploration of novel simulation approaches on large molecular systems. While CHARMM force fields and input files are in principle supported by OpenMM, some important features, such as free energy calculations, were missing. We augmented the parser for CHARMM PSF files to better match CHARMM's standard behavior and implemented CHARMM's van der Waals switching functions (vswitch and vfswitch), as well as support for alchemical free energy simulations using CHARMM input files, in openmmtools.

A computational framework for calculating position-dependent diffusion and free energy profiles through membranes

Calculation of membrane permeabilities via the inhomogeneous solubility-diffusion (ISD) model requires accurate calculation of the position-dependent free energy and diffusion profiles of permeants through membranes. We developed a software framework to calculate these profiles from biased and unbiased simulations by employing a maximum-likelihood approach to the ISD equation.
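For reference, the ISD model relates these two profiles to the membrane permeability coefficient P through the standard expression (with z the coordinate along the membrane normal):

```latex
\frac{1}{P} \;=\; \int_{z_1}^{z_2} \frac{e^{\beta F(z)}}{D(z)}\,\mathrm{d}z ,
\qquad \beta = \frac{1}{k_{\mathrm{B}} T},
```

where F(z) is the position-dependent free energy (potential of mean force) of the permeant, D(z) its position-dependent diffusion coefficient, and [z1, z2] spans the membrane.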
In recent years, this lab has developed a series of new computational methods, such as self-guided Langevin dynamics (SGLD) for efficient conformational searching and sampling, the isotropic periodic sum (IPS) method for accurate and efficient calculation of long-range interactions, and the map-based modeling tool EMAP for electron microscopy studies. Implementation of these new methods enables researchers to tackle difficult problems. These methods have now also been implemented in another widely used simulation package, AMBER, extending access to a broader user base. The SGLD, IPS, and EMAP methods are available in AMBER version 16.

LOBOS

In 2019, we continued to upgrade the compute capabilities of LoBoS, both by increasing the number of existing nodes available to all users and by adding new nodes. We increased the pool of nodes available to all users by moving from two separate queuing systems (PBS/SLURM) to a single SLURM queue. This has allowed us to increase the overall usage of all nodes and to provide more flexible scheduling to all users. In addition, we purchased 25 new GPU nodes, each containing two Nvidia Tesla V100 GPUs. The performance of GPU-capable software (e.g., Amber and OpenMM) on these nodes is approximately double that of our previous-generation Titan XP GPU nodes. To take advantage of this compute power, we are continuing to modify our toolchain to run well on GPUs; as part of this effort, we added a flexible Nosé-Hoover integrator to the OpenMM package, which permits the use of conventional and Drude-polarizable force fields on the V100 cards. We have increased our archive storage capacity by 750 TB, which will enable us to meet data retention regulations while keeping the data accessible to lab staff for use in derived analyses.
We have also added network security controls and modified our network architecture to comply with government security regulations while allowing our continued direct collaboration with outside groups. In support of code development, we configured one of our analysis nodes with settings that enable code profiling using the Intel Cluster suite, which we acquired this year. A CUDA-capable continuous integration server was set up in conjunction with our in-house GitLab server, allowing regular testing of our software development efforts, including new GPU development.