This project aims to develop a new imaging technology, robot-driven ultrasound (US) catheter imaging, which will automate navigation and guidance tasks in a range of interventions. The system will use robotic actuation to coordinate the motion of the imaging catheter, enabling two key functions. First, the robot will use electromagnetic (EM) trackers to automatically follow the motion of instruments during the procedure and point the US imaging catheter at the instrument tip, providing continuous real-time images of the interaction between instrument and tissue. These US images show the actual soft-tissue structure at the time of the procedure. Second, the catheter will be scanned across the treatment volume to create large-scale, high-resolution 3D+time panoramic mosaics for planning and navigation. Real-time images, together with the overall instrument location and pose, will be superimposed on the large-field panoramic view, allowing the clinician to easily visualize key spatial relationships during the procedure.

US imaging catheters have been used in cardiac ablation procedures for over a decade. Image quality can be very good because the short distance between the probe and the tissue target allows the use of high frequencies without aberrations from intervening layers of muscle, fat, and other tissues. At present, however, all catheter pointing is manually controlled, which is extremely challenging because the relationship between the hand controls and image motion is complex and varies across the workspace; this has limited catheter use to a few critical tasks.

The proposed project builds on our prior work in developing the robot platform. This hardware and control system interfaces with the controls of existing commercial catheters to provide accurate image acquisition. The system can acquire 3D US mosaics of extended workspaces and follow trajectories with <2 mm accuracy.
Exploiting this accomplishment now requires developing the image acquisition, image processing, and user control aspects of a clinically useful system. The project comprises three aims: Specific Aim I will develop methods for processing acquired images for registration, interpolation, and rendering in real time. Specific Aim II will develop and validate algorithms for driving the US image to follow instruments. Specific Aim III will create a tailored user interface for robot control and image display. The net benefit will be better situational awareness, leading to faster workflow, reduced procedure time, and fewer complications.
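To make the instrument-following function concrete, the core geometric step can be sketched as computing the pan/tilt angles that align the catheter's imaging axis with an EM-tracked instrument tip. This is a minimal illustrative sketch, not the proposal's actual control algorithm: the function name, the spherical-coordinate parameterization, and the assumption that the catheter base frame is aligned with the EM tracker frame are all simplifications introduced here (a real system must calibrate that transform and map angles through the catheter's hand-control kinematics).

```python
import numpy as np

def pointing_angles(catheter_pos, tip_pos):
    """Pan/tilt angles (radians) that aim an imaging axis at a tracked tip.

    Both positions are 3-vectors in the EM tracker frame. The imaging
    axis is assumed to point along +z at zero pan/tilt, and the catheter
    base frame is assumed aligned with the tracker frame (illustrative
    simplification only).
    """
    d = np.asarray(tip_pos, float) - np.asarray(catheter_pos, float)
    pan = np.arctan2(d[1], d[0])                    # rotation about z
    tilt = np.arctan2(np.hypot(d[0], d[1]), d[2])   # angle from +z axis
    return pan, tilt
```

In a servoing loop, these angles would be recomputed at each EM tracker update and fed to the robot's joint controllers, keeping the instrument tip centered in the US image as it moves.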