Stuttering is a communication disorder characterized by disfluencies that may be frequent or disruptive to communication. Clinicians make extensive use of disfluency counts for diagnosis, for assessing treatment progress, and for documenting treatment outcomes. Speech-language pathologists (SLPs) typically perform disfluency counts in real time as a client is talking. However, this approach captures only the number of disfluencies, not the different disfluency types or the presence of disfluency clusters. Furthermore, the counts cannot be re-examined to improve accuracy. Alternatively, SLPs can use the verbatim transcript approach, in which they first transcribe exactly what was said and then mark up the transcript with disfluency codes, providing the important information that is absent from real-time measures. Although verbatim transcripts yield a wealth of data that can support and improve clinical decision-making, this approach is rarely used because it is very time-intensive. The goal of this project is to build computer tools that assist clinicians in performing detailed and more reliable disfluency counts, and to determine how clinicians can best use these counts in clinical practice. The tool will support both disfluency counts that are time-aligned to the audio file and, for read-speech samples, counts that are part of an annotated verbatim transcript, which will provide the linguistic contexts in which the disfluencies occur. Critically, the effort needed to create the transcript-based counts will be reduced enough to enable them to be used routinely in clinical practice. A key component is the use of Automatic Speech Recognition (ASR), which will allow (a) the number of fluent words to be computed automatically, so that the clinician can focus on counting just the disfluencies; (b) the disfluency annotations in a read-speech sample to be easily revised for better reliability; and (c) the rapid creation of the annotated verbatim transcript for read-speech samples.
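As a minimal sketch of the automation in point (a), the example below combines an ASR-derived fluent-word count with a clinician's disfluency count to produce summary measures. The function name, field names, and the percent-disfluent measure are illustrative assumptions for this sketch, not the proposed tool's actual interface.

```python
# Illustrative sketch (assumed names): the ASR supplies the fluent-word
# count automatically, so the clinician only counts disfluencies.

def disfluency_summary(fluent_word_count, disfluency_count):
    """Return total words spoken and the percentage of words that were disfluent.

    fluent_word_count: number of fluent words transcribed by the ASR (automated).
    disfluency_count: number of disfluencies marked by the clinician.
    """
    total_words = fluent_word_count + disfluency_count
    if total_words == 0:
        return {"total_words": 0, "percent_disfluent": 0.0}
    percent = 100.0 * disfluency_count / total_words
    return {"total_words": total_words, "percent_disfluent": round(percent, 1)}
```

For example, 190 ASR-counted fluent words plus 10 clinician-marked disfluencies would yield 200 total words and a 5.0% disfluency rate.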
To create an annotated verbatim transcript for a read-speech sample, the clinician first annotates the speaker's disfluencies with a computer tool. Second, an ASR system is run, producing a rough transcription of what was said and taking into account where in the audio file the clinician annotated each disfluency. Third, the clinician performs a correction pass, fixing any errors in the disfluency annotations or in the word transcription. The aims of this proposal are as follows: (1) improve the annotation tool; (2) improve the ASR quality; (3) improve the user interface of the correction tool; (4) determine how the information in the time-aligned annotations and annotated verbatim transcripts can be summarized so as to be useful in clinical practice; and (5) automate the counting of fluent words so that this does not need to be done by the SLPs.
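The merge step at the heart of this workflow, producing a draft annotated verbatim transcript from the clinician's time-stamped annotations and the ASR's word timings, could be sketched as follows. The data formats and the disfluency code shown are assumptions for illustration only.

```python
# Illustrative sketch (assumed data format): interleave clinician-marked
# disfluency codes with ASR word output by start time, yielding a draft
# annotated verbatim transcript for the clinician's correction pass.

def annotated_transcript(asr_words, annotations):
    """Time-align ASR words and disfluency codes into one transcript string.

    asr_words: list of (start_seconds, word) pairs from the ASR pass.
    annotations: list of (start_seconds, code) pairs marked by the clinician,
                 e.g. "REP" for a repetition (code set is illustrative).
    """
    events = list(asr_words)
    events += [(t, "[%s]" % code) for t, code in annotations]
    events.sort(key=lambda event: event[0])  # order everything by start time
    return " ".join(token for _, token in events)
```

For example, ASR words at 0.0 s, 0.5 s, and 1.4 s with a clinician annotation at 1.0 s would produce a transcript with the disfluency code placed between the second and third words.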