The overarching goal of this project is to address both the methodological and theoretical challenges in real-time language processing in order to understand how listeners coordinate linguistic and non-linguistic information as they construct interpretations. Research from this project and from other labs has firmly established that, in addition to form-based probabilistic constraints, listeners make rapid use of information provided by visual context, by the action-relevant affordances of objects, and by task goals. It is less clear whether speakers and addressees take into account their interlocutor's knowledge and commitments during the earliest moments of language processing. Modeling the knowledge states of other individuals is presumably complex and resource-intensive, and inferring and tracking the knowledge states and intentions of interlocutors is often conceived of as a task separate from core real-time language processing.

In contrast, we propose that targeted, probabilistic tracking of some aspects of interlocutors' knowledge states and intentions may be an essential part of language processing itself: interlocutors, through the linguistic choices they make, may be implicitly telling one another about elements of their own knowledge states and intentions that are relevant to carrying on the current linguistic interaction. These choices of form by the speaker, and their interpretation by the addressee, involve integrating the content of the sentence with considerations of who knows (and does not know) what, a matter that relates more directly to the conduct of the discourse than to its descriptive content. We will examine the processing of linguistic devices whose function is to shape the presentation of descriptive content rather than to form a description, including sentence type, prosody, and forms of acknowledgment.

The proposed research builds on our work with definite reference and on our recent success in using eye movements to study real-time language processing in collaborative tasks. We address two central questions: (1) how and when interlocutors take into account each other's likely knowledge, both shared and privileged; and (2) whether, and if so when, interlocutors signal and monitor each other's intentions in conversation.

The proposed work should make important empirical and theoretical contributions, helping to resolve existing controversies and breaking new ground. We will further develop the relatively novel methodology of eye tracking in targeted language games, and we will explore optimal cue-integration models and new methods for the statistical modeling of eye-tracking data. Our results and methods should continue to (a) advance our understanding of normal language comprehension and (b) inform investigations of language processing in children and in special populations, including those with impairments that arise from brain injury; they are also beginning to influence the development and evaluation of dialog systems with health-related applications.
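To make the notion of optimal cue integration concrete, consider the standard reliability-weighted formulation (an illustrative sketch only; the symbols below are generic placeholders, not a commitment to a particular model developed in this project). Given independent cue estimates, the optimal combined estimate weights each cue in proportion to its reliability:

\[
\hat{s} \;=\; \sum_i w_i \,\hat{s}_i, \qquad w_i \;=\; \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\]

where \(\hat{s}_i\) is the estimate supplied by cue \(i\) (for example, a form-based probabilistic constraint, visual context, or an inference about the interlocutor's knowledge state) and \(\sigma_i^2\) is that cue's variance. Under Gaussian assumptions, this weighted average is the minimum-variance, and in that sense optimal, combined estimate, which is why more reliable cues should dominate interpretation when cues conflict.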