This project aims to discover how languages are learned and understood at the levels of both word and sentence. The first problem we address is how learners discover what words such as dog and cat signify. The second problem is how these words combine semantically in sentences such as The dog bites the cat versus The cat bites the dog. Though the words in these two sentences are identical, in English their ordering (more precisely, the structure that binds them) determines their semantic roles (do-er or done-to) with respect to the biting act. Some languages rarely use word order to mark these roles, and those that do may use different orders, so learners of English must acquire these properties by analyzing the speech they hear from adults. Children rarely receive explicit instruction about word meanings or syntax, yet they learn even so. This learning process applies not only to children but also to older individuals, including adults, acquiring second languages.

Much of the work proposed herein uses a relatively new experimental technique, developed in our laboratories under earlier funding of this grant, in which children's eye gaze is tracked as they hear spoken descriptions of the surrounding visual world. Specifically, children hear instructions that require them to make an implicit choice about the intended structural organization of ambiguous utterances such as I saw the man with a telescope. By manipulating potentially informative cues to the intended structure (e.g., verb information, prosodic (tune) information, and situational/discourse cues), we can use children's eye gaze and other behaviors to reveal their sensitivity to and representation of these information sources.
In the upcoming funding period, we propose: (a) to expand and test our developmental account of how children learn to recover the grammatical properties of a sentence as it is heard, by examining eye gaze responses to ambiguous sentences at different ages; (b) to explore how multiple linguistic and non-linguistic cues to a speaker's intentions are used by the child to uncover word and sentence meaning; (c) to examine what children track regarding the meaning of verbs and other relational lexical items as they hear them; (d) to examine how sentence understanding procedures are learned and used in languages that differ markedly from English in the clues to meaning that they offer (specifically, Korean, Tagalog, and Kannada, and perhaps two others). The potential applications of these findings to education are significant, as vocabulary and sentence understanding skills are fundamental to successful functioning in the technological culture of the 21st century, and many children are in need of enrichment and remedial intervention. In addition, as the United States citizenry becomes progressively more multilingual, and is increasingly drawn into global interactions, the ability to acquire second, and even third and fourth, languages becomes an ever more precious social and economic commodity.

PUBLIC HEALTH RELEVANCE: This project is designed to further the understanding of how young children learn what the words in their language mean, and how these words are combined to make meaningful sentences. The ability to understand spoken and written language rapidly and nearly errorlessly is a basic requirement for economic and social well-being in 21st century American life. The findings are expected to be relevant to second language learning as well; multilingualism is an increasingly precious commodity for Americans as they interact more and more with speakers of different languages, both within the country and in cultures around the world.