My goal, as an established computational neuroscientist, is to augment my research program by acquiring new research skills and knowledge that will enable me to formulate and answer questions relevant to auditory perception, specifically auditory streaming and its neurobiological mechanisms. My career since receiving my PhD (1973) has involved computational modeling, simulation, and the application of nonlinear dynamical systems concepts, primarily at the cell and circuit level, to identify basic mechanisms of cell and circuit dynamics: integration and input/output properties such as rhythmicity and bistability. Since the late 1990s I have dedicated most of my funded research efforts to auditory temporal processing, working steadily on computational modeling and directing (but not performing) in vitro patch-clamp experiments, primarily on the biophysical mechanisms of neurons in the auditory brain stem; these neurons show an exquisite capacity for sub-millisecond coincidence detection in the neuronal computation of sound localization. Recently, through a collaborative research experience, I have worked with models of perceptual bistability for ambiguous visual scenes. This work has been satisfying and successful, and it has motivated me to acquire the knowledge and skills to investigate perceptual scene analysis in the auditory modality. Furthermore, while the stimuli used to study visual perceptual bistability are typically stationary, stimuli in the auditory modality are naturally dynamic: sound is changing, and therefore the structural features or cues of an "auditory object" are evolving. How we identify and track these features in time leads to compelling questions about dynamical mechanisms. I am motivated to apply dynamical systems concepts and mechanistic modeling to the streaming problem, and I am enthusiastic about investigations at the perceptual/behavioral level, a quite different level from my past research.
In the long term, I plan to seek funding as a PI to recruit, train, and work with students and postdocs in auditory streaming and perception, on both modeling and psychophysical experiments. I expect to have access to an existing sound-proof acoustic chamber here at NYU for behavioral experiments. Specifically, guided by the mentorship of highly regarded auditory systems neuroscientists, I plan to: (1) acquire foundational knowledge, literature familiarity, and working communication skills; and (2) acquire the experimental know-how to design, carry out, and supervise psychophysical experiments and computational modeling in the area of auditory streaming. My mentors will be Dr. Elyse Sussman of Albert Einstein College of Medicine and Dr. Shihab Shamma of the University of Maryland, College Park. From Sussman and her staff I will learn, primarily, about the design, implementation, and data analysis of human psychophysics experiments for streaming, so that I can perform these experiments independently, including the experimental component of my research project. From Shamma, I will learn, in addition, about neurophysiological experiments in ferret auditory cortex and about modeling and data analysis for his ongoing streaming research. Shamma's expertise will be valuable for the modeling component of my research project: developing mechanistic models for bistability in streaming. The process of learning the background and empirical bases with both mentors will involve reading foundational literature, regular discussions with my mentors, participating in lab meetings, and sitting in on experiments. I also plan to take a graduate-level course in Psychoacoustics (Experimental Audiology) at the University of Maryland, taught annually by Monita Chatterjee. My research strategy involves two linked components, computational modeling and experimental psychophysics, addressing the dynamics of auditory streaming for a potentially ambiguous stimulus.
The stimulus consists of two interleaved tone sequences of different frequencies (frequency difference, DF) presented at a given rate (presentation rate, PR): say, A_B_A__A_B_A__A_B_A__A_B_A__.... Depending on the values of DF and PR, the subject may perceive the A-sequence and B-sequence as distinct (segregated, two streams) or as a unit (integrated, one stream); two streams are heard if DF or PR is large. For long presentations, the percepts may alternate in time, with durations of a few seconds (Pressnitzer & Hupé, 2006). My primary hypothesis is that bistability underlies the random perceptual alternations between integration and segregation: the two percepts correspond to two near-steady states of neuronal activity, and alternations correspond to random switching between these states. Furthermore, the two states may be demonstrated directly by a slow up-then-down stimulus ramp, which we will test with models and experiments. PUBLIC HEALTH RELEVANCE: People who have hearing loss or wear cochlear implants have difficulty listening to or selecting one voice in noisy environments. The results of our work on auditory perception will help elucidate the neural mechanisms that make it possible to segregate sounds and hear distinct sound streams. These insights could aid the development of prosthetics that ultimately could improve this ability in hearing-impaired users.
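For concreteness, the A_B_A__ triplet stimulus described above can be synthesized in a few lines. This is a minimal illustrative sketch, not the stimulus-delivery code of the proposed experiments: the function name and all default parameter values (A-tone frequency, DF in semitones, tone and gap durations, sampling rate) are hypothetical placeholders; DF and PR would be the experimentally varied quantities.

```python
import numpy as np

def aba_triplet_sequence(f_a=500.0, df_semitones=5.0, tone_dur=0.05,
                         gap_dur=0.05, n_triplets=4, fs=44100):
    """Synthesize an A_B_A__ triplet sequence (illustrative parameters).

    f_a          : frequency of the A tones, Hz (hypothetical default)
    df_semitones : frequency difference DF between A and B, in semitones
    tone_dur     : duration of each tone, s; with gap_dur this sets PR
    n_triplets   : number of A_B_A__ triplets to concatenate
    fs           : audio sampling rate, Hz
    """
    # B tone lies DF semitones above the A tone
    f_b = f_a * 2.0 ** (df_semitones / 12.0)
    t = np.arange(int(tone_dur * fs)) / fs
    tone = lambda f: np.sin(2.0 * np.pi * f * t)
    gap = np.zeros(int(gap_dur * fs))
    # One triplet: A _ B _ A _ _  (trailing double gap before the next triplet)
    triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap,
                              tone(f_a), gap, gap])
    return np.tile(triplet, n_triplets)

sig = aba_triplet_sequence()
```

Increasing `df_semitones` (larger DF) or shortening `tone_dur` and `gap_dur` (faster PR) biases the percept toward segregation, per the relationship stated above.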