Understanding how vocal and social interactions accumulate to affect the outcome of normal or abnormal speech development is important but difficult to investigate. The proposed research will use songbirds as a model system to investigate how vocal coordination, social interactions, and vocal tutoring affect the outcome of vocal learning. Research will focus on the sensory phase of vocal learning and on the development of vocal combinatorial learning: the ability to rearrange and flexibly combine vocal sounds. The first set of experiments will present zebra finches with vocal combinatorial tasks to test whether they can rearrange different song units: syllables, sub-syllabic segments, or multi-syllable song chunks. Results will determine whether the units that birds can rearrange are rigid or plastic, flat or hierarchical. Follow-up experiments will determine what role perception, social context, and reinforcement might play in the development of combinatorial capacity. The second set of experiments will examine how social context affects tutor song choice, using a technique for controlling the contingencies that accompany the songs a bird hears during early vocal learning. The last set of experiments will focus on calling interactions, which, according to preliminary results, involve a remarkable level of vocal plasticity. Using a vocal robotic interface that autonomously coordinates calls with the bird, experiments will examine vocal coordination capacities by challenging birds to adjust the rhythms and acoustic features of their calls. Follow-up studies will examine how social/vocal coordination may lead to song imitation. Measurements of heart rate, movement trajectories, and brain dopamine levels during vocal interactions will provide estimates of birds' engagement and determine what kinds of vocal coordination dynamics can generate engagement leading to song imitation.
Experiments will also examine vocal coordination in female zebra finches, which do not sing but do engage in calling interactions. Finally, Sound Analysis Pro, a software package for the analysis of vocal learning supported by this project since 2001, will be extended to multiple platforms to facilitate high-throughput analysis of vocal behavior, including human speech disorder and aphasia research.