Humans routinely encounter reverberant sounds, so reverberation processing must be a fundamental aspect of human audition if acoustic information is to be interpreted accurately. However, little is known about how the human auditory system detects reverberation and subsequently processes reverberant signals. Moreover, state-of-the-art computational dereverberation algorithms perform poorly compared with normal-hearing human listeners, and hearing-impaired listeners, with and without cochlear implants, often show reduced comprehension in reverberant environments. Human perception of reverberation will be explored via two experimental methods: (1) systematic measurement of reverberation properties in real-world spaces, characterizing the distribution of reverberation filter properties that humans regularly encounter; and (2) perceptual experiments with synthetic sources or synthetic reverberation filters, in which volunteer listeners judge synthetic sounds as reverberant or dry and attempt to match sources to filters. It is hypothesized that the auditory system must make assumptions about the source and/or filter in order to perceptually separate the two; if a synthetic source or filter violates these assumptions, listeners' ability to correctly identify them from reverberant recordings should degrade. The broader objective of this work is to gain a basic understanding of how quantitative changes in an acoustic signal affect both the perceived sound and the sense of space in human listeners. Such work will inform the design of future audition experiments, cochlear implants, public-address systems for large reverberant spaces, and automated voice-recognition systems for human-computer interfaces.
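
The synthetic stimuli described in method (2) can be sketched with a standard textbook model (not the proposal's actual stimulus-generation code): a reverberant recording is the convolution of a dry source with a room impulse response, and a common synthetic impulse response is Gaussian noise with an exponential decay parameterized by a reverberation time RT60. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def synthetic_rir(rt60, fs=16000, length_s=1.0, seed=0):
    """Hypothetical synthetic room impulse response: Gaussian noise
    whose amplitude decays so that energy drops 60 dB at t = rt60."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * length_s)) / fs
    # 60 dB energy decay = 10^(-3) amplitude decay at t = rt60.
    decay = 10.0 ** (-3.0 * t / rt60)
    return rng.standard_normal(t.size) * decay

def reverberate(dry, rir):
    """Reverberant signal = dry source convolved with the room filter."""
    return np.convolve(dry, rir)

fs = 16000
dry = np.random.default_rng(1).standard_normal(fs // 4)  # stand-in dry source
rir = synthetic_rir(rt60=0.5, fs=fs)
wet = reverberate(dry, rir)
```

Varying `rt60` (or replacing the exponential envelope with one that violates natural decay statistics) would give the kind of synthetic filters whose perceptual consequences the proposed experiments measure.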