Stereoscopic vision, by detecting interocular correlations, enhances depth perception. Stereodeficiencies often emerge during the first months of life and, left untreated, can lead to severe loss of visual acuity in one eye and/or strabismus. Early treatment results in much better outcomes, yet diagnostic tests for infants are cumbersome and not widely available. We investigated whether reflexive eye movements, which in principle can be recorded even in infants (because they do not require active participation by the subject), can be used to identify stereodeficiencies. Reflexive ocular following eye movements induced by fast drifting noise stimuli were recorded in 10 adult human participants (5 with normal stereoacuity, 5 stereodeficient). To manipulate interocular correlation, the stimuli shown to the two eyes were either identical, different, or had opposite contrast. Monocular presentations were also interleaved. The participants passively fixated the screen. In the participants with normal stereoacuity, the responses to binocular identical stimuli were significantly larger than those induced by binocular opposite stimuli. In the stereodeficient participants the two responses were indistinguishable. Despite the small size of ocular following responses, 40 trials, corresponding to less than 2 minutes of testing, were sufficient to reliably differentiate normal from stereodeficient participants. Thus it seems that ocular following eye movements, because of their reliance on cortical neurons sensitive to interocular correlations, are affected by stereodeficiencies. Because these eye movements can be recorded noninvasively and with minimal participant cooperation, they can potentially be measured even in infants and might thus provide a useful screening tool for this currently underserved population. These and similar studies typically use stimuli with both bright and dark dots on a gray background, as these are thought to provide a stronger signal for stereo matching.
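The interocular-correlation manipulation described above can be illustrated with a toy computation. Treating each eye's image as a list of pixel contrasts on a gray (zero) background, identical stimuli correlate at +1, contrast-reversed stimuli at -1, and different (independent) stimuli near 0. The 1-D "images", dot values, and Pearson formula below are illustrative assumptions for this sketch, not the actual stimuli or analysis used in the study.

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
n_pixels = 10_000

# Left-eye image: bright (+1) and dark (-1) dots on a gray (0-mean) background.
left = [rng.choice([-1.0, 1.0]) for _ in range(n_pixels)]

identical = left[:]                                    # binocular identical
opposite = [-v for v in left]                          # contrast-reversed
different = [rng.choice([-1.0, 1.0]) for _ in range(n_pixels)]  # independent

print(pearson(left, identical))   # ~ +1
print(pearson(left, opposite))    # ~ -1
print(pearson(left, different))   # ~ 0
```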
It has been argued that this reflects separate contributions from distinct ON and OFF channels in early (monocular) visual processing. This is based on the finding that observers perform better on a noisy disparity discrimination task when the stimulus is a random-dot pattern consisting of equal numbers of black and white dots (a mixed-polarity stimulus, argued to activate both ON and OFF stereo channels) than when it consists of all-white or all-black dots (same-polarity, argued to activate only one). However, it is not clear how this theory can be reconciled with our current understanding of disparity encoding. Recently, a binocular convolutional neural network was able to replicate the mixed-polarity advantage shown by human observers, even though it was based on linear filters and contained no mechanisms that would respond separately to black or white dots. Here, we show that a subtle feature of the way the stimuli were constructed in all these experiments can explain the results. The interocular correlation between left and right images is actually lower for same-polarity stimuli than for mixed-polarity stimuli with the same amount of disparity noise applied to the dots. Because our current theories hold that stereopsis is based on a correlation-like computation in primary visual cortex, this lower correlation can explain why performance was better for the mixed-polarity stimuli. Thus there may be no real advantage to using stimuli with both bright and dark dots.
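The "correlation-like computation" invoked here can be sketched as a toy disparity detector: cross-correlate the left-eye image with shifted copies of the right-eye image and report the shift that maximizes the correlation. The 1-D images, circular shift, and search range below are illustrative assumptions for this sketch, not a model of V1 or of the experimental stimuli.

```python
import random

rng = random.Random(1)
n_pixels, true_disparity = 2000, 5

# Left-eye image: random bright/dark dots (+1/-1) on a 1-D "retina".
left = [rng.choice([-1.0, 1.0]) for _ in range(n_pixels)]
# Right-eye image: the same pattern shifted (circularly) by the true disparity.
right = [left[(i - true_disparity) % n_pixels] for i in range(n_pixels)]

def correlation_at(shift):
    # Cross-correlation between the eyes at a candidate disparity; the
    # pixel values are zero-mean by construction, so no mean subtraction.
    return sum(left[(i - shift) % n_pixels] * right[i]
               for i in range(n_pixels)) / n_pixels

# A correlation-based detector recovers the disparity as the best shift.
best = max(range(-20, 21), key=correlation_at)
print(best)  # 5, the true disparity
```

At the true disparity every pixel matches and the correlation peaks at 1; at other shifts the random pattern is essentially uncorrelated, which is why a simple argmax over shifts suffices.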