We used an analog reporting method, in which temporal information is translated into a visuospatial representation, to investigate the perception of temporal patterns. On the assumption that participants’ visuospatial reports accurately reflect their mental representation of the pattern, the method provides a direct, quantitative measure of that representation.
Participants heard sequences of three brief tone pips, spanning a total of 1 or 1.2 sec from first to last tone (blocked), with the middle tone uniformly distributed within the interval. After each sequence, they placed a vertical line within a horizontal bar symbolizing the sequence, at the position representing the time at which the middle tone had occurred. We found that middle tones within ±10% of the midpoint (i.e., near 0.5 sec in the 1 sec sequences, or 0.6 sec in the 1.2 sec sequences) were reported as if they had occurred at the midpoint itself (i.e., assimilation). A subdivision of the total interval into two equal parts thus seemed to correspond to a perceptual category: Response variability was maximal at the boundaries of the assimilation zone, and a contrast effect was observed immediately beyond that zone (i.e., participants exaggerated the tone’s deviation from the temporal midpoint).
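The assimilation and contrast pattern described above can be summarized as a piecewise mapping from the middle tone's true time to its reported position. The sketch below is purely illustrative: the ±10% zone width is taken from the text, but the contrast gain and the exact functional form are assumptions, not fitted values from the study.

```python
# Illustrative sketch of the assimilation/contrast response pattern.
# Zone width (±10% of the total interval) comes from the text;
# the contrast gain is a hypothetical parameter.

def reported_position(t, total=1.0, zone=0.10, gain=1.5):
    """Map the middle tone's true time t (sec) to a hypothetical
    reported position, as a fraction [0, 1] of the total interval."""
    mid = total / 2.0
    dev = (t - mid) / total          # signed deviation, as a fraction
    if abs(dev) <= zone:
        return 0.5                   # assimilation: reported at the midpoint
    # contrast: deviation beyond the zone is exaggerated
    sign = 1.0 if dev > 0 else -1.0
    return 0.5 + sign * (zone + gain * (abs(dev) - zone))
```

For example, under these assumed parameters a tone at 0.55 sec in a 1 sec sequence would be reported at the midpoint, while a tone at 0.7 sec would be reported later than it actually occurred.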
If the method indeed captures the mental representation of the patterns, it should be possible to validate our findings in experiments not involving visuospatial responses. We therefore performed a second experiment using a classical 2AFC discrimination task. Participants compared two auditory sequences, each consisting of three tone pips, and decided whether the middle tone in the second sequence was played earlier or later than in the first. As predicted by the visuospatial data of the first experiment, local maxima in discrimination performance were observed near the boundaries of the assimilation zone in this purely auditory task, specifically when one sequence fell within the assimilation zone and the other fell outside it.
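The prediction tested here can be stated in categorical terms: a pair of sequences should be easy to discriminate when its two middle tones fall into different perceptual categories, and hard when both fall inside the assimilation zone. The sketch below is a hypothetical formalization of that logic; the ±10% zone width is from the text, while the category labels are illustrative assumptions.

```python
# Hypothetical sketch of why 2AFC performance should peak near the
# boundaries of the assimilation zone: two middle tones inside the
# zone map onto the same category (the midpoint) and are hard to
# tell apart, whereas a pair straddling a boundary differs
# categorically.

def category(t, total=1.0, zone=0.10):
    """Label the middle tone's time t relative to the assimilation zone."""
    dev = (t - total / 2.0) / total
    if abs(dev) <= zone:
        return "midpoint"
    return "early" if dev < 0 else "late"

def predict_easy(t1, t2, total=1.0, zone=0.10):
    """Predict an easy 2AFC trial when the two middle tones fall
    into different perceptual categories."""
    return category(t1, total, zone) != category(t2, total, zone)
```

Under this assumed scheme, a pair with middle tones at 0.58 and 0.65 sec (straddling the late boundary) is predicted to be easier than a pair at 0.46 and 0.54 sec (both inside the zone), matching the boundary-adjacent performance maxima reported above.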
These results conform to rhythmic categorical perception and generalize previous findings (Clarke, 1987; Desain & Honing, 2003; see also Schulze, 1989) in two respects: to a task that does not induce rhythmic categorization (or quantization) through musical notation or an explicit instruction to disregard expressive timing, and to participants who are not highly musically trained (ours were nonmusicians or, at best, amateur musicians).