Review for "Voluntary control of illusory contour formation"

Completed on 15 Dec 2017 by Thomas Wallis. Sourced from https://www.biorxiv.org/content/early/2017/11/15/219279.



Comments to author

Author responses appear as the numbered replies below.

This paper presents an investigation of the relationship between voluntary attention and illusory contour perception using a novel stimulus and the classification image technique. The paper is interesting and well-written; I have some minor comments.

- I’m not convinced by line 176. The effect itself is pretty weak / small, and to show that a weak effect disappears is maybe not surprising. Do the BFs at least show evidence supporting no difference?

- I think the red line showing the illusory edge row is confusing - I initially mistook it for the area of pixels you were in fact testing, which of course didn’t make sense. I think it would be better to have something below the edge, spanning the illusory portion.

- Line 197: The authors hypothesise that the illusory star form constrains voluntary interpolation of the illusory triangle edge. They could presumably test this by measuring classification images after rotating the non-target pacmen by 90 degrees (breaking the star but largely preserving local contrast). I think this condition would actually be useful as a baseline for the plots in Figure 2b: how strong could we expect the middle of the contour to be in the absence of the illusory star? For example, the authors could state something like “the presence of the competing illusory form reduces the strength of the illusory contour by 3-fold”. Is there any other data that could speak to this - perhaps Jason Gold’s work?

- What exactly does the SVM fitting add? The pictures are nice for explicitly showing the result of two or three potential hypotheses for how to do the task, but these are not then directly tested against the data. Rather, it's left to the readers' impression of the classification images and their correspondence to the three models (which is admittedly much more than most classification image studies do). While I do think it's nice to have those hypothesis images generated from understandable models, I also wonder what value that adds beyond simply sketching those hypotheses by hand. Can the authors think of a way to test those hypotheses against the data more formally?



Many thanks for the comments, Tom. We have updated the manuscript (which should appear on bioRxiv soon after this post).

1) We have added Bayes Factors that test whether the rows of pixels below and above the implied triangle edge differ from zero. For both observers, we found evidence that pixels in the row below were not different from zero, while for the row above the evidence was equivocal. See line 200 of the updated manuscript.
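As an illustration of this kind of test, the sketch below computes a Bayes factor in favour of the null (mean pixel value = 0) for a row of classification-image values using the BIC approximation (Wagenmakers, 2007). This is only one common way to obtain such a BF; the manuscript does not say which method the authors actually used, so the function below is an assumption for illustration.

```python
import math

def bf01_mean_zero(values):
    """Approximate Bayes factor in favour of the null (mean = 0),
    via the BIC difference between a zero-mean model and a free-mean
    model fit to a row of classification-image pixel values.
    BF01 > 1 supports "no difference from zero"; BF01 < 1 supports
    a nonzero mean.
    """
    n = len(values)
    mean = sum(values) / n
    sse_null = sum(v * v for v in values)           # residuals under mean = 0
    sse_alt = sum((v - mean) ** 2 for v in values)  # residuals under free mean
    # BF01 = exp((BIC_alt - BIC_null) / 2); the extra free parameter
    # (the mean) incurs a ln(n) complexity penalty in the BIC.
    return (sse_alt / sse_null) ** (n / 2) * math.sqrt(n)

# Pixels hovering near zero -> BF01 > 1 (evidence for the null)
print(bf01_mean_zero([0.1, -0.2, 0.05, -0.1, 0.15, -0.05, 0.0, 0.1]))
```

With values far from zero (e.g. all near 1.0), the same function returns a BF01 far below 1, i.e. evidence against the null.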

2) Thanks for the feedback. We've tried to do as you suggested, though we had to move the red line indicator farther from the illusory contour than we'd like so that it doesn't conflict with the inducer outline.

3) We think our suggestion that the competing illusory triangle constrains the strength of the illusory contour is warranted for two reasons: (a) qualitatively, there is very close correspondence between the "star" SVM prediction and our inversion result (Fig. 3); and (b) the change in illusory contour strength is precisely aligned to the implied geometry of the competing triangle, down to a pixel. Although the suggested control is a good idea, we do not think it is necessary (see also our response to the final point below). Moreover, we no longer have access to our naive participant. A comparison to Gold's work is difficult because the illusory figure in that previous work was much smaller than in our study (about half the size).

4) Thank you for these suggestions. We now explicitly compare the SVM model predictions with the human data. In brief, we use a least squares method to find which model prediction most closely matches the mean classification image (e.g. see Fig. 2c in the updated manuscript). Consistent with our conclusions from the original analysis of pixel values, we found that the model with the least error is the one in which the SVM is trained to perceive an entire triangle. We think this is a simple yet powerful analysis that greatly strengthens the conclusions of our paper.
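The least-squares comparison described in this response can be sketched roughly as follows: normalise the data and each model prediction, then pick the prediction with the smallest sum of squared differences from the mean classification image. The function and array names here are assumptions for illustration, not taken from the authors' code.

```python
import numpy as np

def best_matching_model(classification_image, model_predictions):
    """Return the name of the model prediction with the least squared
    error relative to the mean classification image.

    model_predictions: dict mapping model name -> 2-D prediction image.
    Each image is z-scored first so the comparison reflects spatial
    pattern rather than overall amplitude.
    """
    def zscore(img):
        return (img - img.mean()) / img.std()

    data = zscore(classification_image)
    errors = {name: float(np.sum((data - zscore(pred)) ** 2))
              for name, pred in model_predictions.items()}
    return min(errors, key=errors.get)
```

For example, a classification image dominated by a vertical gradient would be assigned to a vertical-gradient model prediction over a horizontal one, since z-scoring removes amplitude differences before the squared error is taken.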