Open preprint reviews by Krzysztof Jacek Gorgolewski

The GridCAT: A toolbox for automated analysis of human grid cell codes in fMRI

Matthias Stangl , Jonathan Shine and Thomas Wolbers

Review posted on 7 February 2017

GridCAT is a much-appreciated attempt to provide computational tools for modeling grid-like patterns in fMRI data. I am by no means an expert in grid cells, but I can offer advice and recommendations with regard to brain imaging software:


- Please mention the license the software is distributed under.

- Please mention the license the data is distributed under. To maximize the impact of this example dataset (fostering future comparisons and benchmarks), I would recommend distributing it under a public domain license (CC0 or PDDL) and depositing it on openfmri.org.

- I was, unfortunately, unable to run your software because I do not possess a valid MATLAB license. This costly dependency will most likely be the biggest limitation of your tool. There are two ways to deal with this problem: make the toolbox compatible with Octave (a free MATLAB alternative) or provide a standalone executable built with the MATLAB Runtime (see https://www.mathworks.com/products/compiler/mcr.html).

- I would encourage the authors to add support for input event text files formatted according to the Brain Imaging Data Structure standard (see http://bids.neuroimaging.io/bids_spec1.0.0.pdf, section 8.5, and Gorgolewski et al. 2016).
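For reference, a BIDS events file (`_events.tsv`) is a tab-separated table with required `onset` and `duration` columns (in seconds) plus optional columns such as `trial_type`. A minimal illustration (the trial_type values here are hypothetical, not from GridCAT):

```
onset	duration	trial_type
10.0	2.5	translation_left
14.2	2.5	translation_right
```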

- Please describe in the paper how other developers can contribute to your toolbox. I recommend putting it on GitHub and using the excellent Pull Request functionality.

- Please describe in the paper how users can report errors and feature requests. I again would recommend using GitHub or neurostars.org.

- Does your toolbox include a programmatic API, i.e. a set of functions that would allow advanced users to script their analyses? If so, please describe it and provide an example.

- Please describe how you approached testing when writing the code. Are there any automated tests (unit, smoke, or integration tests)? Are you using a continuous integration service to monitor the integrity of your code?

- For the GLM1 modeling step: is it possible to provide nuisance regressors (for example, motion parameters)? If so, do you report information about the collinearity of the fitted model?
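To illustrate the kind of diagnostic I have in mind, here is a minimal sketch (in Python, with hypothetical regressors; not GridCAT code) that flags collinearity via the condition number of the standardized design matrix and pairwise regressor correlations:

```python
import numpy as np

def design_collinearity(X):
    """Simple collinearity diagnostics for a GLM design matrix X
    (columns = regressors, e.g. task regressors plus motion parameters)."""
    # Condition number of the column-standardized design matrix;
    # large values (>30 is a common rule of thumb) indicate collinearity.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    cond = np.linalg.cond(Xs)
    # Pairwise correlations between regressors: a cruder but easily
    # interpretable diagnostic.
    corr = np.corrcoef(Xs, rowvar=False)
    return cond, corr

# Hypothetical design: one task regressor and one correlated motion regressor
rng = np.random.default_rng(0)
task = np.sin(np.linspace(0, 6 * np.pi, 100))
motion = 0.9 * task + 0.1 * rng.standard_normal(100)
cond, corr = design_collinearity(np.column_stack([task, motion]))
print(cond, corr[0, 1])  # a correlation near 1 signals a problem
```

Reporting numbers like these alongside the fitted model would let users judge whether their contrast estimates are trustworthy.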

- For the ROI feature: it would be useful to show users the location of their ROI overlaid on the BOLD data. This would provide a sanity check against masks that are not properly coregistered.

- It would be beneficial to include in the paper some figures of the GUI from the manual, and perhaps to list the plethora of analysis options available at each step in a table.

- Please add error bars to figure 5.

Chris Gorgolewski



FAST Adaptive Smoothing and Thresholding for Improved Activation Detection in Low-Signal fMRI

Israel Almodóvar-Rivera and Ranjan Maitra

Review posted on 6 February 2017

The authors present an appealing methodological improvement on the Adaptive Segmentation (AS) method. The main improvement is alleviating the need to set input parameters (the bandwidth sequence); these parameters are instead estimated from the data in an optimal way.


Even though the paper has the potential to be a meaningful contribution to the field, it lacks a thorough comparison with the state of the art. The following steps should be considered to improve the situation:

- The selection of patterns used in the simulations seems to be motivated by the nature of fMRI data, which is good, but at the same time it does not highlight the specific issues that FAST is solving. Have a look at the simulations included in Polzehl et al. 2010, which show how smoothing across neighboring positive and negative activation areas can cancel the effects out. It would be beneficial to construct simulations that highlight the specific situations in which FAST overcomes the limitations of AS.

- Neuroimaging is leaning strongly towards permutation-based testing methods due to their reduced number of assumptions. I would recommend adding cluster-level and voxel-level permutation inference to your analysis. Please note that permutation-based testing is not the same as finding cluster cutoffs via simulations.
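To be concrete about what I mean by voxel-level permutation inference, here is a minimal sketch (Python, one-sample case via sign flipping, with family-wise error control through the max-statistic null distribution). This is an illustration under simplifying assumptions, not a reference implementation:

```python
import numpy as np

def voxelwise_permutation_test(data, n_perm=1000, seed=0):
    """One-sample permutation test via sign flipping.

    data: (n_subjects, n_voxels) array of contrast values.
    Returns voxelwise p-values with family-wise error control via the
    max-statistic null distribution (no Gaussian random field assumptions).
    """
    rng = np.random.default_rng(seed)
    observed = data.mean(axis=0)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly flip the sign of each subject's contrast map
        signs = rng.choice([-1.0, 1.0], size=data.shape[0])[:, None]
        max_null[i] = (signs * data).mean(axis=0).max()
    # Corrected p-value: fraction of permutations whose maximum statistic
    # is at least as large as the observed voxel value
    return (max_null[None, :] >= observed[:, None]).mean(axis=1)
```

Note the contrast with simulation-based cluster cutoffs: here the null distribution is derived by resampling the data themselves.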

- I would also recommend adding threshold-free cluster enhancement (TFCE; Smith and Nichols 2009) to the set of compared methods. It is also a multiscale method that has been used successfully in many studies, and it works best in combination with permutation tests.
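For concreteness, a minimal 1-D sketch of the TFCE transform, following the formula in Smith and Nichols 2009 (my own illustration with the standard default exponents, not the FSL implementation):

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Threshold-free cluster enhancement for a 1-D statistic map.

    At each height h, every supra-threshold cluster contributes
    extent**E * h**H * dh to each of its member voxels, integrating
    cluster evidence across all thresholds at once.
    """
    out = np.zeros_like(stat, dtype=float)
    for h in np.arange(dh, stat.max() + dh, dh):
        supra = stat >= h
        # Find starts/stops of contiguous supra-threshold runs
        edges = np.flatnonzero(
            np.diff(np.concatenate(([0], supra.astype(int), [0]))))
        for start, stop in zip(edges[::2], edges[1::2]):
            extent = stop - start
            out[start:stop] += extent ** E * h ** H * dh
    return out
```

Because the enhanced values have no simple parametric null distribution, they are typically assessed with the same max-statistic permutation scheme discussed above.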

- It would be good to assess the rate of false positive findings in your comparison. This could be done by fitting a random boxcar model to resting-state data and evaluating how many spurious activations are found (see Eklund et al. 2012).

- Speaking of false positive and false negative voxels: the evaluation of your method against the state of the art presented in Figure 4 seems very sensitive to the threshold (alpha level) chosen for each method. I would suspect that AS and CT would perform better if a different alpha level were chosen. To measure the ability to detect signal more accurately, I would recommend varying the alpha level to create a receiver operating characteristic (ROC) curve (based on false positive and false negative voxels rather than Jaccard overlap) and calculating the area under it.
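To sketch what I mean (Python; the names are mine, not from the paper): sweep the detection threshold over a statistic map, record false and true positive rates against the simulated ground truth, and integrate with the trapezoidal rule:

```python
import numpy as np

def roc_curve(stat_map, truth, thresholds):
    """ROC curve for a voxelwise detection problem.

    stat_map: 1-D array of voxelwise test statistics.
    truth: boolean array, True where activation was simulated.
    Returns false positive rates, true positive rates, and the area
    under the curve (AUC).
    """
    fpr, tpr = [], []
    for t in thresholds:
        detected = stat_map >= t
        tpr.append((detected & truth).sum() / truth.sum())
        fpr.append((detected & ~truth).sum() / (~truth).sum())
    fpr, tpr = np.asarray(fpr), np.asarray(tpr)
    order = np.lexsort((tpr, fpr))  # sort by fpr, break ties by tpr
    fpr, tpr = fpr[order], tpr[order]
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
    return fpr, tpr, auc
```

Unlike a comparison at a single alpha level, the AUC summarizes detection ability across all operating points, so it is insensitive to each method's particular threshold calibration.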

Minor:
- In the figures you use the acronym TP11 to denote the adaptive segmentation algorithm, but in the rest of the paper you use AS. It would be good to make this consistent.

Chris Gorgolewski
