Diffusion processing
Although one of the great advantages of diffusion MRI is that it allows reconstruction of white matter architecture in-vivo, the technique is increasingly also applied to post-mortem data. This has the advantage that data can be acquired from species that cannot easily and/or ethically be scanned in-vivo, increasing the range of species that can be studied. However, this increased opportunity comes at some cost: post-mortem scanning is often associated with vastly reduced SNR and, depending on the sequence, can show contrast inversions that make most standard preprocessing tools crash.
Here, we briefly highlight some of the issues one might encounter.
- Sequence choice. For post-mortem imaging in particular, spin-echo sequences have been shown to work well, but steady-state diffusion-weighted imaging offers some potential benefits (e.g., McNab and Miller, 2010, NeuroImage). The disadvantage of the latter is that standard pipelines, such as FSL's BedpostX, need to be adapted (Tendler et al., 2020, Magn Reson Med);
- Distortion correction. Particularly in in-vivo data, diffusion-weighted images tend to be distorted due to susceptibility-induced distortions and eddy currents in the gradient coils. Excellent tools are available, such as FSL's TOPUP and EDDY. As these tools expect a particular setup, they often need adjusting for NHP imaging (see the topup/eddy sketch after this list);
- Estimation of diffusion parameters and tractography. The choice of analysis method depends very much on the question at hand. The author has generally favoured probabilistic tractography as implemented in FSL's BedpostX (Behrens et al., 2007, NeuroImage), which allows one to quantify the uncertainty in a tractography solution. However, quantifying uncertainty has proven to be a double-edged sword: less informed reviewers can mistake it for evidence of greater uncertainty than in a method that simply thresholds data and ignores uncertainty altogether. Parameters such as the step size and curvature threshold often have to be adjusted to the size of the brain and the scanning resolution (see the probtrackx2 sketch after this list), although the effects are less dramatic than might be expected;
- Tract identification. Working with non-human species means that some of the white matter tracts might course differently and terminate in different places than one would expect based on the human brain. Standardized tractography recipes for the human and macaque are now available as part of FSL's XTRACT (Warrington et al., in press, NeuroImage) (fsl.fmrib.ox.ac.uk/fsl/fslwiki/XTRACT), and authors are encouraged to upload their own recipes to the XTRACT GitLab, so that we can work towards standardization in the field (see the XTRACT sketch after this list);
- Surface-based tractography. There are well-known issues with tractography along the white matter towards the grey matter surface (e.g., Reveley et al., 2015, PNAS). These issues are not unique to NHP research, but they are worth highlighting because one of the motivations for NHP tractography is often a direct comparison with tracer data, which is based on grey matter to grey matter connectivity. It is important to keep in mind that tractography is not in-vivo tract tracing; it is a different method with its own strengths and weaknesses. One approach we have found useful in making surface representations of tract terminations is not to track towards the grey matter surface, but away from it, which is less susceptible to the problems of fibers fanning out in a gyrus. One would then (1) reconstruct a tract's course in the white matter in volume space, (2) run tractography from the grey matter toward the white matter, and (3) multiply the two matrices to obtain a surface × tract matrix (see the toy sketch after this list). Such an approach is implemented in Mr Cat's multiply_fdt (Mars et al., 2018, eLife; Eichert et al., 2019, Cortex);
- Tract-based spatial statistics (TBSS). TBSS (Smith et al., 2006, NeuroImage) is a method for voxel-wise identification of relationships between local microstructure and some experimental variable. It is implemented as a highly standardized pipeline in FSL, which assumes a standard human brain and use of the MNI152 template. It thus requires some adaptation for NHP data, mostly relying on good registration to alternative templates, similar to what was discussed above for other modalities (see the TBSS sketch after this list). Statistical significance testing also requires attention, as most thresholds common in the literature again assume human brain size and acquisition resolution. Various authors have therefore tried to assess the reliability of their effects by replication across hemispheres, but this is not an ideal approach.
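The sketches below illustrate the points above; they are minimal, hedged examples rather than recommended pipelines. First, a typical topup/eddy call, wrapped in Python for convenience. All file names are placeholders, and b02b0_macaque.cnf is an assumed custom config file: the point is that the human-tuned defaults often need replacing for smaller brains.

```python
# Minimal sketch: susceptibility and eddy-current correction with FSL's
# topup and eddy. File names are placeholders; FSL must be on the PATH
# (the eddy binary may be called eddy_openmp or eddy_cuda in your build).
import subprocess

def run(cmd):
    """Run a command, raising if it fails."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Estimate the susceptibility field from blip-up/blip-down b=0 pairs.
run([
    "topup",
    "--imain=b0_blip_updown.nii.gz",  # merged opposite-phase-encode b0s
    "--datain=acqparams.txt",         # phase-encode directions + readout times
    "--config=b02b0_macaque.cnf",     # assumed custom config; default b02b0.cnf is human-tuned
    "--out=topup_results",
    "--iout=hifi_b0.nii.gz",
])

# 2. Correct eddy currents and motion, feeding in the topup field.
run([
    "eddy",
    "--imain=dwi.nii.gz",
    "--mask=brain_mask.nii.gz",
    "--acqp=acqparams.txt",
    "--index=index.txt",              # one row per volume, pointing into acqparams.txt
    "--bvecs=bvecs", "--bvals=bvals",
    "--topup=topup_results",
    "--out=dwi_corrected",
])
```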
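Next, a sketch of a probtrackx2 call with the step length and curvature threshold exposed, since these are the parameters most likely to need scaling to brain size and resolution. The file names and the specific values are illustrative assumptions, not recommendations.

```python
# Minimal sketch: probabilistic tractography with FSL's probtrackx2 after
# bedpostx, with step length scaled down for a smaller brain.
import subprocess

subprocess.run([
    "probtrackx2",
    "-s", "bedpostx.bedpostX/merged",            # bedpostx output basename
    "-m", "bedpostx.bedpostX/nodif_brain_mask",  # brain mask in diffusion space
    "-x", "seed_mask.nii.gz",                    # placeholder seed mask
    "--steplength=0.25",  # default is 0.5 mm; assumed halved for a macaque brain
    "-c", "0.2",          # curvature threshold (FSL default)
    "-P", "5000",         # samples per seed voxel
    "--dir=probtrackx_out",
    "--forcedir",
], check=True)
```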
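For tract identification, a sketch of running XTRACT with its bundled macaque protocols. The warp-field file names are placeholders for the standard-to-diffusion warps produced by your registration.

```python
# Minimal sketch: FSL's XTRACT with the bundled macaque tract recipes.
import subprocess

subprocess.run([
    "xtract",
    "-bpx", "bedpostx.bedpostX",   # bedpostx output directory
    "-out", "xtract_macaque",
    "-species", "MACAQUE",         # use the macaque protocols instead of human
    "-stdwarp", "std2diff_warp.nii.gz", "diff2std_warp.nii.gz",  # placeholder warps
], check=True)
```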
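To make the matrix multiplication in step (3) of the surface-based approach concrete, here is a toy numpy sketch using random data. It illustrates the logic only; Mr Cat's multiply_fdt operates on actual FDT connectivity outputs rather than random matrices.

```python
# Toy sketch of the surface x tract multiplication logic (step 3 above).
import numpy as np

n_vertices = 1000    # grey matter surface vertices
n_wm_voxels = 5000   # white matter voxels visited by tractography
n_tracts = 3         # tracts reconstructed in volume space

# (1) a tract's course in white matter volume space: voxels x tracts
tract_by_voxel = np.random.rand(n_wm_voxels, n_tracts)

# (2) tractography from the grey matter into the white matter: vertices x voxels
surface_by_voxel = np.random.rand(n_vertices, n_wm_voxels)

# (3) multiply to obtain a vertices x tracts matrix of tract terminations
surface_by_tract = surface_by_voxel @ tract_by_voxel
print(surface_by_tract.shape)  # (1000, 3)
```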
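Finally, a sketch of the TBSS stages with a non-default registration target, which is the main NHP adaptation discussed above. The template, subject, and design file names are placeholders, and the skeleton threshold is the human default, which may well need tuning.

```python
# Minimal sketch: TBSS with a study-appropriate template instead of MNI152.
# Run from the directory containing the subjects' FA images.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["tbss_1_preproc", "sub01_FA.nii.gz", "sub02_FA.nii.gz"])  # placeholder subjects
run(["tbss_2_reg", "-t", "macaque_FA_template.nii.gz"])  # register to an NHP template
run(["tbss_3_postreg", "-S"])       # derive mean FA and skeleton from this study's data
run(["tbss_4_prestats", "0.2"])     # skeleton threshold; human default, may need tuning
run([
    "randomise",
    "-i", "stats/all_FA_skeletonised.nii.gz",
    "-o", "stats/tbss",
    "-m", "stats/mean_FA_skeleton_mask.nii.gz",
    "-d", "design.mat", "-t", "design.con",  # placeholder design files
    "--T2", "-n", "5000",                    # TFCE with 5000 permutations
])
```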
Existing tools and pipelines
A pipeline adapting tools from FSL to suit post-mortem non-human primate data, termed ‘phoenix’, is part of Mr Cat (https://github.com/neuroecology/MrCat/tree/master/core) and will be released in May 2020.
Apart from FSL, the MRtrix package (Tournier et al., 2019, NeuroImage) is gaining popularity in NHP neuroimaging, due to its flexibility and its implementation of multiple tractography algorithms (see the sketch below).
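As a flavour of that flexibility, here is a minimal sketch of a basic MRtrix3 chain (response function estimation, constrained spherical deconvolution, probabilistic tractography). File names are placeholders and MRtrix3 must be on the PATH.

```python
# Minimal sketch: a basic MRtrix3 CSD + tractography chain.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Estimate a single-shell response function, fit fibre orientation
# distributions with CSD, then run iFOD2 probabilistic tractography.
run(["dwi2response", "tournier", "dwi.mif", "response.txt"])
run(["dwi2fod", "csd", "dwi.mif", "response.txt", "fod.mif",
     "-mask", "brain_mask.mif"])
run(["tckgen", "fod.mif", "tracks.tck",
     "-algorithm", "iFOD2",
     "-seed_image", "brain_mask.mif",  # placeholder seeding strategy
     "-select", "10000"])              # number of streamlines to keep
```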