output-verbosity 3 and tractogram #91

Closed
dkp opened this issue Nov 17, 2020 · 13 comments · Fixed by #92


dkp commented Nov 17, 2020

Hi Robert,

The participant level ran flawlessly once the derivatives directory was no longer nested inside the BIDS directory.
I ran with --output-verbosity 3, which should generate a tractogram.
Version 0.42 generates a monster sub-219_tractogram.tck file in the tractogram directory.

However, version 0.5 does not generate the *.tck file:
```
MRtrix3_connectome-participant/sub-219/ses-itbs/tractogram:
sub-219_ses-itbs_space-dwi_tdi.nii.gz
```

Isn't the tck file the tractogram? Perhaps my knowledge of MRtrix3 is just woefully inadequate,
but it seems like a major change.

Thanks for your patience.

@Lestropie (Collaborator)

The tractogram file is certainly flagged for export at output verbosity 3, and the conditions for its export are the same as those for the file you report seeing, so I'm not sure how that's happened. I'll try to reproduce locally; can you provide the exact conditions in which you're running (i.e. environment & command-line options)?


dkp commented Nov 17, 2020 via email


dkp commented Nov 17, 2020 via email

@Lestropie mentioned this issue Nov 23, 2020

dkp commented Nov 26, 2020

Hi Robert,
I tried rerunning the same pipeline without output verbosity 3 set. I left the existing mrtrix_scratch directory in place, hoping that its contents would be re-used to speed up rerunning the participant level. This does not seem to have happened, though.

Unfortunately, it seems FreeSurfer's recon-all has crashed under these circumstances. This is especially surprising because I used the same test dataset with fmriprep, and FreeSurfer ran without issue. I will try wiping out the scratch directory and the participant directory and running again, then report back.

Thanks for all your hard work.

-Dianne
mrtrix3_v0.5_participant_hcpmmp1.o212100.zip

@Lestropie (Collaborator)

> I left the existing mrtrix_scratch directory in place, hoping that its contents would be re-used to speed up rerunning the participant level. This does not seem to have happened though.

When running outside of the container environment, MRtrix3 scripts have the -continue option, which is at least intended to provide such functionality. However, it's far from robust. Ideally, such functionality would construct a full graph of all processing steps, detect which have been applied and which have not, and act accordingly; instead, it literally just skips commands until a certain point in the script. But if you haven't used that option, then no, there's no way the pipeline would re-use an existing scratch directory.
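The skip-until-a-point behaviour described above can be sketched roughly as follows. This is a minimal illustration of the idea, not MRtrix3's actual implementation; the command list and checkpoint file names here are invented for the example:

```python
# Sketch of "-continue"-style resumption: rather than building a
# dependency graph, simply skip every command up to and including the
# one that produced the last file known to be complete, then run
# everything after it. (Illustrative only -- not MRtrix3's real code.)

def resume_script(commands, last_completed_output):
    """commands: ordered list of (command_string, output_file) pairs.
    last_completed_output: output file of the last step known to be done,
    or None to run the whole script from the start."""
    executed = []
    skipping = last_completed_output is not None
    for cmd, output in commands:
        if skipping:
            if output == last_completed_output:
                skipping = False   # everything up to here is assumed done
            continue               # skip this (already-completed) command
        executed.append(cmd)       # in reality: run it via subprocess
    return executed

# Hypothetical pipeline for demonstration:
pipeline = [
    ("dwi2mask in.mif mask.mif", "mask.mif"),
    ("dwi2fod csd in.mif wm.txt fod.mif", "fod.mif"),
    ("tckgen fod.mif out.tck", "out.tck"),
]
```

Note the fragility the comment above alludes to: the scheme assumes every step before the checkpoint completed successfully, with no verification of intermediate files.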

> Unfortunately, it seems Freesurfer's recon-all has crashed under these circumstances. This is especially surprising because I used the same test dataset with fmriprep, and Freesurfer ran without issue.

It looks to me like an internal FreeSurfer failure:

```
mris_volmask --aseg_name aseg.presurf --label_left_white 2 --label_left_ribbon 3 --label_right_white 41 --label_right_ribbon 42 --save_ribbon --parallel
freesurfer error: MatrixMultiply: m1 is null!
```

What I don't know is whether fmriprep may be reading such an error message and then making some modification that permits FreeSurfer to continue (I for instance do that myself with eddy; see here).
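The detect-and-retry pattern being speculated about here could be sketched as follows. This is purely hypothetical: the error string is the one from the log above, but the retry logic and the idea of fallback arguments are assumptions for illustration, not anything fmriprep or MRtrix3_connectome is confirmed to do:

```python
# Sketch of the speculated pattern: run a command, and if it fails with
# a known error signature in its output, retry once with modified
# arguments. (Hypothetical -- not taken from fmriprep's source; the
# notion of "fallback arguments" is invented for this illustration.)
import subprocess

KNOWN_ERROR = "MatrixMultiply: m1 is null!"

def run_with_fallback(cmd, fallback_args):
    """cmd: command as a list of strings; fallback_args: extra arguments
    to append on retry if the known error is detected."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    log = result.stdout + result.stderr
    if result.returncode != 0 and KNOWN_ERROR in log:
        # Error recognised: modify the invocation and try again once.
        result = subprocess.run(cmd + fallback_args,
                                capture_output=True, text=True)
    return result
```

The key design point is that recovery is keyed on a specific error string scraped from the tool's output, which is brittle but is sometimes the only hook available when the tool itself offers no structured error reporting.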


dkp commented Nov 26, 2020 via email


dkp commented Nov 27, 2020 via email

@Lestropie (Collaborator)

Can you test running FreeSurfer directly on that image, i.e. without using fmriprep as a proxy? If fmriprep is detecting and resolving the issue, then a direct recon-all call should crash in the same way as it is when run within MRtrix3_connectome; if it doesn't crash in the same way, then there's something about how I'm configuring FreeSurfer within the container that it doesn't like.


dkp commented Nov 28, 2020 via email

@Lestropie (Collaborator)

I ran a session-nested dataset through participant-level processing with a FreeSurfer-based parcellation without issue, so it's not as simple as session-nesting borking FreeSurfer. It would be kind of weird for that to do so, given that all FreeSurfer is provided with for recon-all is an already-renamed T1w image. I can only guess that, in the process of converting from a session-nested BIDS structure to a more basic structure without sessions, you somehow modified the T1w image in a way that nullifies the original problem. Since I can't reproduce the issue with the data at hand, I probably need access to the data you're using in order to diagnose it.


dkp commented Dec 1, 2020 via email


dkp commented Dec 2, 2020 via email

@Lestropie (Collaborator)

Unfortunately, without being able to reproduce the issue locally, I'm quite stuck as to what to suggest. It's entirely possible that there is some stochastic behaviour within FreeSurfer that produces an outright error in one execution and no issue when the same data are re-run; if so, diagnosing the causative factor at your end will be even harder. So I might just proceed with merging and tagging 0.5.1, and we can keep our eyes out for this problem (which should probably be moved to a separate issue, since the discussion has deviated from the original issue title).
