adaptive mask uses different thresholds in t2smap.py and tedana.py, which causes an inconsistency problem in the OFC areas #680
Hi @zswang-brainimaging -- Thanks for opening this ! Indeed, the recent release explicitly changes the behavior in the `t2smap` workflow, where the adaptive mask threshold was lowered from 3 to 1. All that to say, the behavior you're seeing is as expected, and the inconsistencies between the workflows come from their divergent goals ! Does this help to explain the "inconsistency" ? Let us know.
Hi @emdupre,

Thank you so much for the detailed reply!
So ICA is central here to apply for removal of noise, right? Usually in an ICA-based denoising algorithm, PCA is applied to all the raw data first, and the noise components are then removed. Applying ICA to the echo data here is already a good idea: remove S0-related noise from all the echo data, then run t2smap to combine the denoised echoes.
I'll try to unpack this a bit here to help in the discussion !
This is what's happening in tedana. It might be most helpful to look at this figure from the documentation. It's a little out of date (and your opening this issue is a great reminder that we should update it, thank you !), but we can follow the general idea.

First, we take the input data and fit our decay model to calculate the optimal combination. This is done in both the `t2smap` and `tedana` workflows.

The optimally combined data is then passed to PCA. In the original Kundu methods (also known as ME-ICA; see Kundu et al., 2017, NIMG, for example), this "TEDPCA" included a decision tree that integrated echo-specific information to judge whether PCA-derived components were more likely signal or noise. Although we retain the option to call this PCA-specific decision tree (see the docstring for the `tedpca` option), it is no longer the default.

The data after PCA are then passed to ICA and decomposed into independent components. The independent components are then evaluated as BOLD or non-BOLD based on several criteria, including projection back onto the original echos. This is why we need a minimum number of good echos available -- otherwise it's unclear how to evaluate the components.
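For intuition on the optimal-combination step described above, here is a minimal numpy sketch, assuming a monoexponential decay model and a precomputed per-voxel T2* map. This illustrates the weighting idea (from Posse et al., 1999), not tedana's actual implementation, and all names are illustrative:

```python
import numpy as np

def optimal_combination(data, tes, t2s):
    """Toy optimal combination of echoes (illustration only).

    data : (n_voxels, n_echoes, n_timepoints) multi-echo time series
    tes  : (n_echoes,) echo times, in ms
    t2s  : (n_voxels,) per-voxel T2* estimates, in ms (assumed nonzero)
    """
    tes = np.asarray(tes, dtype=float)[np.newaxis, :]   # (1, n_echoes)
    t2s = np.asarray(t2s, dtype=float)[:, np.newaxis]   # (n_voxels, 1)
    # Echo weights: w_e proportional to TE_e * exp(-TE_e / T2*)
    weights = tes * np.exp(-tes / t2s)                  # (n_voxels, n_echoes)
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted sum over the echo axis
    return np.einsum('ve,vet->vt', weights, data)       # (n_voxels, n_timepoints)
```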
This is exactly what's happening, but the key difference from getting five denoised time courses for five-echo data is that the optimal combination is what went into our PCA + ICA decomposition. So we get back the reconstructed optimal combination. If we wanted the individual echos, we would need to reverse the optimal combination procedure, but this still wouldn't solve the issue you've identified, since we needed several good echos to get reasonable denoising (at least based on our current criteria !).

There are multiple paths forward here. The one we're thinking most about is to develop a more flexible decision tree that will allow "less rigorous" denoising (compared to our current pipeline, but maybe not to your particular use case) and therefore allow you to maintain more signal. The challenge there is that we (1) need to build that flexibility into the decision tree (something @tsalo and @handwerkerd are working on !) and (2) need to have a means to allow users to transparently report exactly what decision criteria they chose to operate under.

Hopefully that clarifies a bit what is happening, even if it doesn't immediately address your use case.
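To make the "reconstructed optimal combination" point concrete: after the ICA components are classified, denoising essentially amounts to regressing the rejected components' time courses out of the optimally combined data. A hedged sketch of that idea (not tedana's exact code; names are illustrative):

```python
import numpy as np

def remove_rejected(optcom, mixing, rejected):
    """Regress rejected ICA components out of the optimally combined data.

    optcom   : (n_voxels, n_timepoints) optimally combined time series
    mixing   : (n_timepoints, n_components) ICA mixing matrix
    rejected : indices of components classified as non-BOLD
    """
    # Least-squares fit of all component time courses to each voxel
    betas, *_ = np.linalg.lstsq(mixing, optcom.T, rcond=None)  # (n_components, n_voxels)
    # Subtract only the rejected components' contribution
    noise = mixing[:, rejected] @ betas[rejected, :]           # (n_timepoints, n_voxels)
    return optcom - noise.T
```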
@emdupre :

I have learnt a lot from your detailed explanations. I really appreciate it! I still have some questions that I want to ask.
This was not consistent across ME-ICA implementations, and in later papers (see e.g. Kundu et al., 2017, NIMG) the decomposition on optimally combined data was preferred; see e.g. the section on "Making ME-fMRI practical" in that paper.
If you look in the original ME-ICA codebase, you can see that this was controlled with an option called `--sourceTEs`, where -1 was the optimal combination (the default) and 0 was the z-concatenated echos (the option you're referring to). tedana in fact used to include this option of z-concatenated echos, but this was removed based on discussion here. This is one of the tedana project differences mentioned here.
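For readers unfamiliar with the z-concatenated variant mentioned above: instead of decomposing the optimally combined data, each echo's time series is z-scored and the echoes are stacked along the spatial axis before the decomposition. A rough sketch of that preprocessing, for illustration only (not the original ME-ICA code):

```python
import numpy as np

def zcat_echoes(data):
    """Z-score each echo over time and stack echoes along the voxel axis.

    data : (n_voxels, n_echoes, n_timepoints)
    returns : (n_voxels * n_echoes, n_timepoints), ready for decomposition
    """
    mean = data.mean(axis=-1, keepdims=True)
    std = data.std(axis=-1, keepdims=True)
    zscored = (data - mean) / np.where(std == 0, 1, std)  # guard against flat voxels
    # Move the echo axis first, then flatten (echo, voxel) into one axis
    return zscored.transpose(1, 0, 2).reshape(-1, data.shape[-1])
```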
The workflow itself will fail, as it requires signal in at least two echos to complete the current decision tree.
Absolutely ! I don't mean to say that they're bad, but I also can't say that they're good -- I just don't have that kind of information available, at least based on our current metrics.
Our current decision tree (see here, for example) needs more than one component. You could just use that data, regardless of any decision tree criteria, but then it's hard to know what that data would mean. It would be denoised in those voxels with sufficient signal, but it would not be denoised in those regions with insufficient signal, like OFC. Again, though, this is something that could be addressed in a more flexible decision tree, where you could compare echo-specific and more general (e.g. AROMA-like) component evaluation.
I agree with you that I don't want to pass judgment on those OFC signals ! And I'd also agree that they likely need to be treated differently. I think the immediate question is : what would that "different treatment" really look like, and how could we achieve it in a principled way ? Personally, I'm pinning a lot of my hopes on updating the decision tree here, or maybe the decomposition itself. Again, this doesn't help your immediate issues, but for that I see at least a few paths forward.
Of course, we'd love your continued feedback as we update the decision tree ! But this is an ongoing process, and I know that your data processing needs likely have some time pressure.
I just wanted to explicitly link this discussion to a previous one (#600 (comment)), since @tsalo also provides a nice walk-through there of the current approach in the `tedana` workflow.
@emdupre :

After we ran more tests, it has become clear that t2smap is now working well at keeping the OFC signals when using threshold = 1. So can the algorithm be slightly modified to use threshold = 1 for those prefrontal areas with weak signals ? (meaning that only one good echo would be required there, since the other echoes have no signals in those regions ?)

Thank you!
"since other echo have do signals in those regions ?" : a typo, actually, I meant "since other echo have no signals in those regions ?" |
Dear all:
Regarding your note below:
“ If you look in the original ME-ICA codebase, you can see that this was controlled with an option called --sourceTEs, where -1 was the optimal combination (the default) and 0 was the z-concatenated echos (the option you're referring to). tedana in fact used to include this option of z-concatenated echos, but this was removed based on discussion here. This is one of the tedana project differences mentioned here.”
Could you please email me this original code with the sourceTEs option ?
Thank you so much!
David
Summary
In the older version of t2smap, `mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=3)` causes a signal loss problem in the prefrontal areas, especially in the OFC, which is very obvious and problematic. In the latest version of t2smap.py, `mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=3)` has been changed to `mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=1)`, meaning that 3 good echos are no longer required. With threshold = 1, the signal loss in the prefrontal areas is resolved, and the optimally combined image desc-optcom_bold.nii looks much better now. But the latest version of tedana.py still uses threshold = 3 in `mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=3)` (3 good echos are required), which causes the same signal loss problem in the prefrontal areas, especially in the OFC. Changing it to threshold = 1 makes tedana.py fail with several errors, one of which is: `IndexError: index 0 is out of bounds for axis 1 with size 0`.
Additional Detail
if modifying " mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=3)" in tedana.py into
" mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=1)", the running procedure will fail, but if modifying " mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=3)" to " mask, masksum = utils.make_adaptive_mask(catd, mask=mask, getsum=True, threshold=2)" , the running procedure can be completed. The result image dn_ts_OC or ts_OC looks better than ones generated when using threshold =3, but still not as good as the image "desc-optcom_bold.nii.gz" generated using t2smap.py when settting threshold=1 in the adaptive mask.
Next Steps