tedana's adaptive mask removes white matter and CSF necessary for nuisance regressions #14

Open
tsalo opened this issue Oct 18, 2021 · 4 comments


@tsalo (Member) commented Oct 18, 2021

Some of the nuisance regressions (e.g., deepest WM, aCompCor) require white matter and CSF voxels, but those voxels are masked out by tedana's adaptive masking procedure.

In the interest of moving forward, I may work from a fork of tedana in which adaptive masking is disabled when an explicit mask is provided, rather than from an official release. I'm not sure how well that will work out, but it's the only option I can think of.

@tsalo (Member, Author) commented Oct 18, 2021

Given ME-ICA/tedana#736, I wonder if this might be a change worth adopting in tedana itself. Treating all brain voxels as if they had signal in at least one echo wouldn't impact the classification, and optimally combining two echoes, even with a terrible T2* value, can't produce values worse than the original two echoes' values...

@handwerkerd (Collaborator) commented:

For some or all of your nuisance regressors, one option may be to calculate the regressors on the optimally combined data, where those voxels should no longer be masked out. Since the denoising step is effectively a regression, you could potentially create your own denoised time series by regressing out the tedana-rejected component time series and these nuisance time series in one step. It wouldn't be hard to add an option for additional nuisance regressors to io.write_split_ts. Making that option accessible from tedana.py might take a bit more work, but wouldn't be necessary yet.

Regarding your second comment, the AFNI approach to multi-echo denoising is to just regress the rejected component time series out of the unmasked optimally combined data. I don't remember what we're masking now, but that could be a viable approach. The one risk is that it might obscure which voxels have high-quality data.
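
For reference, a minimal sketch of that one-step regression using plain numpy (the variable names and shapes are assumptions for illustration, not tedana's API; `optcom` is voxels x time, while `rejected` and `nuisance` are time x regressors):

```python
import numpy as np

def denoise_one_step(optcom, rejected, nuisance):
    """Regress rejected-component and nuisance time series out of optcom in one step."""
    # Stack all confounds into a single design matrix (time x regressors)
    confounds = np.column_stack([rejected, nuisance])
    confounds = confounds - confounds.mean(axis=0)  # mean-center the confounds
    design = np.column_stack([np.ones(confounds.shape[0]), confounds])
    # Ordinary least squares: betas has shape (n_regressors + 1, n_voxels)
    betas, *_ = np.linalg.lstsq(design, optcom.T, rcond=None)
    # Subtract only the confound fit, leaving the voxel means intact
    confound_fit = design[:, 1:] @ betas[1:, :]
    return optcom - confound_fit.T
```

Fitting everything in one design matrix avoids the bias you can get from regressing out the two sets of confounds sequentially.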

@handwerkerd (Collaborator) commented:

One other thought: our adaptive masking shouldn't mask out CSF. We might want to look at the criteria we're using to create the adaptive mask and see if there's a way to reliably preserve CSF. For example, I think the adaptive mask looks for voxels with significantly lower raw magnitudes in each echo. The first echo shouldn't have much dropout. If we used more forgiving thresholds for the first echo and then looked for echoes where the signal drops more than is plausible under a T2* or S0 model, that might preserve more of the CSF voxels.
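
As a purely illustrative sketch of the per-echo threshold idea (this does not reproduce tedana's actual adaptive mask code; `data` is assumed to be voxels x echoes x time, and `thresholds` holds an echo-specific cutoff, with a more forgiving value for the first echo):

```python
import numpy as np

def lenient_adaptive_mask(data, thresholds):
    """Count, per voxel, how many echoes have mean signal above an echo-specific cutoff."""
    mean_signal = data.mean(axis=-1)                    # (n_voxels, n_echoes)
    good_echoes = mean_signal > np.asarray(thresholds)  # broadcasts over echoes
    return good_echoes.sum(axis=1)                      # adaptive mask values, 0..n_echoes
```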

@tsalo (Member, Author) commented Oct 19, 2021

At the moment (outside of my fork), tedana's optimally combined data will exclude those CSF and WM voxels, since the adaptive mask labels them as having zero good echoes, so I can't use the optcom data directly as-is. I agree that our adaptive masking is being overly aggressive, though, and I would like to look into fixes.

The changes on my fork basically just set any voxel within the explicit mask to an adaptive mask value of at least 1. Apart from the voxels that would otherwise have been 0, the rest of the mask is unchanged. Hopefully that approach will be good enough to apply within tedana, but if not, we can work directly on the adaptive mask generation code.
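
A minimal sketch of that change, assuming `adaptive_mask` is the per-voxel count of good echoes and `explicit_mask` is the user-supplied binary mask, both flattened over voxels (names are illustrative, not the fork's actual code):

```python
import numpy as np

def floor_adaptive_mask(adaptive_mask, explicit_mask):
    """Give every voxel inside the explicit mask an adaptive mask value of at least 1."""
    floored = adaptive_mask.copy()
    floored[(explicit_mask > 0) & (floored < 1)] = 1
    return floored
```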
