
Revisit make_adaptive_mask #113

Closed
tsalo opened this issue Aug 18, 2018 · 15 comments
Labels
discussion - issues that still need to be discussed
priority: high - issues that would be really helpful if they were fixed already
T2*/S0 estimation - issues related to the estimation of T2* and S0

Comments

@tsalo
Member

tsalo commented Aug 18, 2018

Per discussion in #102, we want to take a closer look at how make_adaptive_mask determines which voxels have sufficient signal at each TE. The threshold applied below appears to be arbitrary:

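import numpy as np

# (assumed context from earlier in the function: `echo_means` is the
# samples-by-echoes array of temporal mean signal, and `first_echo` holds
# the nonzero first-echo mean values)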
# get 33rd %ile of `first_echo` and find corresponding index
# NOTE: percentile is arbitrary
perc = np.percentile(first_echo, 33, interpolation='higher')
perc_val = (echo_means[:, 0] == perc)
# extract values from all echos at relevant index
# NOTE: threshold of 1/3 voxel value is arbitrary
lthrs = np.squeeze(echo_means[perc_val].T) / 3
# if multiple samples were extracted per echo, keep the one w/the highest signal
if lthrs.ndim > 1:
    lthrs = lthrs[:, lthrs.sum(axis=0).argmax()]
# determine samples where absolute value is greater than echo-specific thresholds
# and count # of echos that pass criterion
masksum = (np.abs(echo_means) > lthrs).sum(axis=-1)

As mentioned in #102, @rmarkello also thinks that the thresholds are hard-coded based on a three-echo dataset, so we'll want to look into that as well.

We have access to a five-echo dataset from @handwerkerd, so we can use that to investigate the method by which the signal quality threshold is determined and see if there's a better method out there. We can also use the quality of the log-linear fit to the decay model, and perhaps calculate TSNR for each echo's time series.

Does anyone have any thoughts on how we should do this?

BTW I've started looking into using TSNR to make an adaptive mask here.
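
For reference, here's a minimal sketch of what a per-echo TSNR criterion could look like (the data layout, threshold value, and helper name are assumptions for illustration, not the code in that branch):

import numpy as np

def tsnr_adaptive_mask(data, tsnr_thresh=10.0):
    """Sketch: count echoes with acceptable TSNR per voxel.

    data : (samples, echoes, time) multi-echo array.
    tsnr_thresh : hypothetical cutoff that would need tuning/validation.
    """
    means = data.mean(axis=-1)  # (S, E)
    stds = data.std(axis=-1)    # (S, E)
    tsnr = np.divide(means, stds, out=np.zeros_like(means), where=stds > 0)
    # adaptive mask value = number of echoes passing the TSNR criterion
    return (tsnr > tsnr_thresh).sum(axis=-1)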

@tsalo tsalo added the discussion issues that still need to be discussed label Aug 18, 2018
@tsalo
Member Author

tsalo commented Feb 9, 2019

Given a recent problem case that appears to have been at least partially caused by poor automatic masking, I think we should make sure to circle back to this issue. In the linked case, make_adaptive_mask identified almost the whole bounding box, while using nilearn.masking.compute_epi_mask (run on the first echo) to generate an explicit mask returned reasonable results.

@tsalo
Member Author

tsalo commented Feb 25, 2019

Does anyone have any concerns about updating the workflow to use nilearn.masking.compute_epi_mask to create an initial mask (when no explicit mask is provided) prior to running make_adaptive_mask?
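
To make the proposal concrete, here is a minimal sketch of that step, assuming the first echo is available as a 4D NIfTI file (file names are placeholders):

import nibabel as nib
from nilearn.masking import compute_epi_mask

# compute a liberal brain mask from the first echo's time series
first_echo_img = nib.load("echo-1_bold.nii.gz")  # placeholder path
mask_img = compute_epi_mask(first_echo_img)
mask_img.to_filename("initial_mask.nii.gz")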

@dowdlelt
Collaborator

It seems that having a standardized, automated way to get things started is very useful. As long as the masking isn't removing so much that the denoised output data becomes unusable, it seems fine to have something that prevents the 'whole bounding-box' problem.

@tsalo
Member Author

tsalo commented Feb 27, 2019

Sounds good to me. I've opened a new PR, #226, with the change. It doesn't solve all of the issues with make_adaptive_mask, but it's a start.

@tsalo
Member Author

tsalo commented Mar 6, 2019

Here's an idea for how we could improve make_adaptive_mask. It doesn't solve the issues with the current method, but supplements it to flag very bad echoes. You can see the code here. Basically, what I'm proposing is that we look at the mean value over time for each voxel at each echo. Then we move through the echoes and flag any voxel where the mean signal increases from one echo to the next. The logic is that, regardless of changes in T2* or S0, signal should decrease from one echo to the next, so any increase must be purely artifactual. Below, you can see an example of such a situation.

[figure "bad_echo": example voxel whose signal increases from one echo to the next across the time series]

There are probably a lot of ways in which we can improve this first pass (e.g., not using the mean, setting tolerances for the differences, etc.), but what does everyone think of the general logic?
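
To make the general logic concrete, here is a minimal sketch of the check (assuming data ordered as samples x echoes x time; the names are illustrative, and this is not the code in the linked branch):

import numpy as np

def flag_echo_increases(data):
    """Flag voxels whose mean signal increases from one echo to the next.

    data : (samples, echoes, time) multi-echo array.
    Returns a boolean (samples, echoes) array that is True wherever an
    echo's temporal mean exceeds the preceding echo's temporal mean.
    """
    echo_means = data.mean(axis=-1)               # (S, E)
    increases = np.diff(echo_means, axis=1) > 0   # (S, E-1)
    # the first echo has no preceding echo, so it is never flagged
    flags = np.zeros(echo_means.shape, dtype=bool)
    flags[:, 1:] = increases
    return flags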

@dowdlelt
Collaborator

dowdlelt commented Mar 6, 2019

This is interesting to me - is there any particular spatial distribution of these bad-echo voxels? That plot suggests that the 'bad echo' is very stable - always an increase across the entire timeseries, if I'm reading it correctly. The logic seems sound - but I am very curious as to why and where these baddies exist, and whether there is any useful information in that.

@tsalo
Member Author

tsalo commented Mar 6, 2019

At least in the five-echo test dataset, they seem to mostly be in the ventricles.

@dowdlelt
Collaborator

dowdlelt commented Mar 6, 2019

Thanks! I was expecting them there, or in cardiac regions, but it's still not clear to me why it would be so consistent. In any case, that's really neat. I wonder if that info could be useful in some way, perhaps checking overlap with those areas as another metric for ICA acceptance/rejection, or just for generating a noise regressor.

@tsalo
Member Author

tsalo commented Mar 7, 2019

I was thinking of using the T2*/S0 log-linear model fit in calculating existing metrics, by deweighting the contributions of voxels with poor fit (see #230). Once I realized we could also use the more obvious artifact detailed here in make_adaptive_mask, I figured that the log-linear model fit could get at the same thing. There are some things we'll have to figure out in order to do that, but I think it's reasonable.
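
As a rough illustration of the fit-quality idea, one could compute a per-voxel R^2 from the log-linear fit to the monoexponential decay across echo means (a sketch under assumed array shapes, not tedana's actual implementation):

import numpy as np

def loglinear_fit_r2(echo_means, tes):
    """R^2 of the per-voxel fit log(S) = log(S0) - TE * R2* (illustrative).

    echo_means : (samples, echoes) temporal means; tes : echo times.
    """
    tes = np.asarray(tes, dtype=float)
    log_s = np.log(np.maximum(echo_means, np.finfo(float).eps)).T  # (E, S)
    design = np.column_stack([np.ones_like(tes), -tes])            # (E, 2)
    betas, *_ = np.linalg.lstsq(design, log_s, rcond=None)         # (2, S)
    resid = log_s - design @ betas
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((log_s - log_s.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / np.maximum(ss_tot, np.finfo(float).eps)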

@tsalo
Member Author

tsalo commented Mar 7, 2019

Alternatively, what if we used only good echoes to calculate the pseudo F-statistics for each voxel?
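
If it helps to picture it, one option would be to blank out echoes beyond each voxel's adaptive-mask count before fitting, so downstream computations only see the "good" echoes (a hypothetical helper, not how tedana currently computes the pseudo-F statistics):

import numpy as np

def keep_good_echoes(data, adaptive_mask):
    """Replace echoes beyond each voxel's good-echo count with NaN (sketch).

    data : (samples, echoes, time) multi-echo array (assumed float).
    adaptive_mask : (samples,) number of echoes with acceptable signal.
    Downstream fits could then use NaN-aware reductions so the pseudo-F
    statistics ignore bad echoes.
    """
    data = np.asarray(data, dtype=float)
    adaptive_mask = np.asarray(adaptive_mask)
    n_echoes = data.shape[1]
    echo_idx = np.arange(n_echoes)[np.newaxis, :, np.newaxis]   # (1, E, 1)
    good = echo_idx < adaptive_mask[:, np.newaxis, np.newaxis]  # (S, E, 1)
    return np.where(good, data, np.nan)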

@jbteves
Collaborator

jbteves commented Apr 20, 2019

@dowdlelt I think in an in-person conversation you were saying something about the log-linear fit; would any of your findings contribute to this issue?

@tsalo
Member Author

tsalo commented Apr 20, 2019

I proposed some improvements to the adaptive mask generation in #231, but I don't know that anyone was interested in them. Those improvements just looked at the overall scale for the different echoes within a voxel to see if the mean value increased from one echo to the next at any point (which shouldn't happen).

I don't think anyone has looked at fit quality as a metric of "good" signal yet, but it would be great to see if @dowdlelt has made any progress there.

I think that adaptive mask generation improvement has taken a back seat to metric calculation and component selection (for good reason). Perhaps it would be best to label this as low-priority (i.e., an enhancement that we might get to at some point) or to open two new issues: one for the improvement I proposed and one requesting an investigation of fit quality on echo signal.

@jbteves jbteves added the priority: low issues that are not urgent label May 2, 2019
@jbteves
Collaborator

jbteves commented May 23, 2019

After waffling back and forth, I think I'd prefer to close this and open two issues as you suggested. Would you mind writing the issues, @tsalo? We will then mark them both as low-priority and close this one, if you agree with going that way.

@tsalo
Member Author

tsalo commented May 26, 2019

I've opened #312, although I can't figure out how to investigate fit quality's effect on echo signal so I haven't opened the second issue.

@tsalo tsalo added the T2*/S0 estimation issues related to the estimation of T2* and S0 label Oct 4, 2019
@tsalo tsalo added priority: high issues that would be really helpful if they were fixed already and removed priority: low issues that are not urgent labels Nov 6, 2019
@emdupre
Member

emdupre commented Nov 8, 2019

I think this is ready to close -- please re-open, @tsalo, if you disagree!
