[ENH] Add carpet plot to outputs (#696)
* Create new denoising function.

* Add carpet plots.

* Add docstring.

* Update test outputs.

* Drop output check from test.

Now that write_split_ts doesn't return anything, we shouldn't check that it does.

* Add carpet plots to report

* Re-drop varexpl.

* Fix style issue.

* Don't unmask outputs of denoise_ts.

* Use io_generator for carpet plots.

* Split carpet plots into separate figures.

* Try to organize the report?

* Fix report generation.

* Added buttons to select carpet plot and added some structure to the reports

* Revert "Added buttons to select carpet plot and added some structure to the reports"

This reverts commit fbe23b4.

* Undo black formatting

* Add MIR/GSR figures to carpet plots.

This doesn't toggle the new buttons based on workflow settings, so it will probably fail as is.

* Fill in docstrings.

* Add template and function to only show GSR and MIR if they were calculated.

* Fix output files in test list.

* Only change button names when files are generated

* Add carpet plots figure and section in docs.

* Fix section order.

* Update tedana/io.py

* Added Zaki's suggestion to avoid the "bouncing" effect when a button is pressed

Co-authored-by: Eneko Uruñuela <enekouru@gmail.com>
tsalo and eurunuela authored Jul 17, 2021
1 parent d2c9e84 commit 12f16eb
Showing 13 changed files with 524 additions and 143 deletions.
Binary file added docs/_static/carpet_overview.png
1 change: 1 addition & 0 deletions docs/api.rst
@@ -170,6 +170,7 @@ API
tedana.io.load_data
tedana.io.new_nii_like
tedana.io.add_decomp_prefix
tedana.io.denoise_ts
tedana.io.split_ts
tedana.io.write_split_ts
tedana.io.writeresults
87 changes: 50 additions & 37 deletions docs/outputs.rst
@@ -190,43 +190,6 @@ I011 ignored ign_add0
I012 ignored ign_add1
===== ================= ========================================================


.. _interactive reports:

*********************
@@ -408,3 +371,53 @@ Save |Save| Saves an image reproduction of the plot in PNG format.
Specific user interactions can be switched on/off by clicking on their associated icon within
the toolbar of a given plot. Active interactions show a horizontal blue line underneath their
icon, while inactive ones lack the line.


************
Carpet plots
************

In addition to the elements described above, ``tedana``'s interactive reports include carpet plots for the main outputs of the workflow:
the optimally combined data, the denoised data, the high-Kappa (accepted) data, and the low-Kappa (rejected) data.

These plots may be useful for visual quality control of the overall denoising run.

.. image:: /_static/carpet_overview.png
:align: center
:height: 400px
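
A similar carpet plot can be reproduced outside of the report from any of the
saved 4D outputs, for example with nilearn. The sketch below is illustrative
rather than part of ``tedana`` itself; the output file names and the repetition
time are assumptions that should be adjusted to match your run::

    # Illustrative sketch: carpet plots for tedana outputs via nilearn.
    # File names are assumed from tedana's BIDS-style output convention.
    from nilearn import plotting

    outputs = {
        "Optimally combined": "desc-optcom_bold.nii.gz",
        "Denoised": "desc-optcomDenoised_bold.nii.gz",
        "High-Kappa": "desc-optcomAccepted_bold.nii.gz",
        "Low-Kappa": "desc-optcomRejected_bold.nii.gz",
    }
    for title, fname in outputs.items():
        # plot_carpet detrends and standardizes voxel time series for display
        plotting.plot_carpet(
            fname,
            t_r=2.0,  # assumed repetition time, in seconds
            title=title,
            output_file=fname.replace(".nii.gz", "_carpet.png"),
        )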


**************************
Citable workflow summaries
**************************

``tedana`` generates a report for the workflow, customized based on the parameters used and including relevant citations.
The report is saved in a plain-text file, report.txt, in the output directory.

An example report

TE-dependence analysis was performed on input data. An initial mask was generated from the first echo using nilearn's compute_epi_mask function. An adaptive mask was then generated, in which each voxel's value reflects the number of echoes with 'good' data. A monoexponential model was fit to the data at each voxel using nonlinear model fitting in order to estimate T2* and S0 maps, using T2*/S0 estimates from a log-linear fit as initial values. For each voxel, the value from the adaptive mask was used to determine which echoes would be used to estimate T2* and S0. In cases of model fit failure, T2*/S0 estimates from the log-linear fit were retained instead. Multi-echo data were then optimally combined using the T2* combination method (Posse et al., 1999). Principal component analysis in which the number of components was determined based on a variance explained threshold was applied to the optimally combined data for dimensionality reduction. A series of TE-dependence metrics were calculated for each component, including Kappa, Rho, and variance explained. Independent component analysis was then used to decompose the dimensionally reduced dataset. A series of TE-dependence metrics were calculated for each component, including Kappa, Rho, and variance explained. Next, component selection was performed to identify BOLD (TE-dependent), non-BOLD (TE-independent), and uncertain (low-variance) components using the Kundu decision tree (v2.5; Kundu et al., 2013). Rejected components' time series were then orthogonalized with respect to accepted components' time series.

This workflow used numpy (Van Der Walt, Colbert, & Varoquaux, 2011), scipy (Jones et al., 2001), pandas (McKinney, 2010), scikit-learn (Pedregosa et al., 2011), nilearn, and nibabel (Brett et al., 2019).

This workflow also used the Dice similarity index (Dice, 1945; Sørensen, 1948).

References

Brett, M., Markiewicz, C. J., Hanke, M., Côté, M.-A., Cipollini, B., McCarthy, P., … freec84. (2019, May 28). nipy/nibabel. Zenodo. http://doi.org/10.5281/zenodo.3233118

Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology, 26(3), 297-302.

Jones E, Oliphant T, Peterson P, et al. SciPy: Open Source Scientific Tools for Python, 2001-, http://www.scipy.org/

Kundu, P., Brenowitz, N. D., Voon, V., Worbe, Y., Vértes, P. E., Inati, S. J., ... & Bullmore, E. T. (2013). Integrated strategy for improving functional connectivity mapping using multiecho fMRI. Proceedings of the National Academy of Sciences, 110(40), 16187-16192.

McKinney, W. (2010, June). Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference (Vol. 445, pp. 51-56).

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. Journal of machine learning research, 12(Oct), 2825-2830.

Posse, S., Wiese, S., Gembris, D., Mathiak, K., Kessler, C., Grosse‐Ruyken, M. L., ... & Kiselev, V. G. (1999). Enhancement of BOLD‐contrast sensitivity by single‐shot multi‐echo functional MR imaging. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 42(1), 87-97.

Sørensen, T. J. (1948). A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. I kommission hos E. Munksgaard.

Van Der Walt, S., Colbert, S. C., & Varoquaux, G. (2011). The NumPy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2), 22-30.
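
The optimal combination step summarized in the report can be made concrete with
a small sketch. The weighting below follows the T2* method cited above (Posse
et al., 1999); the function name and array shapes are illustrative assumptions,
not ``tedana``'s internal implementation::

    # Schematic T2*-weighted optimal combination (Posse et al., 1999).
    # Shapes are illustrative: data is (voxels x echoes x time).
    import numpy as np

    def optimally_combine(data, tes, t2star):
        """Combine echoes with weights w_i = TE_i * exp(-TE_i / T2*)."""
        tes = np.asarray(tes)[np.newaxis, :, np.newaxis]         # 1 x E x 1
        t2star = np.asarray(t2star)[:, np.newaxis, np.newaxis]   # V x 1 x 1
        weights = tes * np.exp(-tes / t2star)                    # V x E x 1
        weights = weights / weights.sum(axis=1, keepdims=True)
        return (weights * data).sum(axis=1)                      # V x T
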
95 changes: 52 additions & 43 deletions tedana/io.py
@@ -328,6 +328,51 @@ def add_decomp_prefix(comp_num, prefix, max_value):
return comp_name


def denoise_ts(data, mmix, mask, comptable):
    """Apply component classifications to data for denoising.

    Parameters
    ----------
    data : (S x T) array_like
        Input time series
    mmix : (T x C) array_like
        Mixing matrix for converting input data to component space, where `C`
        is components and `T` is the same as in `data`
    mask : (S,) array_like
        Boolean mask array
    comptable : (C x X) :obj:`pandas.DataFrame`
        Component metric table. One row for each component, with a column for
        each metric. Requires at least one column: "classification".

    Returns
    -------
    dnts : (S x T) array_like
        Denoised data (i.e., data with rejected components removed).
    hikts : (S x T) array_like
        High-Kappa data (i.e., data composed only of accepted components).
    lowkts : (S x T) array_like
        Low-Kappa data (i.e., data composed only of rejected components).
    """
    acc = comptable[comptable.classification == 'accepted'].index.values
    rej = comptable[comptable.classification == 'rejected'].index.values

    # mask and de-mean data
    mdata = data[mask]
    dmdata = mdata.T - mdata.T.mean(axis=0)

    # get variance explained by retained components
    betas = get_coeffs(dmdata.T, mmix, mask=None)
    varexpl = (1 - ((dmdata.T - betas.dot(mmix.T))**2.).sum() /
               (dmdata**2.).sum()) * 100
    LGR.info('Variance explained by decomposition: {:.02f}%'.format(varexpl))

    # create component-based data
    hikts = utils.unmask(betas[:, acc].dot(mmix.T[acc, :]), mask)
    lowkts = utils.unmask(betas[:, rej].dot(mmix.T[rej, :]), mask)
    dnts = utils.unmask(data[mask] - lowkts[mask], mask)
    return dnts, hikts, lowkts
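
A hypothetical usage example, not part of this diff, with synthetic data whose
shapes match the docstring::

    # Hypothetical example; random data with docstring-conformant shapes.
    import numpy as np
    import pandas as pd

    from tedana.io import denoise_ts

    n_voxels, n_trs, n_comps = 1000, 200, 4
    data = np.random.random((n_voxels, n_trs))   # S x T input time series
    mmix = np.random.random((n_trs, n_comps))    # T x C mixing matrix
    mask = np.ones(n_voxels, dtype=bool)         # S-length boolean mask
    comptable = pd.DataFrame(
        {"classification": ["accepted", "rejected", "accepted", "rejected"]}
    )

    dnts, hikts, lowkts = denoise_ts(data, mmix, mask, comptable)
    # Within the mask, dnts + lowkts reconstructs the original data.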


# File Writing Functions
def write_split_ts(data, mmix, mask, comptable, io_generator, echo=0):
"""
@@ -370,64 +415,28 @@ def write_split_ts(data, mmix, mask, comptable, io_generator, echo=0):
    acc = comptable[comptable.classification == 'accepted'].index.values
    rej = comptable[comptable.classification == 'rejected'].index.values

    dnts, hikts, lowkts = denoise_ts(data, mmix, mask, comptable)

    if len(acc) != 0:
        if echo != 0:
            fout = io_generator.save_file(hikts, 'high kappa ts split img', echo=echo)
        else:
            fout = io_generator.save_file(hikts, 'high kappa ts img')
        LGR.info('Writing high-Kappa time series: {}'.format(fout))

    if len(rej) != 0:
        if echo != 0:
            fout = io_generator.save_file(lowkts, 'low kappa ts split img', echo=echo)
        else:
            fout = io_generator.save_file(lowkts, 'low kappa ts img')
        LGR.info('Writing low-Kappa time series: {}'.format(fout))

    if echo != 0:
        fout = io_generator.save_file(dnts, 'denoised ts split img', echo=echo)
    else:
        fout = io_generator.save_file(dnts, 'denoised ts img')

    LGR.info('Writing denoised time series: {}'.format(fout))


def writeresults(ts, mask, comptable, mmix, n_vols, io_generator):
