Adding robustica option to ICA decomposition to achieve consistent results (new PR) #1125

Closed
wants to merge 34 commits into from
Changes from 30 of 34 commits
f4eaa3e
Add robustica method
BahmanTahayori Jul 31, 2023
b0cac3a
Incorporation of major comments regarding robustica addition
BahmanTahayori Dec 5, 2023
55c2ae4
Add robustica 0.1.3 to dependency list
BahmanTahayori Nov 1, 2023
cd55a3f
Multiple fixes to RobustICA addition from code review
BahmanTahayori Dec 5, 2023
2d9b007
Specify magic number fixed seed of 42 as a constant
BahmanTahayori Nov 29, 2023
09e565e
Merge remote-tracking branch 'upstream/main' into add_robustica_rsclean
BahmanTahayori Dec 5, 2023
fc5f9ea
Updated
BahmanTahayori Dec 5, 2023
4fc3043
Robustica Updates
BahmanTahayori Dec 6, 2023
a20ff57
Incorporating the third round of Robert E. Smith's comments
BahmanTahayori Dec 20, 2023
cc5e05d
Merge pull request #3 from BahmanTahayori/add_robustica_rsclean
BahmanTahayori Dec 20, 2023
a449fec
Merge branch 'ME-ICA:main' into main
BahmanTahayori Feb 9, 2024
78c8140
Enhance the "ica_method" description suggested by D. Handwerker
BahmanTahayori Feb 9, 2024
ac85e6a
Enhancing the "n_robust_runs" description suggested by D. Handwerkerd
BahmanTahayori Feb 9, 2024
979d026
RobustICA: Restructure code loop over robust methods (#4)
Lestropie Feb 11, 2024
71d8d4a
merging recent changes
BahmanTahayori Feb 21, 2024
cac38cd
Applied suggested changes
BahmanTahayori Feb 29, 2024
5fcf148
Fixing the conflict
BahmanTahayori Feb 29, 2024
b7d08e9
Merge branch 'ME-ICA:main' into main
BahmanTahayori Feb 29, 2024
a113423
Incorporating more comments
BahmanTahayori Mar 4, 2024
b60e9a6
Merge remote-tracking branch 'upstream/main'
Lestropie Apr 12, 2024
45c95ce
aligning adding-robustica with Main
handwerkerd Aug 13, 2024
8e6878f
Adding already requested changes
handwerkerd Aug 13, 2024
88fd148
fixed failing tests
handwerkerd Aug 16, 2024
a221e72
updated documentation in faq.rst
handwerkerd Aug 27, 2024
8622a9b
more documentation changes
handwerkerd Aug 29, 2024
419e9d4
Update docs/faq.rst
handwerkerd Aug 30, 2024
d29a91b
Update docs/faq.rst
handwerkerd Aug 30, 2024
efb712e
Aligning robustICA with current Main + (#5)
handwerkerd Aug 30, 2024
dee68ec
align with main
handwerkerd Sep 3, 2024
171a835
Merge branch 'ME-ICA:main' into adding-robustica
handwerkerd Sep 3, 2024
4b86fe2
fixed ica.py docstring error
handwerkerd Sep 3, 2024
c01ec51
added scikit-learn-extra to pyproject and changed ref name
handwerkerd Sep 3, 2024
cd50037
increment circleci version keys
handwerkerd Sep 3, 2024
54a4ac3
Merge branch 'main' into adding-robustica
BahmanTahayori Sep 10, 2024
3 changes: 3 additions & 0 deletions docs/approach.rst
@@ -311,6 +311,8 @@ Next, ``tedana`` applies TE-dependent independent component analysis (ICA) in
order to identify and remove TE-independent (i.e., non-BOLD noise) components.
The dimensionally reduced optimally combined data are first subjected to ICA in
order to fit a mixing matrix to the whitened data.
``tedana`` can use a single iteration of FastICA or multiple iterations of RobustICA,
with an explanation of those approaches `in our FAQ`_.
This generates a number of independent timeseries (saved as **desc-ICA_mixing.tsv**),
as well as parameter estimate maps which show the spatial loading of these components on the
brain (**desc-ICA_components.nii.gz**).
@@ -356,6 +358,7 @@ yielding a denoised timeseries, which is saved as **desc-denoised_bold.nii.gz**.

.. image:: /_static/a15_denoised_data_timeseries.png

.. _in our FAQ: faq.html#tedana-what-is-the-right-number-of-ica-components-what-options-let-me-get-it
.. _These decision trees are detailed here: included_decision_trees.html

*******************************
68 changes: 68 additions & 0 deletions docs/faq.rst
@@ -93,11 +93,79 @@ The TEDICA step may fail to converge if TEDPCA is either too strict
With updates to the ``tedana`` code, this issue is now rare, but it may happen
when preprocessing has not been applied to the data, or when improper steps have
been applied to the data (e.g. rescaling, nuisance regression).
It can also still happen when everything is seemingly correct
(see the answer to the next question).
If you are confident that your data have been preprocessed correctly prior to
applying tedana, and you encounter this problem, please submit a question to `NeuroStars`_.

.. _NeuroStars: https://neurostars.org

*********************************************************************************
[tedana] What is the right number of ICA components & what options let me get it?
*********************************************************************************

Part of the PCA step in ``tedana`` processing involves identifying the number of
components that contain meaningful signal.
The PCA components are then used to calculate the same number of ICA components.
The ``--tedpca`` option includes several options to identify the "correct" number
of PCA components.
``kundu`` and ``kundu-stabilize`` use several echo-based criteria to exclude PCA
components that are unlikely to contain T2* or S0 signal.
``mdl`` (conservative & fewest components), ``kic``,
& ``aic`` (liberal & more components) use `MAPCA`_.
Within the same general method, each uses a cost function to find a minimum
where more components no longer model meaningful variance.
For some datasets we see all methods fail and result in too few or too many components.
There is no consistent number of components or % variance explained to define the correct number.
The correct number of components will depend on the noise levels of the data.
For example, smaller voxels will result in more thermal noise and less total variance explained.
A dataset with more head motion artifacts will have more variance explained,
since more structured signal is within the head motion artifacts.
The clear failure cases are extreme: that is, getting fewer than one fifth as many components
as there are time points, or getting nearly as many components as time points.
We are working on identifying why this happens and on adding better solutions.
Our current guess is that most of the above methods assume data are
independent and identically distributed (IID),
and that signal leakage from in-slice and multi-slice acceleration may violate this assumption.

We have one option that is generally useful and is also a partial solution.
``--ica_method robustica`` will run `robustica`_.
This is a method that, for a given number of PCA components,
will repeatedly run ICA and identify components that are stable across iterations.
While running ICA multiple times slows processing, the general benefit is
that the ICA results are less sensitive to initialization parameters,
computer hardware, and software versions.
This results in better stability and replicability of the ICA results.
Additionally, `robustica`_ almost always results in fewer components than initially prescribed,
since there are fewer stable components across iterations than the total number of components.
This means, even if the initial PCA component estimate is a bit off,
the number of resulting robust ICA components will represent stable information in the data.
For a dataset where the PCA component estimation methods are failing,
one could use ``--tedpca`` with a fixed integer for a constant number of components
that is on the high end of typical for a study,
and `robustica`_ will then reduce the number of components to retain only the stable information.
That said, if the fixed PCA component number is too high,
then the method will have too many unstable components,
and if the fixed PCA component number is too low, then there will be even fewer ICA components.
The number of ICA components is more consistent,
but is still sensitive to the initial number of PCA components.
For example, for a single dataset 60 PCA components might result in 46 stable ICA components,
while 55 PCA components might result in 43 stable ICA components.
We are still testing how these interact so that we can give better recommendations and even more stable results.
While the ``tedana`` developers expect that ``--ica_method robustica`` may become
the default configuration in future versions,
it is first being released to the public as a non-default option
in the hope of gaining insight into its behaviour
across a broader range of multi-echo fMRI data.
If users are having trouble with PCA component estimation failing on a dataset,
we recommend using RobustICA;
we also invite user feedback on its behaviour and efficacy.
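
As a rough sketch of what this looks like from Python, assuming the ``tedica``
signature introduced in this pull request (the data array and component count
below are hypothetical placeholders):

.. code-block:: python

   import numpy as np

   from tedana.decomposition.ica import tedica

   # Hypothetical dimensionally reduced data: S samples x T time points.
   data = np.random.randn(10000, 200)

   # Ask for a deliberately generous number of components; RobustICA keeps
   # only the components that are stable across the repeated runs.
   mmix, seed_used = tedica(
       data,
       n_components=60,
       fixed_seed=42,
       ica_method="robustica",
       n_robust_runs=30,
   )

The returned mixing matrix has one column per stable component,
which may be fewer than the 60 components requested.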


.. _MAPCA: https://github.com/ME-ICA/mapca
.. _robustica: https://github.com/CRG-CNAG/robustica

.. _manual classification:

********************************************************************************
1 change: 1 addition & 0 deletions pyproject.toml
@@ -30,6 +30,7 @@ dependencies = [
"pandas>=2.0,<=2.2.2",
"pybtex",
"pybtex-apa-style",
"robustica>=0.1.3",
"scikit-learn>=0.21, <=1.5.1",
"scipy>=1.2.0, <=1.14.1",
"threadpoolctl",
19 changes: 19 additions & 0 deletions tedana/config.py
@@ -0,0 +1,19 @@
"""Setting default values for ICA decomposition."""

DEFAULT_ICA_METHOD = "fastica"
DEFAULT_N_MAX_ITER = 500
DEFAULT_N_MAX_RESTART = 10
DEFAULT_SEED = 42


"""Setting values for number of robust runs."""

DEFAULT_N_ROBUST_RUNS = 30
MIN_N_ROBUST_RUNS = 5
MAX_N_ROBUST_RUNS = 500
WARN_N_ROBUST_RUNS = 200


"""Setting the warning threshold for the index quality."""

WARN_IQ = 0.6
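
As a sketch of how these bounds might be applied when a user requests a number of
robust runs, assuming only the constants defined above (the validation helper itself
is hypothetical, not part of the diff):

from tedana.config import (
    MAX_N_ROBUST_RUNS,
    MIN_N_ROBUST_RUNS,
    WARN_N_ROBUST_RUNS,
)


def check_n_robust_runs(n_robust_runs):
    """Hypothetical validation of a requested number of robust runs."""
    if not MIN_N_ROBUST_RUNS <= n_robust_runs <= MAX_N_ROBUST_RUNS:
        raise ValueError(
            f"n_robust_runs must be between {MIN_N_ROBUST_RUNS} "
            f"and {MAX_N_ROBUST_RUNS}, got {n_robust_runs}."
        )
    if n_robust_runs > WARN_N_ROBUST_RUNS:
        # Mirrors the warning threshold: large values mean long run times.
        print(f"n_robust_runs={n_robust_runs} is large; expect long run times.")
    return n_robust_runs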
192 changes: 184 additions & 8 deletions tedana/decomposition/ica.py
@@ -4,25 +4,47 @@
import warnings

import numpy as np
from robustica import RobustICA
from scipy import stats
from sklearn.decomposition import FastICA

from tedana.config import (
DEFAULT_ICA_METHOD,
DEFAULT_N_MAX_ITER,
DEFAULT_N_MAX_RESTART,
DEFAULT_N_ROBUST_RUNS,
WARN_IQ,
WARN_N_ROBUST_RUNS,
)

LGR = logging.getLogger("GENERAL")
RepLGR = logging.getLogger("REPORT")


def tedica(data, n_components, fixed_seed, maxit=500, maxrestart=10):
"""Perform ICA on ``data`` and return mixing matrix.
def tedica(
data,
n_components,
fixed_seed,
ica_method=DEFAULT_ICA_METHOD,
n_robust_runs=DEFAULT_N_ROBUST_RUNS,
maxit=DEFAULT_N_MAX_ITER,
maxrestart=DEFAULT_N_MAX_RESTART,
):
"""Perform ICA on `data` with the user selected ica method and returns mixing matrix.

Parameters
----------
data : (S x T) :obj:`numpy.ndarray`
Dimensionally reduced optimally combined functional data, where `S` is
samples and `T` is time
n_components : :obj:`int`
Number of components retained from PCA decomposition
Number of components retained from PCA decomposition.
fixed_seed : :obj:`int`
Seed for ensuring reproducibility of ICA results
Seed for ensuring reproducibility of ICA results.
ica_method : :obj:`str`, optional
Selected ICA method; can be "fastica" or "robustica". Default is "fastica".
n_robust_runs : :obj:`int`, optional
Selected number of robust runs when robustica is used. Default is 30.
maxit : :obj:`int`, optional
Maximum number of iterations for ICA. Default is 500.
maxrestart : :obj:`int`, optional
@@ -37,17 +59,171 @@ def tedica(data, n_components, fixed_seed, maxit=500, maxrestart=10):
where `C` is components and `T` is the same as in `data`
fixed_seed : :obj:`int`
Random seed from final decomposition.

Notes
-----
Uses `sklearn` implementation of FastICA for decomposition
"""
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
RepLGR.info(
"Independent component analysis was then used to "
"decompose the dimensionally reduced dataset."
)

ica_method = ica_method.lower()

if ica_method == "robustica":
mmix, fixed_seed = r_ica(
data,
n_components=n_components,
fixed_seed=fixed_seed,
n_robust_runs=n_robust_runs,
max_it=maxit,
)
elif ica_method == "fastica":
mmix, fixed_seed = f_ica(
data,
n_components=n_components,
fixed_seed=fixed_seed,
maxit=maxit,
maxrestart=maxrestart,
)
else:
raise ValueError("The selected ICA method is invalid!")

return mmix, fixed_seed


def r_ica(data, n_components, fixed_seed, n_robust_runs, max_it):
"""Perform robustica on `data` and returns mixing matrix.

Parameters
----------
data : (S x T) :obj:`numpy.ndarray`
Dimensionally reduced optimally combined functional data, where `S` is
samples and `T` is time
n_components : :obj:`int`
Number of components retained from PCA decomposition.
fixed_seed : :obj:`int`
Seed for ensuring reproducibility of ICA results.
n_robust_runs : :obj:`int`
Selected number of robust runs when robustica is used. Default is 30.
max_it : :obj:`int`, optional
Maximum number of iterations for ICA. Default is 500.

Returns
-------
mmix : (T x C) :obj:`numpy.ndarray`
Z-scored mixing matrix for converting input data to component space,
where `C` is components and `T` is the same as in `data`
fixed_seed : :obj:`int`
Random seed from final decomposition.
"""
if n_robust_runs > WARN_N_ROBUST_RUNS:
LGR.warning(
"The selected n_robust_runs is a very big number! The process will take a long time!"
)

RepLGR.info("RobustICA package was used for ICA decomposition \\citep{Anglada2022}.")

if fixed_seed == -1:
fixed_seed = np.random.randint(low=1, high=1000)

robust_method = "DBSCAN"
robust_ica_converged = False
while not robust_ica_converged:
try:
robust_ica = RobustICA(
n_components=n_components,
robust_runs=n_robust_runs,
whiten="arbitrary-variance",
max_iter=max_it,
random_state=fixed_seed,
robust_dimreduce=False,
fun="logcosh",
robust_method=robust_method,
)

s, mmix = robust_ica.fit_transform(data)
q = robust_ica.evaluate_clustering(
robust_ica.S_all,
robust_ica.clustering.labels_,
robust_ica.signs_,
robust_ica.orientation_,
)
robust_ica_converged = True

except Exception:
if robust_method == "DBSCAN":
# if RobustICA failed with DBSCAN, run again with AgglomerativeClustering
robust_method = "AgglomerativeClustering"
else:
raise ValueError("RobustICA failed to converge")

LGR.info(
f"The {robust_method} clustering algorithm was used clustering "
f"components across different runs"
)

iq = np.array(
np.mean(q[q["cluster_id"] >= 0].iq)
) # Excluding outliers (cluster -1) from the index quality calculation

if iq < WARN_IQ:
LGR.warning(
f"The resultant mean Index Quality is low ({iq}). It is recommended to rerun the "
"process with a different seed."
)

mmix = mmix[
:, q["cluster_id"] >= 0
] # Excluding outliers (cluster -1) when calculating the mixing matrix
mmix = stats.zscore(mmix, axis=0)

LGR.info(
f"RobustICA with {n_robust_runs} robust runs and seed {fixed_seed} was used. "
f"The mean Index Quality is {iq}."
)

n_outliers = np.count_nonzero(robust_ica.clustering.labels_ == -1)  # components flagged as outliers
if n_outliers:
LGR.info(
f"The {robust_method} clustering algorithm detected outliers when clustering "
f"components for different runs. These outliers are excluded when calculating "
f"the index quality and the mixing matrix to maximise the robustness of the "
f"decomposition."
)

return mmix, fixed_seed


def f_ica(data, n_components, fixed_seed, maxit, maxrestart):
"""Perform FastICA on `data` and returns mixing matrix.

Parameters
----------
data : (S x T) :obj:`numpy.ndarray`
Dimensionally reduced optimally combined functional data, where `S` is
samples and `T` is time
n_components : :obj:`int`
Number of components retained from PCA decomposition
fixed_seed : :obj:`int`
Seed for ensuring reproducibility of ICA results
maxit : :obj:`int`, optional
Maximum number of iterations for ICA. Default is 500.
maxrestart : :obj:`int`, optional
Maximum number of attempted decompositions to perform with different
random seeds. ICA will stop running if there is convergence prior to
reaching this limit. Default is 10.

Returns
-------
mmix : (T x C) :obj:`numpy.ndarray`
Z-scored mixing matrix for converting input data to component space,
where `C` is components and `T` is the same as in `data`
fixed_seed : :obj:`int`
Random seed from final decomposition.

Notes
-----
Uses `sklearn` implementation of FastICA for decomposition
"""
if fixed_seed == -1:
fixed_seed = np.random.randint(low=1, high=1000)

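
For reference, the clustering fallback in ``r_ica`` above boils down to the following
pattern. This isolated sketch assumes the ``robustica`` package API as called in the
diff (``RobustICA``, ``fit_transform``, ``evaluate_clustering``); the wrapper function
itself is hypothetical:

from robustica import RobustICA


def run_robust_ica(data, n_components, seed, n_robust_runs=30, max_iter=500):
    """Sketch of the DBSCAN -> AgglomerativeClustering fallback used in r_ica."""
    for robust_method in ("DBSCAN", "AgglomerativeClustering"):
        try:
            rica = RobustICA(
                n_components=n_components,
                robust_runs=n_robust_runs,
                whiten="arbitrary-variance",
                max_iter=max_iter,
                random_state=seed,
                robust_dimreduce=False,
                fun="logcosh",
                robust_method=robust_method,
            )
            sources, mixing = rica.fit_transform(data)
            # Cluster quality per component cluster; cluster_id == -1 marks outliers.
            quality = rica.evaluate_clustering(
                rica.S_all, rica.clustering.labels_, rica.signs_, rica.orientation_
            )
            return sources, mixing, quality
        except Exception:
            # Fall through to the next clustering method, as r_ica does.
            continue
    raise ValueError("RobustICA failed to converge with either clustering method.")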
11 changes: 11 additions & 0 deletions tedana/resources/references.bib
@@ -333,3 +333,14 @@ @article{tedana_decision_trees
year = {2024},
doi = {10.6084/m9.figshare.25251433.v2}
}

@Article{Anglada2022,
Author = {Anglada-Girotto, Miquel and Miravet-Verde, Samuel and Serrano, Luis and Head, Sarah},
Title = {robustica: customizable robust independent component analysis},
Journal = {BMC Bioinformatics},
Volume = {23},
Number = {519},
doi = {10.1186/s12859-022-05043-9},
year = 2022
}

20 changes: 0 additions & 20 deletions tedana/tests/data/nih_five_echo_outputs_verbose.txt
@@ -121,26 +121,6 @@ figures/sub-01_comp_028.png
figures/sub-01_comp_029.png
figures/sub-01_comp_030.png
figures/sub-01_comp_031.png
figures/sub-01_comp_032.png
figures/sub-01_comp_033.png
figures/sub-01_comp_034.png
figures/sub-01_comp_035.png
figures/sub-01_comp_036.png
figures/sub-01_comp_037.png
figures/sub-01_comp_038.png
figures/sub-01_comp_039.png
figures/sub-01_comp_040.png
figures/sub-01_comp_041.png
figures/sub-01_comp_042.png
figures/sub-01_comp_043.png
figures/sub-01_comp_044.png
figures/sub-01_comp_045.png
figures/sub-01_comp_046.png
figures/sub-01_comp_047.png
figures/sub-01_comp_048.png
figures/sub-01_comp_049.png
figures/sub-01_comp_050.png
figures/sub-01_comp_051.png
figures/sub-01_rmse_brain.svg
figures/sub-01_rmse_timeseries.svg
figures/sub-01_s0_brain.svg