Currently, every output image voxel is processed individually. The sliding spatial window is centred at that location, the PCA data matrix is filled, decomposition is done, a threshold is found, and the output signal for that one voxel is reconstructed.
There is the option of using a number of patches that is smaller than the number of output image voxels. Say, for example, you use a factor-of-3 downsampling (easier to conceptualise since it doesn't require any change to kernel centring). Following decomposition and rank estimation, you reconstruct the low-rank representation of not just the voxel at the centre of the kernel, but the 3×3×3 block of 27 voxels at the centre of the kernel. You therefore only have to perform 1/27 as many SVDs.
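A minimal sketch of the idea (not the dwidenoise implementation): the `stride` parameter controls the spacing of the kernel centres, and the central `stride × stride × stride` block of each kernel is written back per SVD. The kernel size, stride, and fixed-rank truncation below are illustrative placeholders; a real MP-PCA implementation would derive the rank threshold from the Marchenko–Pastur distribution of the singular values.

```python
import numpy as np

def denoise(dwi, kernel=5, stride=3, rank=4):
    """dwi: 4-D array (X, Y, Z, volumes). Returns a denoised copy.

    stride=1 reproduces the current behaviour: one SVD per output voxel,
    keeping only the reconstructed signal at the kernel centre.
    stride=3 performs one SVD per 3x3x3 block of output voxels,
    i.e. 1/27 as many decompositions.
    """
    X, Y, Z, V = dwi.shape
    half = kernel // 2
    block = stride // 2          # half-width of the central block written per SVD
    out = np.zeros(dwi.shape, dtype=float)
    counts = np.zeros(dwi.shape[:3])

    for x in range(half, X - half, stride):
        for y in range(half, Y - half, stride):
            for z in range(half, Z - half, stride):
                # Build the PCA data matrix: one row per voxel in the kernel.
                patch = dwi[x-half:x+half+1, y-half:y+half+1, z-half:z+half+1]
                M = patch.reshape(-1, V).astype(float)
                mean = M.mean(axis=0)
                U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)
                # Placeholder rank truncation (MP-PCA would instead find the
                # threshold from the singular-value spectrum).
                s[rank:] = 0
                low_rank = ((U * s) @ Vt + mean).reshape(patch.shape)
                # Write back only the central block of the kernel
                # (a single voxel when stride == 1).
                c, sl = half, slice(half - block, half + block + 1)
                out[x-block:x+block+1, y-block:y+block+1, z-block:z+block+1] += low_rank[sl, sl, sl]
                counts[x-block:x+block+1, y-block:y+block+1, z-block:z+block+1] += 1

    mask = counts > 0
    out[mask] /= counts[mask][..., None]
    return out
```

Voxels near the image boundary that fall outside every central block would need separate handling (e.g. processed individually), which is omitted here for brevity.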
Not quite sure how to resolve this against #3024, since that suggests using data from across a larger number of patches, whereas here the suggestion is to use fewer patches. Perhaps the proposal here would be advantageous only for exceptionally large datasets.
This detail is mentioned in https://www.sciencedirect.com/science/article/pii/S1053811919305348.