enable gpu load nifti #8188

Open
wants to merge 24 commits into dev from add-gds-support-on-niftireader

Conversation

@yiheng-wang-nv (Contributor) commented Nov 2, 2024

Related to #8241.

Description

A few sentences describing the changes proposed in this pull request.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Signed-off-by: Yiheng Wang <vennw@nvidia.com>
@yiheng-wang-nv force-pushed the add-gds-support-on-niftireader branch from ed1de7e to ca1cfb8 on November 2, 2024 00:35
pre-commit-ci bot and others added 8 commits November 2, 2024 00:35
@yiheng-wang-nv (Contributor, Author)

related topic in: nipy/nibabel#1385

Will modify my draft PR accordingly.

yiheng-wang-nv and others added 5 commits December 12, 2024 12:05
@yiheng-wang-nv marked this pull request as ready for review December 13, 2024 06:20
Default is False. CuPy and Kvikio are required for this option.
Note: For compressed NIfTI files, some operations may still be performed on CPU memory,
and the acceleration may not be significant. In some cases, it may be slower than loading on CPU.
#TODO: the first kvikio call is slow since it will initialize internal buffers, cuFile, GDS, etc.
@yiheng-wang-nv (Contributor, Author)

Hi @Nic-Ma @KumoLiu @ericspod, I added a TODO here; after finishing an acceleration tutorial, I will submit a separate PR to update the docstrings. What do you think?

@ericspod (Member)

Is there any way to do that warm up call in some other way, such as in the constructor?

@yiheng-wang-nv (Contributor, Author)

Hi @ericspod, thanks for the suggestion.
Yes, we can do it in the constructor and trigger it when to_gpu=True. However, according to the kvikio team, there is no standard warm-up method. My current approach is:

import tempfile

import cupy as cp
import kvikio


def warm_up():
    # one-time round trip through kvikio to initialize its internal buffers, cuFile, GDS, etc.
    a = cp.arange(100)
    with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
        tmp_file_name = tmp_file.name
        f = kvikio.CuFile(tmp_file_name, "w")
        # write the whole array to the file
        f.write(a)
        f.close()

        b = cp.empty_like(a)
        f = kvikio.CuFile(tmp_file_name, "r")
        f.read(b)
        f.close()

I think it can be used in practice, or in a tutorial, but I'm not sure whether it's appropriate to include this kind of code in our reader. What do you think?

@ericspod (Member)

This is probably the only way if there's nothing that explicitly initializes things, so it's up to you whether avoiding the slow initial call is worth adding this. I would add a comment about why this is here, and set delete_on_close to False instead of delete in the context manager.
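
For reference, a minimal sketch of the constructor-triggered warm-up being discussed, assuming the reader exposes a to_gpu flag as in this PR's diff; the _kvikio_warm_up helper, its placement, and the simplified class are illustrative, not the merged implementation. delete_on_close=False (Python 3.12+) follows the suggestion above and still removes the temporary file when the with-block exits.

import tempfile

import cupy as cp
import kvikio


def _kvikio_warm_up():
    # pay the one-time kvikio/cuFile/GDS initialization cost up front,
    # so the first real read is not slowed down by it
    a = cp.arange(100)
    # keep the file on disk while kvikio reopens it; it is removed when the with-block exits
    with tempfile.NamedTemporaryFile(delete_on_close=False) as tmp_file:
        f = kvikio.CuFile(tmp_file.name, "w")
        f.write(a)
        f.close()

        b = cp.empty_like(a)
        f = kvikio.CuFile(tmp_file.name, "r")
        f.read(b)
        f.close()


class NibabelReader:
    # sketch only: the real reader has many more parameters and methods
    def __init__(self, to_gpu: bool = False, **kwargs):
        self.to_gpu = to_gpu
        if to_gpu:
            _kvikio_warm_up()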

@KumoLiu (Contributor) commented Dec 13, 2024

Thank you for your contribution!

I have a question about this enhancement: Will it load NIfTI/DICOM files to the GPU faster compared to caching all the data on the GPU? If not, does this option use less GPU memory since the metadata is stored on the CPU? Thanks.

@yiheng-wang-nv (Contributor, Author) commented Dec 13, 2024

Hi @KumoLiu, these are two different things. The enhanced reader can load nii images from disk directly into GPU memory (the old way is to load from disk to CPU and then transfer from CPU to GPU), and the loaded data can then be used for other purposes, such as caching as you mentioned.
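
To make the distinction concrete, a usage sketch of the direct disk-to-GPU path, assuming the flag ends up exposed as NibabelReader(to_gpu=True) as in this PR's diff; the file path is a placeholder, and CuPy plus kvikio must be installed.

import cupy as cp

from monai.data import NibabelReader

reader = NibabelReader(to_gpu=True)
# the voxel data goes from disk straight into GPU memory
# (the old path was disk -> CPU array -> GPU transfer)
img_obj = reader.read("/path/to/volume.nii")
data, meta = reader.get_data(img_obj)
assert isinstance(data, cp.ndarray)  # expected when to_gpu=True

Caching the resulting GPU arrays (for example with CacheDataset) is a separate step applied to this loaded data, as noted above.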

@yiheng-wang-nv (Contributor, Author)

Hi @KumoLiu, do you know the root cause of the doc error: https://github.com/Project-MONAI/MONAI/actions/runs/12311294929/job/34361260922?pr=8188#step:7:247 ?
cupy and kvikio are only imported optionally, so I'm not sure why we get this error. Do I need to modify other places as well?

@KumoLiu (Contributor) commented Dec 13, 2024

Yes, you may need to include cupy here: https://github.com/Project-MONAI/MONAI/blob/dev/docs/requirements.txt

monai/data/image_reader.py (two resolved review comments)
@@ -924,7 +952,7 @@ def read(self, data: Sequence[PathLike] | PathLike, **kwargs):
img_.append(img) # type: ignore
return img_ if len(filenames) > 1 else img_[0]

- def get_data(self, img) -> tuple[np.ndarray, dict]:
+ def get_data(self, img) -> tuple[np.ndarray | cp.ndarray, dict]:
@ericspod (Member)

Does the typing work if CuPy isn't installed? I don't know if our CI/CD tests are going to catch it when it's not.

@yiheng-wang-nv (Contributor, Author)

Thanks @ericspod. TYPE_CHECKING requires the library to be installed, but installing cupy and kvikio on our CI/CD machines doesn't seem easy (they cannot simply be added to requirements.txt).
Therefore, I removed this type hint and removed the import from TYPE_CHECKING. Do you think that's fine?

@ericspod (Member)

I think we have to be pragmatic, so if that works let's do it for now. Users who don't have CuPy installed would have issues too. I'd suggest later looking at defining some sort of placeholder type for CuPy so we can type things as either sort of ndarray.
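
A rough sketch of the placeholder-type idea mentioned here, so annotations can name a CuPy array without requiring CuPy at runtime; the alias name NdarrayOrCupy and the standalone get_data signature are illustrative, not MONAI's actual code.

from __future__ import annotations

from typing import TYPE_CHECKING, Any, Union

import numpy as np

if TYPE_CHECKING:
    # only evaluated by static type checkers, so CuPy need not be installed at runtime
    import cupy as cp

    NdarrayOrCupy = Union[np.ndarray, cp.ndarray]
else:
    # loose runtime fallback when CuPy may be missing
    NdarrayOrCupy = Any


def get_data(img) -> tuple[NdarrayOrCupy, dict]:
    # returns the image array (NumPy or CuPy) plus its metadata
    ...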

yiheng-wang-nv and others added 3 commits December 16, 2024 15:08
Co-authored-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Signed-off-by: Yiheng Wang <68361391+yiheng-wang-nv@users.noreply.github.com>