
Pad uses numpy though pytorch backend is available #6066

Closed
function2-llx opened this issue Feb 26, 2023 · 4 comments · Fixed by #6076

Comments

@function2-llx
Contributor

function2-llx commented Feb 26, 2023

Describe the bug
According to the docstring of `Pad`:

`torch.nn.functional.pad` is used unless the mode or kwargs are not available in torch,
in which case `np.pad` will be used.
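For context, the two backends the docstring refers to agree for modes that both support. A minimal standalone comparison (independent of MONAI's code):

```python
import numpy as np
import torch
import torch.nn.functional as F

x = np.arange(6, dtype=np.float32).reshape(2, 3)
# numpy takes one (before, after) pair per dimension
np_out = np.pad(x, ((1, 1), (0, 2)), mode="constant")
# torch takes the pad widths flattened, last dimension first
pt_out = F.pad(torch.from_numpy(x), (0, 2, 1, 1), mode="constant")
assert np.array_equal(np_out, pt_out.numpy())
print(np_out.shape)  # (4, 5)
```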

However, Pad sometimes uses numpy even though the pytorch backend is available.

To Reproduce
Run the following script, which wraps the numpy and torch pad functions so that each prints an identifying message, then pads an input with MONAI's transform to see which backend is used:

import torch

from monai.transforms import Pad
from monai.transforms.croppad import functional as F
from monai.utils import PytorchPadMode

class Wrap:
    def __init__(self, func, info: str):
        self.func = func
        self.info = info

    def __call__(self, *args, **kwargs):
        print(self.info)
        return self.func(*args, **kwargs)

if __name__ == '__main__':
    F._np_pad = Wrap(F._np_pad, 'np')
    F._pt_pad = Wrap(F._pt_pad, 'pt')
    pad = Pad(mode=PytorchPadMode.CONSTANT)
    x = torch.randn(1, 5, 5)
    print(pad(x, [(0, 1), (1, 1), (0, 0)]).shape)

Expected behavior

The input is a PyTorch tensor and the "constant" mode is available in torch, so the script should output pt (indicating that the PyTorch pad is used).

Actual behavior
The script outputs np (indicating that numpy pad is used).

Environment

================================
Printing MONAI config...
================================
MONAI version: 1.1.0+88.gf519699f
Numpy version: 1.24.2
Pytorch version: 1.13.1.post200
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: f519699f4b709aa69f23c252842e4798cac293d8

Optional dependencies:
Pytorch Ignite version: 0.4.10
ITK version: 5.3.0
Nibabel version: 5.0.0
scikit-image version: 0.19.3
Pillow version: 9.2.0
Tensorboard version: 2.12.0
gdown version: 4.6.2
TorchVision version: 0.14.1a0+b69fce3
tqdm version: 4.64.1
lmdb version: 1.4.0
psutil version: 5.9.4
pandas version: 1.5.3
einops version: 0.6.0
transformers version: 4.21.3
mlflow version: 2.1.1
pynrrd version: 1.0.0

For details about installing the optional dependencies, please visit:
    https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies


================================
Printing system config...
================================
System: Linux
Linux version: Ubuntu 22.04.2 LTS
Platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.35
Processor: x86_64
Machine: x86_64
Python version: 3.10.9
Process name: python
Command: ['python', '-c', 'import monai; monai.config.print_debug_info()']
Open files: []
Num physical CPUs: 8
Num logical CPUs: 16
Num usable CPUs: 16
CPU usage (%): [2.0, 2.7, 5.3, 3.4, 4.1, 2.1, 2.7, 0.0, 3.4, 4.2, 7.2, 12.2, 4.8, 6.2, 2.1, 97.9]
CPU freq. (MHz): 3733
Load avg. in last 1, 5, 15 mins (%): [2.6, 2.4, 1.9]
Disk usage (%): 47.3
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 62.7
Available memory (GB): 32.0
Used memory (GB): 29.7

================================
Printing GPU config...
================================
Num GPUs: 1
Has CUDA: True
CUDA version: 11.2
cuDNN enabled: True
cuDNN version: 8401
Current device: 0
Library compiled for CUDA architectures: ['sm_35', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_86']
GPU 0 Name: NVIDIA GeForce GTX 1080 Ti
GPU 0 Is integrated: False
GPU 0 Is multi GPU board: False
GPU 0 Multi processor count: 28
GPU 0 Total memory (GB): 10.9
GPU 0 CUDA capability (maj.min): 6.1

Additional context

I think the condition for PyTorch here is too strict.

_pad = (
    _pt_pad
    if mode in {"reflect", "replicate"} and img.dtype not in {torch.int16, torch.int64, torch.bool, torch.uint8}
    else _np_pad
)
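To illustrate: under this condition, any mode outside {"reflect", "replicate"}, including the default "constant", always dispatches to numpy. A minimal stand-in for the dispatch logic (`chosen_backend` is an invented name, not the real function):

```python
# Hypothetical re-creation of the dispatch condition above, reduced to
# the two inputs that matter: the mode and whether the dtype is a float.
def chosen_backend(mode: str, dtype_is_float: bool = True) -> str:
    return "pt" if mode in {"reflect", "replicate"} and dtype_is_float else "np"

print(chosen_backend("constant"))   # "np": constant mode never reaches torch
print(chosen_backend("replicate"))  # "pt"
```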

@wyli
Contributor

wyli commented Feb 26, 2023

thanks for reporting, would you like to help enhance the logic? the main test cases are listed here: https://github.com/Project-MONAI/MONAI/blob/dev/tests/test_pad_mode.py

@function2-llx
Contributor Author

Hi @wyli, I'd like to, but probably not for another two weeks, since I won't have enough time until then.

#4749 introduced this behavior to fix another issue. Could you please give some hints on how to keep the two fixes compatible?

@function2-llx
Contributor Author

function2-llx commented Feb 27, 2023

I see. According to pytorch/pytorch#40763 (comment), torch.nn.functional.pad does not support input tensors with dtypes other than float32 or float64 for the "replicate" and "reflect" modes. The fix is simple: change this condition:

_pad = (
    _pt_pad
    if mode in {"reflect", "replicate"} and img.dtype not in {torch.int16, torch.int64, torch.bool, torch.uint8}
    else _np_pad
)

to:

_pad = (
    _np_pad
    if mode in {"reflect", "replicate"} and img.dtype in {torch.int16, torch.int64, torch.bool, torch.uint8}
    else _pt_pad
)
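Which dtype/mode combinations the torch backend actually accepts can be checked empirically. This small probe script is independent of MONAI (`pt_pad_supported` is an invented helper), and the set of failing combinations may vary across PyTorch versions:

```python
import torch
import torch.nn.functional as F

def pt_pad_supported(dtype: torch.dtype, mode: str) -> bool:
    """Probe whether torch.nn.functional.pad accepts this dtype/mode combo."""
    x = torch.zeros(1, 1, 4, 4, dtype=dtype)  # (N, C, H, W), as the 2D modes expect
    try:
        F.pad(x, (1, 1, 1, 1), mode=mode)
        return True
    except (RuntimeError, NotImplementedError):
        return False

for dtype in (torch.float32, torch.int64, torch.bool):
    for mode in ("constant", "replicate", "reflect"):
        print(f"{dtype} / {mode}: {pt_pad_supported(dtype, mode)}")
```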

Alternatively, we could try the PyTorch pad first and fall back to the NumPy pad if an exception occurs. I'm not sure which approach is preferred.
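That fallback option could look roughly like this (a hypothetical sketch, not MONAI's actual code; note the two backends also spell some kwargs differently, e.g. torch's `value` vs numpy's `constant_values`, which a real fix would need to translate):

```python
import numpy as np
import torch
import torch.nn.functional as F

def pad_with_fallback(img: torch.Tensor, pad_width, mode="constant", **kwargs):
    """Hypothetical sketch: try the torch backend first, fall back to numpy.

    pad_width follows the numpy convention: one (before, after) pair per dim.
    """
    try:
        # torch expects the pad widths flattened, last dimension first
        flat = [p for pair in reversed(pad_width) for p in pair]
        return F.pad(img, flat, mode=mode, **kwargs)
    except (RuntimeError, NotImplementedError):
        # numpy handles the remaining dtype/mode combinations
        padded = np.pad(img.cpu().numpy(), pad_width, mode=mode, **kwargs)
        return torch.as_tensor(padded)

out = pad_with_fallback(torch.randn(1, 5, 5), [(0, 1), (1, 1), (0, 0)])
print(out.shape)  # torch.Size([2, 7, 5])
```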

@wyli
Contributor

wyli commented Feb 27, 2023

ok, I remember _pt_pad was not very robust; it's only compatible with a few combinations of dtypes and padding modes. I can have a look soon.

@wyli wyli mentioned this issue Feb 27, 2023
wyli added a commit that referenced this issue Feb 28, 2023
Fixes #6066

### Description

prefer the pytorch backend as much as possible

### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [x] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.

---------

Signed-off-by: Wenqi Li <wenqil@nvidia.com>
wyli pushed a commit that referenced this issue Mar 29, 2023
Fixes #6245.

### Description
support GPU tensor for
* `GridPatch`, `GridPatchd`, `RandGridPatch` and `RandGridPatchd`
* `GridPatchDataset`, `PatchIter`, `PatchIterd` and `iter_patch`

#### Changes
* Currently, `GridPatch`, `GridPatchDataset` and their siblings need to
copy GPU tensors to numpy arrays (or do not support GPU tensors at all) in their
pipeline. This PR enables them to support end-to-end GPU operations.
* Since the padding function `pad_nd` is used heavily, its robustness is
improved. Some strange behavior introduced by
#6066 is fixed.
* In `GridPatch`, #6082 might also be fixed by this PR.
* `RandGridPatch`'s mode defaults to `None` (consistent with docs)

#### Need to Notice
* The default padding mode of `PatchIter`, `PatchIterd` and `iter_patch` is
kept as `NumpyPadMode.WRAP`, while `GridPatch` defaults to `None`
(which might be inconsistent?)

### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.

---------


Signed-off-by: Qingpeng Li <qingpeng9802@gmail.com>
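For illustration, the grid patching these transforms perform can be sketched in pure PyTorch so that every operation stays on the input's device, CPU or GPU alike (`grid_patches` below is an invented helper, not MONAI's implementation):

```python
import torch

def grid_patches(img: torch.Tensor, patch_size):
    """Invented helper illustrating non-overlapping grid patching.

    img: (C, H, W) tensor; H and W are assumed divisible by the patch size.
    All ops stay on img's device, which is the point of the PR above.
    """
    c, _, _ = img.shape
    ph, pw = patch_size
    # unfold carves non-overlapping windows along H, then along W
    patches = img.unfold(1, ph, ph).unfold(2, pw, pw)  # (C, H//ph, W//pw, ph, pw)
    return patches.reshape(c, -1, ph, pw).permute(1, 0, 2, 3)  # (N, C, ph, pw)

img = torch.arange(16.0).reshape(1, 4, 4)
patches = grid_patches(img, (2, 2))
print(patches.shape)  # torch.Size([4, 1, 2, 2])
```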