Describe the bug
Using the patch pixel sampler (patch_size=32) together with masks makes training extremely slow, increasing training time from minutes (without masks) to days (with masks). After logging timings I identified that the `torch.nn.functional.max_pool2d(tensor, kernel_size=kernel_size, stride=1, padding=(kernel_size - 1) // 2)` call inside the `dilate` function in `nerfstudio/data/utils/pixel_sampling_utils.py` takes up to 77 seconds for a single batch.
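For reference, the call in question can be reproduced in isolation. This is a minimal sketch: the mask shape and kernel size below are illustrative assumptions, not the exact values used during training.

```python
import torch
import torch.nn.functional as F

# Standalone reproduction of the dilation used in
# nerfstudio/data/utils/pixel_sampling_utils.py: grayscale dilation
# implemented as a stride-1 max pool. The mask size and kernel_size
# here are assumptions for demonstration only.
mask = torch.rand(1, 1, 540, 960)  # (N, C, H, W) mask tensor
kernel_size = 31
dilated = F.max_pool2d(mask, kernel_size=kernel_size, stride=1,
                       padding=(kernel_size - 1) // 2)
# With an odd kernel and this padding, the spatial size is preserved.
```

On CPU this stride-1 max pool is effectively a dense sliding-window max over every pixel, which is why it dominates batch time for large kernels.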
To Reproduce
Steps to reproduce the behavior:
Find a NeRF scene with images and masks
Set --pipeline.datamanager.patch-size 32
Run w/ and w/o masks to see the difference
Expected behavior
A faster default implementation or the possibility to swap to GPU for fast max_pool2d computation.
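One possible workaround along these lines is to run the pooling on the GPU when one is available. `dilate_fast` below is a hypothetical helper, not an existing nerfstudio function; it wraps the same `max_pool2d` call and moves the result back to the mask's original device.

```python
import torch
import torch.nn.functional as F

def dilate_fast(mask: torch.Tensor, kernel_size: int) -> torch.Tensor:
    """Dilate a (N, C, H, W) mask via max pooling, on GPU if available.

    Hypothetical sketch of the proposed fix: identical math to the
    existing dilate, but the heavy max_pool2d runs on CUDA when present.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    out = F.max_pool2d(mask.to(device), kernel_size=kernel_size,
                       stride=1, padding=(kernel_size - 1) // 2)
    return out.to(mask.device)

# Usage: same result as the CPU path, potentially much faster on GPU.
mask = torch.rand(1, 1, 540, 960)
dilated = dilate_fast(mask, kernel_size=31)
```

The device round-trip keeps the sampler's downstream code unchanged, since the dilated mask comes back on whatever device the input was on.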