I encountered an error while training the neural renderer #11

Open
mioyeah opened this issue Aug 12, 2023 · 0 comments

mioyeah commented Aug 12, 2023

```
/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/criterions/rendering_loss.py:150: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  flatten_index = (flatten_uv[:,:,0] // h + flatten_uv[:,:,1] // w * W).long()
```
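
This is only a deprecation notice, not the cause of the crash, but for reference, a minimal sketch of the replacement the warning suggests (the tensor values and sizes below are placeholders, not taken from the repository):

```python
import torch

# Placeholder inputs standing in for the ones at rendering_loss.py:150.
flatten_uv = torch.randint(0, 256, (1, 8, 2))  # assumed integer UV coordinates
h, w, W = 2, 2, 128                            # assumed block and image sizes

# Equivalent of `flatten_uv[:,:,0] // h + flatten_uv[:,:,1] // w * W` without
# the deprecated tensor __floordiv__. 'floor' matches Python's // semantics
# and is identical to 'trunc' as long as the coordinates are non-negative.
flatten_index = (
    torch.div(flatten_uv[:, :, 0], h, rounding_mode="floor")
    + torch.div(flatten_uv[:, :, 1], w, rounding_mode="floor") * W
).long()
```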
```
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [64,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [65,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [66,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [63,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
```
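
Device-side asserts like these are reported asynchronously, so the Python traceback below may blame a later operation than the one that actually failed. A common diagnostic (an addition for illustration, not something the repository does) is to force synchronous kernel launches before CUDA is initialized:

```python
# Hedged debugging sketch: set CUDA_LAUNCH_BLOCKING before torch touches the
# GPU so a device-side assert surfaces at the exact failing operation.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # import after setting the variable, before any CUDA work
```

Equivalently, launch training with `CUDA_LAUNCH_BLOCKING=1 python train.py ...` from the shell.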
```
Traceback (most recent call last):
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/train.py", line 32, in <module>
    cli_main()
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr_cli/train.py", line 378, in cli_main
    main(args)
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr_cli/train.py", line 107, in main
    should_end_training = train(args, trainer, task, epoch_itr)
  File "/home/lab3090/anaconda3/envs/neuralactor/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr_cli/train.py", line 184, in train
    log_output = trainer.train_step(samples)
  File "/home/lab3090/anaconda3/envs/neuralactor/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairseq-stable/fairseq/trainer.py", line 457, in train_step
    raise e
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairseq-stable/fairseq/trainer.py", line 425, in train_step
    loss, sample_size_i, logging_output = self.task.train_step(
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/tasks/neural_rendering.py", line 329, in train_step
    return super().train_step(sample, model, criterion, optimizer, update_num, ignore_grad)
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairseq-stable/fairseq/tasks/fairseq_task.py", line 351, in train_step
    loss, sample_size, logging_output = criterion(model, sample)
  File "/home/lab3090/anaconda3/envs/neuralactor/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/criterions/rendering_loss.py", line 48, in forward
    loss, loss_output = self.compute_loss(model, net_output, sample, reduce=reduce)
  File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/criterions/rendering_loss.py", line 156, in compute_loss
    target_colors = target_colors.gather(2, flatten_index.unsqueeze(-1).repeat(1,1,1,3))
RuntimeError: CUDA error: device-side assert triggered
```
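
The assertion and the traceback both point at the `gather` in rendering_loss.py:156: some entry of `flatten_index` falls outside dimension 2 of `target_colors`. A minimal bounds check that could be dropped in just before that line (the helper name is hypothetical, for illustration):

```python
import torch

def check_gather_indices(target_colors: torch.Tensor,
                         flatten_index: torch.Tensor) -> None:
    """Raise a readable error if any index would trip the CUDA gather assert."""
    index_size = target_colors.size(2)  # the dimension gathered over (dim=2)
    bad = (flatten_index < 0) | (flatten_index >= index_size)
    if bad.any():
        raise ValueError(
            f"{int(bad.sum())} indices outside [0, {index_size}): "
            f"min={int(flatten_index.min())}, max={int(flatten_index.max())}"
        )
```

If the maximum index reaches or exceeds `index_size`, one plausible cause is that the `h`, `w`, `W` values used to build `flatten_index` do not match the actual resolution of the loaded images.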
May I ask how to solve this error?

mioyeah changed the title from "I made an error during training" to "I encountered an error while training the neural renderer" on Aug 12, 2023