Hello, thanks for sharing!
I encountered an error while training:
Traceback (most recent call last):
  File "/home/zhangjiabao/data/1/zhangjiabao/mesh/HTT/train.py", line 238, in <module>
    main(args)
  File "/home/zhangjiabao/data/1/zhangjiabao/mesh/HTT/train.py", line 131, in main
    epochpass.epoch_pass(
  File "/home/zhangjiabao/data/1/zhangjiabao/mesh/HTT/netscripts/epochpass.py", line 50, in epoch_pass
    loss, results, losses = model(batch)
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/home/zhangjiabao/data/1/zhangjiabao/miniconda3/envs/handmesh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zhangjiabao/data/1/zhangjiabao/mesh/HTT/models/htt.py", line 169, in forward
    batch_seq_ain_feature=flatten_ain_feature.contiguous().view(-1,self.ntokens_action,flatten_ain_feature.shape[-1])
RuntimeError: shape '[-1, 128, 512]' is invalid for input of size 16384
The shapes don't match. I think the relevant parameter is self.ntokens_action; is its default value something that needs to be customized for a particular setup?
Could you tell me how to resolve this shape mismatch?
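For reference, here is a minimal sketch of how I understand the mismatch (the tensor below is a placeholder, not the actual HTT feature): an input of 16384 elements with a feature width of 512 corresponds to only 32 tokens, which cannot be regrouped into sequences of ntokens_action = 128 tokens.

import torch

# Placeholder standing in for flatten_ain_feature: 32 tokens of width 512,
# i.e. 32 * 512 = 16384 elements in total (the size reported in the error).
flatten_ain_feature = torch.randn(32, 512)
ntokens_action = 128

# Reproduces the error: view(-1, 128, 512) needs the element count to be a
# multiple of 128 * 512 = 65536, but there are only 16384 elements.
try:
    flatten_ain_feature.contiguous().view(-1, ntokens_action, flatten_ain_feature.shape[-1])
except RuntimeError as e:
    print(e)  # shape '[-1, 128, 512]' is invalid for input of size 16384

# The view only works when the number of tokens per sequence divides the
# total token count, e.g. 32 tokens per sequence here:
ok = flatten_ain_feature.contiguous().view(-1, 32, flatten_ain_feature.shape[-1])
print(ok.shape)  # torch.Size([1, 32, 512])

So my guess is that either ntokens_action or the batch/sequence construction upstream has to be adjusted so the two agree, but I am not sure which one is intended to change.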