I have the same issue on my Nuke Indie 15.1v1.
ERROR: ViTMatte1.Inference1: Exception caught processing model: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File .../torch.py, line 13, in forward
    image_and_trimap = {"image": image, "trimap": trimap}
    model = self.model
    _0 = ((model).forward(image_and_trimap, ))["phas"]
          ~~~~~~~~~~~~~~ <--- HERE
    return torch.contiguous(_0)
  File .../vitmatte.py, line 21, in forward
    images, H, W, = _0
    backbone = self.backbone
    features = (backbone).forward(images, )
               ~~~~~~~~~~~~~~~~~ <--- HERE
    decoder = self.decoder
    outputs = (decoder).forward(features, images, )
  File .../vit.py, line 21, in forward
    _0 = torch.modeling.backbone.utils.get_abs_pos
    patch_embed = self.patch_embed
    x0 = (patch_embed).forward(x, )
         ~~~~~~~~~~~~~~~~~~~~ <--- HERE
    pos_embed = self.pos_embed
    pretrain_use_cls_token = self.pretrain_use_cls_token
  File .../utils.py, line 10, in forward
    x: Tensor) -> Tensor:
    proj = self.proj
    x0 = (proj).forward(x, )
         ~~~~~~~~~~~~~ <--- HERE
    return torch.permute(x0, [0, 2, 3, 1])
  def get_abs_pos(abs_pos: Tensor,
  File .../conv.py, line 23, in forward
    weight = self.weight
    bias = self.bias
    _0 = (self)._conv_forward(input, weight, bias, )
         ~~~~~~~~~~~~~~~~~~~ <--- HERE
    return _0
  def _conv_forward(self: torch.torch.nn.modules.conv.Conv2d,
  File .../conv.py, line 29, in _conv_forward
    weight: Tensor,
    bias: Optional[Tensor]) -> Tensor:
    _1 = torch.conv2d(input, weight, bias, [16, 16], [0, 0], [1, 1])
         ~~~~~~~~~~~~ <--- HERE
    return _1
Traceback of TorchScript, original code (most recent call last):
  File "nuke_vitmatte.py", line 72, in forward
    }
    return self.model(image_and_trimap)["phas"].contiguous()
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  File .../vitmatte.py, line 42, in forward
    images, H, W = self.preprocess_inputs(batched_inputs)
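The traceback bottoms out in ViTMatte's patch embedding: a `Conv2d` applied with a 16x16 kernel and stride (the `torch.conv2d(input, weight, bias, [16, 16], [0, 0], [1, 1])` call). A common way this call fails is a mismatch between the input tensor fed by the host application and what the convolution expects (channel count, dtype, or device). The sketch below is only an illustration of that failure mode, not the actual Nuke/ViTMatte code: the channel counts (4 = RGB image + trimap), embedding width (384), and input size (224) are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

# Hypothetical ViT-style patch embedding, mirroring the stride-16 conv2d
# seen in the traceback. in_channels=4 and out_channels=384 are assumed
# values for illustration, not read from the serialized model.
patch_embed = nn.Conv2d(in_channels=4, out_channels=384,
                        kernel_size=16, stride=16)

# Input that matches in_channels: the conv succeeds and produces a
# 14x14 grid of patch tokens (224 / 16 = 14).
ok = torch.rand(1, 4, 224, 224)
out = patch_embed(ok)
print(out.shape)  # torch.Size([1, 384, 14, 14])

# Input with the wrong channel count: conv2d raises a RuntimeError,
# which inside a scripted model surfaces as the kind of TorchScript
# interpreter exception shown in the log above.
bad = torch.rand(1, 3, 224, 224)
try:
    patch_embed(bad)
except RuntimeError as e:
    print("conv2d failed:", e)
```

Checking the number of channels, dtype (e.g. float32 vs float16), and device of the tensors the host passes into the scripted model is a reasonable first diagnostic step when the interpreter fails at this layer.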