
{devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4 + CUDA 11.7.0 #15924

Merged

Conversation

@casparvl (Contributor) commented Jul 28, 2022

@casparvl changed the title from "{devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4" to "{devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4 [WIP]" on Jul 29, 2022
casparl and others added 3 commits August 1, 2022 15:04
…these - lines change with every PT version etc. - and we now have the ability to allow for a small number of failed tests. Set the allowed number of failing tests to 20 (14 fail on my current system).
@casparvl (Contributor Author) commented Aug 2, 2022

Test report by @casparvl
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
gcn1 - Linux RHEL 8.4, x86_64, Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz, 4 x NVIDIA A100-SXM4-40GB, 515.43.04, Python 3.6.8
See https://gist.github.com/bace94cc78d56c56eb4e5d5c86b12d46 for a full test report.

@casparvl (Contributor Author) commented Aug 3, 2022

Should probably add pytorch/pytorch#81691

@surak (Contributor) commented Aug 3, 2022

Hey Casper, have a look at this one pytorch/pytorch#81691

@casparvl (Contributor Author) commented Aug 5, 2022

@surak : I had a look, but that PR is still being actively worked on. I wouldn't be in favor of taking the current 'fix' and applying it as an EasyBuild patch as long as they haven't settled on what exactly that fix should look like.

I propose we check whether they have settled on a solution by the time we are about to merge this PR - and if so, add that patch as a final step. If not, I'd propose to merge this as-is, and update this EasyConfig with a patch once a solution has been settled on.
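For reference, a minimal sketch of how such an upstream fix could later be carried in this easyconfig; the patch name and checksum below are placeholders, not existing files:

```python
patches = [
    # ... patches already listed in this easyconfig ...
    'PyTorch-1.12.0_fix-from-pytorch-81691.patch',  # placeholder name for the settled upstream fix
]
checksums = [
    # ... existing checksums ...
    '<sha256 of the new patch>',  # placeholder
]
```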

@casparvl (Contributor Author) commented Aug 5, 2022

Failing tests are:

        distributions/test_constraints failed!
        distributions/test_distributions failed!
        test_fx failed!
        test_jit failed!
        test_jit_cuda_fuser failed!
        test_jit_legacy failed!
        test_jit_profiling failed!
        test_package failed!
        test_quantization failed!
        test_reductions failed!
        test_sort_and_select failed!
        test_sparse failed!
        test_tensor_creation_ops failed!
        test_torch failed!

Due to max_failed_tests=20 the build will complete, but I'll try to check whether these failures point to real issues with the installation.
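For context, this is roughly how that limit is expressed in the easyconfig (excerpt-style illustration, surrounding lines omitted):

```python
# Allow the test step to pass with a limited number of reported failures
# instead of aborting the installation outright.
max_failed_tests = 20
```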

We should, however, think about patching our EasyBlock in how it counts the number of failed tests: I'm pretty sure it currently counts a complete test suite as a single failure, in which case a limit of 20 is actually quite a lot. As an example: test_tensor_creation_ops consists of 1008 tests, of which 50 fail and 141 are skipped. Currently, this counts as 1 failure towards the limit of 20; it should count as 50 failures, and our limit should probably be higher...
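To illustrate the granularity issue, a rough sketch (not the actual EasyBlock code) that counts individual failed tests from the suite summaries instead of counting each failed suite as one:

```python
import re

# Summary lines taken from the output below; unittest-style suites report
# "FAILED (failures=..., errors=...)", pytest-style suites report "N failed, ...".
log = """
FAILED (failures=1)
FAILED (errors=10, skipped=736, expected failures=6)
============= 2 failed, 128 passed, 2 skipped, 2 warnings in 4.42s =============
"""

total = 0
for failures, errors in re.findall(r"FAILED \((?:failures=(\d+))?(?:, )?(?:errors=(\d+))?", log):
    total += int(failures or 0) + int(errors or 0)
for pytest_failed in re.findall(r"(\d+) failed", log):
    total += int(pytest_failed)

print(total)  # 13 individual failing tests, versus 3 failed suites
```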

@casparvl (Contributor Author) commented Aug 8, 2022

Analysis of the failing tests (part 1)

distributions/test_constraints

=================================== FAILURES ===================================
______________ test_constraint[True-constraint_fn5-False-value5] _______________
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/distributions/test_constraints.py", line 71, in test_constraint
    assert constraint_fn.check(t(value)).all() == result
AssertionError: assert tensor(True, device='cuda:0') == False
 +  where tensor(True, device='cuda:0') = <built-in method all of Tensor object at 0x152135570950>()
 +    where <built-in method all of Tensor object at 0x152135570950> = tensor(True, device='cuda:0').all
 +      where tensor(True, device='cuda:0') = <bound method _PositiveDefinite.check of PositiveDefinite()>(tensor([[ 3., -5.],\n        [-5.,  3.]], device='cuda:0', dtype=torch.float64))
 +        where <bound method _PositiveDefinite.check of PositiveDefinite()> = PositiveDefinite().check
 +        and   tensor([[ 3., -5.],\n        [-5.,  3.]], device='cuda:0', dtype=torch.float64) = <class 'torch.cuda.DoubleTensor'>([[3.0, -5], [-5.0, 3]])
_______________ test_constraint[True-constraint_fn7-True-value7] _______________
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/distributions/test_constraints.py", line 71, in test_constraint
    assert constraint_fn.check(t(value)).all() == result
AssertionError: assert tensor(False, device='cuda:0') == True
 +  where tensor(False, device='cuda:0') = <built-in method all of Tensor object at 0x1521355719e0>()
 +    where <built-in method all of Tensor object at 0x1521355719e0> = tensor(False, device='cuda:0').all
 +      where tensor(False, device='cuda:0') = <bound method _PositiveSemidefinite.check of PositiveSemidefinite()>(tensor([[1., 2.],\n        [2., 4.]], device='cuda:0', dtype=torch.float64))
 +        where <bound method _PositiveSemidefinite.check of PositiveSemidefinite()> = PositiveSemidefinite().check
 +        and   tensor([[1., 2.],\n        [2., 4.]], device='cuda:0', dtype=torch.float64) = <class 'torch.cuda.DoubleTensor'>([[1.0, 2], [2.0, 4]])
=============================== warnings summary ===============================
../../../../../../../../../home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_cuda.py:19
  /home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_cuda.py:19: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    CUDA11OrLater = torch.version.cuda and LooseVersion(torch.version.cuda) >= "11.0"

../../../../../../../../../sw/arch/RHEL8/EB_production/2022/software/Python/3.10.4-GCCcore-11.3.0/lib/python3.10/site-packages/setuptools/_distutils/version.py:351
  /sw/arch/RHEL8/EB_production/2022/software/Python/3.10.4-GCCcore-11.3.0/lib/python3.10/site-packages/setuptools/_distutils/version.py:351: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    other = LooseVersion(other)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [2] distributions/test_constraints.py:83: `biject_to` not implemented.
FAILED distributions/test_constraints.py::test_constraint[True-constraint_fn5-False-value5]
FAILED distributions/test_constraints.py::test_constraint[True-constraint_fn7-True-value7]
============= 2 failed, 128 passed, 2 skipped, 2 warnings in 4.42s =============
distributions/test_constraints failed!
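The two failures above are positive-(semi)definite constraint checks that give the wrong answer only on the GPU. A minimal standalone reproduction of the first case (assuming a CUDA device is available):

```python
import torch
from torch.distributions import constraints

# The matrix from the failing case is not positive definite (eigenvalues 8 and -2),
# so the check should be False; in the failing run it returned True on cuda:0.
m = torch.tensor([[3., -5.], [-5., 3.]], dtype=torch.float64)
print(constraints.positive_definite.check(m).all())             # expected: tensor(False)
if torch.cuda.is_available():
    print(constraints.positive_definite.check(m.cuda()).all())  # tensor(True) in the failing run
```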

distributions/test_distributions

======================================================================
FAIL: test_wishart_log_prob (__main__.TestDistributions)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/distributions/test_distributions.py", line 2334, in test_wishart_log_prob
    self.assertEqual(0.0, (batched_prob - unbatched_prob).abs().max(), atol=1e-3, rtol=0)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Scalars are not close!

Absolute difference: nan (up to 0.001 allowed)
Relative difference: nan (up to 0 allowed)

----------------------------------------------------------------------
Ran 219 tests in 26.549s

FAILED (failures=1)
distributions/test_distributions failed!
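For reference, a minimal sketch of the consistency that the failing test checks, i.e. that batched and per-sample Wishart log_prob values agree (parameters here are illustrative, not the test's):

```python
import torch
from torch.distributions import Wishart

# Illustrative parameters only: batched log_prob should match per-sample log_prob;
# the failing run reported a NaN difference instead of ~0.
d = Wishart(df=torch.tensor(5.), covariance_matrix=torch.eye(3))
samples = d.sample((4,))
batched = d.log_prob(samples)
per_sample = torch.stack([d.log_prob(s) for s in samples])
print((batched - per_sample).abs().max())  # should be (close to) 0
```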

test_fx

======================================================================
ERROR: test_assert (__main__.TestFX)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 3289, in test_assert
    traced = symbolic_trace(f)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 3285, in f
    assert x > 1
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 284, in __bool__
    return self.tracer.to_bool(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 160, in to_bool
    raise TraceError('symbolically traced variables cannot be used as inputs to control flow')
torch.fx.proxy.TraceError: symbolically traced variables cannot be used as inputs to control flow

======================================================================
ERROR: test_nn_functional_adaptive_max_pool1d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
======================================================================
ERROR: test_nn_functional_adaptive_max_pool2d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors

======================================================================
ERROR: test_nn_functional_adaptive_max_pool3d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors

======================================================================
ERROR: test_nn_functional_fractional_max_pool2d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
======================================================================
ERROR: test_nn_functional_fractional_max_pool3d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors

======================================================================
ERROR: test_nn_functional_group_norm (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/functional.py", line 2515, in group_norm
    _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:]))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 291, in __len__
    raise RuntimeError("'len' is not supported in symbolic tracing by default. If you want "
RuntimeError: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope

======================================================================
ERROR: test_nn_functional_max_pool1d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
======================================================================
ERROR: test_nn_functional_max_pool2d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors

======================================================================
ERROR: test_nn_functional_max_pool3d (__main__.TestFunctionalTracing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_fx.py", line 4027, in functional_test
    symbolic_trace(fn)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 878, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 587, in trace
    self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/_jit_internal.py", line 416, in fn
    dispatch_flag = kwargs[arg_name]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 260, in __iter__
    return self.tracer.iter(self)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/proxy.py", line 169, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors

----------------------------------------------------------------------
Ran 1470 tests in 5.177s

FAILED (errors=10, skipped=736, expected failures=6)
test_fx failed!
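Most of these test_fx errors are torch.fx symbolic-tracing TraceErrors. As a standalone illustration of what that error means (not a reproduction of the tests themselves):

```python
import torch.fx

def f(x):
    # During symbolic tracing, x is a Proxy without a concrete value, so using it in
    # control flow hits Proxy.__bool__ and raises a TraceError like the ones above.
    if x > 1:
        return x
    return -x

try:
    torch.fx.symbolic_trace(f)
except torch.fx.proxy.TraceError as err:
    print(err)  # symbolically traced variables cannot be used as inputs to control flow
```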

@casparvl (Contributor Author) commented Aug 8, 2022

Analysis of the failing tests (part 2)

test_jit

======================================================================
ERROR: test_adv_indexing_list (jit.test_python_builtins.TestPythonBuiltinOP)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_python_builtins.py", line 368, in test_adv_indexing_list
    self.checkScript(func5, (input,))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 521, in checkScript
    python_outputs = python_fn(*inputs)
  File "<string>", line 3, in func5
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_all (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 7002, in test_all
    self.assertTrue(test_all_tensor(torch.tensor([3.14, 3, 99], dtype=torch.uint8)))
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_nn_GRU (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15053, in test_nn_GRU
    seq_script_out = self.runAndSaveRNG(lambda x: SeqLengthGRU()(x), (seq_input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 160, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15053, in <lambda>
    seq_script_out = self.runAndSaveRNG(lambda x: SeqLengthGRU()(x), (seq_input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError:
kind_.is_prim() INTERNAL ASSERT FAILED at "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/torch/csrc/jit/ir/ir.cpp":1215, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::index_select doesn't have one either. We don't know if this op has side effects.
The above operation failed shape propagation in this context:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 21
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
======================================================================
ERROR: test_nn_LSTM (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15026, in test_nn_LSTM
    script_out = self.runAndSaveRNG(lambda x: S()(x), (input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 160, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15026, in <lambda>
    script_out = self.runAndSaveRNG(lambda x: S()(x), (input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError:
kind_.is_prim() INTERNAL ASSERT FAILED at "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/torch/csrc/jit/ir/ir.cpp":1215, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::index_select doesn't have one either. We don't know if this op has side effects.
The above operation failed shape propagation in this context:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 21
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
======================================================================
ERROR: test_script_pack_padded_sequence (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9826, in test_script_pack_padded_sequence
    scripted_pack_padded_seq = torch.jit.script(pack_padded_pad_packed_script)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
RuntimeError:

PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 195
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
'_packed_sequence_init' is being compiled since it was called from 'pack_padded_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 261
    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_padded_sequence' is being compiled since it was called from 'pack_padded_pad_packed_script'
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9812
        def pack_padded_pad_packed_script(x, seq_lens):
            x = pack_padded_sequence(x, seq_lens)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            x, lengths = pad_packed_sequence(x)
            return x, lengths
======================================================================
ERROR: test_script_pad_sequence_pack_sequence (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9875, in test_script_pad_sequence_pack_sequence
    self.checkScript(pack_sequence_func,
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 467, in checkScript
    cu = torch.jit.CompilationUnit(script, _frames_up=frames_up)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
RuntimeError:

PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 195
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
'_packed_sequence_init' is being compiled since it was called from 'pack_padded_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 261
    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_padded_sequence' is being compiled since it was called from 'pack_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 482
    """
    lengths = torch.as_tensor([v.size(0) for v in sequences])
    return pack_padded_sequence(pad_sequence(sequences), lengths, enforce_sorted=enforce_sorted)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_sequence' is being compiled since it was called from 'pack_sequence_func'
  File "<string>", line 3
def pack_sequence_func(tensor_list, enforce_sorted=True):
    # type: (List[Tensor], bool) -> Tensor
    return pad_packed_sequence(pack_sequence(tensor_list, enforce_sorted))[0]
                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

======================================================================
ERROR: test_torch_tensor_as_tensor (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 7458, in test_torch_tensor_as_tensor
    t2 = scope['func']()
  File "<string>", line 4, in func
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
FAIL: test_hash_float (jit.test_hash.TestHash)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_hash.py", line 78, in test_hash_float
    self.checkScript(fn, (float("nan"), float("nan")))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 522, in checkScript
    self.assertEqual(python_outputs, script_outputs, atol=atol, rtol=rtol)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Booleans mismatch: False is not True

======================================================================
FAIL: test_assign_python_attr (jit.test_type_sharing.TestTypeSharing)
Assigning a new (python-only) attribute should not change type sharing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 233, in test_assign_python_attr
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5003.M

======================================================================
FAIL: test_basic (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 50, in test_basic
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5004.M

======================================================================
FAIL: test_builtin_function_same (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 300, in test_builtin_function_same
    self.assertSameType(c1, c2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.Caller != __torch__.jit.test_type_sharing.___torch_mangle_5006.Caller

======================================================================
FAIL: test_constants (jit.test_type_sharing.TestTypeSharing)
Types should be shared for identical constant values, and different for different constant values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 90, in test_constants
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5007.M

======================================================================
FAIL: test_diff_attr_values (jit.test_type_sharing.TestTypeSharing)
Types should be shared even if attribute values differ
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 70, in test_diff_attr_values
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5008.M

======================================================================
FAIL: test_ignored_fns (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 415, in test_ignored_fns
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5009.M

======================================================================
FAIL: test_mutate_attr_value (jit.test_type_sharing.TestTypeSharing)
Mutating the value of an attribute should not change type sharing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 211, in test_mutate_attr_value
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5016.M

======================================================================
FAIL: test_python_function_attribute_same (jit.test_type_sharing.TestTypeSharing)
Same functions passed in should lead to same types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 378, in test_python_function_attribute_same
    self.assertSameType(fn1_mod, fn2_mod)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5019.M

======================================================================
FAIL: test_script_function_attribute_same (jit.test_type_sharing.TestTypeSharing)
Same functions passed in should lead to same types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 335, in test_script_function_attribute_same
    self.assertSameType(fn1_mod, fn2_mod)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5025.M

======================================================================
FAIL: test_submodules (jit.test_type_sharing.TestTypeSharing)
If submodules differ, the types should differ.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 127, in test_submodules
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5031.M

======================================================================
FAIL: test_type_shared_ignored_attributes (jit.test_type_sharing.TestTypeSharing)
Test that types are shared if the exclusion of their
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 587, in test_type_shared_ignored_attributes
    self.assertSameType(a_with_linear, a_with_string)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.A != __torch__.jit.test_type_sharing.___torch_mangle_5036.A

----------------------------------------------------------------------
Ran 2661 tests in 104.873s

FAILED (failures=12, errors=7, skipped=89, expected failures=7)
test_jit failed!

test_jit_cuda_fuser

======================================================================
ERROR: test_unary_ops (__main__.TestCudaFuser)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit_cuda_fuser.py", line 658, in test_unary_ops
    self._unary_test_helper(op, dtype, False)  # test special numbers
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit_cuda_fuser.py", line 602, in _unary_test_helper
    self.assertTrue(self._compare("failing case {}\n{}\n{}\n{}".format(dtype, operation, x, y), o, jit_o, 1e-2))
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit_cuda_fuser.py", line 1273, in _compare
    close = torch.allclose(a, b, rtol=error, atol=error, equal_nan=True)
RuntimeError: BFloat16 did not match Bool

----------------------------------------------------------------------
Ran 7431 tests in 1041.923s

FAILED (errors=1, skipped=7302)
test_jit_cuda_fuser failed!

test_jit_legacy

Click to expand
======================================================================
ERROR: test_peephole_optimize_shape_ops (test_jit.TestJit)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 676, in test_peephole_optimize_shape_ops
    test_dtype()
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 675, in test_dtype
    test_input(func, torch.tensor(0.5, dtype=torch.int64), 2)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_adv_indexing_list (jit.test_python_builtins.TestPythonBuiltinOP)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_python_builtins.py", line 368, in test_adv_indexing_list
    self.checkScript(func5, (input,))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 521, in checkScript
    python_outputs = python_fn(*inputs)
  File "<string>", line 3, in func5
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_all (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 7002, in test_all
    self.assertTrue(test_all_tensor(torch.tensor([3.14, 3, 99], dtype=torch.uint8)))
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_nn_GRU (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15053, in test_nn_GRU
    seq_script_out = self.runAndSaveRNG(lambda x: SeqLengthGRU()(x), (seq_input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 160, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15053, in <lambda>
    seq_script_out = self.runAndSaveRNG(lambda x: SeqLengthGRU()(x), (seq_input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError:
kind_.is_prim() INTERNAL ASSERT FAILED at "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/torch/csrc/jit/ir/ir.cpp":1215, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::index_select doesn't have one either. We don't know if this op has side effects.
The above operation failed shape propagation in this context:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 21
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE


======================================================================
ERROR: test_nn_LSTM (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15026, in test_nn_LSTM
    script_out = self.runAndSaveRNG(lambda x: S()(x), (input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 160, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15026, in <lambda>
    script_out = self.runAndSaveRNG(lambda x: S()(x), (input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError:
kind_.is_prim() INTERNAL ASSERT FAILED at "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/torch/csrc/jit/ir/ir.cpp":1215, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::index_select doesn't have one either. We don't know if this op has side effects.
The above operation failed shape propagation in this context:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 21
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE

======================================================================
ERROR: test_script_pack_padded_sequence (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9826, in test_script_pack_padded_sequence
    scripted_pack_padded_seq = torch.jit.script(pack_padded_pad_packed_script)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
RuntimeError:

PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 195
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
'_packed_sequence_init' is being compiled since it was called from 'pack_padded_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 261
    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_padded_sequence' is being compiled since it was called from 'pack_padded_pad_packed_script'
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9812
        def pack_padded_pad_packed_script(x, seq_lens):
            x = pack_padded_sequence(x, seq_lens)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            x, lengths = pad_packed_sequence(x)
            return x, lengths

======================================================================
ERROR: test_script_pad_sequence_pack_sequence (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9875, in test_script_pad_sequence_pack_sequence
    self.checkScript(pack_sequence_func,
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 467, in checkScript
    cu = torch.jit.CompilationUnit(script, _frames_up=frames_up)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
RuntimeError:

PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 195
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
'_packed_sequence_init' is being compiled since it was called from 'pack_padded_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 261
    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_padded_sequence' is being compiled since it was called from 'pack_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 482
    """
    lengths = torch.as_tensor([v.size(0) for v in sequences])
    return pack_padded_sequence(pad_sequence(sequences), lengths, enforce_sorted=enforce_sorted)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_sequence' is being compiled since it was called from 'pack_sequence_func'
  File "<string>", line 3
def pack_sequence_func(tensor_list, enforce_sorted=True):
    # type: (List[Tensor], bool) -> Tensor
    return pad_packed_sequence(pack_sequence(tensor_list, enforce_sorted))[0]
                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

======================================================================
ERROR: test_torch_tensor_as_tensor (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 7458, in test_torch_tensor_as_tensor
    t2 = scope['func']()
  File "<string>", line 4, in func
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
FAIL: test_hash_float (jit.test_hash.TestHash)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_hash.py", line 78, in test_hash_float
    self.checkScript(fn, (float("nan"), float("nan")))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 522, in checkScript
    self.assertEqual(python_outputs, script_outputs, atol=atol, rtol=rtol)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Booleans mismatch: False is not True

======================================================================
FAIL: test_assign_python_attr (jit.test_type_sharing.TestTypeSharing)
Assigning a new (python-only) attribute should not change type sharing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 233, in test_assign_python_attr
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5007.M

======================================================================
FAIL: test_basic (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 50, in test_basic
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5008.M

======================================================================
FAIL: test_builtin_function_same (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 300, in test_builtin_function_same
    self.assertSameType(c1, c2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.Caller != __torch__.jit.test_type_sharing.___torch_mangle_5010.Caller

======================================================================
FAIL: test_constants (jit.test_type_sharing.TestTypeSharing)
Types should be shared for identical constant values, and different for different constant values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 90, in test_constants
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5011.M

======================================================================
FAIL: test_diff_attr_values (jit.test_type_sharing.TestTypeSharing)
Types should be shared even if attribute values differ
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 70, in test_diff_attr_values
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5012.M

======================================================================
FAIL: test_ignored_fns (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 415, in test_ignored_fns
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5013.M

======================================================================
FAIL: test_mutate_attr_value (jit.test_type_sharing.TestTypeSharing)
Mutating the value of an attribute should not change type sharing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 211, in test_mutate_attr_value
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5020.M

======================================================================
FAIL: test_python_function_attribute_same (jit.test_type_sharing.TestTypeSharing)
Same functions passed in should lead to same types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 378, in test_python_function_attribute_same
    self.assertSameType(fn1_mod, fn2_mod)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5023.M

======================================================================
FAIL: test_script_function_attribute_same (jit.test_type_sharing.TestTypeSharing)
Same functions passed in should lead to same types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 335, in test_script_function_attribute_same
    self.assertSameType(fn1_mod, fn2_mod)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5029.M

======================================================================
FAIL: test_submodules (jit.test_type_sharing.TestTypeSharing)
If submodules differ, the types should differ.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 127, in test_submodules
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5035.M

======================================================================
FAIL: test_type_shared_ignored_attributes (jit.test_type_sharing.TestTypeSharing)
Test that types are shared if the exclusion of their
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 587, in test_type_shared_ignored_attributes
    self.assertSameType(a_with_linear, a_with_string)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.A != __torch__.jit.test_type_sharing.___torch_mangle_5040.A

----------------------------------------------------------------------
Ran 2661 tests in 108.707s

FAILED (failures=12, errors=8, skipped=87, expected failures=7)
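
As an aside: the recurring "TypeError: 'float' object cannot be interpreted as an integer" errors above all reduce to the same one-liner that the failing tests execute. A minimal way to check this interactively is shown below (my assumption is that this is fallout from Python 3.10 removing the long-deprecated implicit float-to-int conversion; that interpretation is not confirmed upstream):

import torch

# Same call as in test_jit.py's test_peephole_optimize_shape_ops:
# constructing an integer tensor from a float scalar raises on this
# Python 3.10 + PyTorch 1.12.0 installation.
torch.tensor(0.5, dtype=torch.int64)
# TypeError: 'float' object cannot be interpreted as an integer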

@casparvl
Contributor Author

casparvl commented Aug 8, 2022

Analysis of the failing tests (part 3)

test_jit_profiling failed!

Click to expand
======================================================================
ERROR: test_adv_indexing_list (jit.test_python_builtins.TestPythonBuiltinOP)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_python_builtins.py", line 368, in test_adv_indexing_list
    self.checkScript(func5, (input,))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 521, in checkScript
    python_outputs = python_fn(*inputs)
  File "<string>", line 3, in func5
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_all (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 7002, in test_all
    self.assertTrue(test_all_tensor(torch.tensor([3.14, 3, 99], dtype=torch.uint8)))
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_nn_GRU (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15053, in test_nn_GRU
    seq_script_out = self.runAndSaveRNG(lambda x: SeqLengthGRU()(x), (seq_input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 160, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15053, in <lambda>
    seq_script_out = self.runAndSaveRNG(lambda x: SeqLengthGRU()(x), (seq_input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError:
kind_.is_prim() INTERNAL ASSERT FAILED at "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/torch/csrc/jit/ir/ir.cpp":1215, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::index_select doesn't have one either. We don't know if this op has side effects.
The above operation failed shape propagation in this context:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 21
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE

======================================================================
ERROR: test_nn_LSTM (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15026, in test_nn_LSTM
    script_out = self.runAndSaveRNG(lambda x: S()(x), (input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_jit.py", line 160, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 15026, in <lambda>
    script_out = self.runAndSaveRNG(lambda x: S()(x), (input,))[0]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError:
kind_.is_prim() INTERNAL ASSERT FAILED at "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/torch/csrc/jit/ir/ir.cpp":1215, please report a bug to PyTorch. Only prim ops are allowed to not have a registered operator but aten::index_select doesn't have one either. We don't know if this op has side effects.
The above operation failed shape propagation in this context:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 21
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE


======================================================================
ERROR: test_script_pack_padded_sequence (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9826, in test_script_pack_padded_sequence
    scripted_pack_padded_seq = torch.jit.script(pack_padded_pad_packed_script)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
RuntimeError:

PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 195
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
'_packed_sequence_init' is being compiled since it was called from 'pack_padded_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 261
    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_padded_sequence' is being compiled since it was called from 'pack_padded_pad_packed_script'
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9812
        def pack_padded_pad_packed_script(x, seq_lens):
            x = pack_padded_sequence(x, seq_lens)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            x, lengths = pad_packed_sequence(x)
            return x, lengths

======================================================================
ERROR: test_script_pad_sequence_pack_sequence (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 9875, in test_script_pad_sequence_pack_sequence
    self.checkScript(pack_sequence_func,
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 467, in checkScript
    cu = torch.jit.CompilationUnit(script, _frames_up=frames_up)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_recursive.py", line 845, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/jit/_script.py", line 1343, in script
    fn = torch._C._jit_script_compile(
RuntimeError:

PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 195
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
'_packed_sequence_init' is being compiled since it was called from 'pack_padded_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 261
    data, batch_sizes = \
        _VF._pack_padded_sequence(input, lengths, batch_first)
    return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_padded_sequence' is being compiled since it was called from 'pack_sequence'
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 482
    """
    lengths = torch.as_tensor([v.size(0) for v in sequences])
    return pack_padded_sequence(pad_sequence(sequences), lengths, enforce_sorted=enforce_sorted)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'pack_sequence' is being compiled since it was called from 'pack_sequence_func'
  File "<string>", line 3
def pack_sequence_func(tensor_list, enforce_sorted=True):
    # type: (List[Tensor], bool) -> Tensor
    return pad_packed_sequence(pack_sequence(tensor_list, enforce_sorted))[0]
                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

======================================================================
ERROR: test_torch_tensor_as_tensor (test_jit.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_jit.py", line 7458, in test_torch_tensor_as_tensor
    t2 = scope['func']()
  File "<string>", line 4, in func
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
FAIL: test_hash_float (jit.test_hash.TestHash)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_hash.py", line 78, in test_hash_float
    self.checkScript(fn, (float("nan"), float("nan")))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
    self.checkScript(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/jit_utils.py", line 522, in checkScript
    self.assertEqual(python_outputs, script_outputs, atol=atol, rtol=rtol)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Booleans mismatch: False is not True

======================================================================
FAIL: test_assign_python_attr (jit.test_type_sharing.TestTypeSharing)
Assigning a new (python-only) attribute should not change type sharing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 233, in test_assign_python_attr
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5003.M

======================================================================
FAIL: test_basic (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 50, in test_basic
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5004.M

======================================================================
FAIL: test_builtin_function_same (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 300, in test_builtin_function_same
    self.assertSameType(c1, c2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.Caller != __torch__.jit.test_type_sharing.___torch_mangle_5006.Caller

======================================================================
FAIL: test_constants (jit.test_type_sharing.TestTypeSharing)
Types should be shared for identical constant values, and different for different constant values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 90, in test_constants
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5007.M

======================================================================
FAIL: test_diff_attr_values (jit.test_type_sharing.TestTypeSharing)
Types should be shared even if attribute values differ
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 70, in test_diff_attr_values
    self.assertSameType(m1, m2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5008.M

======================================================================
FAIL: test_ignored_fns (jit.test_type_sharing.TestTypeSharing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 415, in test_ignored_fns
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5009.M

======================================================================
FAIL: test_mutate_attr_value (jit.test_type_sharing.TestTypeSharing)
Mutating the value of an attribute should not change type sharing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 211, in test_mutate_attr_value
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5016.M

======================================================================
FAIL: test_python_function_attribute_same (jit.test_type_sharing.TestTypeSharing)
Same functions passed in should lead to same types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 378, in test_python_function_attribute_same
    self.assertSameType(fn1_mod, fn2_mod)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5019.M

======================================================================
FAIL: test_script_function_attribute_same (jit.test_type_sharing.TestTypeSharing)
Same functions passed in should lead to same types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 335, in test_script_function_attribute_same
    self.assertSameType(fn1_mod, fn2_mod)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5025.M

======================================================================
FAIL: test_submodules (jit.test_type_sharing.TestTypeSharing)
If submodules differ, the types should differ.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 127, in test_submodules
    self.assertSameType(a, b)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.M != __torch__.jit.test_type_sharing.___torch_mangle_5031.M

======================================================================
FAIL: test_type_shared_ignored_attributes (jit.test_type_sharing.TestTypeSharing)
Test that types are shared if the exclusion of their
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 587, in test_type_shared_ignored_attributes
    self.assertSameType(a_with_linear, a_with_string)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/jit/test_type_sharing.py", line 26, in assertSameType
    self.assertEqual(m1._c._type(), m2._c._type())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: __torch__.jit.test_type_sharing.A != __torch__.jit.test_type_sharing.___torch_mangle_5036.A

----------------------------------------------------------------------
Ran 2661 tests in 100.641s

FAILED (failures=12, errors=7, skipped=89, expected failures=7)
test_jit_profiling failed!

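Note on the failures above: they all come from jit.test_type_sharing, where two structurally identical modules are scripted and the test asserts both end up with the same TorchScript type; the mangled ___torch_mangle_* names in the assertion messages indicate a fresh, non-shared type was created instead. A minimal standalone sketch of what those assertions check (a hypothetical repro, not taken from the log; the module M and the _c._type() internals just mirror the test code):

import torch

class M(torch.nn.Module):
    def __init__(self, value):
        super().__init__()
        self.value = value

    def forward(self, x):
        return x + self.value

# Scripting two instances that differ only in an attribute *value*...
m1 = torch.jit.script(M(1))
m2 = torch.jit.script(M(2))

# ...is expected to reuse the same JIT type; the failing tests instead see a mangled copy.
assert m1._c._type() == m2._c._type()
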
test_package

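Every error in this suite (log below) ends in the same place: torch.package._stdlib._get_stdlib_modules(), which apparently only recognizes the Python minor versions it has hard-coded module lists for, raises RuntimeError for the Python 3.10.4 used here. Purely as an illustration of the failure mode (a sketch under that assumption, not the upstream patch; get_stdlib_modules below is a hypothetical stand-in), Python 3.10+ already exposes the needed information via sys.stdlib_module_names:

import sys

def get_stdlib_modules():
    # Python >= 3.10 ships the stdlib module list as a frozenset of top-level names.
    if sys.version_info >= (3, 10):
        return sys.stdlib_module_names
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")

print("typing" in get_stdlib_modules())  # True on Python 3.10+
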
Click to expand
======================================================================
ERROR: test_broken_dependency (test_dependency_api.TestDependencyAPI)
A unpackageable dependency should raise a PackagingError.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 286, in test_broken_dependency
    exporter.save_source_string("my_module", "import foo; import bar")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_deny (test_dependency_api.TestDependencyAPI)
Test marking packages as "deny" during export.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 94, in test_deny
    exporter.save_source_string("foo", "import package_a.subpackage")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
======================================================================
ERROR: test_deny_glob (test_dependency_api.TestDependencyAPI)
Test marking packages as "deny" using globs instead of package names.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 104, in test_deny_glob
    exporter.save_source_string(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_extern (test_dependency_api.TestDependencyAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 31, in test_extern
    he.save_source_string("foo", "import package_a.subpackage; import module_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
======================================================================
ERROR: test_extern_glob (test_dependency_api.TestDependencyAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 50, in test_extern_glob
    he.save_source_string(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_intern_error (test_dependency_api.TestDependencyAPI)
Failure to handle all dependencies should lead to an error.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 239, in test_intern_error
    he.save_pickle("obj", "obj.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_mock (test_dependency_api.TestDependencyAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 120, in test_mock
    he.save_source_string("foo", "import package_a.subpackage")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_mock_glob (test_dependency_api.TestDependencyAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 141, in test_mock_glob
    he.save_source_string(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_pickle_mocked (test_dependency_api.TestDependencyAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 189, in test_pickle_mocked
    he.save_pickle("obj", "obj.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_pickle_mocked_all (test_dependency_api.TestDependencyAPI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 202, in test_pickle_mocked_all
    he.save_pickle("obj", "obj.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_repackage_mocked_module (test_dependency_api.TestDependencyAPI)
Re-packaging a package that contains a mocked module should work correctly.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_api.py", line 324, in test_repackage_mocked_module
    exporter.save_source_string("foo", "import package_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_extern_and_mock_hook (test_dependency_hooks.TestDependencyHooks)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_hooks.py", line 117, in test_extern_and_mock_hook
    exporter.save_source_string("foo", "import module_a; import package_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_multiple_extern_hooks (test_dependency_hooks.TestDependencyHooks)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_hooks.py", line 54, in test_multiple_extern_hooks
    exporter.save_source_string("foo", "import module_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_multiple_mock_hooks (test_dependency_hooks.TestDependencyHooks)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_hooks.py", line 74, in test_multiple_mock_hooks
    exporter.save_source_string("foo", "import module_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_remove_hooks (test_dependency_hooks.TestDependencyHooks)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_hooks.py", line 95, in test_remove_hooks
    exporter.save_source_string("foo", "import module_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_single_hook (test_dependency_hooks.TestDependencyHooks)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_dependency_hooks.py", line 34, in test_single_hook
    exporter.save_source_string("foo", "import module_a")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_importer_access (test_directory_reader.DirectoryReaderTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_directory_reader.py", line 219, in test_importer_access
    he.save_source_string("main", src, is_package=True)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_resource_access (test_directory_reader.DirectoryReaderTest)
Packaged modules should be able to use the importlib.resources API to access
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_directory_reader.py", line 191, in test_package_resource_access
    pe.save_source_string("foo.bar", mod_src)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_resource_access_by_path (test_directory_reader.DirectoryReaderTest)
Tests that packaged code can used importlib.resources.path.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_directory_reader.py", line 248, in test_resource_access_by_path
    e.save_source_string("main", src, is_package=True)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_unique_module_names (test_mangling.TestMangling)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_mangling.py", line 88, in test_unique_module_names
    pe.save_pickle("obj", "obj.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_dunder_package_present (test_misc.TestMisc)
The attribute '__torch_package__' should be populated on imported modules.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 239, in test_dunder_package_present
    pe.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_exporter_content_lists (test_misc.TestMisc)
Test content list API for PackageExporter's contained modules.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 167, in test_exporter_content_lists
    he.save_pickle("obj", "obj.pkl", package_b.PackageBObject(["a"]))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
======================================================================
ERROR: test_file_structure (test_misc.TestMisc)
Tests package's Directory structure representation of a zip file. Ensures
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 83, in test_file_structure
    he.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_file_structure_has_file (test_misc.TestMisc)
Test Directory's has_file() method.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 147, in test_file_structure_has_file
    he.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_inspect_class (test_misc.TestMisc)
Should be able to retrieve source for a packaged class.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 215, in test_inspect_class
    pe.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
======================================================================
ERROR: test_is_from_package (test_misc.TestMisc)
is_from_package should work for objects and modules
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 193, in test_is_from_package
    pe.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_python_version (test_misc.TestMisc)
Tests that the current python version is stored in the package and is available
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 119, in test_python_version
    he.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_std_lib_sys_hackery_checks (test_misc.TestMisc)
The standard library performs sys.module assignment hackery which
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_misc.py", line 279, in test_std_lib_sys_hackery_checks
    pe.save_pickle("obj", "obj.pkl", mod)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
======================================================================
ERROR: test_package_fx_custom_tracer (test_package_fx.TestPackageFX)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_fx.py", line 152, in test_package_fx_custom_tracer
    pe.save_pickle("model", "model.pkl", gm)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 595, in save_pickle
    pickler.dump(obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 925, in _persistent_id
    *obj.__reduce_package__(self),
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/graph_module.py", line 678, in __reduce_package__
    exporter.save_source_string(generated_module_name, module_code)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_fx_package (test_package_fx.TestPackageFX)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_fx.py", line 66, in test_package_fx_package
    pe.save_pickle("model", "model.pkl", model)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_fx_simple (test_package_fx.TestPackageFX)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_fx.py", line 35, in test_package_fx_simple
    pe.save_pickle("model", "model.pkl", traced)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 595, in save_pickle
    pickler.dump(obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 925, in _persistent_id
    *obj.__reduce_package__(self),
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/graph_module.py", line 678, in __reduce_package__
    exporter.save_source_string(generated_module_name, module_code)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_fx_with_imports (test_package_fx.TestPackageFX)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_fx.py", line 109, in test_package_fx_with_imports
    pe.save_pickle("model", "model.pkl", gm)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 595, in save_pickle
    pickler.dump(obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 925, in _persistent_id
    *obj.__reduce_package__(self),
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/graph_module.py", line 678, in __reduce_package__
    exporter.save_source_string(generated_module_name, module_code)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_then_fx (test_package_fx.TestPackageFX)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_fx.py", line 50, in test_package_then_fx
    pe.save_pickle("model", "model.pkl", model)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_load_shared_scriptmodules (test_package_script.TestPackageScript)
Test loading of single ScriptModule shared by multiple eager
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 496, in test_load_shared_scriptmodules
    e.save_pickle("res", "mod.pkl", mod_parent)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_load_shared_tensors (test_package_script.TestPackageScript)
Test tensors shared across eager and ScriptModules on load
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 565, in test_load_shared_tensors
    e.save_pickle("res", "mod1.pkl", mod1)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_load_shared_tensors_repackaged (test_package_script.TestPackageScript)
Test tensors shared across eager and ScriptModules on load
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 613, in test_load_shared_tensors_repackaged
    e.save_pickle("res", "mod1.pkl", mod1)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_interface (test_package_script.TestPackageScript)
Packaging an interface class should work correctly.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 45, in test_package_interface
    pe.save_pickle("model", "model.pkl", uses_interface)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_script_class (test_package_script.TestPackageScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 110, in test_package_script_class
    pe.save_module(fake.__name__)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 503, in save_module
    self._intern_module(module_name, dependencies)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 559, in _intern_module
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
======================================================================
ERROR: test_package_script_class_referencing_self (test_package_script.TestPackageScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 135, in test_package_script_class_referencing_self
    exporter.save_pickle("obj", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_save_eager_mods_sharing_scriptmodule (test_package_script.TestPackageScript)
Test saving of single ScriptModule shared by multiple
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 466, in test_save_eager_mods_sharing_scriptmodule
    e.save_pickle("res", "mod1.pkl", mod1)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_save_shared_tensors (test_package_script.TestPackageScript)
Test tensors shared across eager and ScriptModules are serialized once.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 521, in test_save_shared_tensors
    e.save_pickle("res", "tensor", shared_tensor)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_saving_and_scripting_packaged_mod (test_package_script.TestPackageScript)
Test scripting a module loaded from a package
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 660, in test_saving_and_scripting_packaged_mod
    e.save_pickle("model", "model.pkl", orig_mod)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_tensor_sharing_pickle (test_package_script.TestPackageScript)
Test that saving a ScriptModule and a separately saving a tensor
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_package_script.py", line 780, in test_tensor_sharing_pickle
    exporter.save_pickle("model", "input.pkl", original_tensor)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_repackage_import_indirectly_via_parent_module (test_repackage.TestRepackage)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_repackage.py", line 30, in test_repackage_import_indirectly_via_parent_module
    pe.save_pickle("default", "model.py", model_a)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_importer_access (test_resources.TestResources)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_resources.py", line 118, in test_importer_access
    he.save_source_string("main", src, is_package=True)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_package_resource_access (test_resources.TestResources)
Packaged modules should be able to use the importlib.resources API to access
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_resources.py", line 95, in test_package_resource_access
    pe.save_source_string("foo.bar", mod_src)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_resource_access_by_path (test_resources.TestResources)
Tests that packaged code can used importlib.resources.path.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_resources.py", line 142, in test_resource_access_by_path
    he.save_source_string("main", src, is_package=True)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_dunder_imports (test_save_load.TestSaveLoad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_save_load.py", line 89, in test_dunder_imports
    he.save_pickle("res", "obj.pkl", obj)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_exporting_mismatched_code (test_save_load.TestSaveLoad)
If an object with the same qualified name is loaded from different
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_save_load.py", line 184, in test_exporting_mismatched_code
    pe.save_pickle("obj", "obj.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_pickle (test_save_load.TestSaveLoad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_save_load.py", line 151, in test_pickle
    he.save_pickle("obj", "obj.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_save_imported_module (test_save_load.TestSaveLoad)
Saving a module that came from another PackageImporter should work.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_save_load.py", line 225, in test_save_imported_module
    exporter.save_pickle("model", "model.pkl", obj2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_saving_source (test_save_load.TestSaveLoad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_save_load.py", line 33, in test_saving_source
    he.save_source_file("foodir", str(packaging_directory / "package_a"))
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 287, in save_source_file
    self.save_source_string(*item)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
ERROR: test_saving_string (test_save_load.TestSaveLoad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/package/test_save_load.py", line 53, in test_saving_string
    he.save_source_string("my_mod", src)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 381, in save_source_string
    self.add_dependency(dep)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 459, in add_dependency
    if self._can_implicitly_extern(module_name):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

----------------------------------------------------------------------
Ran 131 tests in 1.575s

FAILED (errors=53, skipped=6)
test_package failed!
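
Every test_package error shown above bottoms out in the same place: torch/package/_stdlib.py, where _get_stdlib_modules() does not know what to return for our Python 3.10.4 and raises the "Unsupported Python version" RuntimeError instead. One obvious candidate fix is to fall back to sys.stdlib_module_names, which Python 3.10 provides out of the box. Below is a minimal sketch to illustrate that idea (my own illustration, not the actual torch.package code or the upstream patch):

# stdlib_check_sketch.py - hypothetical illustration of the idea, not torch's code
import sys

def get_stdlib_modules():
    """Return the set of top-level stdlib module names for this interpreter."""
    if sys.version_info >= (3, 10):
        # Python 3.10+ exposes the list directly, so no hard-coded table is needed
        return set(sys.stdlib_module_names)
    # older interpreters would need a bundled list (as torch.package ships for older versions)
    raise RuntimeError(f"No bundled stdlib list for {sys.version_info}")

def is_stdlib_module(module: str) -> bool:
    # only the top-level package name decides whether a module is stdlib
    base_module = module.partition(".")[0]
    return base_module in get_stdlib_modules()

print(is_stdlib_module("os.path"))  # True
print(is_stdlib_module("torch"))    # False

If something along these lines lands upstream (or we carry it as an EasyBuild patch), is_stdlib_module() resolves again and this whole batch of test_package errors should disappear.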

@casparvl
Copy link
Contributor Author

casparvl commented Aug 8, 2022

Analysis of the failing tests (part 4)

test_quantization

Click to expand
======================================================================
ERROR: test_histogram_observer_against_reference (quantization.core.test_workflow_module.TestHistogramObserver)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 724, in _execute_once_for_engine
    result = self.execute_once(data)
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 662, in execute_once
    result = self.test_runner(data, run)
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/executors.py", line 47, in default_new_style_executor
    return function(data)
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 658, in run
    return test(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_workflow_module.py", line 713, in test_histogram_observer_against_reference
    self.assertEqual(ref_qparams, my_qparams)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!

Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.00035736337304115295 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 0.0059641752820562685 at index (0,) (up to 1.3e-06 allowed)

The failure occurred for item [0]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_workflow_module.py", line 696, in test_histogram_observer_against_reference
    bins=st.sampled_from([256, 512, 1024, 2048]),
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 1235, in wrapped_test
    raise the_error_hypothesis_found
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 905, in __flaky
    raise Flaky(message) from cause
hypothesis.errors.Flaky: Hypothesis test_histogram_observer_against_reference(self=<quantization.core.test_workflow_module.TestHistogramObserver testMethod=test_histogram_observer_against_reference>, N=1000, bins=512, dtype=torch.qint8, qscheme=torch.per_tensor_affine, reduce_range=True) produces unreliable results: Falsified on the first call but did not on a subsequent one

======================================================================
ERROR: test_quantized_rnn (quantization.eager.test_quantize_eager_ptq.TestQuantizeEagerPTQDynamic)
Test dynamic quantization, scriptability and serialization for dynamic quantized lstm modules on int8 and fp16
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/eager/test_quantize_eager_ptq.py", line 1276, in test_quantized_rnn
    dtype=st.sampled_from([torch.qint8, torch.float16]))
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 1235, in wrapped_test
    raise the_error_hypothesis_found
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/eager/test_quantize_eager_ptq.py", line 1330, in test_quantized_rnn
    scripted(packed_input)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 432, in prof_meth_call
    return prof_callable(meth_call, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 426, in prof_callable
    return callable(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/eager/test_quantize_eager_ptq.py", line 1314, in forward
                def forward(self, x: PackedSequence) -> Tuple[PackedSequence, Tuple[torch.Tensor, torch.Tensor]]:
                    return self.cell(x)
                           ~~~~~~~~~ <--- HERE
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/dynamic/modules/rnn.py", line 455, in forward_packed
        output = PackedSequence(output_, batch_sizes,
                                sorted_indices, unsorted_indices)
        return output, self.permute_hidden(hidden, unsorted_indices)
                       ~~~~~~~~~~~~~~~~~~~ <--- HERE
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/dynamic/modules/rnn.py", line 463, in permute_hidden
        if permutation is None:
            return hx
        return apply_permutation(hx[0], permutation), apply_permutation(hx[1], permutation)
               ~~~~~~~~~~~~~~~~~ <--- HERE
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/dynamic/modules/rnn.py", line 12, in apply_permutation
def apply_permutation(tensor: Tensor, permutation: Tensor, dim: int = 1) -> Tensor:
    return tensor.index_select(dim, permutation)
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
RuntimeError: Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #2 'index'

======================================================================
ERROR: test_conv_bn (quantization.jit.test_quantize_jit.TestQuantizeJit)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 172, in test_fn
    qfunction(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/jit/test_quantize_jit.py", line 3747, in test_conv_bn
    model_script = quantize_jit(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 172, in quantize_jit
    return _quantize_jit(model, qconfig_dict, run_fn, run_args, inplace, debug, quant_type=QuantType.STATIC)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 113, in _quantize_jit
    model = prepare_jit(model, qconfig_dict, inplace)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 67, in prepare_jit
    return _prepare_jit(model, qconfig_dict, inplace, quant_type=QuantType.STATIC)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 53, in _prepare_jit
    model = fuse_conv_bn_jit(model, inplace)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 40, in fuse_conv_bn_jit
    model_c = torch._C._jit_pass_fold_convbn(model_c)
RuntimeError: Expected a value of type 'NoneType' for field 'bias', but found 'Tensor'

======================================================================
ERROR: test_foldbn_complex_cases (quantization.jit.test_quantize_jit.TestQuantizeJitPasses)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/jit/test_quantize_jit.py", line 376, in test_foldbn_complex_cases
    scripted_or_traced = fuse_conv_bn_jit(scripted_or_traced)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 40, in fuse_conv_bn_jit
    model_c = torch._C._jit_pass_fold_convbn(model_c)
RuntimeError: Expected a value of type 'NoneType' for field 'bias', but found 'Tensor'

======================================================================
ERROR: test_foldbn_shared_classtype (quantization.jit.test_quantize_jit.TestQuantizeJitPasses)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/jit/test_quantize_jit.py", line 292, in test_foldbn_shared_classtype
    folded = fuse_conv_bn_jit(scripted_or_traced)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 40, in fuse_conv_bn_jit
    model_c = torch._C._jit_pass_fold_convbn(model_c)
RuntimeError: Expected a value of type 'NoneType' for field 'bias', but found 'Tensor'

======================================================================
ERROR: test_foldbn_trivial_nobias (quantization.jit.test_quantize_jit.TestQuantizeJitPasses)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/jit/test_quantize_jit.py", line 205, in test_foldbn_trivial_nobias
    scripted_or_traced = fuse_conv_bn_jit(scripted_or_traced)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/quantize_jit.py", line 40, in fuse_conv_bn_jit
    model_c = torch._C._jit_pass_fold_convbn(model_c)
RuntimeError: Expected a value of type 'NoneType' for field 'bias', but found 'Tensor'

======================================================================
ERROR: test_grid_sample (quantization.core.test_quantized_functional.TestQuantizedFunctionalOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_functional.py", line 225, in test_grid_sample
    C=st.integers(1, 10),
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 1235, in wrapped_test
    raise the_error_hypothesis_found
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_functional.py", line 234, in test_grid_sample
    X_q = torch.quantize_per_tensor(X, scale=scale, zero_point=zero_point, dtype=torch.quint8)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_advanced_indexing (quantization.core.test_quantized_op.TestQuantizedOps)
Verifies that the x[:, [0], :, :] syntax works for quantized tensors.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 2650, in test_advanced_indexing
    x_q = torch.quantize_per_tensor(
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_channel_shuffle (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 1240, in test_channel_shuffle
    min_side=2, max_side=32, max_numel=10**5),
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 1235, in wrapped_test
    raise the_error_hypothesis_found
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 1253, in test_channel_shuffle
    a_ref = torch.quantize_per_tensor(a_out, scale=scale,
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_custom_module_lstm (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 172, in test_fn
    qfunction(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 2741, in test_custom_module_lstm
    y_ref = lstm(x)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/rnn.py", line 769, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: expected scalar type Float but found Double

======================================================================
ERROR: test_custom_module_multi_head_attention (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 172, in test_fn
    qfunction(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 2866, in test_custom_module_multi_head_attention
    y = mha_prepared(*fp_data)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 2791, in forward
    return self.layer(query, key, value, key_padding_mask, need_weights, attn_mask)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantizable/modules/activation.py", line 305, in forward
    return self._forward_impl(query, key, value, key_padding_mask,
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantizable/modules/activation.py", line 336, in _forward_impl
    q = self.linear_Q(query)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Float but found Double

======================================================================
ERROR: test_leaky_relu (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 379, in test_leaky_relu
    qX = torch.quantize_per_tensor(X, scale=scale, zero_point=zero_point,
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qadd_broadcast (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 1221, in test_qadd_broadcast
    qA = torch.quantize_per_tensor(A, 0.02, 0, torch.quint8)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qgelu (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 457, in test_qgelu
    qX = torch.quantize_per_tensor(X, scale=scale, zero_point=zero_point,
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qmul_broadcast (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 1205, in test_qmul_broadcast
    qA = torch.quantize_per_tensor(A, scale=scale_A, zero_point=zero_point_A,
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qrelu6 (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 277, in test_qrelu6
    self._test_activation_function(X, 'relu6', relu6_test_configs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 192, in _test_activation_function
    qX = torch.quantize_per_tensor(X, scale=scale, zero_point=zero_point,
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_quantized_mean_qnnpack (quantization.core.test_quantized_op.TestQuantizedOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_quantization.py", line 295, in wrapper
    fn(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 2158, in test_quantized_mean_qnnpack
    @given(keep=st.booleans())
  File "/sw/arch/RHEL8/EB_production/2022/software/hypothesis/6.46.7-GCCcore-11.3.0/lib/python3.10/site-packages/hypothesis/core.py", line 1235, in wrapped_test
    raise the_error_hypothesis_found
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_op.py", line 2169, in test_quantized_mean_qnnpack
    XQ = torch.quantize_per_tensor(X, scale=0.2, zero_point=0, dtype=torch.quint8)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_compare_per_channel_device_numerics (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 591, in test_compare_per_channel_device_numerics
    qr = torch.quantize_per_channel(r, scales, zero_points, axis, dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_compare_per_tensor_device_numerics (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 565, in test_compare_per_tensor_device_numerics
    qtr = torch.quantize_per_tensor(r, scale, zero_point, dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_per_channel_qtensor_to_memory_format (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 189, in test_per_channel_qtensor_to_memory_format
    qx = torch.quantize_per_channel(x, scales=scales, zero_points=zero_points, dtype=dtype, axis=axis)
RuntimeError: quantize_tensor_per_channel_affine expects a Float Tensor, got Double

======================================================================
ERROR: test_per_tensor_qtensor_to_memory_format (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 153, in test_per_tensor_qtensor_to_memory_format
    qx = torch.quantize_per_tensor(x, scale=scale, zero_point=zero_point, dtype=dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qtensor_channel_float_assignment (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 317, in test_qtensor_channel_float_assignment
    qt1 = torch.quantize_per_channel(t1, scales=torch.tensor(scales),
RuntimeError: quantize_tensor_per_channel_affine expects a Float Tensor, got Double

======================================================================
ERROR: test_qtensor_float_assignment (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 366, in test_qtensor_float_assignment
    qr[0] = torch.Tensor([11.3]).to(device=device)  # float assignment
RuntimeError: Quantized copy only works with kFloat as source Tensor

======================================================================
ERROR: test_qtensor_index_select_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 964, in test_qtensor_index_select_cpu
    self._test_qtensor_index_select('cpu')
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 976, in _test_qtensor_index_select
    x_selected_quantized = torch.quantize_per_tensor(x_selected, scale, zp, quant_type)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qtensor_index_select_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 961, in test_qtensor_index_select_cuda
    self._test_qtensor_index_select('cuda')
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 976, in _test_qtensor_index_select
    x_selected_quantized = torch.quantize_per_tensor(x_selected, scale, zp, quant_type)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qtensor_permute (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 792, in test_qtensor_permute
    qx = torch.quantize_per_tensor(x, 1.0, 0, dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_qtensor_unsqueeze (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 1097, in test_qtensor_unsqueeze
    qx = torch.quantize_per_tensor(x, scale=1.0, zero_point=0, dtype=torch.quint8)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_quant_pin_memory (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 1182, in test_quant_pin_memory
    x_q = torch.quantize_per_tensor(x, 1, 0, torch.quint8)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_rnn (quantization.core.test_quantized_module.TestReferenceQuantizedModule)
Checks the rnn reference quantized modules has correct numerics
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 1590, in test_rnn
    weight = self._quant_dequant_weight(getattr(fp32_rnn, wn), weight_qparams)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 1469, in _quant_dequant_weight
    weight = torch.quantize_per_tensor(weight, scale, zero_point, dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_rnn_cell (quantization.core.test_quantized_module.TestReferenceQuantizedModule)
Checks the rnn cell reference quantized modules has correct numerics
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 1532, in test_rnn_cell
    ref_res = ref_cell(x, state[rnn_type])
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/_reference/modules/rnn.py", line 193, in forward
    self.get_weight_ih(), self.get_weight_hh(),
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/_reference/modules/rnn.py", line 92, in get_weight_ih
    return get_quantize_and_dequantized_weight(self, "weight_ih")
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/_reference/modules/rnn.py", line 35, in get_quantize_and_dequantized_weight
    weight = _quantize_and_dequantize_weight(*params)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/_reference/modules/utils.py", line 128, in _quantize_and_dequantize_weight
    weight_quant = _quantize_weight(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/_reference/modules/utils.py", line 104, in _quantize_weight
    weight = torch.quantize_per_tensor(weight, weight_scale, weight_zero_point, weight_dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_sparse (quantization.core.test_quantized_module.TestReferenceQuantizedModule)
Embedding and EmbeddingBag
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 1657, in test_sparse
    fp32_embedding.weight = torch.nn.Parameter(self._quant_dequant_weight(fp32_embedding.weight, weight_qparams))
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 1469, in _quant_dequant_weight
    weight = torch.quantize_per_tensor(weight, scale, zero_point, dtype)
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_linear_relu_package_quantization_transforms (quantization.bc.test_backward_compatibility.TestSerialization)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_quantization.py", line 279, in wrapper
    fn(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/bc/test_backward_compatibility.py", line 381, in test_linear_relu_package_quantization_transforms
    self._test_package(m, input_size=(1, 1, 4, 4), generate=False)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/bc/test_backward_compatibility.py", line 212, in _test_package
    mq = _do_quant_transforms(m, input_tensor)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/bc/test_backward_compatibility.py", line 177, in _do_quant_transforms
    mp(input_tensor)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/graph_module.py", line 652, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/graph_module.py", line 277, in __call__
    raise e
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/fx/graph_module.py", line 267, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "<eval_with_key>.12530", line 5, in forward
    activation_post_process_0 = self.activation_post_process_0(x);  x = None
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/ao/quantization/observer.py", line 1087, in forward
    torch.histc(
RuntimeError: torch.histogram: input tensor and hist tensor should have the same dtype, but got input float and hist double

======================================================================
ERROR: test_batch_norm2d (quantization.core.test_quantized_module.TestStaticQuantizedModule)
Tests the correctness of the batchnorm2d module.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 693, in test_batch_norm2d
    y_ref = float_mod(x)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py", line 168, in forward
    return F.batch_norm(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/functional.py", line 2438, in batch_norm
    return torch.batch_norm(
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

======================================================================
ERROR: test_batch_norm2d_serialization (quantization.core.test_quantized_module.TestStaticQuantizedModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 750, in test_batch_norm2d_serialization
    self._test_batch_norm_serialization(_get_model, data1, data2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 727, in _test_batch_norm_serialization
    ref1 = mq1(data2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/modules/__init__.py", line 53, in forward
    return torch.quantize_per_tensor(X, float(self.scale),
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_batch_norm3d (quantization.core.test_quantized_module.TestStaticQuantizedModule)
Tests the correctness of the batchnorm3d module.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 711, in test_batch_norm3d
    y_ref = float_mod(x)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py", line 168, in forward
    return F.batch_norm(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/functional.py", line 2438, in batch_norm
    return torch.batch_norm(
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

======================================================================
ERROR: test_batch_norm3d_serialization (quantization.core.test_quantized_module.TestStaticQuantizedModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 763, in test_batch_norm3d_serialization
    self._test_batch_norm_serialization(_get_model, data1, data2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 727, in _test_batch_norm_serialization
    ref1 = mq1(data2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/modules/__init__.py", line 53, in forward
    return torch.quantize_per_tensor(X, float(self.scale),
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_dropout_serialization (quantization.core.test_quantized_module.TestStaticQuantizedModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 681, in test_dropout_serialization
    self._test_dropout_serialization(_get_model, data1, data2)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 658, in _test_dropout_serialization
    ref1 = mq1(data2)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/quantized/modules/__init__.py", line 53, in forward
    return torch.quantize_per_tensor(X, float(self.scale),
RuntimeError: Quantize only works on Float Tensor, got Double

======================================================================
ERROR: test_group_norm (quantization.core.test_quantized_module.TestStaticQuantizedModule)
Tests the correctness of the groupnorm module.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 815, in test_group_norm
    dqY_ref = float_mod(dqX)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 272, in forward
    return F.group_norm(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/functional.py", line 2516, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type Float but found Double

======================================================================
ERROR: test_instance_norm (quantization.core.test_quantized_module.TestStaticQuantizedModule)
Tests the correctness of the instancenorm{n}d modules.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 854, in test_instance_norm
    dqY_ref = float_mod(dqX)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py", line 72, in forward
    return self._apply_instance_norm(input)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py", line 32, in _apply_instance_norm
    return F.instance_norm(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/functional.py", line 2483, in instance_norm
    return torch.instance_norm(
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

======================================================================
ERROR: test_layer_norm (quantization.core.test_quantized_module.TestStaticQuantizedModule)
Tests the correctness of the layernorm module.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 784, in test_layer_norm
    dqY_ref = float_mod(dqX)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 189, in forward
    return F.layer_norm(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/nn/functional.py", line 2503, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float

======================================================================
ERROR: test_linear_api (quantization.core.test_quantized_module.TestStaticQuantizedModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 172, in test_fn
    qfunction(*args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 79, in test_linear_api
    self._test_linear_api_impl(
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_module.py", line 188, in _test_linear_api_impl
    pe.save_pickle("module", "qlinear.pkl", qlinear)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 665, in save_pickle
    _check_mocked_error(module, field)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 618, in _check_mocked_error
    if self._can_implicitly_extern(module):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/package_exporter.py", line 1069, in _can_implicitly_extern
    and is_stdlib_module(top_level_package_name)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 15, in is_stdlib_module
    return base_module in _get_stdlib_modules()
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/package/_stdlib.py", line 29, in _get_stdlib_modules
    raise RuntimeError(f"Unsupported Python version: {sys.version_info}")
RuntimeError: Unsupported Python version: sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)

======================================================================
FAIL: test_fp16_saturate_op (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 1195, in test_fp16_saturate_op
    self.assertEqual(y, ref)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float64.

======================================================================
FAIL: test_per_channel_qtensor_creation_cpu (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 417, in test_per_channel_qtensor_creation_cpu
    self._test_per_channel_qtensor_creation(torch.device('cpu'))
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 449, in _test_per_channel_qtensor_creation
    self.assertEqual(zero_points, q.q_per_channel_zero_points())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: The values for attribute 'dtype' do not match: torch.float64 != torch.float32.

======================================================================
FAIL: test_per_channel_qtensor_creation_cuda (quantization.core.test_quantized_tensor.TestQuantizedTensor)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 436, in test_per_channel_qtensor_creation_cuda
    self._test_per_channel_qtensor_creation(torch.device('cuda'))
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/quantization/core/test_quantized_tensor.py", line 449, in _test_per_channel_qtensor_creation
    self.assertEqual(zero_points, q.q_per_channel_zero_points())
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: The values for attribute 'dtype' do not match: torch.float64 != torch.float32.

----------------------------------------------------------------------
Ran 877 tests in 298.276s

FAILED (failures=3, errors=41, skipped=47)
test_quantization failed!
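Most of these quantization errors share one root symptom: `torch.quantize_per_tensor` / `torch.quantize_per_channel` (and a few reference modules) are being handed `float64` tensors, which PyTorch rejects with `RuntimeError: Quantize only works on Float Tensor, got Double`. That suggests the tests end up running with a `float64` default dtype instead of the expected `float32`. A minimal sketch of that failure mode (hypothetical repro, not taken from the test suite):

```python
import torch

# double-precision input, e.g. what you get when the default dtype is float64
x = torch.rand(4, dtype=torch.float64)

# raises: RuntimeError: Quantize only works on Float Tensor, got Double
torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
```

The `test_linear_api` error is a different issue: as the traceback above shows, it comes from `torch.package._stdlib._get_stdlib_modules` raising `Unsupported Python version` for Python 3.10.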

test_reductions

Click to expand
======================================================================
ERROR: test_dim_arg_reduction_scalar_cpu_int16 (__main__.TestReductionsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_te
st
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cpu_int32 (__main__.TestReductionsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cpu_int64 (__main__.TestReductionsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cpu_int8 (__main__.TestReductionsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cpu_uint8 (__main__.TestReductionsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cuda_int16 (__main__.TestReductionsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cuda_int32 (__main__.TestReductionsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cuda_int64 (__main__.TestReductionsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cuda_int8 (__main__.TestReductionsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dim_arg_reduction_scalar_cuda_uint8 (__main__.TestReductionsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 1865, in test_dim_arg_reduction_scalar
    x = torch.tensor(example, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_mode_large_cuda (__main__.TestReductionsCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 390, in instantiated_test
    raise rte
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 943, in only_fn
    return fn(slf, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 884, in test_mode_large
    testset_for_shape((10, 2048), 10)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 876, in testset_for_shape
    self._test_mode_intervals(shape, [(i, d - i)], device)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_reductions.py", line 863, in _test_mode_intervals
    values, indices = torch.mode(x, -1, False)
RuntimeError: CUDA error: too many resources requested for launch
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

----------------------------------------------------------------------
Ran 5854 tests in 165.505s

FAILED (errors=11, skipped=218, expected failures=98)
test_reductions failed!
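The test_reductions errors fall into two groups: the `test_dim_arg_reduction_scalar_*` cases all fail while constructing an integer tensor from a Python float (`torch.tensor(example, device=device, dtype=dtype)` with an integer dtype), and `test_mode_large_cuda` hits `CUDA error: too many resources requested for launch`. The first group is the same pattern that shows up below in test_sort_and_select, test_sparse and test_tensor_creation_ops. Roughly what those calls boil down to (hypothetical sketch, not copied from the test suite; on a healthy build this should truncate rather than raise):

```python
import torch

# construct an integer tensor from a Python float, as the failing tests do
x = torch.tensor(5.0, dtype=torch.int16)
# expected: tensor(5, dtype=torch.int16)
# observed here: TypeError: 'float' object cannot be interpreted as an integer
```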

test_sort_and_select

Click to expand
======================================================================
ERROR: test_unique_dim_cpu (__main__.TestSortAndSelectCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_te
st
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_sort_and_select.py", line 670, in test_unique_dim
    run_test(device, torch.long)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_sort_and_select.py", line 416, in run_test
    x = torch.tensor([[[1., 1.],
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_unique_dim_cuda (__main__.TestSortAndSelectCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_te
st
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_sort_and_select.py", line 670, in test_unique_dim
    run_test(device, torch.long)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_sort_and_select.py", line 416, in run_test
    x = torch.tensor([[[1., 1.],
TypeError: 'float' object cannot be interpreted as an integer

----------------------------------------------------------------------
Ran 185 tests in 8.157s

FAILED (errors=2, skipped=15)
test_sort_and_select failed!

test_sparse

Click to expand
======================================================================
ERROR: test_factory_type_inference_cpu_int64 (__main__.TestSparseCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 943, in only_fn
    return fn(slf, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_sparse.py", line 2558, in test_factory_type_inference
    t = torch.sparse_coo_tensor(torch.tensor(([0], [2])), torch.tensor([1.], dtype=dtype))
TypeError: 'float' object cannot be interpreted as an integer

----------------------------------------------------------------------
Ran 2538 tests in 39.961s

FAILED (errors=1, skipped=272)
test_sparse failed!
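
The test_sort_and_select and test_sparse errors above, and the test_tensor_creation_ops errors below, are all the same `TypeError: 'float' object cannot be interpreted as an integer`, raised whenever a test builds an integer-dtype tensor straight from a Python float. A minimal sketch of that call pattern; whether it raises depends on the NumPy picked up at runtime, so this is an assumption about the common root cause rather than a confirmed diagnosis:

```python
import torch

# All the failing tests boil down to this pattern: an integer-dtype tensor
# constructed directly from a Python float (cf. test_tensor_creation_ops.py
# line 976 in the tracebacks).
try:
    torch.tensor(1.5, dtype=torch.int64)
except TypeError as err:
    # On this installation this prints:
    # 'float' object cannot be interpreted as an integer
    print(err)
```

If that snippet reproduces the error outside the test suite, the problem sits in the tensor-from-scalar path of the installation rather than in the individual tests.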

test_tensor_creation_ops

Click to expand
======================================================================
ERROR: test_dstack_cpu_int16 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_te
st
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cpu_int32 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cpu_int64 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cpu_int8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cpu_uint8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cpu_int16 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cpu_int32 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cpu_int64 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cpu_int8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cpu_uint8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cpu_int16 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cpu_int32 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cpu_int64 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cpu_int8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cpu_uint8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cpu_int16 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cpu_int32 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cpu_int64 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cpu_int8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cpu_uint8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cpu_int16 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cpu_int32 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cpu_int64 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cpu_int8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cpu_uint8 (__main__.TestTensorCreationCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cuda_int16 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cuda_int32 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cuda_int64 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cuda_int8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_dstack_cuda_uint8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1057, in test_dstack
    self._test_special_stacks(2, 3, torch.dstack, np.dstack, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cuda_int16 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cuda_int32 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cuda_int64 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cuda_int8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_hstack_column_stack_cuda_uint8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1023, in test_hstack_column_stack
    self._test_special_stacks(1, 1, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cuda_int16 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cuda_int32 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cuda_int64 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cuda_int8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_logspace_cuda_uint8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 3151, in test_logspace
    self.assertEqual(torch.tensor([2. ** (i / 8.) for i in range(49)], device=device, dtype=dtype),
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cuda_int16 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cuda_int32 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cuda_int64 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cuda_int8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vander_types_cuda_uint8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 138, in test_vander_types
    pt_x = torch.tensor(x, device=device, dtype=dtype)
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cuda_int16 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cuda_int32 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cuda_int64 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cuda_int8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

======================================================================
ERROR: test_vstack_row_stack_cuda_uint8 (__main__.TestTensorCreationCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1808, in wrapper
    method(*args, **kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
    result = test(self, **param_kwargs)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
    return fn(self, *args, **kwargs)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 1042, in test_vstack_row_stack
    self._test_special_stacks(0, 2, torch_op, np_op, device, dtype)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in _test_special_stacks
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_tensor_creation_ops.py", line 976, in <listcomp>
    input_t = [torch.tensor(random.uniform(0, 10), device=device, dtype=dtype) for i in range(num_tensors)]
TypeError: 'float' object cannot be interpreted as an integer

----------------------------------------------------------------------
Ran 1008 tests in 83.874s

FAILED (errors=50, skipped=141)
test_tensor_creation_ops failed!

test_torch

Click to expand
======================================================================
FAIL: test_to (__main__.TestTorch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_torch.py", line 7699, in test_to
    self._test_to_with_layout(torch.sparse_csr)
  File "/gpfs/scratch1/shared/casparl/PyTorch/1.12.0/foss-2022a-CUDA-11.7.0/pytorch/test/test_torch.py", line 7687, in _test_to_with_layout
    self.assertEqual(b.device, a.to(cuda, non_blocking=non_blocking).device)
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2219, in assertEqual
    assert_equal(
  File "/home/casparl/.local/easybuild/RHEL8/2022/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7.0/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1095, in assert_equal
    raise error_metas[0].to_error(msg)
AssertionError: Object comparison failed: device(type='cuda', index=1) != device(type='cuda', index=0)

----------------------------------------------------------------------
Ran 1556 tests in 87.645s

FAILED (failures=1, skipped=78)
[TORCH_VITAL] Dataloader.enabled                 True
[TORCH_VITAL] Dataloader.basic_unit_test                 TEST_VALUE_STRING
[TORCH_VITAL] CUDA.used          true
test_torch failed!

@surak
Copy link
Contributor

surak commented Sep 19, 2022

Trying this with eb --read-only-installdir -l --from-pr 15924 --robot I get:
== 2022-09-19 14:26:03,636 robot.py:317 WARNING Missing dependencies (EasyBuild module names): magma/2.6.2-foss-2022a-CUDA-11.7.0

@casparvl
Copy link
Contributor Author

@surak it needs #15921, which hasn't been merged yet. Sorry, I lost sight of that a bit: there were two requested changes there, which I fixed just now. If you want, you can also check whether you consider that one ready to be merged; in that case you wouldn't need to pull in #15921 separately...

@surak
Copy link
Contributor

surak commented Sep 21, 2022

@surak it needs #15921, which hasn't been merged yet. Sorry, I lost sight of that a bit: there were two requested changes there, which I fixed just now. If you want, you can also check whether you consider that one ready to be merged; in that case you wouldn't need to pull in #15921 separately...

I would love to, but I'm bitten by the ncurses bug and can't do anything anymore :-(

@casparvl
Copy link
Contributor Author

casparvl commented Sep 21, 2022

I guess that means you need to test #16270 in order to test #15921 in order to test this PR :P What a mess...

@surak
Copy link
Contributor

surak commented Sep 21, 2022

I guess that means you need to test #16270 in order to test #15921 in order to test this PR :P

I just applied that manually and reinstalled all the ncurses installations I had on the system, but this PR fails in an even weirder way:

== FAILED: Installation ended unsuccessfully (build directory: /easybuild/2020/build/FCC/4.5.0/system-system): build failed (first 300 chars): 
Module command '/usr/lmod/lmod/libexec/lmod python load lang/tcsds-1.2.31' failed with exit code 1; stderr: Lmod has detected the following error: The following module(s) are unknown: "lang/tcsds-1.2.31"

@surak
Copy link
Contributor

surak commented Sep 22, 2022

In any case, this one passes for me, on a dual-processor (48 cores, 92 SMT) AMD EPYC with 4x RTX 3090! My GitHub seems broken though: it gives me a 404 when uploading the test report.

@casparvl casparvl changed the title {devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4 [WIP] {devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4 Sep 23, 2022
@casparvl
Copy link
Contributor Author

In my opinion, this PR is ready to be merged, so if anyone wants to formally review: please do.

Regarding the failing tests: as we know from previous EasyConfigs, the PyTorch test suite contains many tests that fail outside of PyTorch's own CI environment. In 99 out of 100 cases where we have investigated such issues before, it was simply the test itself that was broken. We used to patch these, but that is so much work that it delays the roll-out of new PyTorch EasyConfigs considerably. That is why we nowadays accept a number of failing tests, as long as it stays "reasonable". In my opinion, the current set of test failures is reasonable.

I've looked through the failing tests. One of the common failure patterns we see now is

TypeError: '<sometype>' object cannot be interpreted as <someothertype>

These failures are the result of changes to implicit type conversion in Python 3.10, which torch.tensor relied on. For example, the following no longer works:

torch.tensor([0.5,1], dtype=torch.int8)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'float' object cannot be interpreted as an integer

This has broken a large number of tests, and will probably also break some existing PyTorch code. It is, however, not something we can or should fix on the EasyBuild side: the official PyTorch wheel for Python 3.10 shows exactly the same behaviour. It should simply be considered a known issue of PyTorch 1.12.0 in combination with Python 3.10.

For more info, see pytorch/pytorch#69316 and pytorch/pytorch#72282.
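
For user code that runs into this pattern, the usual fix is to stop relying on implicit float-to-int truncation. Below is a minimal sketch of two possible workarounds; it only uses standard torch calls, is purely illustrative, and is not something we patch into this installation:

import torch

# Affected pattern on Python 3.10 + PyTorch 1.12.0: passing Python floats while
# requesting an integer dtype relies on implicit float -> int conversion, which
# Python 3.10 no longer allows, so this raises
# "TypeError: 'float' object cannot be interpreted as an integer":
#   torch.tensor([0.5, 1], dtype=torch.int8)

# Workaround 1: create a float tensor first, then cast explicitly
# (truncates towards zero).
a = torch.tensor([0.5, 1]).to(torch.int8)

# Workaround 2: convert the inputs to integers before calling torch.tensor.
b = torch.tensor([int(x) for x in (0.5, 1)], dtype=torch.int8)

print(a)  # tensor([0, 1], dtype=torch.int8)
print(b)  # tensor([0, 1], dtype=torch.int8)

Either variant should produce the same int8 tensor regardless of the Python version.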

@SebastianAchilles

This comment was marked as off-topic.

@casparvl
Copy link
Contributor Author

@boegelbot please test @ generoso

@boegelbot
Copy link
Collaborator

@casparvl: Request for testing this PR well received on login1

PR test command 'EB_PR=15924 EB_ARGS= /opt/software/slurm/bin/sbatch --job-name test_PR_15924 --ntasks=4 ~/boegelbot/eb_from_pr_upload_generoso.sh' executed!

  • exit code: 0
  • output:
Submitted batch job 9177

Test results coming soon (I hope)...

- notification for comment with ID 1256242298 processed

Message to humans: this is just bookkeeping information for me,
it is of no use to you (unless you think I have a bug, which I don't).

@SebastianAchilles
Copy link
Member

Test report by @SebastianAchilles
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
bdw-opensuse-154 - Linux openSUSE Leap 15.4, x86_64, Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz (broadwell), 2 x NVIDIA NVIDIA GeForce GTX 1060 6GB, 510.85.02, Python 3.6.15
See https://gist.github.com/cb0d3ab601068c9685b32616c123405a for a full test report.

casparl and others added 3 commits September 29, 2022 17:15
…sybuild-easyblocks#2794 we'll actually start counting failing tests, instead of failing test suites. Thus, much higher numbers can be expected, since many test suites have multiple failing tests
@easybuilders easybuilders deleted a comment from boegelbot Oct 3, 2022
@smoors
Copy link
Contributor

smoors commented Oct 3, 2022

@boegelbot: please test @ generoso

@boegelbot
Copy link
Collaborator

@smoors: Request for testing this PR well received on login1

PR test command 'EB_PR=15924 EB_ARGS= /opt/software/slurm/bin/sbatch --job-name test_PR_15924 --ntasks=4 ~/boegelbot/eb_from_pr_upload_generoso.sh' executed!

  • exit code: 0
  • output:
Submitted batch job 9229

Test results coming soon (I hope)...

- notification for comment with ID 1265050895 processed

Message to humans: this is just bookkeeping information for me,
it is of no use to you (unless you think I have a bug, which I don't).

@boegelbot
Copy link
Collaborator

Test report by @boegelbot
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
cns1 - Linux Rocky Linux 8.5, x86_64, Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz (haswell), Python 3.6.8
See https://gist.github.com/4abee24350f90d1d9fd8e14ac009bc5d for a full test report.

@casparvl
Copy link
Contributor Author

casparvl commented Oct 4, 2022

Test report by @casparvl
Using easyblocks from PR(s) easybuilders/easybuild-easyblocks#2794
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
gcn11 - Linux RHEL 8.4, x86_64, Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz, 4 x NVIDIA NVIDIA A100-SXM4-40GB, 515.43.04, Python 3.6.8
See https://gist.github.com/4966264d3ae4f33a2c6dd65b8d29c2c2 for a full test report.

@casparvl
Copy link
Contributor Author

casparvl commented Oct 5, 2022

Test report by @casparvl
Using easyblocks from PR(s) easybuilders/easybuild-easyblocks#2794
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
software2.lisa.surfsara.nl - Linux debian 10.13, x86_64, Intel(R) Xeon(R) Bronze 3104 CPU @ 1.70GHz, 4 x NVIDIA NVIDIA TITAN V, 470.103.01, Python 3.7.3
See https://gist.github.com/ac8d42ba59a569a34eeb6b695e2ce1a6 for a full test report.

@smoors
Copy link
Contributor

smoors commented Oct 6, 2022

Test report by @smoors
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
node252.hydra.os - Linux CentOS Linux 7.9.2009, x86_64, Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz (broadwell), 1 x NVIDIA Tesla P100-PCIE-16GB, 515.48.07, Python 2.7.5
See https://gist.github.com/1d72f62458b6db1d16bf12a4cda844cc for a full test report.

@smoors smoors dismissed boegel’s stale review October 6, 2022 12:27

changes done

@smoors smoors modified the milestones: 4.x, 5.0, next release (4.6.2?) Oct 6, 2022
Copy link
Contributor

@smoors smoors left a comment


lgtm

@smoors
Copy link
Contributor

smoors commented Oct 6, 2022

Going in, thanks @casparvl!

@smoors smoors merged commit ad54d9a into easybuilders:develop Oct 6, 2022
@easybuilders easybuilders deleted a comment from boegelbot Oct 6, 2022
@boegel boegel changed the title {devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4 {devel}[foss/2022a] PyTorch v1.12.0 w/ Python 3.10.4 + CUDA 11.7.0 Oct 6, 2022
@boegel
Copy link
Member

boegel commented Oct 8, 2022

@Flamefire Any thoughts on the failing tests here? See the detailed overview provided by @casparvl in #15924 (comment)

@Flamefire
Copy link
Contributor

I'm currently working on PyTorch 1.12.1 and already got pretty far with the patches I made for 1.11.0, but I'm still investigating some failures. Mine is for the older toolchain though (2021b).
Once I've finished that I'll take a look here, especially since I allow far fewer failed tests, exclude even fewer tests, and fixed a lot of real failures on PPC. See the work done for 1.11.0: #16339

@casparvl
Copy link
Contributor Author

Test report by @casparvl
Using easyblocks from PR(s) easybuilders/easybuild-easyblocks#2803
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
gcn2 - Linux RHEL 8.4, x86_64, Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz, 4 x NVIDIA NVIDIA A100-SXM4-40GB, 515.43.04, Python 3.6.8
See https://gist.github.com/a455877548ea2363f5e860a1b3b7c13b for a full test report.

@casparvl
Copy link
Contributor Author

Test report by @casparvl
Using easyblocks from PR(s) easybuilders/easybuild-easyblocks#2803
SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
software2.lisa.surfsara.nl - Linux debian 10.13, x86_64, Intel(R) Xeon(R) Bronze 3104 CPU @ 1.70GHz, 4 x NVIDIA NVIDIA TITAN V, 470.103.01, Python 3.7.3
See https://gist.github.com/e6851ded598fbca9ae4fd17f522ee19a for a full test report.
