
TypeError: 'NoneType' object is not callable #9

Closed
xxmlala opened this issue Apr 26, 2022 · 2 comments

xxmlala commented Apr 26, 2022

The code in the 'Quick-start' section of https://github.com/idiap/fast-transformers runs fine,
but I get this error when running 'train.py' with no modifications:

name: train_default
args Namespace(batch_size='1', epochs=200, gpus=None, lr=0.0001, name='train_default', path=None, train_data='../dataset/lpd_5_prcem_mix_v8_10000.npz')
num of encoder classes: [  18    3   18  129   18    6   20  102 4865] [7, 1, 6]
D_MODEL 512  N_LAYER 12  N_HEAD 8 DECODER ATTN causal-linear
>>>>>: [  18    3   18  129   18    6   20  102 4865]
DEVICE COUNT: 1
VISIBLE: 0
n_parameters: 39,006,324
    train_data: dataset
    batch_size: 1
    num_batch: 3039
    train_x: (3039, 9999, 9)
    train_y: (3039, 9999, 9)
    train_mask: (3039, 9999)
    lr_init: 0.0001
    DECAY_EPOCH: []
    DECAY_RATIO: 0.1
Traceback (most recent call last):
  File "train.py", line 226, in <module>
    train_dp()
  File "train.py", line 169, in train_dp
    losses = net(is_train=True, x=batch_x, target=batch_y, loss_mask=batch_mask, init_token=batch_init)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jovyan/work/test-mm21/video-bgm-generation/src/model.py", line 482, in forward
    return self.train_forward(**kwargs)
  File "/home/jovyan/work/test-mm21/video-bgm-generation/src/model.py", line 450, in train_forward
    h, y_type = self.forward_hidden(x, memory=None, is_training=True, init_token=init_token)
  File "/home/jovyan/work/test-mm21/video-bgm-generation/src/model.py", line 221, in forward_hidden
    encoder_hidden = self.transformer_encoder(encoder_pos_emb, attn_mask)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/fast_transformers/transformers.py", line 138, in forward
    x = layer(x, attn_mask=attn_mask, length_mask=length_mask)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/fast_transformers/transformers.py", line 81, in forward
    key_lengths=length_mask
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/fast_transformers/attention/attention_layer.py", line 109, in forward
    key_lengths
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/fast_transformers/attention/causal_linear_attention.py", line 101, in forward
    values
  File "/opt/conda/lib/python3.6/site-packages/fast_transformers/attention/causal_linear_attention.py", line 23, in causal_linear
    V_new = causal_dot_product(Q, K, V)
  File "/opt/conda/lib/python3.6/site-packages/fast_transformers/causal_product/__init__.py", line 48, in forward
    product
TypeError: 'NoneType' object is not callable
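For context on the failure point: the traceback ends inside `fast_transformers/causal_product/__init__.py`, where `causal_dot_product` calls into a compiled CUDA/C++ extension. The `'NoneType' object is not callable` error suggests that extension failed to build or import, leaving the kernel handle set to `None`. The sketch below is a naive NumPy reference for what that fused kernel computes (the causal prefix-sum form of linear attention); the function name `causal_dot_product_ref` is a hypothetical helper for illustration, not the library's API.

```python
import numpy as np

def causal_dot_product_ref(Q, K, V):
    """Naive reference for the fused causal-linear kernel:
    out[i] = Q[i] @ sum_{j <= i} outer(K[j], V[j]).
    Q, K: (L, D); V: (L, M); returns (L, M)."""
    L, D = Q.shape
    M = V.shape[1]
    S = np.zeros((D, M))           # running sum of K_j V_j^T
    out = np.empty((L, M))
    for i in range(L):
        S += np.outer(K[i], V[i])  # extend the causal prefix sum
        out[i] = Q[i] @ S
    return out

# Cross-check against the equivalent masked quadratic form:
# out[i] = sum_{j <= i} (Q[i] . K[j]) V[j]
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 8, 4))  # three (8, 4) arrays
scores = np.tril(Q @ K.T)                 # causal mask keeps j <= i
assert np.allclose(causal_dot_product_ref(Q, K, V), scores @ V)
```

The linear-time formulation only matches the masked quadratic one up to this numerator; the actual library also normalizes by a similarly accumulated denominator.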
wzk1015 (Owner) commented Apr 26, 2022

See #3: you need to use the correct versions of torch and fast-transformers for your CUDA version.

The Quick-start of the fast-transformers repo uses full attention, but ours uses causal-linear, so the Quick-start running successfully does not guarantee your environment is correct.
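A quick way to act on this advice is to check which packages import and which CUDA toolkit the installed torch was built against, then match that to the system's `nvcc`. This is a minimal diagnostic sketch (the function name `report_env` is illustrative); `torch.version.cuda` and `torch.cuda.is_available()` are real PyTorch attributes, and the checks are guarded so the script runs even when the packages are missing.

```python
import importlib.util

def report_env():
    """Report whether torch and fast_transformers are importable,
    and which CUDA build torch was compiled with."""
    info = {}
    for name in ("torch", "fast_transformers"):
        info[name] = importlib.util.find_spec(name) is not None
    if info["torch"]:
        import torch
        info["torch_version"] = torch.__version__
        info["torch_cuda"] = torch.version.cuda  # CUDA toolkit torch was built against
        info["cuda_available"] = torch.cuda.is_available()
    return info

print(report_env())
```

If `torch_cuda` disagrees with the CUDA version used to compile fast-transformers (or `fast_transformers` fails to import at all), the compiled causal-product extension will not load, which matches the `NoneType` error above.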

xxmlala (Author) commented May 8, 2022

Thank you very much for your reply!
After I changed to the correct PyTorch version, the problem disappeared.

xxmlala closed this as completed May 8, 2022