
[Bug] RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. #2749

Closed
tazz4843 opened this issue Jul 7, 2023 · 2 comments
Labels
bug Something isn't working

Comments

tazz4843 commented Jul 7, 2023

Describe the bug

Attempting to run tortoise without an Nvidia GPU installed in the system throws the error reported in the title. It appears that the CLI properly disables CUDA when not available:

Generating autoregressive samples..
/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/amp/autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')

but the TTS library does not do so when invoked through the Python API.

To Reproduce

  1. Run the following code on a system with no CUDA libraries installed (tested here on a 13700K + Arc A770 on Arch Linux), modifying `src_dir` to point to the directory containing the attached audio samples (audio_samples.zip):
from TTS.api import TTS

# Load the model
bark = TTS("tts_models/multilingual/multi-dataset/bark")
print(" - Loaded `bark`")

speaker = "qeii"
src_dir = "/home/zero/data/audio-samples/"
output_base = "/home/zero/data/audio-samples/{}/quick-brown-fox-{}.wav"
text_sample = "I am Queen Elizabeth the Second, queen to 32 sovereign nations."

bark.tts_to_file(
    text=text_sample,
    file_path=output_base.format(speaker, "bark"),
    voice_dir=src_dir,
    speaker=speaker,
)
  2. See error:
/home/zero/PycharmProjects/scripty-tts-server/venv/bin/python /home/zero/PycharmProjects/scripty-tts-server/main.py 
 > tts_models/multilingual/multi-dataset/bark is already downloaded.
 > Using model: bark
 - Loaded `bark`
 > Text splitted to sentences.
['I am Queen Elizabeth the Second, queen to 32 sovereign nations.']
Traceback (most recent call last):
  File "/home/zero/PycharmProjects/scripty-tts-server/main.py", line 12, in <module>
    bark.tts_to_file(
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/api.py", line 596, in tts_to_file
    wav = self.tts(text=text, speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/api.py", line 543, in tts
    wav = self.synthesizer.tts(
          ^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/utils/synthesizer.py", line 365, in tts
    outputs = self.tts_model.synthesize(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/tts/models/bark.py", line 218, in synthesize
    history_prompt = load_voice(self, speaker_id, voice_dirs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/tts/layers/bark/inference_funcs.py", line 81, in load_voice
    generate_voice(audio=audio_path, model=model, output_path=output_path)
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/tts/layers/bark/inference_funcs.py", line 139, in generate_voice
    tokenizer = HubertTokenizer.load_from_checkpoint(model.config.LOCAL_MODEL_PATHS["hubert_tokenizer"]).to(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/TTS/tts/layers/bark/hubert/tokenizer.py", line 124, in load_from_checkpoint
    model.load_state_dict(torch.load(path))
                          ^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
             ^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zero/PycharmProjects/scripty-tts-server/venv/lib/python3.11/site-packages/torch/serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
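The failing frame is `model.load_state_dict(torch.load(path))` in `hubert/tokenizer.py`: `torch.load` without a `map_location` tries to restore each tensor onto the device it was saved from, which fails on a CPU-only machine when the checkpoint was saved from CUDA. A minimal sketch of a CPU-safe load, following the suggestion in the error message (hypothetical helper, not the actual patch that landed in TTS):

```python
import torch

def safe_torch_load(path):
    """Load a checkpoint, remapping CUDA-saved tensors to CPU when no
    GPU is available (hypothetical helper, not the TTS fix itself)."""
    map_location = None if torch.cuda.is_available() else torch.device("cpu")
    return torch.load(path, map_location=map_location)
```

With this, `load_from_checkpoint` could call `model.load_state_dict(safe_torch_load(path))` and the deserialization would succeed on CPU-only hosts.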

Expected behavior

No exception to be thrown.

Logs

No response

Environment

{
    "CUDA": {
        "GPU": [],
        "available": false,
        "version": "11.7"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.0.1+cu117",
        "TTS": "0.15.5",
        "numpy": "1.24.3"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "",
        "python": "3.11.3",
        "version": "#1 SMP PREEMPT_DYNAMIC Sat, 01 Jul 2023 16:17:21 +0000"
    }
}

Additional context

Using the patched PR branch from #2748; otherwise I get the exception mentioned in #2745.

@tazz4843 tazz4843 added the bug Something isn't working label Jul 7, 2023
erogol (Member) commented Jul 7, 2023

I think I know the fix for this, but as said in the documentation, Bark takes ages on the CPU even if we fix it.

erogol added a commit that referenced this issue Jul 7, 2023
erogol (Member) commented Jul 7, 2023

This should fix #2750

@erogol erogol closed this as completed in 672ec3b Jul 8, 2023
Tindell pushed a commit to pugtech-co/TTS that referenced this issue Sep 4, 2023