[Bug]: TypeError: expected Tensor as element 0 in argument 0, but got tuple #12523

Closed
ClipSkipper opened this issue Aug 13, 2023 · 3 comments
Labels: bug (Report of a confirmed bug)

@ClipSkipper

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

When using any SDXL LoRA in the current dev branch, I get this error:

'TypeError: expected Tensor as element 0 in argument 0, but got tuple'

Images generate normally with SDXL checkpoints on their own; however, all LoRAs produce this error.
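
For reference, this is the error torch.vstack raises when one element of the list it receives is not a Tensor. A minimal, hypothetical sketch (the shapes are made up; only the element types matter):

import torch

a = (torch.zeros(4, 4), None)  # a tuple where a Tensor is expected
b = torch.zeros(4, 4)
c = torch.zeros(4, 4)

torch.vstack([a, b, c])
# TypeError: expected Tensor as element 0 in argument 0, but got tuple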

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Image generation should proceed as normal, with the LoRA loaded and its weights applied.

Version or Commit where the problem happens

version: 1.5.1

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

Cross attention optimization

xformers

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--no-half-vae --xformers

List of extensions

none

Console logs

*** Error completing request
*** Arguments: ('task(zeliaaqznbf3mf6)', ' <lora:niji3D_test_v2:1>,Female,woman Cozy Knit Sweater in Oversized Fit, Fleece-lined Jogger Pants in Heather Gray, Chunky Knit Scarf in Neutral Tone, Slip-on Sneakers in White,Twisted Side Ponytail hairstyle (English Hollyhock,Rainy Season color background:1.3),   <lora:niji3D_test_v2:1>', 'deformed,large breasts,missing limbs,amputated,pants,shorts,cat ears,bad anatomy, naked, no clothes,disfigured, poorly drawn face, mutation, mutated,ugly, disgusting, blurry, watermark, watermarked, over saturated, obese, doubled face,b&w, black and white, sepia, nude, frekles, no masks,duplicate image, blur, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), low resolution, normal quality, monochrome, grayscale, bad anatomy,(fat:1.2),facing away, looking away,tilted head,lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worst quality, low quality, normal quality,jpeg artifacts,signature, watermark, username,blurry,bad feet,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,', [], 40, 'DPM++ SDE Karras', 1, 1, 7.5, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000027C86ED2AD0>, 0, False, '', 0.8, -1, -1, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 681, in process_images
        res = process_images_inner(p)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 805, in process_images_inner
        p.setup_conds()
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 1258, in setup_conds
        super().setup_conds()
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 415, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 403, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\prompt_parser.py", line 168, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_models_xl.py", line 31, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_hijack_open_clip.py", line 57, in encode_with_transformers
        d = self.wrapped.encode_with_transformer(tokens)
      File "C:\SDXL V1.1\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 470, in encode_with_transformer
        x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
      File "C:\SDXL V1.1\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 502, in text_transformer_forward
        x = r(x, attn_mask=attn_mask)
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 242, in forward
        x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 228, in attention
        return self.attn(
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_MultiheadAttention_forward
        network_apply_weights(self)
      File "C:\SDXL V1.1\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 345, in network_apply_weights
        updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
    TypeError: expected Tensor as element 0 in argument 0, but got tuple

Additional information

No response

ClipSkipper added the bug-report (Report of a bug, yet to be confirmed) label on Aug 13, 2023
@fabbarix

fabbarix commented Aug 13, 2023

This is caused by a change at this line:

return updown * self.calc_scale() * self.multiplier(), ex_bias

introduced in commit bd4da44: calc_updown now returns a tuple instead of a bare tensor.
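
To illustrate the shape of that change, a minimal sketch with simplified, hypothetical signatures for the old and new behaviour:

import torch

def calc_updown_old(updown, scale, multiplier):
    # Pre-bd4da44 (simplified): only the weight delta is returned.
    return updown * scale * multiplier

def calc_updown_new(updown, scale, multiplier, ex_bias=None):
    # Post-bd4da44 (simplified): the delta plus an extra bias, returned as a tuple.
    return updown * scale * multiplier, ex_bias

result = calc_updown_new(torch.zeros(3, 3), 1.0, 1.0)
print(type(result))  # <class 'tuple'> -- callers expecting a bare Tensor now break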

While waiting for a fix, you can change these lines in extensions-builtin/Lora/networks.py:

updown_q = module_q.calc_updown(self.in_proj_weight)
updown_k = module_k.calc_updown(self.in_proj_weight)
updown_v = module_v.calc_updown(self.in_proj_weight)
to something like:

updown_q, ex_bias = module_q.calc_updown(self.in_proj_weight)
updown_k, ex_bias = module_k.calc_updown(self.in_proj_weight)
updown_v, ex_bias = module_v.calc_updown(self.in_proj_weight)

and also change the line

updown_out = module_out.calc_updown(self.out_proj.weight)
to:

updown_out, ex_bias = module_out.calc_updown(self.out_proj.weight)
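
Putting that together, the patched section of network_apply_weights in extensions-builtin/Lora/networks.py would read roughly as follows (a sketch: only the lines quoted above plus the failing vstack call from the traceback are shown, and the unpacked ex_bias values are simply ignored):

updown_q, ex_bias = module_q.calc_updown(self.in_proj_weight)
updown_k, ex_bias = module_k.calc_updown(self.in_proj_weight)
updown_v, ex_bias = module_v.calc_updown(self.in_proj_weight)
updown_qkv = torch.vstack([updown_q, updown_k, updown_v])  # now stacks Tensors, not tuples

updown_out, ex_bias = module_out.calc_updown(self.out_proj.weight)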

@catboxanon
Collaborator

cc @KohakuBlueleaf

@catboxanon
Collaborator

Fixed by #12543
