
bugfix: Float64 error for mps devices on set_timesteps #4040

Merged — 8 commits merged into invoke-ai:main on Jul 28, 2023

Conversation

ZachNagengast
Contributor

What type of PR is this? (check all applicable)

  • Refactor
  • Feature
  • Bug Fix
  • Optimization
  • Documentation Update
  • Community Node Submission

Have you discussed this change with the InvokeAI team?

  • Yes
  • No, because: minor fix, let me know your thoughts

Have you updated all relevant documentation?

  • Yes
  • No

Description

Related Tickets & Documents

QA Instructions, Screenshots, Recordings

Added/updated tests?

  • Yes
  • No: requires an MPS device

[optional] Are there any post deployment tasks we need to perform?

Please test on an MPS (M1/M2) device.

Relevant code causing the error in #4017

https://github.com/huggingface/diffusers/blob/01b6ec21faf2dce3373238b12eb450030ab1f318/src/diffusers/schedulers/scheduling_euler_discrete.py#L263C3-L268C75

        self.sigmas = torch.from_numpy(sigmas).to(device=device)
        if str(device).startswith("mps"):
            # mps does not support float64
            self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
        else:
            self.timesteps = torch.from_numpy(timesteps).to(device=device)
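For context, the root cause can be reproduced without an MPS machine: NumPy's `linspace` returns float64 by default, and the MPS backend rejects float64 tensors when `torch.from_numpy(...)` is moved to the device. A minimal, torch-free sketch of the dtype handling (the helper name is hypothetical, not part of diffusers or InvokeAI):

```python
import numpy as np

def timesteps_for_device(num_steps: int, device: str) -> np.ndarray:
    """Sketch of the scheduler's dtype workaround.

    np.linspace yields float64, which the MPS backend does not support,
    so the timesteps are downcast to float32 before being sent to an
    mps device; other devices keep the original float64 precision.
    """
    timesteps = np.linspace(0, 999, num_steps)  # dtype: float64
    if device.startswith("mps"):
        return timesteps.astype(np.float32)
    return timesteps
```

The key point of the PR is that the device string must actually reach `set_timesteps` for this branch to trigger; if the device is omitted, the float64 tensor is created first and the error surfaces later on MPS.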

@ZachNagengast ZachNagengast changed the title Pass device to set_timestep to avoid float64 error bugfix: Float64 error for mps devices on set_timesteps Jul 27, 2023
@Millu Millu requested a review from lstein July 28, 2023 01:06
@Millu Millu self-requested a review July 28, 2023 06:20
Contributor

@Millu Millu left a comment


Just tested this on my machine (2020 M1, 16 GB RAM) and it works as expected.

HUGE shoutout to @psychedelicious for the help in setting up the local dev environment

@psychedelicious
Collaborator

Would like approval from @StAlKeR7779 for this one.

Collaborator

@lstein lstein left a comment


On a Linux CUDA system, running with this PR gives me this for "banana sushi", SDXL-base-1.0, and 1024x1024 using the Euler scheduler:

[image: output with this PR]

Same settings on main give me this:

[image: output on main]

Collaborator

@lstein lstein left a comment


With @StAlKeR7779 's help, I committed a small change that fixes the issue on Linux systems. This should be tested again on MPS systems.

@ZachNagengast
Contributor Author

ZachNagengast commented Jul 28, 2023

I'm not sure why the commit isn't showing up here, but it just needs a quick `black .` run. (Edit: it's showing up now.)

ZachNagengast@2164674

@lstein lstein merged commit 3e4420c into invoke-ai:main Jul 28, 2023
7 checks passed
@Millu
Contributor

Millu commented Aug 10, 2023

@ZachNagengast are you in the Discord?
