
Update README.md for SD Fine Tuning #171

Closed · wants to merge 1 commit

Conversation

mvpatel2000
Contributor

No description provided.

Contributor

@A-Jacobson left a comment


Have you tried the pip install for xformers? I trained this with PyTorch 1.12, but the README says they only support 1.13. If you've tried it and it works, go ahead and merge.

@@ -40,8 +40,7 @@ cd examples/stable_diffusion
xformers contains faster, more memory-efficient transformer layers, but it can take a while to install.

```
-pip install ninja # Faster xformers install
-pip install git+https://github.com/facebookresearch/xformers.git@3df785ce54114630155621e2be1c2fa5037efa27#egg=xformers
+pip install xformers
```
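
For the version question raised above, here is a minimal sketch (not part of this PR; the shapes and smoke test are illustrative) that checks whether the installed xformers wheel actually runs against the local PyTorch build:

```
# Illustrative smoke test (not from the PR): verify the installed xformers
# build is compatible with the local PyTorch version.
import torch
import xformers.ops as xops

print(f"torch {torch.__version__}, xformers imported OK")

if torch.cuda.is_available():
    # memory_efficient_attention expects [batch, seq_len, num_heads, head_dim]
    q = torch.randn(1, 8, 4, 64, device="cuda", dtype=torch.float16)
    out = xops.memory_efficient_attention(q, q, q)  # raises on an incompatible build
    print("memory_efficient_attention OK:", tuple(out.shape))
```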
Collaborator


Is the install still slow? If not, should we just put it in requirements?
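
If the wheel install is fast now, the requirements change being floated here might look something like this (a sketch only; the version pin is illustrative and not taken from this PR):

```
# hypothetical requirements.txt entry; the version pin is illustrative
xformers==0.0.16
```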

@mvpatel2000
Contributor Author

> Have you tried the pip install for xformers? I trained this with PyTorch 1.12, but the README says they only support 1.13. If you've tried it and it works, go ahead and merge.

Let me verify that fine-tuning gives the same performance with 1.13. I was able to get it to work with the full training runs...

@mvpatel2000
Contributor Author

Closing in favor of #175

mvpatel2000 deleted the mvpatel2000-patch-1 branch February 16, 2023 22:20