Plain green images generated #69
Comments
Use basujindal's optimized script with the switch --precision full; it won't blow up the VRAM usage. Close any applications that use hardware acceleration (or disable it in apps such as Steam, Discord, and Chrome).
Thank you so much. I realized I posted this on the wrong repo. I apologize. The solution I described above works for basujindal's repo.
I have the same problem. I tried every .ckpt file; 6 GB of VRAM (NVIDIA 1660 Super). Edit: back to green again.
The issue is still valid, but I'm afraid it's related to PyTorch... I have seen comments online about the 1660 getting NaNs when using fp16.
You can debug this issue by checking the output of each step. It is likely a NaN issue from fp16 (which you can resolve by switching to fp32), or some weights are not initialized properly (that happened to me once).
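As a rough illustration of that kind of step-by-step check (the helper name and the `samples` variable are placeholders, not part of any repo's API):

```python
import torch

def check_finite(tensor, name="tensor"):
    """Print a warning if the tensor contains NaN/Inf values (a typical fp16 overflow symptom)."""
    bad = (~torch.isfinite(tensor)).sum().item()
    if bad:
        print(f"{name}: {bad} non-finite values out of {tensor.numel()}")
    return bad == 0

# Example: inspect whatever latents your sampler step returns before decoding.
# If they are full of NaNs, the decoded image collapses to a single flat colour.
# check_finite(samples, "sampler output")
```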
I have the same problem. I have a laptop with a 4 GB GTX 1650. I got black images at first and green images now, after a few changes. I have tried --precision full, but I only get an out-of-memory error.
For me, it finally worked after adding this argument: "--precision full".
It finally also worked for me with --precision full, but I have to close the browser (Mozilla); it seems I am very short on memory (8 GB RAM and a 4 GB GPU). I can only get one image at a time if I don't want to run out of memory, but that's fine. I still can't believe this impressive AI implementation; it's almost magic!
I'm also on a 1660 Super and getting green images.
I'm also on a 1660 Super, and using "webui.cmd --precision full" does not work... any other clue?
I'm also on a 1660 Super and getting green images. It wasn't fixed with --precision full alone (I also had to modify line 281 in txt2img.py):
Hi! After changing txt2img.py, does the desktop interface work? I'm still getting green images...
I'm also on a 1660 Super and getting green images.
What prompt did you use? I'm on the same hardware, and even when trying to generate only one image I still get the memory error. Thanks!
I reduced memory usage like this:
Is what you've listed as bullet points the result of switching the one line? If not, how did you manage to change those attributes @baobabKoodaa?
I modified the txt2img script in multiple places. You can just Ctrl+F the file for anything related to watermarking and comment it out, and do the same for the safety filter. Remember to comment out unused imports after commenting out code.
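For orientation, this is roughly what such an edit looks like, assuming a script laid out like the stock CompVis scripts/txt2img.py (the exact line contents may differ in your fork):

```python
# Imports that become unused once the features are disabled:
# from imwatermark import WatermarkEncoder
# from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
# from transformers import AutoFeatureExtractor
#
# Watermarking setup and use:
# wm_encoder = WatermarkEncoder()
# img = put_watermark(img, wm_encoder)
#
# Safety-checker call, bypassed by passing the samples straight through:
# x_checked_image, has_nsfw_concept = check_safety(x_samples_ddim)
# x_checked_image = x_samples_ddim   # replacement line, keeps the later code working
```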
Also on a 1660 Ti and only getting plain green images.
Solution for 16xx card owners, which worked for me:
After that you should get a black image instead of a green one; that means you are on the right track. Then set torch.backends.cudnn.benchmark = True. After that you should get normal images, not green and not black.
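In practice that usually means adding the cuDNN settings somewhere after the torch import in the script you run; a minimal sketch (the placement, and setting cudnn.enabled alongside benchmark, are assumptions based on commonly reported variants of this fix):

```python
import torch

# Workaround frequently reported for GTX 16xx cards, used together with
# full (fp32) precision.
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True
```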
This works, thank you!
Hey, where do I add step 4?
img2img user here: I'm getting green output, and enabling the full-precision option gives an error: "Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same".
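That error is generic PyTorch behaviour rather than anything webui-specific: the input tensor and the model weights are in different precisions. A minimal reproduction (requires a CUDA device; the layer here is just an illustration):

```python
import torch

model = torch.nn.Conv2d(3, 8, kernel_size=3).cuda().half()  # weights stored as HalfTensor
x = torch.randn(1, 3, 64, 64, device="cuda")                # input is a FloatTensor

try:
    model(x)
except RuntimeError as e:
    print(e)  # typically: "Input type (...) and weight type (...) should be the same"

# Fix: keep both sides in the same precision, e.g. model.float() or x.half().
model.float()
print(model(x).shape)
```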
Hey, this is very simple: find the file called txt2img.py, add those two lines at the end of the file, then add
Does this mean just the lib files, or the DLL files and the lib files? If it includes DLL files, do I need to overwrite the original files?
I'm having the same issue during training. Does anyone know a setting that resolves this? The precision setting is not applicable for training.
You should not; this is an old issue. Update everything, and give more details about your environment if that doesn't help.
I have the latest version of the stable-diffusion repo and am following their instructions for setting up the environment like this.
However, when I run any of the given scripts with python main.py --base ./configs/latent-diffusion/.yaml -t --gpus 0, -n "256_stable_diff_4ch", all I get is an image of a single colour. I have checked the weights and grads of the model, and none of them is NaN or Inf; I am observing this from the initialization of the model. OpenAI's improved-diffusion repo works fine and score-guided diffusion also works fine, but somehow I couldn't manage to run stable diffusion. I am using an Nvidia A40 on a server, but the result is the same for both CPU and GPU runs. Here is my pip list. I also have a dataset of my own, for which I am using torchvision.datasets.ImageFolder. I have also tried CelebA-HQ, but the result is the same on both, as I said.
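For completeness, this is the kind of weight/gradient audit being described (a generic sketch; `model` stands for whatever module main.py instantiates):

```python
import torch

def audit(model: torch.nn.Module) -> bool:
    """Report any parameter or gradient that contains NaN/Inf values."""
    clean = True
    for name, p in model.named_parameters():
        if not torch.isfinite(p).all():
            print(f"non-finite weights in {name}")
            clean = False
        if p.grad is not None and not torch.isfinite(p.grad).all():
            print(f"non-finite grads in {name}")
            clean = False
    return clean
```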
It runs well, but it generates images where all pixels are the same shade of green, specifically #007B00.
In the basujindal repo, changing the default precision from autocast to full in txt2img.py fixes this problem. I made the same change in the file under scripts/orig_scripts, but it didn't make a difference.
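For reference, in a CompVis-style txt2img.py that change amounts to flipping the default of the --precision argument; the exact wording of the argparse stanza below is an assumption:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--precision",
    type=str,
    choices=["full", "autocast"],
    default="full",  # was "autocast"; fp32 avoids the NaN latents reported on GTX 16xx cards
)
```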
Whenever I run dream.py with --full_precision it runs out of memory. I have 6 GB of VRAM.