
Plain green images generated #69

Open
pedropachecog opened this issue Aug 23, 2022 · 28 comments

Comments

@pedropachecog

It runs well, but it generates images where all pixels are the same shade of green, specifically #007B00.

In the basujindal repo, changing the default from autocast to full in txt2img.py fixes this problem. I made the same change in the file under scripts/orig_scripts here, but it didn't make a difference.

Whenever I run dream.py with --full_precision it runs out of memory. I have 6 GB of VRAM.
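For context on why autocast/fp16 is the usual suspect here, a stand-alone sketch (plain PyTorch, CPU is fine; the values are illustrative) of how float16's limited range turns into inf and then NaN, which is consistent with the flat-color outputs discussed in this thread:

```python
import torch

# float16 tops out around 65504; anything past that overflows to inf,
# and inf - inf (or 0 * inf) later in the network produces NaN.
x32 = torch.tensor([70000.0, 1.0], dtype=torch.float32)
x16 = x32.to(torch.float16)

print(x16)        # first element overflows to inf
print(x16 - x16)  # inf - inf -> nan
```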

@bscout9956

Use basujindal's optimized script with the --precision full switch; it won't blow up VRAM usage. Close any applications that use hardware acceleration (or disable it in apps such as Steam, Discord, and Chrome).
For the regular stable-diffusion code, try --W 384 --H 384 (lower quality, but it will work).

@pedropachecog
Author

Thank you so much. I realized I posted this on the wrong repo. I apologize. The solution I described above works for basujindal's repo.


@Iustin117

Iustin117 commented Aug 24, 2022

I have the same problem; I tried every .ckpt file. 6 GB of VRAM (NVIDIA 1660 Super).
Now, after reinstalling everything, the generated pictures are black...

Edit: Back to green again

@bscout9956

The issue is still valid, but I'm afraid it's related to PyTorch... I have seen comments online about the 1660 getting NaNs using fp16.

@taoisu

taoisu commented Aug 25, 2022

You can debug this issue by checking the output of each step. It is likely a NaN issue from fp16 (which you can resolve by switching to fp32), or some weights not being initialized properly (that happened to me once).
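One way to do the per-step check taoisu describes is a forward hook that raises on the first non-finite activation. A minimal sketch (the hook and the toy model here are illustrative, not part of txt2img.py):

```python
import torch
import torch.nn as nn

def nan_hook(module, inputs, output):
    # Flag the first layer whose output contains NaN/Inf -- useful for
    # locating where fp16 math breaks down inside the sampling loop.
    if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
        raise RuntimeError(f"non-finite output in {module.__class__.__name__}")

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
for m in model.modules():
    m.register_forward_hook(nan_hook)

out = model(torch.randn(2, 4))  # finite input passes through cleanly
```

With the real model you would register the hook on model.modules() right after loading the checkpoint, then run one sampling step.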

@Adolfovik

I have the same problem. I have a laptop with a 4 GB GTX 1650. I got black images at first and green images now, after a few changes. I have tried --precision full, but I only get an out-of-memory error.

@Iustin117

For me, it finally worked by adding this argument: "--precision full".

@Adolfovik

It finally worked for me too with --precision full. But I have to close the browser (Firefox); it seems I'm very short on memory (8 GB RAM and a 4 GB GPU). I can only generate one image at a time without running out of memory, but that's fine. I still can't believe this impressive AI implementation; it's almost magic!

@migero

migero commented Sep 2, 2022

I'm also on a 1660 Super and getting green images.

@jorgitobg

I'm also on a 1660 Super, and "webui.cmd --precision full" does not work... any other clue?

@baobabKoodaa

I'm also on a 1660 Super and getting green images. It wasn't fixed by --precision full alone (I also had to modify line 281 in txt2img.py: precision_scope = autocast if opt.precision=="full" else nullcontext).
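For readers hunting for that line: the selection of the context manager looks roughly like this (a sketch; the opt.precision value and the exact line number differ between forks):

```python
from contextlib import nullcontext

import torch
from torch import autocast

# precision_scope gates the sampling loop: nullcontext keeps every op
# in fp32, while autocast lets PyTorch run eligible ops in fp16.
precision = "full"  # stand-in for opt.precision
precision_scope = autocast if precision == "autocast" else nullcontext

with precision_scope("cuda"):
    # with "full" precision this is a no-op context, so math stays fp32
    x = torch.ones(2, 2) * 2
```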

@jorgitobg

Hi! After changing txt2img.py, does the desktop interface work? I'm still getting green images...

@ccimage

ccimage commented Sep 6, 2022

I'm also on 1660 super and getting green image.

@CptTony

CptTony commented Sep 7, 2022

What prompt did you use? I'm on the same hardware and, even trying to generate only one image, I still get the memory error. Thanks!

@baobabKoodaa

I reduced memory usage like this:

  • scripts/txt2img.py, function - load_model_from_config, line - 63, change from: model.cuda() to model.cuda().half()
  • removed invisible watermarking
  • reduced n_samples to 1
  • reduced resolution to 256x256
  • removed sfw filter
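The first bullet, as a runnable sketch (nn.Linear stands in for the checkpoint returned by load_model_from_config; the CPU branch just lets the sketch run without a GPU):

```python
import torch
import torch.nn as nn

# Casting the loaded model to float16 roughly halves its VRAM footprint.
# nn.Linear is a stand-in for the model built in load_model_from_config.
model = nn.Linear(8, 8)

if torch.cuda.is_available():
    model = model.cuda().half()  # the change described at line 63
else:
    model = model.half()         # CPU fallback so the sketch runs anywhere
```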

@Zenahr

Zenahr commented Sep 8, 2022

I reduced memory usage like this:

  • scripts/txt2img.py, function - load_model_from_config, line - 63, change from: model.cuda() to model.cuda().half()
  • removed invisible watermarking
  • reduced n_samples to 1
  • reduced resolution to 256x256
  • removed sfw filter

Is what you've listed as bullet points the result of switching the one line? If not, how did you manage to change those attributes @baobabKoodaa?

@baobabKoodaa

Is what you've listed as bullet points the result of switching the one line? If not, how did you manage to change those attributes @baobabKoodaa?

I modified the txt2img script in multiple places. You can just ctrl+f the file for stuff related to watermarking and comment it out. Same thing for SFW filter. Remember to comment out unused imports after commenting out code.

enzymezoo-code pushed a commit to enzymezoo-code/stable-diffusion that referenced this issue Sep 24, 2022
dev branch .bin .pt embeddings textual inversion + seamless x y animation
@nibblesnbits

Also on a 1660 Ti and only getting plain green images.

@ArDiouscuros

Solution for 16xx card owners, which worked for me:

  1. Download cuDNN libraries from the NVIDIA site, version > 8.2.0 (I have tested 8.5.0.96 and 8.3.3.40)
  2. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib
  3. Place the missing dependency zlibwapi.dll in the same folder
    -or-
  4. Update torch to a version that includes the new cuDNN, e.g. torch==1.12.0+cu116

After that you should get black images instead of green, which means you are on the right track.
Add the following lines to txt2img:

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

After that you should get normal images, not green and not black.
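To confirm steps 1-4 actually took effect, you can ask torch which cuDNN build it loaded (a sketch; torch reports version 8.2.0 as the integer 8200, 8.5.0 as 8500, and so on):

```python
import torch

# The two flags from the comment above, plus a check of the cuDNN build
# torch actually loaded (None/unavailable on CPU-only installs).
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

if torch.backends.cudnn.is_available():
    print("cuDNN version:", torch.backends.cudnn.version())
else:
    print("cuDNN not available (CPU-only torch build)")
```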

@leonidwang

Solution for 16xx card owners, which worked for me:

  1. Download cuDNN libraries from the NVIDIA site, version > 8.2.0 (I have tested 8.5.0.96 and 8.3.3.40)
  2. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib
  3. Place the missing dependency zlibwapi.dll in the same folder
    -or-
  4. Update torch to a version that includes the new cuDNN, e.g. torch==1.12.0+cu116

After that you should get black images instead of green, which means you are on the right track. Add the following lines to txt2img:

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

After that you should get normal images, not green and not black.

This works, thank you!

@fabsway23

fabsway23 commented Oct 8, 2022

Solution for 16xx card owners, which worked for me:

  1. Download cuDNN libraries from the NVIDIA site, version > 8.2.0 (I have tested 8.5.0.96 and 8.3.3.40)
  2. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib
  3. Place the missing dependency zlibwapi.dll in the same folder
    -or-
  4. Update torch to a version that includes the new cuDNN, e.g. torch==1.12.0+cu116

After that you should get black images instead of green, which means you are on the right track. Add the following lines to txt2img:

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

After that you should get normal images, not green and not black.

Hey, where do I apply step 4?

@fashiontryon-production

img2img user here: I'm getting green output, and enabling the precision-full option gives an error: "Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same".
Is anyone else facing this issue?

@FVolral

FVolral commented Oct 20, 2022

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

Hey, this is very simple: find the file called txt2img.py and add those two lines at the end of the file, then add

import torch

at the beginning, and you are done. You don't need to use those flags anymore.

@ccccxxx

ccccxxx commented Oct 24, 2022

  1. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib

Does this mean just the .lib files, or the .dll files and the .lib files?

If it includes the .dll files, do I need to overwrite the original files?

@furkan-celik

I'm having the same issue during training; does anyone know a setting that resolves this? The precision setting is not applicable for training.

@FVolral

FVolral commented Jan 22, 2023

You should not; this is an old issue. Update everything, and give more details about your environment if that doesn't help.

@furkan-celik

furkan-celik commented Jan 23, 2023

I have the latest version of the stable-diffusion repo and am following its instructions for setting up the environment:

conda env create -f environment.yaml
conda activate ldm

However, when I run any of the given scripts with python main.py --base ./configs/latent-diffusion/.yaml -t --gpus 0, -n "256_stable_diff_4ch", all I get is an image of a single solid color.

I have checked the weights and grads of the model; none of them is NaN or Inf, and I have been observing this since model initialization. OpenAI's improved-diffusion repo works fine, and score-guided diffusion also works fine, but somehow I can't manage to run stable diffusion.
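The weight/grad check mentioned above can be written as a small scan (a sketch; find_nonfinite is an illustrative helper, not part of the repo):

```python
import torch
import torch.nn as nn

def find_nonfinite(model):
    # Return the names of all parameters (and grads, where present)
    # that contain NaN or Inf.
    bad = []
    for name, p in model.named_parameters():
        if not torch.isfinite(p).all():
            bad.append(name)
        if p.grad is not None and not torch.isfinite(p.grad).all():
            bad.append(name + ".grad")
    return bad

model = nn.Linear(4, 2)  # stand-in for the diffusion model
print(find_nonfinite(model))  # a freshly initialized layer should report []
```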

I am using an Nvidia A40 on a server, but the result is the same on both CPU and GPU runs. Here is my pip list. I also have a dataset of my own, where I am using torchvision.datasets.ImageFolder. I have also tried CelebA-HQ, but the result is the same for both, as I said.

absl-py 1.4.0 aiohttp 3.8.3 aiosignal 1.3.1 albumentations 0.4.3 altair 4.2.0 antlr4-python3-runtime 4.8 async-timeout 4.0.2 attrs 22.2.0 backports.zoneinfo 0.2.1 blinker 1.5 brotlipy 0.7.0 cachetools 5.3.0 certifi 2022.12.7 cffi 1.15.1 charset-normalizer 2.0.4 click 8.1.3 clip 1.0 /home/guests/furkan_celik/stable-diffusion/src/clip coloredlogs 15.0.1 cryptography 38.0.4 decorator 5.1.1 diffusers 0.11.1 einops 0.3.0 entrypoints 0.4 filelock 3.9.0 flatbuffers 23.1.21 flit-core 3.6.0 frozenlist 1.3.3 fsspec 2023.1.0 ftfy 6.1.1 future 0.18.3 gitdb 4.0.10 GitPython 3.1.30 google-auth 2.16.0 google-auth-oauthlib 0.4.6 grpcio 1.51.1 huggingface-hub 0.11.1 humanfriendly 10.0 idna 3.4 imageio 2.9.0 imageio-ffmpeg 0.4.2 imgaug 0.2.6 importlib-metadata 6.0.0 importlib-resources 5.10.2 invisible-watermark 0.1.5 Jinja2 3.1.2 jsonschema 4.17.3 kornia 0.6.0 latent-diffusion 0.0.1 /home/guests/furkan_celik/stable-diffusion Markdown 3.4.1 markdown-it-py 2.1.0 MarkupSafe 2.1.2 mdurl 0.1.2 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 mpmath 1.2.1 multidict 6.0.4 networkx 3.0 numpy 1.24.1 oauthlib 3.2.2 omegaconf 2.1.1 onnx 1.13.0 onnxruntime 1.13.1 opencv-python 4.1.2.30 opencv-python-headless 4.7.0.68 packaging 23.0 pandas 1.5.3 Pillow 9.3.0 pip 20.3.3 pkgutil-resolve-name 1.3.10 protobuf 3.20.3 pudb 2019.2 pyarrow 10.0.1 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pydeck 0.8.0 pyDeprecate 0.3.1 Pygments 2.14.0 Pympler 1.0.1 pyOpenSSL 22.0.0 pyrsistent 0.19.3 PySocks 1.7.1 python-dateutil 2.8.2 pytorch-lightning 1.4.2 pytz 2022.7.1 pytz-deprecation-shim 0.1.0.post0 PyWavelets 1.4.1 PyYAML 6.0 regex 2022.10.31 requests 2.28.1 requests-oauthlib 1.3.1 rich 13.2.0 rsa 4.9 scikit-image 0.19.3 scipy 1.10.0 semver 2.13.0 setuptools 65.6.3 six 1.16.0 smmap 5.0.0 streamlit 1.17.0 sympy 1.11.1 taming-transformers 0.0.1 /home/guests/furkan_celik/stable-diffusion/src/taming-transformers tensorboard 2.11.2 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 test-tube 0.7.5 
tifffile 2023.1.23.1 tokenizers 0.12.1 toml 0.10.2 toolz 0.12.0 torch 1.11.0 torch-fidelity 0.3.0 torchmetrics 0.6.0 torchvision 0.12.0 tornado 6.2 tqdm 4.64.1 transformers 4.19.2 typing-extensions 4.4.0 tzdata 2022.7 tzlocal 4.2 urllib3 1.26.14 urwid 2.1.2 validators 0.20.0 watchdog 2.2.1 wcwidth 0.2.6 Werkzeug 2.2.2 wheel 0.37.1 yarl 1.8.2 zipp 3.11.0
