RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward) #14097
Comments
I got the same error using "--use-cpu all". If you modify get_device_for() in modules/devices.py to just return CPU, it properly forces CN onto the CPU as well, which points to the issue being that CN looks for a task tied to use_cpu and doesn't find it. Digging into CN's code, it calls get_device_for("controlnet"). So I reverted that change and tried something different. I'm guessing you're using the same flag, just "--use-cpu all"? I got it working with the original code by replacing that with "--use-cpu all controlnet". This effectively makes use_cpu an array holding "all" and "controlnet", allowing CN to find itself and get the CPU.
Another fix would be to make "all" behave as the term implies and force CPU on everything (a sketch of this is below); that also covers swinir, which from what I can tell was also missed by "all" previously. That would be a code-side fix, whereas adding controlnet to the argument list is user-side and easier to do.
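For readers skimming the thread, here is a minimal sketch of the kind of change being described. It mirrors the shape of get_device_for() in modules/devices.py, but the helpers here are simplified stand-ins, not a verbatim copy of the repository code:

```python
# Minimal sketch of the per-task device selection, assuming the shape of
# modules/devices.py: "--use-cpu" is parsed into a list such as ["all"] or
# ["all", "controlnet"], and each caller asks for a device by task name.
import torch

cpu = torch.device("cpu")


def get_optimal_device():
    # Simplified stand-in for the webui helper of the same name.
    return torch.device("cuda") if torch.cuda.is_available() else cpu


def get_device_for(task, use_cpu):
    # Proposed code-side fix: treat "all" as a wildcard so tasks that never
    # appear explicitly in the list (ControlNet asks for "controlnet",
    # SwinIR for "swinir") are still forced onto the CPU.
    if task in use_cpu or "all" in use_cpu:
        return cpu
    return get_optimal_device()


# User-side workaround instead of the code change: launch with
# "--use-cpu all controlnet", which makes use_cpu == ["all", "controlnet"].
print(get_device_for("controlnet", ["all"]))                # cpu with the fix
print(get_device_for("controlnet", ["all", "controlnet"]))  # cpu either way
```

With the stock behaviour, "--use-cpu all" leaves use_cpu as just ["all"], so ControlNet's get_device_for("controlnet") call never matches and falls through to the GPU; either the wildcard check above or the user-side "--use-cpu all controlnet" closes that gap.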
fixes issue where "--use-cpu all" properly makes SD run on CPU but leaves ControlNet (and other extensions, I presume) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN AUTOMATIC1111#14097
Also, I applaud the courage with which you just posted that prompt lol; I'd have at least generated an SFW prompt to attach before posting logs.
You can try using this prompt; it works normally.
Thank you for your answer. I'll give it a try.
I forgot to mention you might need to add --no-half-controlnet; it's the ControlNet version of --no-half, which seems necessary to run on CPU.
That is very likely the reason. I tried it and replied. Thank you very much.
The issue also occurs with general NVIDIA GPUs, so I think it may have nothing to do with that.
Same error text, "but found at least two devices, cpu and cuda:0!", or are the two devices found different? If it's the same text and you're not using use-cpu then something must be falling back on cpu for some reason for it to be listed in the error. What's your parameters and what's the top of the stack trace when it throws the error? Maybe you have an extension with fallback behavior? I have an nvidia gpu but only get this error when using |
Same error text and stack traces. It also went through sd-webui-controlnet and had AnimateDiff enabled.
Interesting... The error definitely points to something (either SD itself, ControlNet, or AnimateDiff) being shunted to the CPU. I tried your arguments and don't get the same error either, nor am I seeing that shunting happen. I don't have AnimateDiff, but I don't see why it would be shunting to CPU when your arguments don't cause that for either SD or CN. sd-webui-aki, where did that come from? This repo is stable-diffusion-webui; I'm wondering if there are code differences between the two. I'd be willing to install sd-webui-aki to troubleshoot it and figure out what's going on. Are you comfortable with Python? If you can track down the devices.py file in your setup, you could print() the task when devices are fetched to see who's getting shunted. If you can get me a copy of just that devices.py file, I could send you back a debug version to probe this a bit (it'd print who is requesting a device and what they get); a sketch of that kind of probe is below.
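As a rough illustration of the debug version mentioned above (not the actual file, just the idea), the probe only needs to log the task name and the device it resolves to:

```python
# Hypothetical debug probe: log every device request so whichever component
# is being shunted to the CPU shows up in the console output.
import torch

cpu = torch.device("cpu")


def get_device_for_debug(task, use_cpu):
    # use_cpu is the parsed "--use-cpu" list; task is e.g. "controlnet".
    wants_cpu = task in use_cpu or "all" in use_cpu
    device = cpu if wants_cpu else (
        torch.device("cuda") if torch.cuda.is_available() else cpu
    )
    # Example output: [devices] task='controlnet' use_cpu=['all'] -> cpu
    print(f"[devices] task={task!r} use_cpu={use_cpu} -> {device}")
    return device
```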
I'd be curious whether you get the issue when not using AnimateDiff, i.e. if you just use ControlNet and Stable Diffusion. From what I understand of the logs, it looks like ControlNet does its thing quite happily and the crash happens when AnimateDiff tries to grab hold of it. If AnimateDiff alone is having the issue, then I'll take a look at that.
It's literally that I am not experiencing this either, or I'd debug it on my own. It's some random users on a forum who reported it.
I went through my error-reporting warehouse and it seems the error only happens when both ControlNet and AnimateDiff are involved; I can't find any instance where only ControlNet or only AnimateDiff is present. Also, the number of such events and users is pretty low (somewhere around 30-40 in 14 days, so it is a pretty rare one). FYI, on one of the instances these extensions are installed: extensions
pip packages
There are no errors when using ControlNet alone or AnimateDiff alone; the issue only appears with the combination of the two.
Thanks for the answers. I'll install and test AnimateDiff tomorrow to see if I can repro the issue.
So I found something relevant and perhaps worth testing, but I can't get it working myself (I'm running this raw on Windows, not in a Docker instance). When I just use --xformers it falls back to Doggettx; if I force xformers I get an error that it fails to load because it's missing Triton, and Triton doesn't exist on Windows. I can import xformers in an interactive Python shell, but not xformers.ops (that throws the Triton error), and it then hard-crashes on generation; a quick import check is sketched below. That said, this might be worth testing on your end. From AnimateDiff's docs: "Attention: Adding --xformers / --opt-sdp-attention to your command lines can significantly reduce VRAM and improve speed. However, due to a bug in xformers, you may or may not get a CUDA error. If you get a CUDA error, please either completely switch to --opt-sdp-attention, or preserve --xformers -> go to Settings/AnimateDiff -> choose 'Optimize attention layers with sdp (torch >= 2.0.0 required)'." kkget, were you using xformers too? If you don't use xformers, or use --opt-sdp-attention instead, does it affect the error at all? It might simply be the issue listed in AnimateDiff's repo. That said, this might be better approached from AnimateDiff's repo on top of this one; it seems to be an issue with AnimateDiff itself from what you're saying. I did find a few places where CPU-shunting could be happening, so I'm going to test that, and if it still fails I'll bite the bullet and set this up on Linux so I can test xformers myself with Triton installed.
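For reference, the import behaviour described above can be reproduced from a plain Python shell with a check along these lines; it is only a diagnostic snippet, not part of the webui:

```python
# Quick check: importing xformers itself can succeed while xformers.ops
# (which pulls in the Triton-dependent code paths) fails to import.
try:
    import xformers
    import xformers.ops  # the import reported to raise when Triton is missing
    print("xformers", xformers.__version__, "with ops available")
except Exception as exc:
    print("xformers.ops unavailable:", exc)
```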
Triton is not involved in any way here, so it is fine to leave that alone. xformers can work raw on Windows; just make sure the version matches your torch version. I found that both SDP and xformers hit this in a similar fashion, so it might be irrelevant.
continue-revolution/sd-webui-animatediff#302 (comment) Fair enough. This is a known issue when |
But when I use AnimateDiff I do not get the error.
Thank you very much, I tried this setting and it works now.
(Cross-referenced by a downstream fork's merge commit syncing with upstream 1.7; its changelog includes the relevant fix: "Update devices.py - fixes issue where '--use-cpu all' properly makes SD run on CPU but leaves ControlNet and other extensions pointed at the GPU, causing a device-mismatch crash between SD and CN, AUTOMATIC1111#14097".)
Since the modification is expected to move to mac_specific.py (AUTOMATIC1111#14046 (comment)) * Add FP32 fallback support on torch.nn.functional.interpolate This tries to execute interpolate with FP32 if it failed. Background is that on some environment such as Mx chip MacOS devices, we get error as follows: ``` "torch/nn/functional.py", line 3931, in interpolate return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half' ``` In this case, ```--no-half``` doesn't help to solve. Therefore this commits add the FP32 fallback execution to solve it. Note that the ```upsample_nearest2d``` is called from ```torch.nn.functional.interpolate```. And the fallback for torch.nn.functional.interpolate is necessary at ```modules/sd_vae_approx.py``` 's ```VAEApprox.forward``` ```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py``` 's ```Upsample.forward``` * Fix the Ruff error about unused import * Initial IPEX support * add max-heigh/width to global-popup-inner prevent the pop-up from being too big as to making exiting the pop-up impossible * Close popups with escape key * Fix bug where is_using_v_parameterization_for_sd2 fails because the sd_hijack is only partially undone * Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load * infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page * Disable ipex autocast due to its bad perf * split UI settings page into many * put code that can cause an exception into its own function for AUTOMATIC1111#14120 * Fix fp64 * extras tab batch: actually use original filename preprocessing upscale: do not do an extra upscale step if it's not needed * Remove webui-ipex-user.bat * remove Train/Preprocessing tab and put all its functionality into extras batch images mode * potential fix for AUTOMATIC1111#14172 * alternate implementation for unet forward replacement that does not depend on hijack being applied * Fix `save_samples` being checked early when saving masked composite * Re-add setting lost as part of e294e46 * rework mask and mask_composite logic * Add import_hook hack to work around basicsr incompatibility Fixes AUTOMATIC1111#13985 * Update launch_utils.py to fix wrong dep. checks and reinstalls Fixes failing dependency checks for extensions having a different package name and import name (for example ffmpeg-python / ffmpeg), which currently is causing the unneeded reinstall of packages at runtime. In fact with current code, the same string is used when installing a package and when checking for its presence, as you can see in the following example: > launch_utils.run_pip("install ffmpeg-python", "required package") [ Installing required package: "ffmpeg-python" ... ] [ Installed ] > launch_utils.is_installed("ffmpeg-python") False ... 
which would actually return true with: > launch_utils.is_installed("ffmpeg") True * Lint * make webui not crash when running with --disable-all-extensions option * update changelog * repair old handler for postprocessing API * repair old handler for postprocessing API in a way that doesn't break interface * add hypertile infotext * Merge pull request AUTOMATIC1111#14203 from AUTOMATIC1111/remove-clean_text() remove clean_text() * fix Inpaint Image Appears Behind Some UI Elements anapnoe#206 * fix side panel show/hide button hot zone does not use the entire width anapnoe#204 * Merge pull request AUTOMATIC1111#14300 from AUTOMATIC1111/oft_fixes Fix wrong implementation in network_oft * Merge pull request AUTOMATIC1111#14296 from akx/paste-resolution Allow pasting in WIDTHxHEIGHT strings into the width/height fields * Merge pull request AUTOMATIC1111#14270 from kaalibro/extra-options-elem-id Assign id for "extra_options". Replace numeric field with slider. * Merge pull request AUTOMATIC1111#14276 from AUTOMATIC1111/fix-styles Fix styles * Merge pull request AUTOMATIC1111#14266 from kaalibro/dev Re-add setting lost as part of e294e46 * Merge pull request AUTOMATIC1111#14229 from Nuullll/ipex-embedding [IPEX] Fix embedding and ControlNet * Merge pull request AUTOMATIC1111#14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer add option: Live preview in full page image viewer * Merge pull request AUTOMATIC1111#14216 from wfjsw/state-dict-ref-comparison change state dict comparison to ref compare * Merge pull request AUTOMATIC1111#14237 from ReneKroon/dev AUTOMATIC1111#13354 : solve lora loading issue * Merge pull request AUTOMATIC1111#14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox default False js_live_preview_in_modal_lightbox * update to 1.7 from upstream * Update README.md * Update screenshot.png * Update CITATION.cff * update to latest version * update to latest version --------- Signed-off-by: storyicon <storyicon@foxmail.com> Co-authored-by: Gleb Alekseev <alekseev.gleb@gmail.com> Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com> Co-authored-by: AUTOMATIC1111 <16777216c@gmail.com> Co-authored-by: Won-Kyu Park <wkpark@gmail.com> Co-authored-by: Khachatur Avanesian <jailbreakvideo@gmail.com> Co-authored-by: v0xie <28695009+v0xie@users.noreply.github.com> Co-authored-by: avantcontra <dadadaluo@gmail.com> Co-authored-by: David Benson <dben@users.noreply.github.com> Co-authored-by: Meerkov <GoMeerkov@gmail.com> Co-authored-by: Emily Zeng <zhixuan.zeng@gmail.com> Co-authored-by: w-e-w <40751091+w-e-w@users.noreply.github.com> Co-authored-by: gibiee <37574274+gibiee@users.noreply.github.com> Co-authored-by: Ritesh Gangnani <riteshgangnani10> Co-authored-by: GerryDE <gerritfresen4@gmail.com> Co-authored-by: fuchen.ljl <yjqqqqdx_01@163.com> Co-authored-by: Alessandro de Oliveira Faria (A.K.A. 
CABELO) <cabelo@opensuse.org> Co-authored-by: wfjsw <wfjsw@users.noreply.github.com> Co-authored-by: aria1th <35677394+aria1th@users.noreply.github.com> Co-authored-by: Tom Haelbich <65122811+h43lb1t0@users.noreply.github.com> Co-authored-by: kaalibro <konstantin.adamovich@gmail.com> Co-authored-by: anapnoe <124302297+anapnoe@users.noreply.github.com> Co-authored-by: AngelBottomless <aria1th@naver.com> Co-authored-by: Kieran Hunt <kph@hotmail.ca> Co-authored-by: Lucas Daniel Velazquez M <19197331+Luxter77@users.noreply.github.com> Co-authored-by: Your Name <you@example.com> Co-authored-by: storyicon <storyicon@foxmail.com> Co-authored-by: Tom Haelbich <haelbito@outlook.com> Co-authored-by: hidenorly <twitte.harold@gmail.com> Co-authored-by: Aarni Koskela <akx@iki.fi> Co-authored-by: Charlie Joynt <cjj1977@users.noreply.github.com> Co-authored-by: obsol <33932119+read-0nly@users.noreply.github.com> Co-authored-by: Nuullll <vfirst218@gmail.com> Co-authored-by: MrCheeze <fishycheeze@yahoo.ca> Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com> Co-authored-by: illtellyoulater <3078931+illtellyoulater@users.noreply.github.com> --------- Signed-off-by: storyicon <storyicon@foxmail.com> Co-authored-by: Gleb Alekseev <alekseev.gleb@gmail.com> Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com> Co-authored-by: AUTOMATIC1111 <16777216c@gmail.com> Co-authored-by: Won-Kyu Park <wkpark@gmail.com> Co-authored-by: Khachatur Avanesian <jailbreakvideo@gmail.com> Co-authored-by: v0xie <28695009+v0xie@users.noreply.github.com> Co-authored-by: avantcontra <dadadaluo@gmail.com> Co-authored-by: David Benson <dben@users.noreply.github.com> Co-authored-by: Meerkov <GoMeerkov@gmail.com> Co-authored-by: Emily Zeng <zhixuan.zeng@gmail.com> Co-authored-by: w-e-w <40751091+w-e-w@users.noreply.github.com> Co-authored-by: gibiee <37574274+gibiee@users.noreply.github.com> Co-authored-by: GerryDE <gerritfresen4@gmail.com> Co-authored-by: fuchen.ljl <yjqqqqdx_01@163.com> Co-authored-by: Alessandro de Oliveira Faria (A.K.A. CABELO) <cabelo@opensuse.org> Co-authored-by: wfjsw <wfjsw@users.noreply.github.com> Co-authored-by: aria1th <35677394+aria1th@users.noreply.github.com> Co-authored-by: Tom Haelbich <65122811+h43lb1t0@users.noreply.github.com> Co-authored-by: kaalibro <konstantin.adamovich@gmail.com> Co-authored-by: anapnoe <124302297+anapnoe@users.noreply.github.com> Co-authored-by: AngelBottomless <aria1th@naver.com> Co-authored-by: Kieran Hunt <kph@hotmail.ca> Co-authored-by: Lucas Daniel Velazquez M <19197331+Luxter77@users.noreply.github.com> Co-authored-by: Your Name <you@example.com> Co-authored-by: storyicon <storyicon@foxmail.com> Co-authored-by: Tom Haelbich <haelbito@outlook.com> Co-authored-by: hidenorly <twitte.harold@gmail.com> Co-authored-by: Aarni Koskela <akx@iki.fi> Co-authored-by: Charlie Joynt <cjj1977@users.noreply.github.com> Co-authored-by: obsol <33932119+read-0nly@users.noreply.github.com> Co-authored-by: Nuullll <vfirst218@gmail.com> Co-authored-by: MrCheeze <fishycheeze@yahoo.ca> Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com> Co-authored-by: illtellyoulater <3078931+illtellyoulater@users.noreply.github.com>
* fix IndexError: list index out of range error interrupted while postprocess * added option to play notification sound or not * Convert (emphasis) to (emphasis:1.1) per @SirVeggie's suggestion * Make attention conversion optional Fix square brackets multiplier * put notification.mp3 option at the end of the page * more general case of adding an infotext when no images have been generated * use shallow copy for AUTOMATIC1111#13535 * remove duplicated code * support webui.settings.bat * Start / Restart generation by Ctrl (Alt) + Enter Add ability to interrupt current generation and start generation again by Ctrl (Alt) + Enter * add an option to not print stack traces on ctrl+c. * repair unload sd checkpoint button * respect keyedit_precision_attention setting when converting from old (((attention))) syntax * Update script.js Exclude lambda * Update script.js LF instead CRLF * Update script.js * Add files via upload LF * wip incorrect OFT implementation * inference working but SLOW * faster by using cached R in forward * faster by calculating R in updown and using cached R in forward * refactor: fix constraint, re-use get_weight * style: formatting * style: fix ambiguous variable name * rework some of changes for emphasis editing keys, force conversion of old-style emphasis * fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1) * fix bug when using --gfpgan-models-path * fix Blank line contains whitespace * refactor: use forward hook instead of custom forward * fix: return orig weights during updown, merge weights before forward * fix: support multiplier, no forward pass hook * style: cleanup oft * fix: use merge_weight to cache value * refactor: remove used OFT functions * fix: multiplier applied twice in finalize_updown * style: conform style * Update prompts_from_file script to allow concatenating entries with the general prompt. * linting issue * call state.jobnext() before postproces*() * Fix AUTOMATIC1111#13796 Fix comment error that makes understanding scheduling more confusing. 
* test implementation based on kohaku diag-oft implementation * detect diag_oft type * no idea what i'm doing, trying to support both type of OFT, kblueleaf diag_oft has MultiheadAttn which kohya's doesn't?, attempt create new module based off network_lora.py, errors about tensor dim mismatch * added accordion settings options * Fix parenthesis auto selection Fixes AUTOMATIC1111#13813 * Update requirements_versions.txt * skip multihead attn for now * refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb * refactor: use same updown for both kohya OFT and LyCORIS diag-oft * refactor: remove unused function * correct a typo modify "defaul" to "default" * add a visible checkbox to input accordion * eslint * properly apply sort order for extra network cards when selected from dropdown allow selection of default sort order in settings remove 'Default' sort order, replace with 'Name' * Add SSD-1B as a supported model * Added memory clearance after deletion * Use devices.torch_gc() instead of empty_cache() * added compact prompt option * compact prompt option disabled by default * linter * more changes for AUTOMATIC1111#13865: fix formatting, rename the function, add comment and add a readme entry * fix img2img_tabs error * fix exception related to the pix2pix * Add option to set notification sound volume * fix pix2pix producing bad results * moved nested with to single line to remove extra tabs * removed changes that weren't merged properly * multiline with statement for readibility * Update README.md Modify the stablediffusion dependency address * Update README.md Modify the stablediffusion dependency address * - opensuse compatibility * Enable prompt hotkeys in style editor * Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+ * fix added accordion settings options * ExitStack as alternative to suppress * implementing script metadata and DAG sorting mechanism * populate loaded_extensions from extension list instead * reverse the extension load order so builtin extensions load earlier natively * add hyperTile https://github.com/tfernd/HyperTile * remove the assumption of same name * allow comma and whitespace as separator * fix * bug fix * dir buttons start with / so only the correct dir will be shown and not dirs with a substrings as name from the dir * Lint * Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed * Adds 'Path' sorting for Extra network cards * hotfix: call shared.state.end() after postprocessing done * Implement Hypertile Co-Authored-By: Kieran Hunt <kph@hotmail.ca> * copy LDM VAE key from XL * fix: ignore calc_scale() for COFT which has very small alpha * feat: LyCORIS/kohya OFT network support * convert/add hypertile options * fix ruff - add newline * Adds tqdm handler to logging_config.py for progress bar integration * Take into account tqdm not being installed before first boot for logging * actually adds handler to logging_config.py * Fix critical issue - unet apply * Fix inverted option issue I'm pretty sure I was sleepy while implementing this * set empty value for SD XL 3rd layer * fix double gc and decoding with unet context * feat: fix randn found element of type float at pos 2 Signed-off-by: storyicon <storyicon@foxmail.com> * use metadata.ini for meta filename * Option to show batch img2img results in UI shared.opts.img2img_batch_show_results_limit limit the number of images return to the UI for batch img2img default limit 32 0 no images are shown -1 unlimited, all images are shown * save sysinfo as 
.json GitHub now allows uploading of .json files in issues * rework extensions metadata: use custom sorter that doesn't mess the order as much and ignores cyclic errors, use classes with named fields instead of dictionaries, eliminate some duplicated code * added option for default behavior of dir buttons * Add FP32 fallback support on sd_vae_approx This tries to execute interpolate with FP32 if it failed. Background is that on some environment such as Mx chip MacOS devices, we get error as follows: ``` "torch/nn/functional.py", line 3931, in interpolate return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half' ``` In this case, ```--no-half``` doesn't help to solve. Therefore this commits add the FP32 fallback execution to solve it. Note that the submodule may require additional modifications. The following is the example modification on the other submodule. ```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py class Upsample(nn.Module): ..snip.. def forward(self, x): assert x.shape[1] == self.channels if self.dims == 3: x = F.interpolate( x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest" ) else: try: x = F.interpolate(x, scale_factor=2, mode="nearest") except: x = F.interpolate(x.to(th.float32), scale_factor=2, mode="nearest").to(x.dtype) if self.use_conv: x = self.conv(x) return x ..snip.. ``` You can see the FP32 fallback execution as same as sd_vae_approx.py. * fix [Bug]: (Dev Branch) Placing "Dimensions" first in "ui_reorder_list" prevents start AUTOMATIC1111#14047 * Update ruff to 0.1.6 * Simplify restart_sampler (suggested by ruff) * use extension name for determining an extension is installed in the index * Move exception_records related methods to errors.py * remove traceback in sysinfo * move file * rework hypertile into a built-in extension * do not save HTML explanations from options page to config * fix linter errors * compact prompt layout: preserve scroll when switching between lora tabs * json.dump(ensure_ascii=False) improve json readability * add categories to settings * also consider extension url * add Block component creation callback * catch uncaught exception with ui creation scripts prevent total webui crash * Allow use of mutiple styles csv files * bugfix for warning message (#6) * bugfix for warning message (#6) * bugfix for warning message * bugfix error message * Allow use of mutiple styles csv files * AUTOMATIC1111#14122 Fix edge case where style text has multiple {prompt} placeholders * AUTOMATIC1111#14005 * Support XYZ scripts / split hires path from unet * cache divisors / fix ruff * fix ruff in hypertile_xyz.py * fix ruff - set comprehension * hypertile_xyz: we don't need isnumeric check for AxisOption * Update devices.py fixes issue where "--use-cpu" all properly makes SD run on CPU but leaves ControlNet (and other extensions, I presume) pointed at GPU, causing a crash in ControlNet caused by a mismatch between devices between SD and CN AUTOMATIC1111#14097 * fix Auto focal point crop for opencv >= 4.8.x autocrop.download_and_cache_models in opencv >= 4.8 the face detection model was updated download the base on opencv version returns the model path or raise exception * reformat file with uniform indentation * Revert "Add FP32 fallback support on sd_vae_approx" This reverts commit 58c1954. 
Since the modification is expected to move to mac_specific.py (AUTOMATIC1111#14046 (comment)) * Add FP32 fallback support on torch.nn.functional.interpolate This tries to execute interpolate with FP32 if it failed. Background is that on some environment such as Mx chip MacOS devices, we get error as follows: ``` "torch/nn/functional.py", line 3931, in interpolate return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half' ``` In this case, ```--no-half``` doesn't help to solve. Therefore this commits add the FP32 fallback execution to solve it. Note that the ```upsample_nearest2d``` is called from ```torch.nn.functional.interpolate```. And the fallback for torch.nn.functional.interpolate is necessary at ```modules/sd_vae_approx.py``` 's ```VAEApprox.forward``` ```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py``` 's ```Upsample.forward``` * Fix the Ruff error about unused import * Initial IPEX support * add max-heigh/width to global-popup-inner prevent the pop-up from being too big as to making exiting the pop-up impossible * Close popups with escape key * Fix bug where is_using_v_parameterization_for_sd2 fails because the sd_hijack is only partially undone * Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load * infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page * Disable ipex autocast due to its bad perf * split UI settings page into many * put code that can cause an exception into its own function for AUTOMATIC1111#14120 * Fix fp64 * extras tab batch: actually use original filename preprocessing upscale: do not do an extra upscale step if it's not needed * Remove webui-ipex-user.bat * remove Train/Preprocessing tab and put all its functionality into extras batch images mode * potential fix for AUTOMATIC1111#14172 * alternate implementation for unet forward replacement that does not depend on hijack being applied * Fix `save_samples` being checked early when saving masked composite * Re-add setting lost as part of e294e46 * rework mask and mask_composite logic * Add import_hook hack to work around basicsr incompatibility Fixes AUTOMATIC1111#13985 * Update launch_utils.py to fix wrong dep. checks and reinstalls Fixes failing dependency checks for extensions having a different package name and import name (for example ffmpeg-python / ffmpeg), which currently is causing the unneeded reinstall of packages at runtime. In fact with current code, the same string is used when installing a package and when checking for its presence, as you can see in the following example: > launch_utils.run_pip("install ffmpeg-python", "required package") [ Installing required package: "ffmpeg-python" ... ] [ Installed ] > launch_utils.is_installed("ffmpeg-python") False ... 
which would actually return true with: > launch_utils.is_installed("ffmpeg") True * Lint * make webui not crash when running with --disable-all-extensions option * update changelog * repair old handler for postprocessing API * repair old handler for postprocessing API in a way that doesn't break interface * add hypertile infotext * Merge pull request AUTOMATIC1111#14203 from AUTOMATIC1111/remove-clean_text() remove clean_text() * fix Inpaint Image Appears Behind Some UI Elements anapnoe#206 * fix side panel show/hide button hot zone does not use the entire width anapnoe#204 * Merge pull request AUTOMATIC1111#14300 from AUTOMATIC1111/oft_fixes Fix wrong implementation in network_oft * Merge pull request AUTOMATIC1111#14296 from akx/paste-resolution Allow pasting in WIDTHxHEIGHT strings into the width/height fields * Merge pull request AUTOMATIC1111#14270 from kaalibro/extra-options-elem-id Assign id for "extra_options". Replace numeric field with slider. * Merge pull request AUTOMATIC1111#14276 from AUTOMATIC1111/fix-styles Fix styles * Merge pull request AUTOMATIC1111#14266 from kaalibro/dev Re-add setting lost as part of e294e46 * Merge pull request AUTOMATIC1111#14229 from Nuullll/ipex-embedding [IPEX] Fix embedding and ControlNet * Merge pull request AUTOMATIC1111#14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer add option: Live preview in full page image viewer * Merge pull request AUTOMATIC1111#14216 from wfjsw/state-dict-ref-comparison change state dict comparison to ref compare * Merge pull request AUTOMATIC1111#14237 from ReneKroon/dev AUTOMATIC1111#13354 : solve lora loading issue * Merge pull request AUTOMATIC1111#14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox default False js_live_preview_in_modal_lightbox * update to 1.7 from upstream * Update README.md * Update screenshot.png * Update CITATION.cff * update to latest version * update to latest version --------- Signed-off-by: storyicon <storyicon@foxmail.com> Co-authored-by: Won-Kyu Park <wkpark@gmail.com> Co-authored-by: Gleb Alekseev <alekseev.gleb@gmail.com> Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com> Co-authored-by: AUTOMATIC1111 <16777216c@gmail.com> Co-authored-by: Khachatur Avanesian <jailbreakvideo@gmail.com> Co-authored-by: v0xie <28695009+v0xie@users.noreply.github.com> Co-authored-by: avantcontra <dadadaluo@gmail.com> Co-authored-by: David Benson <dben@users.noreply.github.com> Co-authored-by: Meerkov <GoMeerkov@gmail.com> Co-authored-by: Emily Zeng <zhixuan.zeng@gmail.com> Co-authored-by: w-e-w <40751091+w-e-w@users.noreply.github.com> Co-authored-by: gibiee <37574274+gibiee@users.noreply.github.com> Co-authored-by: Ritesh Gangnani <riteshgangnani10> Co-authored-by: GerryDE <gerritfresen4@gmail.com> Co-authored-by: fuchen.ljl <yjqqqqdx_01@163.com> Co-authored-by: Alessandro de Oliveira Faria (A.K.A. 
CABELO) <cabelo@opensuse.org> Co-authored-by: wfjsw <wfjsw@users.noreply.github.com> Co-authored-by: aria1th <35677394+aria1th@users.noreply.github.com> Co-authored-by: Tom Haelbich <65122811+h43lb1t0@users.noreply.github.com> Co-authored-by: kaalibro <konstantin.adamovich@gmail.com> Co-authored-by: AngelBottomless <aria1th@naver.com> Co-authored-by: Kieran Hunt <kph@hotmail.ca> Co-authored-by: Lucas Daniel Velazquez M <19197331+Luxter77@users.noreply.github.com> Co-authored-by: Your Name <you@example.com> Co-authored-by: storyicon <storyicon@foxmail.com> Co-authored-by: Tom Haelbich <haelbito@outlook.com> Co-authored-by: hidenorly <twitte.harold@gmail.com> Co-authored-by: Aarni Koskela <akx@iki.fi> Co-authored-by: Charlie Joynt <cjj1977@users.noreply.github.com> Co-authored-by: obsol <33932119+read-0nly@users.noreply.github.com> Co-authored-by: Nuullll <vfirst218@gmail.com> Co-authored-by: MrCheeze <fishycheeze@yahoo.ca> Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com> Co-authored-by: illtellyoulater <3078931+illtellyoulater@users.noreply.github.com> Co-authored-by: anapnoe <124302297+anapnoe@users.noreply.github.com>
Is there an existing issue for this?
What happened?
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
Hint: the Python runtime threw an exception. Please check the troubleshooting page.
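For context on this error class, here is a minimal, illustrative PyTorch sketch (none of it is webui or extension code) that reproduces the same failure by running a convolution whose weights live on the CPU against a CUDA input, then resolves it by moving everything onto one device:

```python
# Minimal repro sketch (illustrative, not webui code).
# A module left on the CPU receives a CUDA tensor -> device-mismatch RuntimeError.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)      # parameters are created on the CPU
x = torch.randn(1, 3, 64, 64)

if torch.cuda.is_available():
    x = x.to("cuda:0")                     # input now on cuda:0, weights still on cpu
    try:
        conv(x)
    except RuntimeError as e:
        # Wording varies by torch version, but this is the same device-mismatch
        # failure reported above ("Expected all tensors to be on the same device ...").
        print(e)

    conv = conv.to("cuda:0")               # fix: module and input on the same device
    print(conv(x).device)                  # cuda:0
```

Whatever component triggers it, the remedy looks like the last two lines: every module in the forward pass has to resolve to the same device as its input.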
Steps to reproduce the problem
Normal use: start a generation with AnimateDiff and ControlNet enabled.
What should have happened?
Generation with AnimateDiff and ControlNet should complete normally instead of failing with the device-mismatch error.
Sysinfo
python: 3.10.11 • torch: 2.0.0+cu118 • xformers: 0.0.17 • gradio: 3.41.2
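If it helps with triage, a small illustrative snippet (not webui code; it assumes the packages from the line above are installed) that prints the same version info plus the device PyTorch will actually resolve to:

```python
# Illustrative environment check (not webui code): version info plus default device.
import sys
import torch
import gradio

print("python :", sys.version.split()[0])
print("torch  :", torch.__version__, "| cuda available:", torch.cuda.is_available())
print("gradio :", gradio.__version__)

try:
    import xformers
    print("xformers:", xformers.__version__)
except ImportError:
    print("xformers: not installed")

print("resolved device:", "cuda:0" if torch.cuda.is_available() else "cpu")
```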
What browsers do you use to access the UI?
No response
Console logs
Additional information
No response