From 04adec6f0d45fe817e04cd895bd371dc0c5ced4f Mon Sep 17 00:00:00 2001
From: martianunlimited
Date: Thu, 25 Jan 2024 21:43:05 +1300
Subject: [PATCH] Anapnoe current (#13)

* pull (#11)
* added option to play notification sound or not
* Convert (emphasis) to (emphasis:1.1) per @SirVeggie's suggestion
* Make attention conversion optional; fix square brackets multiplier
* put notification.mp3 option at the end of the page
* more general case of adding an infotext when no images have been generated
* use shallow copy for #13535
* remove duplicated code
* support webui.settings.bat
* Start / Restart generation by Ctrl (Alt) + Enter: add the ability to interrupt the current generation and start it again with Ctrl (Alt) + Enter
* add an option to not print stack traces on ctrl+c
* repair unload sd checkpoint button
* respect keyedit_precision_attention setting when converting from old (((attention))) syntax
* Update script.js: exclude lambda
* Update script.js: LF instead of CRLF
* Update script.js
* Add files via upload (LF)
* wip incorrect OFT implementation
* inference working but SLOW
* faster by using cached R in forward
* faster by calculating R in updown and using cached R in forward
* refactor: fix constraint, re-use get_weight
* style: formatting
* style: fix ambiguous variable name
* rework some of the changes for emphasis editing keys; force conversion of old-style emphasis
* fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1)
* fix bug when using --gfpgan-models-path
* fix "Blank line contains whitespace"
* refactor: use forward hook instead of custom forward
* fix: return orig weights during updown, merge weights before forward
* fix: support multiplier, no forward pass hook
* style: cleanup oft
* fix: use merge_weight to cache value
* refactor: remove unused OFT functions
* fix: multiplier applied twice in finalize_updown
* style: conform style
* Update prompts_from_file script to allow concatenating entries with the general prompt.
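The emphasis conversion named above ("Convert (emphasis) to (emphasis:1.1)") can be illustrated with a minimal sketch. This is a hypothetical Python regex, not the repo's actual implementation (which lives in javascript/edit-attention.js and also handles escaping, square brackets, and the keyedit_precision_attention setting); it only shows the core idea of making the implied 1.1 weight explicit:

```python
import re

def make_weight_explicit(prompt: str) -> str:
    # Rewrite bare "(text)" groups as "(text:1.1)" so the implied 1.1
    # attention weight becomes explicit; groups that already carry a
    # ":weight" suffix contain ':' and are left untouched.
    return re.sub(r"\(([^():]+)\)", r"(\1:1.1)", prompt)
```

A group such as `(dog:1.2)` already contains a colon, so the pattern skips it and only bare groups are rewritten.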
* linting issue
* call state.jobnext() before postproces*()
* Fix #13796: fix comment error that makes understanding scheduling more confusing
* test implementation based on kohaku diag-oft implementation
* detect diag_oft type
* no idea what i'm doing; trying to support both types of OFT: kblueleaf diag_oft has MultiheadAttn, which kohya's doesn't? attempt to create a new module based off network_lora.py; errors about tensor dim mismatch
* added accordion settings options
* Fix parenthesis auto selection (fixes #13813)
* Update requirements_versions.txt
* skip multihead attn for now
* refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb
* refactor: use same updown for both kohya OFT and LyCORIS diag-oft
* refactor: remove unused function
* correct a typo: change "defaul" to "default"
* add a visible checkbox to input accordion
* eslint
* properly apply sort order for extra network cards when selected from dropdown; allow selection of default sort order in settings; remove 'Default' sort order, replace it with 'Name'
* Add SSD-1B as a supported model
* Added memory clearance after deletion
* Use devices.torch_gc() instead of empty_cache()
* added compact prompt option
* compact prompt option disabled by default
* linter
* more changes for #13865: fix formatting, rename the function, add a comment and a readme entry
* fix img2img_tabs error
* fix exception related to the pix2pix
* Add option to set notification sound volume
* fix pix2pix producing bad results
* moved nested with to single line to remove extra tabs
* removed changes that weren't merged properly
* multiline with statement for readability
* Update README.md: modify the stablediffusion dependency address
* Update README.md: modify the stablediffusion dependency address
* opensuse compatibility
* Enable prompt hotkeys in style editor
* Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+
* fix added accordion settings options
* ExitStack as alternative to suppress
* implementing script metadata and DAG sorting mechanism
* populate loaded_extensions from extension list instead
* reverse the extension load order so builtin extensions load earlier natively
* add hyperTile (https://github.com/tfernd/HyperTile)
* remove the assumption of same name
* allow comma and whitespace as separator
* fix
* bug fix
* dir buttons start with / so only the correct dir will be shown, and not dirs with a substring of the dir as their name
* Lint
* Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed
* Adds 'Path' sorting for Extra network cards
* fix gradio video component and canvas fit for inpaint
* hotfix: call shared.state.end() after postprocessing is done
* Implement Hypertile (Co-authored-by: Kieran Hunt)
* copy LDM VAE key from XL
* fix: ignore calc_scale() for COFT, which has very small alpha
* feat: LyCORIS/kohya OFT network support
* convert/add hypertile options
* fix ruff - add newline
* Adds tqdm handler to logging_config.py for progress bar integration
* Take into account tqdm not being installed before first boot for logging
* actually adds handler to logging_config.py
* Fix critical issue - unet apply
* Fix inverted option issue (I'm pretty sure I was sleepy while implementing this)
* set empty value for SD XL 3rd layer
* fix double gc and decoding with unet context
* feat: fix "randn found element of type float at pos 2" (Signed-off-by: storyicon)
* use metadata.ini for meta filename
* Option to show batch img2img results in UI: shared.opts.img2img_batch_show_results_limit limits the number of images returned to the UI for batch img2img (default limit 32; 0 = no images are shown; -1 = unlimited, all images are shown)
* save sysinfo as .json (GitHub now allows uploading of .json files in issues)
* rework extensions metadata: use a custom sorter that doesn't mess up the order as much and ignores cyclic errors; use classes with named fields instead of dictionaries; eliminate some duplicated code
* added option for default behavior of dir buttons
* Add
FP32 fallback support on sd_vae_approx

This tries to execute interpolate with FP32 if it fails. The background is that on some environments, such as M-series ("Mx") chip macOS devices, we get an error as follows:

```
"torch/nn/functional.py", line 3931, in interpolate
    return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, `--no-half` does not help, so this commit adds an FP32 fallback execution to solve it. Note that the submodule may require additional modifications; the following is an example modification in the other submodule, `repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py`:

```python
class Upsample(nn.Module):
    # ..snip..
    def forward(self, x):
        assert x.shape[1] == self.channels
        if self.dims == 3:
            x = F.interpolate(
                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
            )
        else:
            try:
                x = F.interpolate(x, scale_factor=2, mode="nearest")
            except Exception:
                x = F.interpolate(x.to(th.float32), scale_factor=2, mode="nearest").to(x.dtype)
        if self.use_conv:
            x = self.conv(x)
        return x
    # ..snip..
```

This is the same FP32 fallback execution as in sd_vae_approx.py.
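The fallback pattern described in this commit can be sketched as a small self-contained wrapper. This is a minimal illustration assuming PyTorch; the helper name `interpolate_with_fp32_fallback` is hypothetical and not the repo's actual function:

```python
import torch
import torch.nn.functional as F

def interpolate_with_fp32_fallback(x: torch.Tensor, **kwargs) -> torch.Tensor:
    # Try the normal (possibly FP16) path first; on backends that lack a
    # half-precision kernel (e.g. upsample_nearest2d on some devices),
    # retry in FP32 and cast the result back to the input dtype.
    try:
        return F.interpolate(x, **kwargs)
    except RuntimeError:
        return F.interpolate(x.to(torch.float32), **kwargs).to(x.dtype)
```

Because the cast happens only on the exception path, hardware that supports half-precision interpolation pays no extra cost.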
* fix [Bug]: (Dev Branch) Placing "Dimensions" first in "ui_reorder_list" prevents start #14047
* Update ruff to 0.1.6
* Simplify restart_sampler (suggested by ruff)
* use extension name for determining whether an extension is installed in the index
* Move exception_records related methods to errors.py
* remove traceback in sysinfo
* move file
* rework hypertile into a built-in extension
* do not save HTML explanations from options page to config
* fix linter errors
* compact prompt layout: preserve scroll when switching between lora tabs
* json.dump(ensure_ascii=False) to improve json readability
* add categories to settings
* also consider extension url
* add Block component creation callback
* catch uncaught exceptions in ui creation scripts to prevent a total webui crash
* Allow use of multiple styles csv files
* bugfix for warning message (#6)
* bugfix for warning message (#6)
* bugfix for warning message
* bugfix error message
* Allow use of multiple styles csv files
* Fix edge case where style text has multiple {prompt} placeholders (https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14122)
* https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14005
* Support XYZ scripts / split hires path from unet
* cache divisors / fix ruff
* fix ruff in hypertile_xyz.py
* fix ruff - set comprehension
* hypertile_xyz: we don't need isnumeric check for AxisOption
* Update devices.py: fixes an issue where "--use-cpu all" properly makes SD run on CPU but leaves ControlNet (and other extensions, I presume) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN (https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14097)
* fix Auto focal point crop for opencv >= 4.8.x: in opencv >= 4.8 the face detection model was updated, so autocrop.download_and_cache_models now downloads the model based on the opencv version and returns the model path or raises an exception
* reformat file with uniform indentation
* Revert "Add FP32 fallback support on sd_vae_approx"

This reverts commit 58c19545c83fa6925c9ce2216ee64964eb5129ce, since the modification is expected to move to mac_specific.py (https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046#issuecomment-1826731532)

* Add FP32 fallback support on torch.nn.functional.interpolate

This tries to execute interpolate with FP32 if it fails. The background is that on some environments, such as M-series ("Mx") chip macOS devices, we get an error as follows:

```
"torch/nn/functional.py", line 3931, in interpolate
    return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```

In this case, `--no-half` does not help, so this commit adds an FP32 fallback execution to solve it. Note that `upsample_nearest2d` is called from `torch.nn.functional.interpolate`, and the fallback for torch.nn.functional.interpolate is necessary in `modules/sd_vae_approx.py`'s `VAEApprox.forward` and in `repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py`'s `Upsample.forward`

* Fix the Ruff error about unused import
* Initial IPEX support
* add max-height/width to global-popup-inner to prevent the pop-up from being so big that exiting it becomes impossible
* Close popups with escape key
* Fix bug where is_using_v_parameterization_for_sd2 fails because the sd_hijack is only partially undone
* Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load
* infotext updates: add an option to disregard certain infotext fields, add an option to not include VAE in infotext, add an explanation to the infotext settings page, move some options to the infotext settings page
* Disable ipex autocast due to its bad perf
* split UI settings page into many
* put code that can cause an exception into its own function for #14120
* Fix fp64
* extras tab batch: actually use original
filename; preprocessing upscale: do not do an extra upscale step if it's not needed
* Remove webui-ipex-user.bat
* remove Train/Preprocessing tab and put all its functionality into extras batch images mode
* potential fix for #14172
* alternate implementation for unet forward replacement that does not depend on hijack being applied
* Fix `save_samples` being checked early when saving masked composite
* Re-add setting lost as part of e294e46
* rework mask and mask_composite logic
* Add import_hook hack to work around basicsr incompatibility (fixes #13985)
* Update launch_utils.py to fix wrong dep. checks and reinstalls

Fixes failing dependency checks for extensions that have a different package name and import name (for example ffmpeg-python / ffmpeg), which currently causes unneeded reinstalls of packages at runtime. In fact, with the current code, the same string is used when installing a package and when checking for its presence, as you can see in the following example:

```
> launch_utils.run_pip("install ffmpeg-python", "required package")
[ Installing required package: "ffmpeg-python" ... ]
[ Installed ]
> launch_utils.is_installed("ffmpeg-python")
False
...
```

which would actually return true with:

```
> launch_utils.is_installed("ffmpeg")
True
```

* Lint
* make webui not crash when running with --disable-all-extensions option
* update changelog
* repair old handler for postprocessing API
* repair old handler for postprocessing API in a way that doesn't break the interface
* add hypertile infotext
* Merge pull request #14203 from AUTOMATIC1111/remove-clean_text(): remove clean_text()
* fix Inpaint Image Appears Behind Some UI Elements #206
* fix side panel show/hide button hot zone does not use the entire width #204
* Merge pull request #14300 from AUTOMATIC1111/oft_fixes: Fix wrong implementation in network_oft
* Merge pull request #14296 from akx/paste-resolution: Allow pasting WIDTHxHEIGHT strings into the width/height fields
* Merge pull request #14270 from kaalibro/extra-options-elem-id: Assign id for "extra_options". Replace numeric field with slider.
* Merge pull request #14276 from AUTOMATIC1111/fix-styles: Fix styles
* Merge pull request #14266 from kaalibro/dev: Re-add setting lost as part of e294e46
* Merge pull request #14229 from Nuullll/ipex-embedding: [IPEX] Fix embedding and ControlNet
* Merge pull request #14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer: add option: Live preview in full page image viewer
* Merge pull request #14216 from wfjsw/state-dict-ref-comparison: change state dict comparison to ref compare
* Merge pull request #14237 from ReneKroon/dev: #13354: solve lora loading issue
* Merge pull request #14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox: default False js_live_preview_in_modal_lightbox
* update to 1.7 from upstream
* Update README.md
* Update screenshot.png
* Update CITATION.cff
* update to latest version
* update to latest version

---------

Signed-off-by: storyicon
Co-authored-by: Gleb Alekseev
Co-authored-by: missionfloyd
Co-authored-by: AUTOMATIC1111 <16777216c@gmail.com>
Co-authored-by: Won-Kyu Park
Co-authored-by: Khachatur Avanesian
Co-authored-by: v0xie <28695009+v0xie@users.noreply.github.com>
Co-authored-by: avantcontra
Co-authored-by: David Benson
Co-authored-by: Meerkov
Co-authored-by: Emily Zeng
Co-authored-by: w-e-w <40751091+w-e-w@users.noreply.github.com>
Co-authored-by: gibiee <37574274+gibiee@users.noreply.github.com>
Co-authored-by: Ritesh Gangnani
Co-authored-by: GerryDE
Co-authored-by: fuchen.ljl
Co-authored-by: Alessandro de Oliveira Faria (A.K.A. CABELO)
Co-authored-by: wfjsw
Co-authored-by: aria1th <35677394+aria1th@users.noreply.github.com>
Co-authored-by: Tom Haelbich <65122811+h43lb1t0@users.noreply.github.com>
Co-authored-by: kaalibro
Co-authored-by: anapnoe <124302297+anapnoe@users.noreply.github.com>
Co-authored-by: AngelBottomless
Co-authored-by: Kieran Hunt
Co-authored-by: Lucas Daniel Velazquez M <19197331+Luxter77@users.noreply.github.com>
Co-authored-by: Your Name
Co-authored-by: storyicon
Co-authored-by: Tom Haelbich
Co-authored-by: hidenorly
Co-authored-by: Aarni Koskela
Co-authored-by: Charlie Joynt
Co-authored-by: obsol <33932119+read-0nly@users.noreply.github.com>
Co-authored-by: Nuullll
Co-authored-by: MrCheeze
Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com>
Co-authored-by: illtellyoulater <3078931+illtellyoulater@users.noreply.github.com>

* Z (#12)

--------- Signed-off-by: storyicon Co-authored-by: Gleb Alekseev Co-authored-by: missionfloyd Co-authored-by: AUTOMATIC1111 <16777216c@gmail.com> Co-authored-by: Won-Kyu Park Co-authored-by: Khachatur Avanesian Co-authored-by: v0xie <28695009+v0xie@users.noreply.github.com> Co-authored-by: avantcontra Co-authored-by: David Benson Co-authored-by: Meerkov Co-authored-by: Emily Zeng Co-authored-by: w-e-w <40751091+w-e-w@users.noreply.github.com> Co-authored-by: gibiee <37574274+gibiee@users.noreply.github.com> Co-authored-by: GerryDE Co-authored-by: fuchen.ljl Co-authored-by: Alessandro de Oliveira Faria (A.K.A.
CABELO) Co-authored-by: wfjsw Co-authored-by: aria1th <35677394+aria1th@users.noreply.github.com> Co-authored-by: Tom Haelbich <65122811+h43lb1t0@users.noreply.github.com> Co-authored-by: kaalibro Co-authored-by: anapnoe <124302297+anapnoe@users.noreply.github.com> Co-authored-by: AngelBottomless Co-authored-by: Kieran Hunt Co-authored-by: Lucas Daniel Velazquez M <19197331+Luxter77@users.noreply.github.com> Co-authored-by: Your Name Co-authored-by: storyicon Co-authored-by: Tom Haelbich Co-authored-by: hidenorly Co-authored-by: Aarni Koskela Co-authored-by: Charlie Joynt Co-authored-by: obsol <33932119+read-0nly@users.noreply.github.com> Co-authored-by: Nuullll Co-authored-by: MrCheeze Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com> Co-authored-by: illtellyoulater <3078931+illtellyoulater@users.noreply.github.com>
---
 .eslintrc.js | 1 +
 .github/ISSUE_TEMPLATE/bug_report.yml | 67 +-
 .github/workflows/on_pull_request.yaml | 2 +-
 CHANGELOG.md | 162 ++++
 configs/alt-diffusion-m18-inference.yaml | 73 ++
 extensions-builtin/Lora/lora_logger.py | 33 +
 extensions-builtin/Lora/lyco_helpers.py | 47 ++
 extensions-builtin/Lora/network.py | 1 +
 extensions-builtin/Lora/network_glora.py | 33 +
 extensions-builtin/Lora/network_oft.py | 82 ++
 extensions-builtin/Lora/networks.py | 60 +-
 .../Lora/ui_extra_networks_lora.py | 7 +-
 .../themes/sdxl_moonlight_orange.css | 1 +
 .../html/templates/template-app-root.html | 2 +-
 .../template-aside-extra-networks.html | 2 +-
 extensions-builtin/anapnoe-sd-uiux/style.css | 9 +
 .../canvas-zoom-and-pan/javascript/zoom.js | 2 +-
 .../scripts/extra_options_section.py | 153 ++--
 extensions-builtin/hypertile/hypertile.py | 351 +++++++++
 .../hypertile/scripts/hypertile_script.py | 109 +++
 .../hypertile/scripts/hypertile_xyz.py | 51 ++
 .../mobile/javascript/mobile.js | 2 +
 javascript/dragdrop.js | 2 +-
 javascript/edit-attention.js | 79 +-
 javascript/extraNetworks.js | 95 ++-
 javascript/imageviewer.js | 7 +-
 javascript/inputAccordion.js | 81 +-
 javascript/notification.js | 6 +-
 javascript/settings.js | 71 ++
 javascript/token-counters.js | 26 +-
 javascript/ui.js | 75 +-
 modules/api/api.py | 81 +-
 modules/api/models.py | 39 +-
 modules/cache.py | 2 +-
 modules/cmd_args.py | 10 +-
 modules/config_states.py | 3 +-
 modules/devices.py | 18 +-
 modules/errors.py | 18 +-
 modules/extensions.py | 96 ++-
 modules/generation_parameters_copypaste.py | 15 +-
 modules/gfpgan_model.py | 25 +-
 modules/gitpython_hack.py | 2 +-
 modules/gradio_extensons.py | 156 ++--
 modules/hypernetworks/hypernetwork.py | 4 +-
 modules/images.py | 19 +-
 modules/img2img.py | 56 +-
 modules/import_hook.py | 11 +
 modules/initialize.py | 336 ++++-----
 modules/initialize_util.py | 408 +++++-----
 modules/launch_utils.py | 42 +-
 modules/localization.py | 21 +-
 modules/logging_config.py | 57 +-
 modules/mac_specific.py | 15 +
 modules/models/diffusion/ddpm_edit.py | 7 +-
 modules/options.py | 553 ++++++++------
 modules/paths.py | 2 +-
 modules/paths_internal.py | 1 +
 modules/postprocessing.py | 94 ++-
 modules/processing.py | 53 +-
 modules/processing_scripts/seed.py | 222 +++---
 modules/prompt_parser.py | 9 +-
 modules/restart.py | 4 +-
 modules/rng.py | 340 ++++-----
 modules/script_callbacks.py | 6 +-
 modules/scripts.py | 137 +++-
 modules/scripts_postprocessing.py | 86 ++-
 modules/sd_disable_initialization.py | 2 +-
 modules/sd_hijack.py | 41 +-
 modules/sd_models.py | 70 +-
 modules/sd_models_config.py | 5 +-
 modules/sd_models_types.py | 65 +-
 modules/sd_samplers_extra.py | 148 ++--
 modules/sd_samplers_timesteps_impl.py | 274 +++---
 modules/sd_unet.py | 14 +-
 modules/shared_cmd_options.py | 36 +-
 modules/shared_items.py | 22 +-
 modules/shared_options.py | 703 +++++++++---------
 modules/shared_state.py | 318 ++++----
 modules/styles.py | 169 ++++-
 modules/sub_quadratic_attention.py | 4 +-
 modules/sysinfo.py | 18 +-
 modules/textual_inversion/autocrop.py | 239 +++---
 modules/textual_inversion/preprocess.py | 232 ------
 .../textual_inversion/textual_inversion.py | 78 +-
 modules/textual_inversion/ui.py | 7 -
 modules/txt2img.py | 4 +-
 modules/ui.py | 420 ++++-------
 modules/ui_common.py | 15 +-
 modules/ui_extensions.py | 19 +-
 modules/ui_extra_networks.py | 65 +-
 modules/ui_extra_networks_checkpoints.py | 10 +-
 modules/ui_extra_networks_hypernets.py | 13 +-
 .../ui_extra_networks_textual_inversion.py | 10 +-
 modules/ui_extra_networks_user_metadata.py | 2 +-
 modules/ui_gradio_extensions.py | 6 +-
 modules/ui_loadsave.py | 24 +-
 modules/ui_postprocessing.py | 18 +-
 modules/ui_prompt_styles.py | 230 +++---
 modules/ui_settings.py | 59 +-
 modules/ui_toprow.py | 143 ++++
 modules/upscaler.py | 6 +-
 modules/xlmr_m18.py | 164 ++++
 modules/xpu_specific.py | 59 ++
 pyproject.toml | 1 +
 requirements_versions.txt | 2 +-
 script.js | 33 +-
 scripts/postprocessing_caption.py | 30 +
 scripts/postprocessing_codeformer.py | 16 +-
 .../postprocessing_create_flipped_copies.py | 32 +
 scripts/postprocessing_focal_crop.py | 54 ++
 scripts/postprocessing_gfpgan.py | 13 +-
 scripts/postprocessing_split_oversized.py | 71 ++
 scripts/postprocessing_upscale.py | 14 +-
 scripts/processing_autosized_crop.py | 64 ++
 scripts/prompts_from_file.py | 32 +-
 scripts/xyz_grid.py | 7 +-
 style.css | 69 +-
 webui.bat | 5 +
 webui.py | 2 +-
 webui.sh | 23 +-
 120 files changed, 5664 insertions(+), 3156 deletions(-)
 create mode 100644 configs/alt-diffusion-m18-inference.yaml
 create mode 100644 extensions-builtin/Lora/lora_logger.py
 create mode 100644 extensions-builtin/Lora/network_glora.py
 create mode 100644 extensions-builtin/Lora/network_oft.py
 create mode 100644 extensions-builtin/anapnoe-sd-theme-editor/themes/sdxl_moonlight_orange.css
 create mode 100644 extensions-builtin/hypertile/hypertile.py
 create mode 100644 extensions-builtin/hypertile/scripts/hypertile_script.py
 create mode 100644 extensions-builtin/hypertile/scripts/hypertile_xyz.py
 create mode 100644 javascript/settings.js
 delete mode 100644 
modules/textual_inversion/preprocess.py create mode 100644 modules/ui_toprow.py create mode 100644 modules/xlmr_m18.py create mode 100644 modules/xpu_specific.py create mode 100644 scripts/postprocessing_caption.py create mode 100644 scripts/postprocessing_create_flipped_copies.py create mode 100644 scripts/postprocessing_focal_crop.py create mode 100644 scripts/postprocessing_split_oversized.py create mode 100644 scripts/processing_autosized_crop.py diff --git a/.eslintrc.js b/.eslintrc.js index 4777c276e9b..cf8397695e1 100644 --- a/.eslintrc.js +++ b/.eslintrc.js @@ -74,6 +74,7 @@ module.exports = { create_submit_args: "readonly", restart_reload: "readonly", updateInput: "readonly", + onEdit: "readonly", //extraNetworks.js requestGet: "readonly", popup: "readonly", diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index cf6a2be86fa..5876e941085 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -1,25 +1,45 @@ name: Bug Report -description: You think somethings is broken in the UI +description: You think something is broken in the UI title: "[Bug]: " labels: ["bug-report"] body: + - type: markdown + attributes: + value: | + > The title of the bug report should be short and descriptive. + > Use relevant keywords for searchability. + > Do not leave it blank, but also do not put an entire error log in it. - type: checkboxes attributes: - label: Is there an existing issue for this? - description: Please search to see if an issue already exists for the bug you encountered, and that it hasn't been fixed in a recent build/commit. + label: Checklist + description: | + Please perform basic debugging to see if extensions or configuration is the cause of the issue. + Basic debug procedure +  1. Disable all third-party extensions - check if extension is the cause +  2. Update extensions and webui - sometimes things just need to be updated +  3. 
Backup and remove your config.json and ui-config.json - check if the issue is caused by bad configuration +  4. Delete venv with third-party extensions disabled - sometimes extensions might cause wrong libraries to be installed +  5. Try a fresh installation of webui in a different directory - see if a clean installation solves the issue + Before making an issue report, please check that the issue hasn't been reported recently. options: - - label: I have searched the existing issues and checked the recent builds/commits - required: true + - label: The issue exists after disabling all extensions + - label: The issue exists on a clean installation of webui + - label: The issue is caused by an extension, but I believe it is caused by a bug in the webui + - label: The issue exists in the current version of the webui + - label: The issue has not been reported before recently + - label: The issue has been reported before but has not been fixed yet - type: markdown attributes: value: | - *Please fill this form with as much information as possible, don't forget to fill "What OS..." and "What browsers" and *provide screenshots if possible** + > Please fill this form with as much information as possible. Don't forget to "Upload Sysinfo" and "What browsers" and provide screenshots if possible - type: textarea id: what-did attributes: label: What happened? description: Tell us what happened in a very clear and simple way + placeholder: | + txt2img is not working as intended. validations: required: true - type: textarea @@ -27,9 +47,9 @@ body: attributes: label: Steps to reproduce the problem description: Please provide us with precise step by step instructions on how to reproduce the bug - value: | - 1. Go to .... - 2. Press .... + placeholder: | + 1. Go to ... + 2. Press ... 3. ... validations: required: true - type: textarea @@ -38,13 +58,8 @@ body: attributes: label: What should have happened? 
description: Tell us what you think the normal behavior should be - validations: - required: true - - type: textarea - id: sysinfo - attributes: - label: Sysinfo - description: System info file, generated by WebUI. You can generate it in settings, on the Sysinfo page. Drag the file into the field to upload it. If you submit your report without including the sysinfo file, the report will be closed. If needed, review the report to make sure it includes no personal information you don't want to share. If you can't start WebUI, you can use --dump-sysinfo commandline argument to generate the file. + placeholder: | + WebUI should ... validations: required: true - type: dropdown @@ -58,12 +73,25 @@ body: - Brave - Apple Safari - Microsoft Edge + - Android + - iOS - Other + - type: textarea + id: sysinfo + attributes: + label: Sysinfo + description: System info file, generated by WebUI. You can generate it in settings, on the Sysinfo page. Drag the file into the field to upload it. If you submit your report without including the sysinfo file, the report will be closed. If needed, review the report to make sure it includes no personal information you don't want to share. If you can't start WebUI, you can use --dump-sysinfo commandline argument to generate the file. + placeholder: | + 1. Go to WebUI Settings -> Sysinfo -> Download system info. + If WebUI fails to launch, use --dump-sysinfo commandline argument to generate the file + 2. Upload the Sysinfo as an attached file. Do NOT paste it in as plain text. + validations: + required: true - type: textarea id: logs attributes: label: Console logs - description: Please provide **full** cmd/terminal logs from the moment you started UI to the end of it, after your bug happened. If it's very long, provide a link to pastebin or similar service. + description: Please provide **full** cmd/terminal logs from the moment you started UI to the end of it, after the bug occurred. 
If it's very long, provide a link to pastebin or similar service. render: Shell validations: required: true @@ -71,4 +99,7 @@ body: id: misc attributes: label: Additional information - description: Please provide us with any relevant additional info or context. + description: | + Please provide us with any relevant additional info or context. + Examples: +  I have updated my GPU driver recently. diff --git a/.github/workflows/on_pull_request.yaml b/.github/workflows/on_pull_request.yaml index 78e608ee945..9e44c806ab3 100644 --- a/.github/workflows/on_pull_request.yaml +++ b/.github/workflows/on_pull_request.yaml @@ -20,7 +20,7 @@ jobs: # not to have GHA download an (at the time of writing) 4 GB cache # of PyTorch and other dependencies. - name: Install Ruff - run: pip install ruff==0.0.272 + run: pip install ruff==0.1.6 - name: Run Ruff run: ruff . lint-js: diff --git a/CHANGELOG.md b/CHANGELOG.md index 2c72359fc42..67429bbff0f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,165 @@ +## 1.7.0 + +### Features: +* settings tab rework: add search field, add categories, split UI settings page into many +* add altdiffusion-m18 support ([#13364](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13364)) +* support inference with LyCORIS GLora networks ([#13610](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13610)) +* add lora-embedding bundle system ([#13568](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13568)) +* option to move prompt from top row into generation parameters +* add support for SSD-1B ([#13865](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13865)) +* support inference with OFT networks ([#13692](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13692)) +* script metadata and DAG sorting mechanism ([#13944](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13944)) +* support HyperTile optimization 
([#13948](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13948)) +* add support for SD 2.1 Turbo ([#14170](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14170)) +* remove Train->Preprocessing tab and put all its functionality into Extras tab +* initial IPEX support for Intel Arc GPU ([#14171](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14171)) + +### Minor: +* allow reading model hash from images in img2img batch mode ([#12767](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12767)) +* add option to align with sgm repo's sampling implementation ([#12818](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12818)) +* extra field for lora metadata viewer: `ss_output_name` ([#12838](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12838)) +* add action in settings page to calculate all SD checkpoint hashes ([#12909](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12909)) +* add button to copy prompt to style editor ([#12975](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12975)) +* add --skip-load-model-at-start option ([#13253](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13253)) +* write infotext to gif images +* read infotext from gif images ([#13068](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13068)) +* allow configuring the initial state of InputAccordion in ui-config.json ([#13189](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13189)) +* allow editing whitespace delimiters for ctrl+up/ctrl+down prompt editing ([#13444](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13444)) +* prevent accidentally closing popup dialogs ([#13480](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13480)) +* added option to play notification sound or not ([#13631](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13631)) +* show the preview image in the full screen image viewer if available 
([#13459](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13459)) +* support for webui.settings.bat ([#13638](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13638)) +* add an option to not print stack traces on ctrl+c +* start/restart generation by Ctrl (Alt) + Enter ([#13644](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13644)) +* update prompts_from_file script to allow concatenating entries with the general prompt ([#13733](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13733)) +* added a visible checkbox to input accordion +* added an option to hide all txt2img/img2img parameters in an accordion ([#13826](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13826)) +* added 'Path' sorting option for Extra network cards ([#13968](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13968)) +* enable prompt hotkeys in style editor ([#13931](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13931)) +* option to show batch img2img results in UI ([#14009](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14009)) +* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page +* add FP32 fallback support on sd_vae_approx ([#14046](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046)) +* support XYZ scripts / split hires path from unet ([#14126](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14126)) +* allow use of multiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125)) + +### Extensions and API: +* update gradio to 3.41.2 +* support installed extensions list api ([#12774](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12774)) +* update pnginfo API to return dict with parsed values +* add noisy latent to `ExtraNoiseParams` for callback 
([#12856](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12856)) +* show extension datetime in UTC ([#12864](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12864), [#12865](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12865), [#13281](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13281)) +* add an option to choose how to combine hires fix and refiner +* include program version in info response. ([#13135](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13135)) +* sd_unet support for SDXL +* patch DDPM.register_betas so that users can put given_betas in model yaml ([#13276](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13276)) +* xyz_grid: add prepare ([#13266](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13266)) +* allow multiple localization files with same language in extensions ([#13077](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13077)) +* add onEdit function for js and rework token-counter.js to use it +* fix the key error exception when processing override_settings keys ([#13567](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13567)) +* ability for extensions to return custom data via api in response.images ([#13463](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13463)) +* call state.jobnext() before postproces*() ([#13762](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13762)) +* add option to set notification sound volume ([#13884](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13884)) +* update Ruff to 0.1.6 ([#14059](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14059)) +* add Block component creation callback ([#14119](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14119)) +* catch uncaught exception with ui creation scripts ([#14120](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14120)) +* use extension name for determining an extension 
is installed in the index ([#14063](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14063)) +* update is_installed() from launch_utils.py to fix reinstalling already installed packages ([#14192](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14192)) + +### Bug Fixes: +* fix pix2pix producing bad results +* fix defaults settings page breaking when any of main UI tabs are hidden +* fix error that causes some extra networks to be disabled if both and are present in the prompt +* fix for Reload UI function: if you reload UI on one tab, other opened tabs will no longer stop working +* prevent duplicate resize handler ([#12795](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12795)) +* small typo: vae resolve bug ([#12797](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12797)) +* hide broken image crop tool ([#12792](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12792)) +* don't show hidden samplers in dropdown for XYZ script ([#12780](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12780)) +* fix style editing dialog breaking if it's opened in both img2img and txt2img tabs +* hide --gradio-auth and --api-auth values from /internal/sysinfo report +* add missing infotext for RNG in options ([#12819](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12819)) +* fix notification not playing when built-in webui tab is inactive ([#12834](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12834)) +* honor `--skip-install` for extension installers ([#12832](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12832)) +* don't print blank stdout in extension installers ([#12833](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12833), [#12855](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12855)) +* get progressbar to display correctly in extensions tab +* keep order in list of checkpoints when loading model that doesn't have a checksum +* fix 
inpainting models in txt2img creating black pictures +* fix generation params regex ([#12876](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12876)) +* fix batch img2img output dir with script ([#12926](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12926)) +* fix #13080 - Hypernetwork/TI preview generation ([#13084](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13084)) +* fix bug with sigma min/max overrides. ([#12995](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12995)) +* more accurate check for enabling cuDNN benchmark on 16XX cards ([#12924](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12924)) +* don't use multicond parser for negative prompt counter ([#13118](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13118)) +* fix data-sort-name containing spaces ([#13412](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13412)) +* update card on correct tab when editing metadata ([#13411](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13411)) +* fix viewing/editing metadata when filename contains an apostrophe ([#13395](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13395)) +* fix: --sd_model in "Prompts from file or textbox" script is not working ([#13302](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13302)) +* better Support for Portable Git ([#13231](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13231)) +* fix issues when webui_dir is not work_dir ([#13210](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13210)) +* fix: lora-bias-backup don't reset cache ([#13178](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13178)) +* account for customizable extra network separators when removing extra network text from the prompt ([#12877](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12877)) +* re fix batch img2img output dir with script 
([#13170](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13170)) +* fix `--ckpt-dir` path separator and option use `short name` for checkpoint dropdown ([#13139](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13139)) +* consolidated allowed preview formats, Fix extra network `.gif` not working as preview ([#13121](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13121)) +* fix venv_dir=- environment variable not working as expected on linux ([#13469](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13469)) +* repair unload sd checkpoint button +* edit-attention fixes ([#13533](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13533)) +* fix bug when using --gfpgan-models-path ([#13718](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13718)) +* properly apply sort order for extra network cards when selected from dropdown +* fixes generation restart not working for some users when 'Ctrl+Enter' is pressed ([#13962](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13962)) +* thread safe extra network list_items ([#13014](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13014)) +* fix not able to exit metadata popup when pop up is too big ([#14156](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14156)) +* fix auto focal point crop for opencv >= 4.8 ([#14121](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14121)) +* make 'use-cpu all' actually apply to 'all' ([#14131](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14131)) +* extras tab batch: actually use original filename +* make webui not crash when running with --disable-all-extensions option + +### Other: +* non-local condition ([#12814](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12814)) +* fix minor typos ([#12827](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12827)) +* remove xformers Python version check 
([#12842](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12842)) +* style: file-metadata word-break ([#12837](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12837)) +* revert SGM noise multiplier change for img2img because it breaks hires fix +* do not change quicksettings dropdown option when value returned is `None` ([#12854](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12854)) +* [RC 1.6.0 - zoom is partly hidden] Update style.css ([#12839](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12839)) +* chore: change extension time format ([#12851](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12851)) +* WEBUI.SH - Use torch 2.1.0 release candidate for Navi 3 ([#12929](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12929)) +* add Fallback at images.read_info_from_image if exif data was invalid ([#13028](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13028)) +* update cmd arg description ([#12986](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12986)) +* fix: update shared.opts.data when add_option ([#12957](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12957), [#13213](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13213)) +* restore missing tooltips ([#12976](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12976)) +* use default dropdown padding on mobile ([#12880](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12880)) +* put enable console prompts option into settings from commandline args ([#13119](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13119)) +* fix some deprecated types ([#12846](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12846)) +* bump to torchsde==0.2.6 ([#13418](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13418)) +* update dragdrop.js ([#13372](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13372)) +* use orderdict as 
lru cache:opt/bug ([#13313](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13313)) +* XYZ if not include sub grids do not save sub grid ([#13282](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13282)) +* initialize state.time_start before state.job_count ([#13229](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13229)) +* fix fieldname regex ([#13458](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13458)) +* change denoising_strength default to None. ([#13466](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13466)) +* fix regression ([#13475](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13475)) +* fix IndexError ([#13630](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13630)) +* fix: checkpoints_loaded:{checkpoint:state_dict}, model.load_state_dict issue in dict value empty ([#13535](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13535)) +* update bug_report.yml ([#12991](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12991)) +* requirements_versions httpx==0.24.1 ([#13839](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13839)) +* fix parenthesis auto selection ([#13829](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13829)) +* fix #13796 ([#13797](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13797)) +* corrected a typo in `modules/cmd_args.py` ([#13855](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13855)) +* feat: fix randn found element of type float at pos 2 ([#14004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14004)) +* adds tqdm handler to logging_config.py for progress bar integration ([#13996](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13996)) +* hotfix: call shared.state.end() after postprocessing done ([#13977](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13977)) +* fix dependency address patch 1 
([#13929](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13929)) +* save sysinfo as .json ([#14035](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14035)) +* move exception_records related methods to errors.py ([#14084](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14084)) +* compatibility ([#13936](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13936)) +* json.dump(ensure_ascii=False) ([#14108](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14108)) +* dir buttons start with / so only the correct dir will be shown and no… ([#13957](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13957)) +* alternate implementation for unet forward replacement that does not depend on hijack being applied +* re-add `keyedit_delimiters_whitespace` setting lost as part of commit e294e46 ([#14178](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14178)) +* fix `save_samples` being checked early when saving masked composite ([#14177](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14177)) +* slight optimization for mask and mask_composite ([#14181](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14181)) +* add import_hook hack to work around basicsr/torchvision incompatibility ([#14186](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14186)) + ## 1.6.1 ### Bug Fixes: diff --git a/configs/alt-diffusion-m18-inference.yaml b/configs/alt-diffusion-m18-inference.yaml new file mode 100644 index 00000000000..41a031d55f0 --- /dev/null +++ b/configs/alt-diffusion-m18-inference.yaml @@ -0,0 +1,73 @@ +model: + base_learning_rate: 1.0e-04 + target: ldm.models.diffusion.ddpm.LatentDiffusion + params: + linear_start: 0.00085 + linear_end: 0.0120 + num_timesteps_cond: 1 + log_every_t: 200 + timesteps: 1000 + first_stage_key: "jpg" + cond_stage_key: "txt" + image_size: 64 + channels: 4 + cond_stage_trainable: false # Note: different from the one we trained before + 
conditioning_key: crossattn + monitor: val/loss_simple_ema + scale_factor: 0.18215 + use_ema: False + + scheduler_config: # 10000 warmup steps + target: ldm.lr_scheduler.LambdaLinearScheduler + params: + warm_up_steps: [ 10000 ] + cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases + f_start: [ 1.e-6 ] + f_max: [ 1. ] + f_min: [ 1. ] + + unet_config: + target: ldm.modules.diffusionmodules.openaimodel.UNetModel + params: + image_size: 32 # unused + in_channels: 4 + out_channels: 4 + model_channels: 320 + attention_resolutions: [ 4, 2, 1 ] + num_res_blocks: 2 + channel_mult: [ 1, 2, 4, 4 ] + num_head_channels: 64 + use_spatial_transformer: True + use_linear_in_transformer: True + transformer_depth: 1 + context_dim: 1024 + use_checkpoint: True + legacy: False + + first_stage_config: + target: ldm.models.autoencoder.AutoencoderKL + params: + embed_dim: 4 + monitor: val/rec_loss + ddconfig: + double_z: true + z_channels: 4 + resolution: 256 + in_channels: 3 + out_ch: 3 + ch: 128 + ch_mult: + - 1 + - 2 + - 4 + - 4 + num_res_blocks: 2 + attn_resolutions: [] + dropout: 0.0 + lossconfig: + target: torch.nn.Identity + + cond_stage_config: + target: modules.xlmr_m18.BertSeriesModelWithTransformation + params: + name: "XLMR-Large" diff --git a/extensions-builtin/Lora/lora_logger.py b/extensions-builtin/Lora/lora_logger.py new file mode 100644 index 00000000000..d51de29704f --- /dev/null +++ b/extensions-builtin/Lora/lora_logger.py @@ -0,0 +1,33 @@ +import sys +import copy +import logging + + +class ColoredFormatter(logging.Formatter): + COLORS = { + "DEBUG": "\033[0;36m", # CYAN + "INFO": "\033[0;32m", # GREEN + "WARNING": "\033[0;33m", # YELLOW + "ERROR": "\033[0;31m", # RED + "CRITICAL": "\033[0;37;41m", # WHITE ON RED + "RESET": "\033[0m", # RESET COLOR + } + + def format(self, record): + colored_record = copy.copy(record) + levelname = colored_record.levelname + seq = self.COLORS.get(levelname, self.COLORS["RESET"]) + 
colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}" + return super().format(colored_record) + + +logger = logging.getLogger("lora") +logger.propagate = False + + +if not logger.handlers: + handler = logging.StreamHandler(sys.stdout) + handler.setFormatter( + ColoredFormatter("[%(name)s]-%(levelname)s: %(message)s") + ) + logger.addHandler(handler) diff --git a/extensions-builtin/Lora/lyco_helpers.py b/extensions-builtin/Lora/lyco_helpers.py index 279b34bc928..1679a0ce633 100644 --- a/extensions-builtin/Lora/lyco_helpers.py +++ b/extensions-builtin/Lora/lyco_helpers.py @@ -19,3 +19,50 @@ def rebuild_cp_decomposition(up, down, mid): up = up.reshape(up.size(0), -1) down = down.reshape(down.size(0), -1) return torch.einsum('n m k l, i n, m j -> i j k l', mid, up, down) + + +# copied from https://github.com/KohakuBlueleaf/LyCORIS/blob/dev/lycoris/modules/lokr.py +def factorization(dimension: int, factor:int=-1) -> tuple[int, int]: + ''' + return a tuple of two values of the input dimension decomposed by the number closest to factor + second value is greater than or equal to first value. + + In LoRA with Kronecker Product, first value is a value for weight scale. + second value is a value for weight. + + Because of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different. + + examples) + factor + -1 2 4 8 16 ... 
+ 127 -> 1, 127 127 -> 1, 127 127 -> 1, 127 127 -> 1, 127 127 -> 1, 127 + 128 -> 8, 16 128 -> 2, 64 128 -> 4, 32 128 -> 8, 16 128 -> 8, 16 + 250 -> 10, 25 250 -> 2, 125 250 -> 2, 125 250 -> 5, 50 250 -> 10, 25 + 360 -> 8, 45 360 -> 2, 180 360 -> 4, 90 360 -> 8, 45 360 -> 12, 30 + 512 -> 16, 32 512 -> 2, 256 512 -> 4, 128 512 -> 8, 64 512 -> 16, 32 + 1024 -> 32, 32 1024 -> 2, 512 1024 -> 4, 256 1024 -> 8, 128 1024 -> 16, 64 + ''' + + if factor > 0 and (dimension % factor) == 0: + m = factor + n = dimension // factor + if m > n: + n, m = m, n + return m, n + if factor < 0: + factor = dimension + m, n = 1, dimension + length = m + n + while m < n: + new_m = m + 1 + while dimension % new_m != 0: + new_m += 1 + new_n = dimension // new_m + if new_m + new_n > length or new_m > factor: + break + else: + m, n = new_m, new_n + if m > n: + n, m = m, n + return m, n + diff --git a/extensions-builtin/Lora/network.py b/extensions-builtin/Lora/network.py index d8e8dfb7ff0..6021fd8de0f 100644 --- a/extensions-builtin/Lora/network.py +++ b/extensions-builtin/Lora/network.py @@ -93,6 +93,7 @@ def __init__(self, name, network_on_disk: NetworkOnDisk): self.unet_multiplier = 1.0 self.dyn_dim = None self.modules = {} + self.bundle_embeddings = {} self.mtime = None self.mentioned_name = None diff --git a/extensions-builtin/Lora/network_glora.py b/extensions-builtin/Lora/network_glora.py new file mode 100644 index 00000000000..492d487078d --- /dev/null +++ b/extensions-builtin/Lora/network_glora.py @@ -0,0 +1,33 @@ + +import network + +class ModuleTypeGLora(network.ModuleType): + def create_module(self, net: network.Network, weights: network.NetworkWeights): + if all(x in weights.w for x in ["a1.weight", "a2.weight", "alpha", "b1.weight", "b2.weight"]): + return NetworkModuleGLora(net, weights) + + return None + +# adapted from https://github.com/KohakuBlueleaf/LyCORIS +class NetworkModuleGLora(network.NetworkModule): + def __init__(self, net: network.Network, weights: network.NetworkWeights): + super().__init__(net, weights) + + if hasattr(self.sd_module, 'weight'): + self.shape = 
self.sd_module.weight.shape + + self.w1a = weights.w["a1.weight"] + self.w1b = weights.w["b1.weight"] + self.w2a = weights.w["a2.weight"] + self.w2b = weights.w["b2.weight"] + + def calc_updown(self, orig_weight): + w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype) + w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype) + w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype) + w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype) + + output_shape = [w1a.size(0), w1b.size(1)] + updown = ((w2b @ w1b) + ((orig_weight @ w2a) @ w1a)) + + return self.finalize_updown(updown, orig_weight, output_shape) diff --git a/extensions-builtin/Lora/network_oft.py b/extensions-builtin/Lora/network_oft.py new file mode 100644 index 00000000000..fa647020f0a --- /dev/null +++ b/extensions-builtin/Lora/network_oft.py @@ -0,0 +1,82 @@ +import torch +import network +from lyco_helpers import factorization +from einops import rearrange + + +class ModuleTypeOFT(network.ModuleType): + def create_module(self, net: network.Network, weights: network.NetworkWeights): + if all(x in weights.w for x in ["oft_blocks"]) or all(x in weights.w for x in ["oft_diag"]): + return NetworkModuleOFT(net, weights) + + return None + +# Supports both kohya-ss' implementation of COFT https://github.com/kohya-ss/sd-scripts/blob/main/networks/oft.py +# and KohakuBlueleaf's implementation of OFT/COFT https://github.com/KohakuBlueleaf/LyCORIS/blob/dev/lycoris/modules/diag_oft.py +class NetworkModuleOFT(network.NetworkModule): + def __init__(self, net: network.Network, weights: network.NetworkWeights): + + super().__init__(net, weights) + + self.lin_module = None + self.org_module: list[torch.Module] = [self.sd_module] + + self.scale = 1.0 + + # kohya-ss + if "oft_blocks" in weights.w.keys(): + self.is_kohya = True + self.oft_blocks = weights.w["oft_blocks"] # (num_blocks, block_size, block_size) + self.alpha = weights.w["alpha"] # alpha is constraint + self.dim = 
self.oft_blocks.shape[0] # lora dim + # LyCORIS + elif "oft_diag" in weights.w.keys(): + self.is_kohya = False + self.oft_blocks = weights.w["oft_diag"] + # self.alpha is unused + self.dim = self.oft_blocks.shape[1] # (num_blocks, block_size, block_size) + + is_linear = type(self.sd_module) in [torch.nn.Linear, torch.nn.modules.linear.NonDynamicallyQuantizableLinear] + is_conv = type(self.sd_module) in [torch.nn.Conv2d] + is_other_linear = type(self.sd_module) in [torch.nn.MultiheadAttention] # unsupported + + if is_linear: + self.out_dim = self.sd_module.out_features + elif is_conv: + self.out_dim = self.sd_module.out_channels + elif is_other_linear: + self.out_dim = self.sd_module.embed_dim + + if self.is_kohya: + self.constraint = self.alpha * self.out_dim + self.num_blocks = self.dim + self.block_size = self.out_dim // self.dim + else: + self.constraint = None + self.block_size, self.num_blocks = factorization(self.out_dim, self.dim) + + def calc_updown(self, orig_weight): + oft_blocks = self.oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype) + eye = torch.eye(self.block_size, device=self.oft_blocks.device) + + if self.is_kohya: + block_Q = oft_blocks - oft_blocks.transpose(1, 2) # ensure skew-symmetric orthogonal matrix + norm_Q = torch.norm(block_Q.flatten()) + new_norm_Q = torch.clamp(norm_Q, max=self.constraint) + block_Q = block_Q * ((new_norm_Q + 1e-8) / (norm_Q + 1e-8)) + oft_blocks = torch.matmul(eye + block_Q, (eye - block_Q).float().inverse()) + + R = oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype) + + # This errors out for MultiheadAttention, might need to be handled up-stream + merged_weight = rearrange(orig_weight, '(k n) ... -> k n ...', k=self.num_blocks, n=self.block_size) + merged_weight = torch.einsum( + 'k n m, k n ... -> k m ...', + R, + merged_weight + ) + merged_weight = rearrange(merged_weight, 'k m ... 
-> (k m) ...') + + updown = merged_weight.to(orig_weight.device, dtype=orig_weight.dtype) - orig_weight + output_shape = orig_weight.shape + return self.finalize_updown(updown, orig_weight, output_shape) diff --git a/extensions-builtin/Lora/networks.py b/extensions-builtin/Lora/networks.py index 96f935b236f..629bf85376d 100644 --- a/extensions-builtin/Lora/networks.py +++ b/extensions-builtin/Lora/networks.py @@ -5,16 +5,21 @@ import lora_patches import network import network_lora +import network_glora import network_hada import network_ia3 import network_lokr import network_full import network_norm +import network_oft import torch from typing import Union from modules import shared, devices, sd_models, errors, scripts, sd_hijack +import modules.textual_inversion.textual_inversion as textual_inversion + +from lora_logger import logger module_types = [ network_lora.ModuleTypeLora(), @@ -23,6 +28,8 @@ network_lokr.ModuleTypeLokr(), network_full.ModuleTypeFull(), network_norm.ModuleTypeNorm(), + network_glora.ModuleTypeGLora(), + network_oft.ModuleTypeOFT(), ] @@ -149,9 +156,20 @@ def load_network(name, network_on_disk): is_sd2 = 'model_transformer_resblocks' in shared.sd_model.network_layer_mapping matched_networks = {} + bundle_embeddings = {} for key_network, weight in sd.items(): - key_network_without_network_parts, network_part = key_network.split(".", 1) + key_network_without_network_parts, _, network_part = key_network.partition(".") + + if key_network_without_network_parts == "bundle_emb": + emb_name, vec_name = network_part.split(".", 1) + emb_dict = bundle_embeddings.get(emb_name, {}) + if vec_name.split('.')[0] == 'string_to_param': + _, k2 = vec_name.split('.', 1) + emb_dict['string_to_param'] = {k2: weight} + else: + emb_dict[vec_name] = weight + bundle_embeddings[emb_name] = emb_dict key = convert_diffusers_name_to_compvis(key_network_without_network_parts, is_sd2) sd_module = shared.sd_model.network_layer_mapping.get(key, None) @@ -174,6 +192,17 @@ def 
load_network(name, network_on_disk): key = key_network_without_network_parts.replace("lora_te1_text_model", "transformer_text_model") sd_module = shared.sd_model.network_layer_mapping.get(key, None) + # kohya_ss OFT module + elif sd_module is None and "oft_unet" in key_network_without_network_parts: + key = key_network_without_network_parts.replace("oft_unet", "diffusion_model") + sd_module = shared.sd_model.network_layer_mapping.get(key, None) + + # KohakuBlueLeaf OFT module + if sd_module is None and "oft_diag" in key: + key = key_network_without_network_parts.replace("lora_unet", "diffusion_model") + key = key_network_without_network_parts.replace("lora_te1_text_model", "0_transformer_text_model") + sd_module = shared.sd_model.network_layer_mapping.get(key, None) + if sd_module is None: keys_failed_to_match[key_network] = key continue @@ -195,6 +224,14 @@ def load_network(name, network_on_disk): net.modules[key] = net_module + embeddings = {} + for emb_name, data in bundle_embeddings.items(): + embedding = textual_inversion.create_embedding_from_data(data, emb_name, filename=network_on_disk.filename + "/" + emb_name) + embedding.loaded = None + embeddings[emb_name] = embedding + + net.bundle_embeddings = embeddings + if keys_failed_to_match: logging.debug(f"Network {network_on_disk.filename} didn't match keys: {keys_failed_to_match}") @@ -210,11 +247,15 @@ def purge_networks_from_memory(): def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=None): + emb_db = sd_hijack.model_hijack.embedding_db already_loaded = {} for net in loaded_networks: if net.name in names: already_loaded[net.name] = net + for emb_name, embedding in net.bundle_embeddings.items(): + if embedding.loaded: + emb_db.register_embedding_by_name(None, shared.sd_model, emb_name) loaded_networks.clear() @@ -257,6 +298,21 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No net.dyn_dim = dyn_dims[i] if dyn_dims else 1.0 
loaded_networks.append(net) + for emb_name, embedding in net.bundle_embeddings.items(): + if embedding.loaded is None and emb_name in emb_db.word_embeddings: + logger.warning( + f'Skip bundle embedding: "{emb_name}"' + ' as it was already loaded from embeddings folder' + ) + continue + + embedding.loaded = False + if emb_db.expected_shape == -1 or emb_db.expected_shape == embedding.shape: + embedding.loaded = True + emb_db.register_embedding(embedding, shared.sd_model) + else: + emb_db.skipped_embeddings[name] = embedding + if failed_to_load_networks: sd_hijack.model_hijack.comments.append("Networks not found: " + ", ".join(failed_to_load_networks)) @@ -418,6 +474,7 @@ def network_forward(module, input, original_forward): def network_reset_cached_weight(self: Union[torch.nn.Conv2d, torch.nn.Linear]): self.network_current_names = () self.network_weights_backup = None + self.network_bias_backup = None def network_Linear_forward(self, input): @@ -564,6 +621,7 @@ def infotext_pasted(infotext, params): available_networks = {} available_network_aliases = {} loaded_networks = [] +loaded_bundle_embeddings = {} networks_in_memory = {} available_network_hash_lookup = {} forbidden_network_aliases = {} diff --git a/extensions-builtin/Lora/ui_extra_networks_lora.py b/extensions-builtin/Lora/ui_extra_networks_lora.py index 55409a7829d..df02c663b12 100644 --- a/extensions-builtin/Lora/ui_extra_networks_lora.py +++ b/extensions-builtin/Lora/ui_extra_networks_lora.py @@ -17,6 +17,8 @@ def refresh(self): def create_item(self, name, index=None, enable_filter=True): lora_on_disk = networks.available_networks.get(name) + if lora_on_disk is None: + return path, ext = os.path.splitext(lora_on_disk.filename) @@ -66,9 +68,10 @@ def create_item(self, name, index=None, enable_filter=True): return item def list_items(self): - for index, name in enumerate(networks.available_networks): + # instantiate a list to protect against concurrent modification + names = list(networks.available_networks) 
+ for index, name in enumerate(names): item = self.create_item(name, index) - if item is not None: yield item diff --git a/extensions-builtin/anapnoe-sd-theme-editor/themes/sdxl_moonlight_orange.css b/extensions-builtin/anapnoe-sd-theme-editor/themes/sdxl_moonlight_orange.css new file mode 100644 index 00000000000..90e4c354c96 --- /dev/null +++ b/extensions-builtin/anapnoe-sd-theme-editor/themes/sdxl_moonlight_orange.css @@ -0,0 +1 @@ +--ae-primary-color:hsl(12deg 75% 62%);--ae-secondary-color:hsl(15deg 6% 13%);--ae-main-bg-color:hsl(30deg 16% 6%);--ae-input-height:35px;--ae-input-slider-height:0.5;--ae-input-icon-height:calc(var(--ae-input-height) - var(--ae-input-border-size) * 2);--ae-input-padding:5px;--ae-input-font-size:14px;--ae-input-line-height:20px;--ae-label-color:hsl(335deg 66% 85%);--ae-secondary-label-color:hsl(150deg 100% 50%);--ae-input-border-size:1px;--ae-input-border-radius:5px;--ae-input-text-color:hsl(12deg 75% 62%);--ae-input-placeholder-color:hsl(12deg 75% 62%);--ae-input-bg-color:hsl(30deg 17% 8%);--ae-input-border-color:hsl(30deg 20% 16%);--ae-input-hover-text-color:hsl(30deg 18% 12%);--ae-panel-bg-color:hsl(30deg 18% 12%);--ae-panel-border-color:hsl(30deg 16% 16%);--ae-panel-padding:5px;--ae-border-radius:0px;--ae-border-size:1px;--ae-gap-size-val:3px;--ae-gap-size:max(var(--ae-gap-size-val), var(--ae-border-size));--ae-border-size-neg:calc(var(--ae-border-size) * -1);--ae-border-size-x2:calc((var(--ae-border-size) * 2) + 0px);--ae-group-bg-color:hsl(15deg 7% 11%);--ae-group-padding:0px;--ae-group-radius:0px;--ae-group-border-size:0px;--ae-group-border-color:hsl(30deg 16% 
16%);--ae-group-gap:1px;--ae-panel-border-radius:0px;--ae-subpanel-border-radius:8px;--ae-outside-gap-size:8px;--ae-inside-padding-size:8px;--ae-tool-button-size:34px;--ae-tool-button-radius:16px;--ae-generate-button-height:70px;--ae-max-padding:max(var(--ae-outside-gap-size),var(--ae-inside-padding-size));--ae-icon-size:22px;--ae-mobile-outside-gap-size:2px;--ae-mobile-inside-padding-size:2px;--panel-border-radius:4px;--subpanel-border-radius:8px;--outside-gap-size:8px;--inside-padding-size:8px;--tool-button-size:34px;--tool-button-radius:16px;--generate-button-height:70px;--max-padding:max(var(--outside-gap-size),var(--inside-padding-size));--icon-size:22px;--mobile-outside-gap-size:3px;--mobile-inside-padding-size:3px; \ No newline at end of file diff --git a/extensions-builtin/anapnoe-sd-uiux/html/templates/template-app-root.html b/extensions-builtin/anapnoe-sd-uiux/html/templates/template-app-root.html index 77cb68065cc..4de8f1803fd 100644 --- a/extensions-builtin/anapnoe-sd-uiux/html/templates/template-app-root.html +++ b/extensions-builtin/anapnoe-sd-uiux/html/templates/template-app-root.html @@ -97,7 +97,7 @@
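For reviewers, the `factorization` helper added to `lyco_helpers.py` can be sanity-checked outside the webui. The sketch below is a standalone transcription in plain Python (no torch, detached from the patch for illustration); the expected pairs come straight from the function's docstring table:

```python
def factorization(dimension: int, factor: int = -1) -> tuple[int, int]:
    # If factor divides dimension exactly, use it directly as the smaller term.
    if factor > 0 and (dimension % factor) == 0:
        m = factor
        n = dimension // factor
        if m > n:
            n, m = m, n
        return m, n
    if factor < 0:
        factor = dimension
    # Walk divisors upward, keeping the most balanced pair not exceeding factor.
    m, n = 1, dimension
    length = m + n
    while m < n:
        new_m = m + 1
        while dimension % new_m != 0:
            new_m += 1
        new_n = dimension // new_m
        if new_m + new_n > length or new_m > factor:
            break
        m, n = new_m, new_n
    if m > n:
        n, m = m, n
    return m, n


# Examples from the docstring table
print(factorization(128))      # -> (8, 16)
print(factorization(250))      # -> (10, 25)
print(factorization(360, 8))   # -> (8, 45)
print(factorization(512, 4))   # -> (4, 128)
```

Note the second value is always the larger one; the patch uses it as `block_size` and the first as `num_blocks` when reshaping OFT weights.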
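`NetworkModuleOFT.calc_updown` in the kohya-ss branch relies on the Cayley transform R = (I + Q)(I - Q)^-1, which maps the skew-symmetric block Q = B - B^T to an orthogonal rotation. A small NumPy sketch (illustrative only; the patch itself operates on torch tensors) checking that property:

```python
import numpy as np

rng = np.random.default_rng(0)
block_size = 4

# Arbitrary block; Q = A - A^T is skew-symmetric, mirroring block_Q in the patch.
A = rng.standard_normal((block_size, block_size))
Q = A - A.T

eye = np.eye(block_size)
# Cayley transform: (I - Q) is always invertible for skew-symmetric Q,
# and the result R is orthogonal.
R = (eye + Q) @ np.linalg.inv(eye - Q)

# Orthogonality means rotating a weight block with R preserves its norm.
print(np.allclose(R @ R.T, eye, atol=1e-8))  # True
```

This is why the constraint clamp on `norm_Q` controls how far R can rotate the original weights without ever rescaling them.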