This is a fork of the AUTOMATIC1111/stable-diffusion-webui project - basically the same thing, with some additional API features such as background removal. Setup is the same, so check their GitHub for more complete docs.
This server is designed to be used with the Seth's AI Tools Client (see its GitHub page for the download and screenshots/videos).
Or, don't use my front-end client and just use its API directly:
- Here's a Python Jupyter notebook showing examples of how to use the standard AUTOMATIC1111 API
- Here's a Python Jupyter notebook showing how to use the extended features available in my forked server (AI background removal, AI subject masking, etc.)
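If you just want a quick taste of the standard API before opening the notebooks, here's a minimal Python sketch (assumptions: the server is running locally on port 7860 and was started with --api; the prompt is just an example):

```python
# Minimal sketch: one txt2img generation via the standard AUTOMATIC1111 API.
# Assumes the server was started with --api and is reachable at localhost:7860.
import base64
import requests

payload = {
    "prompt": "a photo of a corgi wearing a top hat",  # example prompt
    "steps": 20,
    "width": 512,
    "height": 512,
}

response = requests.post("http://localhost:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()

# Images come back as base64-encoded strings.
for i, image_b64 in enumerate(response.json()["images"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
```

The fork's extra endpoints (background removal, subject masking, etc.) aren't covered here; see the second notebook for the exact routes and payloads.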
Note: On Sept 19th, 2022, this repository was deleted and replaced with a fork of AUTOMATIC1111/stable-diffusion-webui, with the specific features I need folded into it. Previously I had written my own custom server, but that was, like, too much work man
- (merged with latest auto1111 stuff)
- Versioned to 0.49
Installation and Running (modified from stable-diffusion-webui docs)
Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.
Note: This might not work as written - auto1111 has changed the setup a bit, and I've only tested under Linux, not Windows. If it fails, check the auto1111 instructions instead; they should still apply here.
On Windows:
- Install Python 3.10.6, checking "Add Python to PATH".
- Install git.
- Download the aitools_server repository, for example by running:
git clone https://github.com/SethRobinson/aitools_server.git
- Place any Stable Diffusion checkpoint, such as sd-v1-5-inpainting.ckpt, in the models/Stable-diffusion directory (see dependencies for where to get one).
- Run webui-user.bat from Windows Explorer as a normal, non-administrator user.
On Linux:
- Install the dependencies (note: requires Python 3.10+ now):
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3
- To install in /home/$(whoami)/aitools_server/, run:
bash <(wget -qO- https://raw.githubusercontent.com/SethRobinson/aitools_server/master/webui.sh)
- Place sd-v1-5-inpainting.ckpt or another Stable Diffusion model in models/Stable-diffusion (see dependencies for where to get one).
- Run the server from a shell with:
python launch.py --listen --port 7860 --api
(On Linux you can also run sh runserver.sh, an included helper script that does roughly the same thing.)
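Once it's up, you can sanity-check the API from Python too (a quick sketch, assuming the default host and port):

```python
# Quick check that the server's API is responding (assumes localhost:7860).
import requests

r = requests.get("http://localhost:7860/sdapi/v1/sd-models")
r.raise_for_status()
print("Server is up. Available models:")
for model in r.json():
    print(" -", model["title"])
```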
Don't have a strong enough GPU, or want to give it a quick test run without hassle? No problem - use this Colab notebook. (It works fine on the free tier.)
To update, go to the server's directory (probably aitools_server) in a shell or command prompt and type:
git pull
If you feel bold, you can also merge it with the latest AUTOMATIC1111 server yourself. This CAN break things, so don't do it unless you really need a new feature Seth hasn't merged yet and you're comfortable resolving what are probably simple merge conflicts:
sh merge_with_automatic1111.sh
Verify the server works by visiting it with a browser; you should be able to generate and paint images via the default Gradio web interface. Now you're ready to use the native client.
Note: The first time you use the server, it may appear that nothing is happening - check the server window/shell; it's probably downloading extra files for each new feature you use. This only happens the first time!
- Download the Client (Windows, ~36 MB) (or get the Unity source)
- Unzip it somewhere and run aitools_client.exe
The client should start up. If you click "Generate", images should start being made. By default it looks for the server at localhost on port 7860; if yours is somewhere else, click "Configure" and edit/add the server info. You can add/remove multiple servers on the fly while using the app (all of them will be utilized simultaneously).
You can run multiple instances of the server from the same install.
Start one instance (uh, this is how on Linux; not sure about Windows):
CUDA_VISIBLE_DEVICES=0 python launch.py --listen --port 7860 --api
Then from another shell start another specifying a different GPU and port:
CUDA_VISIBLE_DEVICES=1 python launch.py --listen --port 7861 --api
Then in the client, click Configure and add an add_server command for each server.
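If you'd rather drive both instances from your own script instead of the client, here's a rough sketch that round-robins prompts across the two ports used above (the prompts and filenames are just examples):

```python
# Sketch: fan txt2img requests out across two server instances in parallel.
# Assumes the two instances started above on ports 7860 and 7861.
import base64
import requests
from concurrent.futures import ThreadPoolExecutor

SERVERS = ["http://localhost:7860", "http://localhost:7861"]
PROMPTS = ["a castle at dawn", "a robot gardener",
           "a foggy harbor", "a desert city"]  # example prompts

def generate(server, prompt):
    r = requests.post(f"{server}/sdapi/v1/txt2img",
                      json={"prompt": prompt, "steps": 20})
    r.raise_for_status()
    return r.json()["images"][0]  # first image, base64-encoded

# One worker per server; prompts are assigned round-robin at submit time.
with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
    futures = [pool.submit(generate, SERVERS[i % len(SERVERS)], p)
               for i, p in enumerate(PROMPTS)]
    for i, fut in enumerate(futures):
        with open(f"multi_{i}.png", "wb") as f:
            f.write(base64.b64decode(fut.result()))
```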
Credits and thanks:
- Seth's AI Tools created by Seth A. Robinson (seth@rtsoft.com), Twitter: @rtsoft - Codedojo, Seth's blog
- Highly Accurate Dichotomous Image Segmentation (Xuebin Qin and Hang Dai and Xiaobin Hu and Deng-Ping Fan and Ling Shao and Luc Van Gool)
- The original stable-diffusion-webui project the server portion is forked from
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- Spandrel - https://github.com/chaiNNer-org/spandrel implementing
  - GFPGAN - https://github.com/TencentARC/GFPGAN.git
  - CodeFormer - https://github.com/sczhou/CodeFormer
  - ESRGAN - https://github.com/xinntao/ESRGAN
  - SwinIR - https://github.com/JingyunLiang/SwinIR
  - Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (Birch-san/diffusers#1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Hypertile - tfernd - https://github.com/tfernd/HyperTile
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)