Cloud GPU
This is a step-by-step guide on how to run the Stable Diffusion server remotely on cloud services like runpod.io, vast.ai or sailflow.ai. This lets you use the plugin without installing a server on your local machine, and is a great option if you don't own a powerful GPU. There is no subscription; you typically pay between $0.25 and $0.50 per hour.
runpod.io is fairly streamlined, but offers limited information about download speeds and is a bit pricier.
Go to runpod.io and create an account. Go to "Billing" and add funds (one-time payment, minimum $10, card required).
Choose Community Cloud and select 1x RTX 3080. It's cheap and fast! Click Deploy.
You are free to choose one of the other options, of course.
Runpod supports all kinds of workloads. To run a server for the Krita plugin, select "Stable Diffusion ComfyUI for Krita". Just typing "krita" into the search bar should find it, or use this link. Click Continue.
By default a recommended set of models will be downloaded at start-up. You can change this with a custom start command.
You will get a cost summary. Click Deploy.
Now you have to wait until the server is up. This can take a while (~10 minutes) depending on the download speed of your pod. Eventually it should look like this:
Once your pod is running, click Connect and choose "HTTP Service [Port 3001]".
This should open ComfyUI running in your browser. Now simply copy the URL into the Krita plugin and connect!
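If the page doesn't load, you can sanity-check the URL before pasting it into Krita. The snippet below is a sketch: the pod ID is a placeholder (use your own from the runpod dashboard), and it assumes runpod's usual proxy URL pattern of `<pod-id>-<port>.proxy.runpod.net`:

```shell
# Hypothetical pod ID - substitute the one shown in your runpod dashboard.
POD_ID="abc123xyz"
# Runpod exposes HTTP services as https://<pod-id>-<port>.proxy.runpod.net
SERVER_URL="https://${POD_ID}-3001.proxy.runpod.net"
echo "$SERVER_URL"
# ComfyUI answers on /system_stats; a JSON reply means the server is up:
# curl -s "$SERVER_URL/system_stats"
```

This is the same URL you paste into the Krita plugin's server settings.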
After you're done using the pod, remember to stop it. You can keep it inactive to potentially reuse it later, but it's not free. To avoid charges, make sure to discard/delete the pod.
vast.ai has a very similar offering. You get more details about the machine you will run on, but you also have to filter the available pods to get what you want. The minimum initial deposit is $5.
The UI is very similar to runpod.io and so are the steps to set it up.
You can use this template. Try to select a pod in your region with a good internet connection. Click Rent.
Tip: Optionally customize which models to download. By default a recommended set of models will be downloaded at start-up; you can change this with a custom start command.
Once your instance has finished loading and is running, click the button that displays the port range to find the URL to connect to.
The URL will be the one which maps to port 3001. Copy it into Krita and connect. Make sure it doesn't contain any spaces!
sailflow.ai is a GPU cloud platform which lets you start krita-ai-diffusion quickly. You don't have to install ComfyUI on the cloud GPU server in advance: follow the instructions below, create a job, connect to the web address, and you are done.
Go to sailflow.ai and create an account; you can use the referral code R5QXRVE6.
You can select the Krita template directly on the Home page. You can get started right away and top up funds later, or try one of the subscription plans. Click Krita.
You are free to choose one of the other options of course.
You will get a cost summary. Click Submit.
Now you have to wait until the server is up. This is usually quick (~1 minute), depending on the download speed of your pod. Eventually it should look like this:
Once your pod is running, the SD APP button becomes enabled; click it to connect to ComfyUI.
This should open ComfyUI running in your browser. Now simply copy the URL into the Krita plugin and connect!
After you're done using the pod, remember to stop it. You can keep it inactive to potentially reuse it later, but it's not free. To avoid charges, make sure to discard/delete the pod.
You can tweak the Docker start command to customize which models are downloaded. The default start command is

```shell
/start.sh --recommended
```

which downloads a recommended set of models (currently an SDXL workload with some checkpoint and control models).
Here are some examples for different setups:

| Command | Effect |
|---|---|
| `/start.sh` | Don't download anything. Only useful if you want to download manually. |
| `/start.sh --sd15 --checkpoints --controlnet` | Models required to run SD 1.5 with all recommended checkpoints and control models. |
| `/start.sh --sdxl --checkpoint juggernaut` | Models required to run SDXL with only the Juggernaut (realistic) checkpoint. |
| `/start.sh --all` | Download everything. Will take a very long time at start-up. |
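Flags can be combined. As a sketch (one possible combination, not the only valid one), a start command for an SDXL setup with a single checkpoint plus control and upscale models could look like this:

```shell
/start.sh --sdxl --checkpoint juggernaut --controlnet --upscalers
```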
All options are listed below.
List of arguments for download_models.py
Before you deploy your pod, click "Edit Template" to get to the "Pod Template Overrides" section. Then, set the "Container Start Command" to the command you want.
| Argument | Description |
|---|---|
| `--minimal` | download the minimum viable set of models |
| `--recommended` | download a recommended set of models |
| `--all` | download ALL models |
| `--sd15` | [Workload] everything needed to run SD 1.5 (no checkpoints) |
| `--sdxl` | [Workload] everything needed to run SDXL (no checkpoints) |
| `--checkpoints` | download all checkpoints for selected workloads |
| `--controlnet` | download ControlNet models for selected workloads |
| `--checkpoint` | download a specific checkpoint (can be specified multiple times) |
| `--upscalers` | download additional upscale models |
Checkpoint names which can be used with `--checkpoint`: realistic-vision, dreamshaper, flat2d-animerge, juggernaut, zavychroma, flux-schnell
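Since `--checkpoint` can be given multiple times, a start command for an SD 1.5 setup with two specific checkpoints might look like this (a sketch using names from the list above):

```shell
/start.sh --sd15 --checkpoint realistic-vision --checkpoint dreamshaper --controlnet
```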
You can download any custom models to the pod after it has started. Either use ComfyUI Manager (installed by default), or do it via SSH terminal.
You can either use the web terminal or connect via SSH.
- Download checkpoints to /models/checkpoints
- Download LoRA to /models/lora
```shell
cd /models/checkpoints
wget --content-disposition 'http://file-you-want-to-download'
```
After download is complete, click the refresh button in Krita and they should show up.
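The same approach works for LoRA files. A sketch (the URL is a placeholder, substitute the actual download link for the model you want):

```shell
# Hypothetical URL - replace with the real download link.
cd /models/lora
# -c resumes an interrupted download; --content-disposition keeps the
# filename the server provides instead of the raw URL path.
wget -c --content-disposition 'https://example.com/my-lora.safetensors'
# Confirm the file landed where the server scans for LoRA models:
ls -lh /models/lora
```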