Merge branch 'main' of github.com:piEsposito/diffusers into main
piEsposito committed Oct 20, 2022
2 parents 965dfe1 + 9af3535 commit 6e81ac5
Showing 33 changed files with 5,044 additions and 209 deletions.
146 changes: 146 additions & 0 deletions .github/actions/setup-miniconda/action.yml
@@ -0,0 +1,146 @@
name: Set up conda environment for testing

description: Sets up miniconda in your ${RUNNER_TEMP} environment and gives you the ${CONDA_RUN} environment variable so you don't have to worry about polluting non-ephemeral runners anymore

inputs:
  python-version:
    description: Python version to install
    required: false
    type: string
    default: "3.9"
  miniconda-version:
    description: Miniconda version to install
    required: false
    type: string
    default: "4.12.0"
  environment-file:
    description: Environment file to install dependencies from
    required: false
    type: string
    default: ""

runs:
  using: composite
  steps:
    # Use the same trick from https://github.com/marketplace/actions/setup-miniconda
    # to refresh the cache daily. This is kind of optional though
    - name: Get date
      id: get-date
      shell: bash
      run: echo "::set-output name=today::$(/bin/date -u '+%Y%m%d')d"
    - name: Setup miniconda cache
      id: miniconda-cache
      uses: actions/cache@v2
      with:
        path: ${{ runner.temp }}/miniconda
        key: miniconda-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}
    - name: Install miniconda (${{ inputs.miniconda-version }})
      if: steps.miniconda-cache.outputs.cache-hit != 'true'
      env:
        MINICONDA_VERSION: ${{ inputs.miniconda-version }}
      shell: bash -l {0}
      run: |
        MINICONDA_INSTALL_PATH="${RUNNER_TEMP}/miniconda"
        mkdir -p "${MINICONDA_INSTALL_PATH}"
        case ${RUNNER_OS}-${RUNNER_ARCH} in
          Linux-X64)
            MINICONDA_ARCH="Linux-x86_64"
            ;;
          macOS-ARM64)
            MINICONDA_ARCH="MacOSX-arm64"
            ;;
          macOS-X64)
            MINICONDA_ARCH="MacOSX-x86_64"
            ;;
          *)
            echo "::error::Platform ${RUNNER_OS}-${RUNNER_ARCH} currently unsupported using this action"
            exit 1
            ;;
        esac
        MINICONDA_URL="https://repo.anaconda.com/miniconda/Miniconda3-py39_${MINICONDA_VERSION}-${MINICONDA_ARCH}.sh"
        curl -fsSL "${MINICONDA_URL}" -o "${MINICONDA_INSTALL_PATH}/miniconda.sh"
        bash "${MINICONDA_INSTALL_PATH}/miniconda.sh" -b -u -p "${MINICONDA_INSTALL_PATH}"
        rm -rf "${MINICONDA_INSTALL_PATH}/miniconda.sh"
    - name: Update GitHub path to include miniconda install
      shell: bash
      run: |
        MINICONDA_INSTALL_PATH="${RUNNER_TEMP}/miniconda"
        echo "${MINICONDA_INSTALL_PATH}/bin" >> $GITHUB_PATH
    - name: Setup miniconda env cache (with env file)
      id: miniconda-env-cache-env-file
      if: ${{ runner.os }} == 'macOS' && ${{ inputs.environment-file }} != ''
      uses: actions/cache@v2
      with:
        path: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
        key: miniconda-env-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}-${{ hashFiles(inputs.environment-file) }}
    - name: Setup miniconda env cache (without env file)
      id: miniconda-env-cache
      if: ${{ runner.os }} == 'macOS' && ${{ inputs.environment-file }} == ''
      uses: actions/cache@v2
      with:
        path: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
        key: miniconda-env-${{ runner.os }}-${{ runner.arch }}-${{ inputs.python-version }}-${{ steps.get-date.outputs.today }}
    - name: Setup conda environment with python (v${{ inputs.python-version }})
      if: steps.miniconda-env-cache-env-file.outputs.cache-hit != 'true' && steps.miniconda-env-cache.outputs.cache-hit != 'true'
      shell: bash
      env:
        PYTHON_VERSION: ${{ inputs.python-version }}
        ENV_FILE: ${{ inputs.environment-file }}
      run: |
        CONDA_BASE_ENV="${RUNNER_TEMP}/conda-python-${PYTHON_VERSION}"
        ENV_FILE_FLAG=""
        if [[ -f "${ENV_FILE}" ]]; then
          ENV_FILE_FLAG="--file ${ENV_FILE}"
        elif [[ -n "${ENV_FILE}" ]]; then
          echo "::warning::Specified env file (${ENV_FILE}) not found, not going to include it"
        fi
        conda create \
          --yes \
          --prefix "${CONDA_BASE_ENV}" \
          "python=${PYTHON_VERSION}" \
          ${ENV_FILE_FLAG} \
          cmake=3.22 \
          conda-build=3.21 \
          ninja=1.10 \
          pkg-config=0.29 \
          wheel=0.37
    - name: Clone the base conda environment and update GitHub env
      shell: bash
      env:
        PYTHON_VERSION: ${{ inputs.python-version }}
        CONDA_BASE_ENV: ${{ runner.temp }}/conda-python-${{ inputs.python-version }}
      run: |
        CONDA_ENV="${RUNNER_TEMP}/conda_environment_${GITHUB_RUN_ID}"
        conda create \
          --yes \
          --prefix "${CONDA_ENV}" \
          --clone "${CONDA_BASE_ENV}"
        # TODO: conda-build could not be cloned because it hardcodes the path, so it
        # could not be cached
        conda install --yes -p ${CONDA_ENV} conda-build=3.21
        echo "CONDA_ENV=${CONDA_ENV}" >> "${GITHUB_ENV}"
        echo "CONDA_RUN=conda run -p ${CONDA_ENV} --no-capture-output" >> "${GITHUB_ENV}"
        echo "CONDA_BUILD=conda run -p ${CONDA_ENV} conda-build" >> "${GITHUB_ENV}"
        echo "CONDA_INSTALL=conda install -p ${CONDA_ENV}" >> "${GITHUB_ENV}"
    - name: Get disk space usage and throw an error for low disk space
      shell: bash
      run: |
        echo "Print the available disk space for manual inspection"
        df -h
        # Set the minimum requirement space to 4GB
        MINIMUM_AVAILABLE_SPACE_IN_GB=4
        MINIMUM_AVAILABLE_SPACE_IN_KB=$(($MINIMUM_AVAILABLE_SPACE_IN_GB * 1024 * 1024))
        # Use KB to avoid floating point warning like 3.1GB
        df -k | tr -s ' ' | cut -d' ' -f 4,9 | while read -r LINE;
        do
          AVAIL=$(echo $LINE | cut -f1 -d' ')
          MOUNT=$(echo $LINE | cut -f2 -d' ')
          if [ "$MOUNT" = "/" ]; then
            if [ "$AVAIL" -lt "$MINIMUM_AVAILABLE_SPACE_IN_KB" ]; then
              echo "There is only ${AVAIL}KB free space left in $MOUNT, which is less than the minimum requirement of ${MINIMUM_AVAILABLE_SPACE_IN_KB}KB. Please help create an issue to PyTorch Release Engineering via https://github.com/pytorch/test-infra/issues and provide the link to the workflow run."
              exit 1;
            else
              echo "There is ${AVAIL}KB free space left in $MOUNT, continue"
            fi
          fi
        done
58 changes: 53 additions & 5 deletions .github/workflows/pr_tests.yml
@@ -1,4 +1,4 @@
name: Run non-slow tests
name: Run fast tests

on:
  pull_request:
@@ -10,14 +10,14 @@ concurrency:
  cancel-in-progress: true

env:
  HF_HOME: /mnt/cache
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  PYTEST_TIMEOUT: 60
  MPS_TORCH_VERSION: 1.13.0

jobs:
  run_tests_cpu:
    name: Diffusers tests
    name: CPU tests on Ubuntu
    runs-on: [ self-hosted, docker-gpu ]
    container:
      image: python:3.7
@@ -39,7 +39,7 @@ jobs:
        run: |
          python utils/print_env.py
      - name: Run all non-slow selected tests on CPU
      - name: Run all fast tests on CPU
        run: |
          python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=tests_torch_cpu tests/
@@ -51,5 +51,53 @@ jobs:
        if: ${{ always() }}
        uses: actions/upload-artifact@v2
        with:
          name: pr_torch_test_reports
          name: pr_torch_cpu_test_reports
          path: reports

  run_tests_apple_m1:
    name: MPS tests on Apple M1
    runs-on: [ self-hosted, apple-m1 ]

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Clean checkout
        shell: arch -arch arm64 bash {0}
        run: |
          git clean -fxd
      - name: Setup miniconda
        uses: ./.github/actions/setup-miniconda
        with:
          python-version: 3.9

      - name: Install dependencies
        shell: arch -arch arm64 bash {0}
        run: |
          ${CONDA_RUN} python -m pip install --upgrade pip
          ${CONDA_RUN} python -m pip install -e .[quality,test]
          ${CONDA_RUN} python -m pip install --pre torch==${MPS_TORCH_VERSION} --extra-index-url https://download.pytorch.org/whl/test/cpu
      - name: Environment
        shell: arch -arch arm64 bash {0}
        run: |
          ${CONDA_RUN} python utils/print_env.py
      - name: Run all fast tests on MPS
        shell: arch -arch arm64 bash {0}
        run: |
          ${CONDA_RUN} python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=tests_torch_mps tests/
      - name: Failure short reports
        if: ${{ failure() }}
        run: cat reports/tests_torch_mps_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v2
        with:
          name: pr_torch_mps_test_reports
          path: reports
28 changes: 12 additions & 16 deletions README.md
@@ -210,14 +210,16 @@ You can also run this example on colab [![Open In Colab](https://colab.research.

### In-painting using Stable Diffusion

The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and text prompt.
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and a text prompt. It uses a model optimized for this particular task, whose license you need to accept before use.

```python
from io import BytesIO
Please visit the [model card](https://huggingface.co/runwayml/stable-diffusion-inpainting), read the license carefully and tick the checkbox if you agree. Note that this is an additional license: you need to accept it even if you accepted the text-to-image Stable Diffusion license in the past. You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section](https://huggingface.co/docs/hub/security-tokens) of the documentation.

import torch
import requests

```python
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

@@ -231,21 +233,15 @@ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

device = "cuda"
model_id_or_path = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    model_id_or_path,
    revision="fp16",
    "runwayml/stable-diffusion-inpainting",
    revision="fp16",
    torch_dtype=torch.float16,
)
# or download via git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
# and pass `model_id_or_path="./stable-diffusion-v1-4"`.
pipe = pipe.to(device)

prompt = "a cat sitting on a bench"
images = pipe(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
pipe = pipe.to("cuda")

images[0].save("cat_on_bench.png")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
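
The checkpoint above is gated behind the license acceptance described earlier, so `from_pretrained` only downloads it for an authenticated user. As a minimal sketch (assuming the `huggingface_hub` package installed alongside `diffusers` and its `login` helper; the token string is a placeholder), authentication can be done once before running the example:

```python
# Sketch: authenticate so the gated runwayml/stable-diffusion-inpainting
# weights can be downloaded. Create a token at https://huggingface.co/settings/tokens.
from huggingface_hub import login

login(token="hf_your_token_here")  # placeholder token; running `huggingface-cli login` once works too
```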

### Tweak prompts reusing seeds and latents
20 changes: 10 additions & 10 deletions docs/source/api/pipelines/overview.mdx
@@ -151,10 +151,10 @@ You can generate your own latents to reproduce results, or tweak your prompt on
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and text prompt.

```python
from io import BytesIO

import requests
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

@@ -170,15 +170,15 @@ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

device = "cuda"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16
).to(device)

prompt = "a cat sitting on a bench"
images = pipe(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
"runwayml/stable-diffusion-inpainting",
revision="fp16",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

images[0].save("cat_on_bench.png")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
37 changes: 26 additions & 11 deletions docs/source/using-diffusers/inpaint.mdx
@@ -12,13 +12,19 @@ specific language governing permissions and limitations under the License.

# Text-Guided Image-Inpainting

The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and text prompt.
The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion specifically trained for in-painting tasks.

```python
from io import BytesIO
<Tip warning={true}>
Note that this model is distributed separately from the regular Stable Diffusion model, so you have to accept its license even if you accepted the Stable Diffusion one in the past.

import requests
Please visit the [model card](https://huggingface.co/runwayml/stable-diffusion-inpainting), read the license carefully and tick the checkbox if you agree. You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section](https://huggingface.co/docs/hub/security-tokens) of the documentation.
</Tip>

```python
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

@@ -34,15 +40,24 @@ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

device = "cuda"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16
).to(device)
"runwayml/stable-diffusion-inpainting",
revision="fp16",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

prompt = "a cat sitting on a bench"
images = pipe(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
`image` | `mask_image` | `prompt` | **Output** |
:-------------------------:|:-------------------------:|:-------------------------:|-------------------------:|
<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="250"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="250"/> | ***Face of a yellow cat, high resolution, sitting on a park bench*** | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="250"/> |

images[0].save("cat_on_bench.png")
```

You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)

<Tip warning={true}>
A previous experimental implementation of in-painting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn't contain the new model will still apply the old in-painting method.
</Tip>
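
For contrast with the Tip above, the superseded example looked roughly like the following. This is a sketch reconstructed from the lines removed in this commit (the `download_image` helper is written out here so the snippet is self-contained); `init_image` and `strength` belong to the legacy call signature, not the new one:

```python
# Sketch of the legacy, lower-quality in-painting path referenced in the Tip above,
# reconstructed from the example removed in this commit: it loads the plain
# text-to-image checkpoint and passes `init_image` plus a `strength` argument.
from io import BytesIO

import requests
import torch
from PIL import Image

from diffusers import StableDiffusionInpaintPipeline


def download_image(url):
    # Small helper so the sketch is self-contained.
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a cat sitting on a bench"
# `strength` (0-1) sets how much noise is added to the input image before denoising.
images = pipe(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images
images[0].save("cat_on_bench.png")
```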