add stable diffusion from huggingface #254
Conversation
from utils.pytorch import PyTorchRunnerV2
from utils.text_to_image.stable_diffusion import StableDiffusion

if os.environ.get("ENABLE_BF16_X86") == "1":
ENABLE_BF16_X86 is currently used to enable implicit bf16 in AML. Here you are explicitly loading the model in bfloat16; create a run_pytorch_bf16 function instead.
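A minimal sketch of what the suggested separation could look like. The name run_pytorch_bf16 comes from the review comment; the body below is an assumption, using a small stand-in module instead of the actual StableDiffusion pipeline:

```python
import torch

def run_pytorch_bf16(build_model):
    # Hypothetical helper mirroring the review suggestion: rather than
    # gating explicit bfloat16 loading on ENABLE_BF16_X86 (which elsewhere
    # controls implicit bf16 in AML), cast the model to bf16 explicitly.
    model = build_model()
    return model.to(dtype=torch.bfloat16)

# Stand-in usage; a real runner would build the diffusion pipeline here.
model = run_pytorch_bf16(lambda: torch.nn.Linear(8, 8))
print(model.weight.dtype)  # torch.bfloat16
```

This keeps the environment flag's meaning (implicit casting) distinct from an explicit bf16 code path selected by the runner.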
@@ -14,7 +14,6 @@ def get_input(self):
    adjectives = ["big", "small", "thin", "wide", "blonde", "pale"]
    nouns = ["dog", "cat", "horse", "astronaut", "human", "robot"]
    actions = ["sings", "rides a triceratop", "rides a horse", "eats a burger", "washes clothes", "looks at hands"]
-   seed(42)
Why remove seed? It's good to have deterministic behavior and repeatable results.
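To illustrate the reviewer's point about determinism, here is a small sketch using the word lists from the diff. The get_prompt helper is hypothetical (the actual get_input implementation may combine the lists differently); only the seeding behavior is the point:

```python
from random import seed, choice

adjectives = ["big", "small", "thin", "wide", "blonde", "pale"]
nouns = ["dog", "cat", "horse", "astronaut", "human", "robot"]
actions = ["sings", "rides a triceratop", "rides a horse", "eats a burger",
           "washes clothes", "looks at hands"]

def get_prompt():
    # Hypothetical prompt builder combining one word from each list.
    return f"{choice(adjectives)} {choice(nouns)} {choice(actions)}"

seed(42)
first = [get_prompt() for _ in range(3)]
seed(42)
second = [get_prompt() for _ in range(3)]
assert first == second  # re-seeding reproduces the same prompt sequence
```

Without the seed(42) call, each run draws a different prompt sequence, which makes benchmark results harder to compare across runs.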
This is a draft pull request; ignore the run1.py file.
Example command to run on Ampere:
OPENBLAS_NUM_THREADS=10 AIO_IMPLICIT_FP16_TRANSFORM_FILTER=".*" AIO_NUM_THREADS=128 python run_hf.py -m stabilityai/stable-diffusion-xl-base-1.0 -b 1 --steps 25
Example command to run on Intel Sapphire:
ENABLE_BF16_X86=1 AIO_NUM_THREADS=128 DNNL_MAX_CPU_ISA=AVX512_CORE_AMX python run_hf.py -m stabilityai/stable-diffusion-xl-base-1.0 -b 1 --steps 25