
help please #2163

Open
royortegaphoto opened this issue Jun 29, 2024 · 0 comments

First time here: I just installed SHARK and this is my first run. Thanks to all!

Here is the log:

shark_tank local cache is located at C:\Users\User\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
gradio temporary image cache located at C:\Users\User\Documents\ia\shark_tmp/gradio. You may change this by setting the GRADIO_TEMP_DIR environment variable.
No temporary images files to clear.
vulkan devices are available.
metal devices are not available.
cuda devices are not available.
rocm devices are available.
local-sync devices are available.
local-task devices are available.
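(For reference, the two cache locations named above can be overridden exactly as the log says. A minimal sketch; the launch-script path is an assumption about a typical SHARK checkout, so adjust it to however you start the UI:)

```shell
# Override gradio's temporary image cache (env var named in the log above)
export GRADIO_TEMP_DIR="$HOME/shark_tmp/gradio"
# Then launch with the model-cache flag; the entry-point path below is assumed:
#   python apps/stable_diffusion/web/index.py --local_tank_cache="$HOME/shark_tank"
echo "GRADIO_TEMP_DIR is $GRADIO_TEMP_DIR"
```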
IMPORTANT: You are using gradio version 3.44.3, however version 4.29.0 is available, please upgrade.

Running on local URL: http://0.0.0.0:8080


To create a public link, set share=True in launch().
Found device AMD Radeon(TM) Graphics. Using target triple rdna2-unknown-windows.
Using tuned models for stabilityai/stable-diffusion-2-1-base(fp16) on device vulkan://00000000-0300-0000-0000-000000000000.
scheduler/scheduler_config.json: 100%|█████████████████████████████████████████████████| 346/346 [00:00<00:00, 352kB/s]
huggingface_hub\file_download.py:138: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\User\.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
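(Side note, in case anyone else hits the symlink warning above: the warning names its own off-switch. A minimal sketch; this only silences the message, it does not enable symlinks — for that you need Developer Mode as the message says:)

```python
import os

# Must be set before huggingface_hub is imported; caching still works,
# it just duplicates files on disk instead of symlinking them.
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"
```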
saving euler_scale_model_input_1_512_512_vulkan_fp16_torch_linalg.mlir to C:\Users\User\AppData\Local\Temp
No vmfb found. Compiling and saving to C:\Users\User\Documents\ia\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb
Configuring for device:vulkan://00000000-0300-0000-0000-000000000000
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in C:\Users\User\Documents\ia\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb.
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Loading module C:\Users\User\Documents\ia\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
saving euler_step_1_512_512_vulkan_fp16_torch_linalg.mlir to C:\Users\User\AppData\Local\Temp
No vmfb found. Compiling and saving to C:\Users\User\Documents\ia\euler_step_1_512_512_vulkan_fp16.vmfb
Configuring for device:vulkan://00000000-0300-0000-0000-000000000000
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in C:\Users\User\Documents\ia\euler_step_1_512_512_vulkan_fp16.vmfb.
Loading module C:\Users\User\Documents\ia\euler_step_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
use_tuned? sharkify: True
_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base
tokenizer/vocab.json: 100%|███████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 11.3MB/s]
tokenizer/merges.txt: 100%|█████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 3.68MB/s]
tokenizer/special_tokens_map.json: 100%|██████████████████████████████████████████████████████| 460/460 [00:00<?, ?B/s]
tokenizer/tokenizer_config.json: 100%|████████████████████████████████████████████████████████| 807/807 [00:00<?, ?B/s]
text_encoder/config.json: 100%|███████████████████████████████████████████████████████████████| 613/613 [00:00<?, ?B/s]
model.safetensors: 100%|██████████████████████████████████████████████████████████| 1.36G/1.36G [04:00<00:00, 5.66MB/s]
saving clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan_torch_linalg.mlir to .
No vmfb found. Compiling and saving to C:\Users\User\Documents\ia\clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan.vmfb
Configuring for device:vulkan://00000000-0300-0000-0000-000000000000
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in C:\Users\User\Documents\ia\clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan.vmfb.
Loading module C:\Users\User\Documents\ia\clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
unet/config.json: 100%|███████████████████████████████████████████████████████████████████████| 911/911 [00:00<?, ?B/s]
diffusion_pytorch_model.safetensors: 100%|████████████████████████████████████████| 3.46G/3.46G [09:49<00:00, 5.87MB/s]
torch\fx\node.py:263: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
Loading Winograd config file from C:\Users\User\.local/shark_tank/configs\unet_winograd_vulkan.json
404 GET https://storage.googleapis.com/storage/v1/b/shark_tank/o?projection=noAcl&prefix=sd_tuned%2Fconfigs&prettyPrint=false: The specified bucket does not exist.
Retrying with a different base model configuration
mat1 and mat2 shapes cannot be multiplied (128x768 and 1024x320)
Retrying with a different base model configuration
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead
Retrying with a different base model configuration
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead
Retrying with a different base model configuration
Given groups=1, weight of size [320, 4, 3, 3], expected input[4, 7, 512, 512] to have 4 channels, but got 7 channels instead
Retrying with a different base model configuration
ERROR: Traceback (most recent call last):
File "asyncio\runners.py", line 190, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 640, in run_until_complete
File "asyncio\windows_events.py", line 321, in run_forever
File "asyncio\base_events.py", line 607, in run_forever
File "asyncio\base_events.py", line 1922, in _run_once
File "asyncio\events.py", line 80, in _run
File "gradio\queueing.py", line 431, in process_events
File "gradio\queueing.py", line 388, in call_prediction
File "gradio\route_utils.py", line 219, in call_process_api
File "gradio\blocks.py", line 1437, in process_api
File "gradio\blocks.py", line 1123, in call_function
File "gradio\utils.py", line 503, in async_iteration
File "gradio\utils.py", line 496, in __anext__
File "anyio\to_thread.py", line 33, in run_sync
File "anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
File "anyio\_backends\_asyncio.py", line 807, in run
File "gradio\utils.py", line 479, in run_sync_iterator_async
File "gradio\utils.py", line 629, in gen_wrapper
File "ui\txt2img_ui.py", line 195, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 134, in generate_images
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 235, in produce_img_latents
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 114, in load_unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 858, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 853, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 63, in check_compilation
SystemExit: Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "starlette\routing.py", line 686, in lifespan
File "uvicorn\lifespan\on.py", line 137, in receive
File "asyncio\queues.py", line 158, in get
asyncio.exceptions.CancelledError

ERROR: Exception in ASGI application
Traceback (most recent call last):
File "asyncio\runners.py", line 190, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 640, in run_until_complete
File "asyncio\windows_events.py", line 321, in run_forever
File "asyncio\base_events.py", line 607, in run_forever
File "asyncio\base_events.py", line 1922, in _run_once
File "asyncio\events.py", line 80, in _run
File "gradio\queueing.py", line 431, in process_events
File "gradio\queueing.py", line 388, in call_prediction
File "gradio\route_utils.py", line 219, in call_process_api
File "gradio\blocks.py", line 1437, in process_api
File "gradio\blocks.py", line 1123, in call_function
File "gradio\utils.py", line 503, in async_iteration
File "gradio\utils.py", line 496, in __anext__
File "anyio\to_thread.py", line 33, in run_sync
File "anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
File "anyio\_backends\_asyncio.py", line 807, in run
File "gradio\utils.py", line 479, in run_sync_iterator_async
File "gradio\utils.py", line 629, in gen_wrapper
File "ui\txt2img_ui.py", line 195, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 134, in generate_images
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 235, in produce_img_latents
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 114, in load_unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 858, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 853, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 63, in check_compilation
SystemExit: Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "uvicorn\protocols\websockets\websockets_impl.py", line 247, in run_asgi
File "uvicorn\middleware\proxy_headers.py", line 84, in __call__
File "fastapi\applications.py", line 292, in __call__
File "starlette\applications.py", line 122, in __call__
File "starlette\middleware\errors.py", line 149, in __call__
File "starlette\middleware\cors.py", line 75, in __call__
File "starlette\middleware\exceptions.py", line 68, in __call__
File "fastapi\middleware\asyncexitstack.py", line 17, in __call__
File "starlette\routing.py", line 718, in __call__
File "starlette\routing.py", line 341, in handle
File "starlette\routing.py", line 82, in app
File "fastapi\routing.py", line 324, in app
File "gradio\routes.py", line 578, in join_queue
File "asyncio\tasks.py", line 639, in sleep
asyncio.exceptions.CancelledError
Keyboard interruption in main thread... closing server.
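In case it helps whoever triages this: the errors in the "Retrying with a different base model configuration" loop look like plain tensor shape mismatches between the loaded weights and each fallback base-model config (that reading is my guess from the log, not confirmed). The first one reduces to a matmul whose inner dimensions disagree:

```python
def matmul_result_shape(a, b):
    """Shape of an (m, k) @ (k2, n) matmul, raising the same way PyTorch reports it."""
    (m, k), (k2, n) = a, b
    if k != k2:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied ({m}x{k} and {k2}x{n})"
        )
    return (m, n)

# 768 vs. 1024 inner dimensions, exactly as in the log:
try:
    matmul_result_shape((128, 768), (1024, 320))
except ValueError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (128x768 and 1024x320)
```

The conv errors ("expected input [...] to have 4 channels, but got 9 channels") read the same way to me: a 9-channel (inpainting-style) latent fed to a UNet whose first conv expects 4 channels — again only my interpretation.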
