
Got "indexError:list index out of range" after one successful excution #52

Open
wtyisjoe opened this issue Oct 20, 2023 · 2 comments

@wtyisjoe

I got an error after successfully generating a GIF the first time.

It runs successfully once but errors on every attempt after that, so I have to restart ComfyUI to generate the next animation.

I have set the batch size to 16 and the KSampler worked fine.

Below is the output:

got prompt
[AnimateDiff] - INFO - Loading motion module mm_sd_v15_v2.ckpt
[AnimateDiff] - INFO - Converting motion module to fp16.
model_type EPS
adm 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
got prompt
missing {'cond_stage_model.text_projection', 'cond_stage_model.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'cond_stage_model.transformer.text_model.embeddings.position_ids', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
Requested to load SD1ClipModel
Loading 1 new model
[AnimateDiff] - INFO - Injecting motion module with method default.
E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.get(instance, owner)()
Requested to load BaseModel
Loading 1 new model
100%
E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.register_buffer('betas', torch.tensor(betas, dtype=torch.float32))
E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py:49: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
self.register_buffer('alphas_cumprod', torch.tensor(alphas_cumprod, dtype=torch.float32))
[AnimateDiff] - INFO - Ejecting motion module with method default.
Prompt executed in 315.94 seconds
[AnimateDiff] - INFO - Injecting motion module with method default.
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 1041.7423362731934
[AnimateDiff] - INFO - Ejecting motion module with method default.
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\animatediff\sampler.py", line 295, in animatediff_sample
return super().sample(
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 90, in sample
real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 81, in prepare_sampling
comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise_shape[0] * noise_shape[2] * noise_shape[3]) + inference_memory)
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 402, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 293, in model_load
device_map = accelerate.infer_auto_device_map(self.real_model, max_memory={0: "{}MiB".format(lowvram_model_memory // (1024 * 1024)), "cpu": "16GiB"})
File "E:\comfyUI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\utils\modeling.py", line 958, in infer_auto_device_map
tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param][0]
IndexError: list index out of range

Prompt executed in 4.20 seconds
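
As far as I can tell, the IndexError comes from the list comprehension quoted in the traceback, inside accelerate's infer_auto_device_map. A simplified sketch of that failing pattern, using made-up module and parameter names, is:

```python
# Sketch of the pattern from accelerate/utils/modeling.py that raises the error.
# The module and parameter names below are hypothetical, for illustration only.
modules_to_treat = [("unet.input_blocks", None), ("unet.middle_block", None)]
tied_param = "motion_module.temporal_transformer.proj.weight"

matches = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param]
# If no module name matches the tied parameter name, `matches` is empty and
# indexing it raises IndexError: list index out of range.
tied_module_index = matches[0]
```

From the log, this only happens on the second run, when the model is loaded in lowvram mode, which is the code path that goes through accelerate.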

@artventuredev
Contributor

Hmm, this looks the same as #34. Can you observe the VRAM (via Task Manager or a similar tool) to see whether it is released after the first generation?
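
If Task Manager is hard to read, a rough check from inside the same Python process (assuming PyTorch with a CUDA device) would be something like:

```python
import torch

# Snapshot of how much GPU memory this process currently holds.
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.1f} MiB")
```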

@wtyisjoe
Author

Yes, it dropped after generation but the issue still occurred. I have 4 GB of dedicated GPU memory and it dropped from nearly full to 1.6 GB. I know very little about how it works, but I don't need to remove modules from memory on A1111, which I use more often.

Edit: After closing the console, the memory dropped to 0.

If it is because of memory, how do I get rid of that?
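
For reference, the generic PyTorch way to release cached VRAM from inside a running process (not a ComfyUI-specific fix, just the standard technique) looks like this:

```python
import gc
import torch

# Drop unreferenced Python objects first, then ask PyTorch to return its
# cached, unused GPU blocks to the driver.
gc.collect()
torch.cuda.empty_cache()
```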
