
Tensor must be cuda and dense #199

Open
bibibabibo26 opened this issue May 28, 2024 · 1 comment

Comments

@bibibabibo26
Hello, when running main_finetune.py, execution reaches line 238:

```python
for param in fsdp_ignored_parameters:
    dist.broadcast(param.data, src=dist.get_global_rank(fs_init.get_data_parallel_group(), 0),
                   group=fs_init.get_data_parallel_group())
```

and it throws a runtime error:

```
Exception occurred: RuntimeError
Tensors must be CUDA and dense
  File "/amax/yt26/.conda/envs/accessory/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1570, in broadcast
    work = group.broadcast([tensor], opts)
  File "/amax/yt26/.conda/envs/accessory/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
    return func(*args, **kwargs)
  File "/amax/yt26/VCM/LLaMA2-Accessory/accessory/main_finetune.py", line 238, in main
    dist.broadcast(param.data, src=dist.get_global_rank(fs_init.get_data_parallel_group(), 0),
  File "/amax/yt26/VCM/LLaMA2-Accessory/accessory/main_finetune.py", line 369, in <module>
    main(args)
RuntimeError: Tensors must be CUDA and dense
```

How can I deal with that? Thank you.
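For context: NCCL-backed collectives such as `dist.broadcast` require the tensor to live on a CUDA device and to use a dense (strided) layout, so this error usually means one of the ignored parameters is still on CPU (or sparse) when the broadcast runs. A minimal sketch of a guard that could be applied to each parameter before broadcasting (the helper name `ensure_broadcastable` is hypothetical, not part of LLaMA2-Accessory):

```python
import torch

def ensure_broadcastable(param: torch.Tensor) -> torch.Tensor:
    """Return a dense tensor, moved to the current CUDA device when one
    is available, so it satisfies dist.broadcast's NCCL requirements."""
    t = param.data
    if t.is_sparse:
        # NCCL collectives reject sparse layouts; densify first.
        t = t.to_dense()
    if not t.is_cuda and torch.cuda.is_available():
        # Broadcast requires the tensor on a CUDA device under NCCL.
        t = t.cuda()
    return t
```

Under this sketch, the loop would become `dist.broadcast(ensure_broadcastable(param), ...)`; in practice the cleaner fix is to make sure the model (including FSDP-ignored parameters) is moved to the GPU before the broadcast loop runs.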

@bibibabibo26
Author

I found the reason: I hadn't configured the -quat option.
