
[Feature] Please support Llama3.2 and Qwen2.5 #2526

Open
mihara-bot opened this issue Sep 27, 2024 · 4 comments

Comments

@mihara-bot

Motivation

https://huggingface.co/collections/meta-llama/llama-32-66f448ffc8c32f949b04c8cf
https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e

Related resources

No response

Additional context

No response

@the-nine-nation

In terms of model architecture, qwen2.5 == qwen2. I already have it running.

@mihara-bot
Author

> In terms of model architecture, qwen2.5 == qwen2. I already have it running.

Hi, but when I run Qwen 2.5 with TurboMind it throws an error saying the model is not supported.

@the-nine-nation

The PyTorch engine works; TurboMind probably needs adaptation. You could try modifying the underlying code directly, to see whether you can make the engine recognize qwen2.5 as qwen2.
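One way to sketch the workaround above: since Qwen2.5 shares Qwen2's architecture, you can point the engine at a copy of the checkpoint whose `config.json` declares the aliased model type. This is a minimal illustration only, assuming the Hugging Face-style `config.json` layout with a `model_type` field; it does not use any lmdeploy/TurboMind API, and the function name is hypothetical.

```python
# Hedged workaround sketch: rewrite config.json so the engine sees the
# model type it already supports ("qwen2"). Assumes the standard Hugging
# Face config layout; `alias_model_type` is a hypothetical helper name.
import json
from pathlib import Path


def alias_model_type(model_dir: str, alias: str = "qwen2") -> dict:
    """Set `model_type` in <model_dir>/config.json to `alias` and return the config."""
    cfg_path = Path(model_dir) / "config.json"
    cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
    cfg["model_type"] = alias  # e.g. "qwen2_5" -> "qwen2"
    cfg_path.write_text(json.dumps(cfg, indent=2), encoding="utf-8")
    return cfg
```

Run this on a local copy of the checkpoint before loading it, so the original download stays untouched; whether the engine then loads it correctly still depends on the two architectures genuinely matching.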

@mihara-bot
Author

> The PyTorch engine works; TurboMind probably needs adaptation. You could try modifying the underlying code directly, to see whether you can make the engine recognize qwen2.5 as qwen2.

Thanks for the suggestion! Until official support lands, that is the only way to do it.
