Introduction of gemm4xN and gemmMx4 for Q4_0 and Q8_0 for better performance results #8908
…delta multiplication
The PR #8908 was also tested on an AMD Ryzen Threadripper PRO 5995WX machine. Test results are attached below, along with the supported flags and other details.
Performance Results on AMD Ryzen Threadripper PRO 5995WX
GCC Linux :
Mistral-7B-Instruct-v0.3 model:
Q4_0 Model :
Q8_0 Model :
GCC Version = 12.3
The machine supports the following flags by default :
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Original Unquantized Models :
Llama2 7B : https://huggingface.co/meta-llama/Llama-2-7b
I observe a 10%-15% PP speed improvement on a Ryzen 9 5950X using Gemma 2 2B models. Perplexity is unchanged.
GCC Linux :
Meta Llama2 7B model:
Q4_0 Model :
Q8_0 Model :
Mistral-7B-Instruct-v0.3 model:
Q4_0 Model :
Q8_0 Model :
GCC Version = 12.3
The PR was tested in AMD Raphael 7600X which supports the following flags by default :
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Original Unquantized Models :
Llama2 7B : https://huggingface.co/meta-llama/Llama-2-7b
Mistral 7B Instruct v0.3 : https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3