
llama : add Mixtral support #4406

Merged (47 commits, merged Dec 13, 2023)

Conversation

@slaren (Collaborator) commented Dec 11, 2023

close #4381

Description

Add initial support for Mixture-of-Experts (MoE) LLM architectures.
Support for quantization and partial GPU offloading is available.

[Video: llama.cpp server running Q4_0 Mixtral-8x7B-32k on M2 Ultra]

Running Mixtral-8x7B-32k

The following instructions work with the torrent data released on Dec 8.

# download torrent data into models/mixtral-8x7b-32k
# ...

# convert to F16
python3 convert.py ./models/mixtral-8x7b-32k/ \
         --outfile ./models/mixtral-8x7b-32k/ggml-model-f16.gguf \
         --outtype f16

# quantize to Q4_0
./quantize ./models/mixtral-8x7b-32k/ggml-model-f16.gguf \
           ./models/mixtral-8x7b-32k/ggml-model-q4_0.gguf \
           q4_0

# run Q4_0 inference
./main -m ./models/mixtral-8x7b-32k/ggml-model-q4_0.gguf \
       -p "I believe the meaning of life is" \
       -ngl 999 -s 1 -n 128 -t 8

Running the Instruct model

Download and convert:

# clone
git clone https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 ./models/mixtral-instruct-8x7b

# convert to F16 GGUF
python3 convert.py ./models/mixtral-instruct-8x7b/ \
         --outfile ./models/mixtral-instruct-8x7b/ggml-model-f16.gguf \
         --outtype f16

# quantize to Q4_0
./quantize ./models/mixtral-instruct-8x7b/ggml-model-f16.gguf \
           ./models/mixtral-instruct-8x7b/ggml-model-q4_0.gguf \
           q4_0

Run it like this, for example:

./main \
  -m models/mixtral-instruct-8x7b/ggml-model-q4_0.gguf \
  -p "[INST] Prove that sqrt(2) is rational number. [/INST]" \
  --repeat_penalty 1 \
  --no-penalize-nl \
  --color --temp 0 -c 4096 -n -1 
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Ultra
ggml_metal_init: picking default device: Apple M2 Ultra
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/ggerganov/development/github/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name:   Apple M2 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 154618.82 MB
ggml_metal_init: maxTransferRate               = built-in GPU
llama_new_context_with_model: compute buffer total size = 319.35 MiB
llama_new_context_with_model: max tensor size =   102.54 MiB
ggml_metal_add_buffer: allocated 'data            ' buffer, size = 25216.62 MiB, (25218.25 / 147456.00)
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =   512.03 MiB, (25730.28 / 147456.00)
ggml_metal_add_buffer: allocated 'alloc           ' buffer, size =   316.05 MiB, (26046.33 / 147456.00)

system_info: n_threads = 16 / 24 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | 
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.000
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp 
generate: n_ctx = 4096, n_batch = 512, n_predict = -1, n_keep = 0


 [INST] Prove that sqrt(2) is rational number. [/INST] I apologize, but the statement that sqrt(2) is a rational number is false. The number sqrt(2) is an irrational number, which means it cannot be expressed as a fraction a/b, where a and b are integers with no common factors other than 1, and b is not equal to zero.

To prove that sqrt(2) is irrational, we can use proof by contradiction. Here's a simple proof:

1. Assume, to the contrary, that sqrt(2) is rational. This means it can be expressed as a fraction a/b, where a and b are integers with no common factors other than 1, and b is not equal to zero.
2. If sqrt(2) = a/b, then squaring both sides gives 2 = a^2/b^2, or a^2 = 2 * b^2.
3. This equation implies that a^2 is even (since it's 2 times an integer). But then a must also be even (since only even numbers squared result in even numbers).
4. If a is even, then we can express a as 2k for some integer k. Substituting this into the equation gives (2k)^2 = 2 * b^2, or 4k^2 = 2 * b^2, or b^2 = 2 * k^2.
5. This equation implies that b^2 is even, and so b is also even.
6. But if both a and b are even, then they share a common factor of 2. This contradicts our initial assumption that a and b have no common factors other than 1.

Since our assumption that sqrt(2) is rational leads to a contradiction, we must conclude that sqrt(2) is irrational. [end of text]

llama_print_timings:        load time =   29022.56 ms
llama_print_timings:      sample time =       9.49 ms /   404 runs   (    0.02 ms per token, 42575.61 tokens per second)
llama_print_timings: prompt eval time =     179.90 ms /    20 tokens (    9.00 ms per token,   111.17 tokens per second)
llama_print_timings:        eval time =    7522.72 ms /   403 runs   (   18.67 ms per token,    53.57 tokens per second)
llama_print_timings:       total time =    7747.22 ms

A few notes:

  • make sure to have enough context (-c 4096; it can be even more, but note that the default is only 512)
  • disable the repeat penalty (--repeat_penalty 1); without it you can see typos, misspellings and early EOS
  • disable the newline penalty (--no-penalize-nl); this might be important for code generation
  • use -p "[INST] some instruction [/INST]"; this should match the prompt template specified in the official repo

Implementation details

Supporting MoE in ggml requires the introduction of a new indirect matrix multiplication operator:

    // indirect matrix multiplication
    //  ggml_mul_mat_id(ctx, as, ids, id, b) ~= ggml_mul_mat(ctx, as[ids[id]], b)
    GGML_API struct ggml_tensor * ggml_mul_mat_id(
            struct ggml_context * ctx,
            struct ggml_tensor  * const as[],
            int                   n_as,
            struct ggml_tensor  * ids,
            int                   id,
            struct ggml_tensor  * b);

ggml_mul_mat_id allows selecting the source matrix dynamically during graph evaluation, based on the contents of the ids tensor, which can be the result of another operation. For batch evaluation, ids can contain multiple rows, and a different matrix is used to evaluate each row of the b matrix.

The current implementation is efficient for BS=1, but not so much for BS>1, since each token in the batch is evaluated separately. Improvements will follow in the future.
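
To make the data flow concrete, here is a rough sketch (not the PR's actual graph-building code) of how a Mixtral-style MoE FFN block can be assembled from this operator. The per-expert tensor arrays mirror the FFN_GATE_EXP / FFN_DOWN_EXP / FFN_UP_EXP tensors listed under "GGUF changes" below; expert selection via ggml_top_k and the omission of the per-expert probability weighting are simplifying assumptions for illustration:

```c
#include "ggml.h"

// Rough sketch only: a Mixtral-style MoE FFN block built around ggml_mul_mat_id.
// ffn_gate_inp is the router; the per-expert arrays hold n_expert tensors each.
// Expert selection via ggml_top_k and the omitted per-expert probability weighting
// are simplifying assumptions, not the PR's exact code.
static struct ggml_tensor * build_moe_ffn_sketch(
        struct ggml_context      * ctx,
        struct ggml_tensor       * cur,            // [n_embd, n_tokens] input activations
        struct ggml_tensor       * ffn_gate_inp,   // [n_embd, n_expert] router weights
        struct ggml_tensor * const ffn_gate_exp[], // per-expert FFN tensors
        struct ggml_tensor * const ffn_down_exp[],
        struct ggml_tensor * const ffn_up_exp[],
        int n_expert,
        int n_expert_used) {
    // router logits and probabilities: [n_expert, n_tokens]
    struct ggml_tensor * logits = ggml_mul_mat(ctx, ffn_gate_inp, cur);
    struct ggml_tensor * probs  = ggml_soft_max(ctx, logits);

    // indices of the n_expert_used most probable experts per token (I32 tensor)
    struct ggml_tensor * selected = ggml_top_k(ctx, probs, n_expert_used);

    struct ggml_tensor * moe_out = NULL;
    for (int i = 0; i < n_expert_used; ++i) {
        // indirect matmuls: the expert matrix applied to each row of `cur` is picked
        // at graph-eval time from slot i of the `selected` ids tensor
        struct ggml_tensor * up   = ggml_mul_mat_id(ctx, ffn_up_exp,   n_expert, selected, i, cur);
        struct ggml_tensor * gate = ggml_mul_mat_id(ctx, ffn_gate_exp, n_expert, selected, i, cur);
        struct ggml_tensor * act  = ggml_mul(ctx, ggml_silu(ctx, gate), up);
        struct ggml_tensor * out  = ggml_mul_mat_id(ctx, ffn_down_exp, n_expert, selected, i, act);

        // NOTE: the real graph also scales `out` by the expert's routing probability
        moe_out = moe_out == NULL ? out : ggml_add(ctx, moe_out, out);
    }
    return moe_out;
}
```

The same calls cover BS>1: ids then carries one selection per token and the operator picks a different matrix per row of b, with only the kernel efficiency differing as noted above.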

Quantization support

The quantize tool can be used as usual to generate quantum versions of the model.

IMPORTANT NOTE
The currently implemented quantum mixtures are a first iteration and are very likely to change in the future! Please acknowledge that and be prepared to re-quantize or re-download the models in the near future!

Current quantum mixtures:

  • The FFN tensors are quantized using the selected type
  • F16 gating tensors (blk.{bid}.ffn_gate_inp)
  • Q8_0 KV tensors (blk.{bid}.attn_k.weight, blk.{bid}.attn_v.weight)

mixtral-q4_0-types.txt

GGUF changes

  • add Keys.LLM.EXPERT_COUNT = "{arch}.expert_count"
  • add Keys.LLM.EXPERT_USED_COUNT = "{arch}.expert_used_count"
  • add 4 new tensor names:
    • FFN_GATE_INP
    • FFN_GATE_EXP
    • FFN_DOWN_EXP
    • FFN_UP_EXP
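
Since these are ordinary GGUF metadata keys, they can be read back through the public llama.cpp API once a model has been converted. Below is a small sketch (assumed usage, not part of this PR) that prints the two new keys via llama_model_meta_val_str from the metadata getters of #4013; the model path is a placeholder and the llama_backend_init(bool) signature reflects the API at the time of this PR:

```c
#include <stdio.h>
#include "llama.h"

int main(void) {
    llama_backend_init(false); // NUMA off; signature as of this PR

    // vocab/metadata only: no need to load the (large) tensor data for this check
    struct llama_model_params mparams = llama_model_default_params();
    mparams.vocab_only = true;

    struct llama_model * model = llama_load_model_from_file(
            "./models/mixtral-8x7b-32k/ggml-model-q4_0.gguf", mparams); // placeholder path
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    char buf[64];
    if (llama_model_meta_val_str(model, "llama.expert_count", buf, sizeof(buf)) >= 0) {
        printf("llama.expert_count      = %s\n", buf); // 8 for Mixtral-8x7B
    }
    if (llama_model_meta_val_str(model, "llama.expert_used_count", buf, sizeof(buf)) >= 0) {
        printf("llama.expert_used_count = %s\n", buf); // 2 experts used per token
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```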

TODOs

ggerganov and others added 30 commits December 9, 2023 10:51
sfxworks added a commit to sfxworks/LocalAI that referenced this pull request Dec 15, 2023
In reference to ggerganov/llama.cpp#4406

Need a newer version of llama.cpp to handle MoE models, such as Mixtral 8x7b

Signed-off-by: Samuel Walker <sfxworks@gmail.com>
@jxy (Contributor) commented Dec 15, 2023

I'm not sure how @capdevc tested it, but I got answers that stopped short from the API, too.

$ curl --location "https://api.mistral.ai/v1/chat/completions" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $(GET_KEY_OBFUSCATED)" \
     --data '{
    "model": "mistral-small",
    "temperature": 0,
    "max_tokens": 1024,
    "messages": [
     {
        "role": "user",
        "content": "write a pong game using python make sure I can use the arrow keys to move."
      }
    ]
  }'                                      
{"id":"ID_OBFUSCATED","object":"chat.completion","created":1702657792,"model":"mistral-small","choices":[{"index":0,"message":{"role":"assistant","content":"Here is a simple implementation of Pong using the `curses` library in Python. This version allows the player to control the paddle using the arrow keys.\n```\nimport curses\n\n"},"finish_reason":"stop"}],"usage":{"prompt_tokens":26,"total_tokens":68,"completion_tokens":42}}

@ggerganov (Owner)

I'm not sure how @capdevc tested it

HF chat either has a system prompt or does not use temp = 0, so the test is invalid. Your results confirm that it's not a llama.cpp problem but model behaviour.

@capdevc commented Dec 15, 2023

@jxy @ggerganov

Just confirmed that I called the API with a non-zero temp due to a screwup with some defaults handling on my part. Replicating with curl directly, I get the same result you two did. Apologies for the noise.

Also, calling the Q8 via llama.cpp directly gives the same truncated result:

./main -m ../../../Models/mixtral-8x7b-instruct-v0.1.Q8_0.gguf --color --temp 0 --repeat_penalty 1 -c 32768 -n -1 -p "[INST] write a pong game using python make sure I can use the arrow keys to move. [/INST]"

gives

 [INST] write a pong game using python make sure I can use the arrow keys to move. [/INST] Here is a simple implementation of Pong using the `curses` library in Python. This version allows the player to control the paddle using the arrow keys.
```
import curses

 [end of text]

@Rotatingxenomorph

@jxy @ggerganov

Just confirmed that I called the API with a non-zero temp due to a screwup with some defaults handling on my part. Replicating with curl directly, I get the same result you two did. Apologies for the noise.

Also, calling the Q8 via llama.cpp directly gives the same truncated result:

./main -m ../../../Models/mixtral-8x7b-instruct-v0.1.Q8_0.gguf --color --temp 0 --repeat_penalty 1 -c 32768 -n -1 -p "[INST] write a pong game using python make sure I can use the arrow keys to move. [/INST]"

gives

 [INST] write a pong game using python make sure I can use the arrow keys to move. [/INST] Here is a simple implementation of Pong using the `curses` library in Python. This version allows the player to control the paddle using the arrow keys.

import curses

[end of text]

That's weird. The Q8 works for me, and the sha256 is correct.

@Rotatingxenomorph commented Dec 16, 2023

I think my issue was with the latest AVX2 build of llama.cpp for Windows.

It acts completely differently (way worse) than the cuBLAS version I have:
main: build = 1629 (799a1cb)
main: built with MSVC 19.37.32826.1 for x64

Edit: the latest cuBLAS build works okay.

For this prompt:

mixtral-8x7b-instruct-v0.1.Q8_0.gguf --top-p 1 --color -t 5 --temp 3 --repeat_penalty 1.2 -c 4096 -n -1 --min-p 0.050 -s 1702748009 -p "[INST] --------------------------------------------------

You are an expert in analogies.

Marathon is to race as hibernation is to

winter

bear

dream

sleep

Think this through logically step by step. [/INST]"

@brozkrut

M2 Max Studio, 8+4 CPU, 38 GPU, 96 GB - Mixtral 8x 7B Instruct

I ran some benchmarks for Mixtral Instruct - here are the results:

| Model (Build: 9fb13f9) | Size | PPL (final estimate) | pp 512 (t/s) | tg 128 (t/s) |
| --- | --- | --- | --- | --- |
| llama 7B mostly Q2_K | 14.57 GiB | 7.0648 +/- 0.04209 | 75.22 ± 0.01 | 39.89 ± 0.03 |
| llama 7B mostly Q3_K - Medium | 18.96 GiB | 4.6644 +/- 0.02530 | 41.19 ± 0.00 | 28.35 ± 0.01 |
| llama 7B mostly Q4_K - Medium | 24.62 GiB | 4.5111 +/- 0.02423 | 67.72 ± 0.01 | 34.72 ± 0.03 |
| llama 7B mostly Q4_0 | 24.62 GiB | 4.5087 +/- 0.02428 | 89.41 ± 0.01 | 36.66 ± 0.05 |
| llama 7B mostly Q5_K - Medium | 30.02 GiB | 4.4396 +/- 0.02379 | 42.19 ± 0.00 | 26.06 ± 0.02 |
| llama 7B mostly Q5_0 | 30.02 GiB | 4.4345 +/- 0.02373 | 44.15 ± 0.00 | 28.43 ± 0.02 |
| llama 7B mostly Q6_K | 35.74 GiB | 4.424 +/- 0.02368 | 40.27 ± 0.01 | 25.44 ± 0.01 |
| llama 7B mostly Q8_0 | 46.22 GiB | 4.4088 +/- 0.02353 | 67.48 ± 0.01 | 22.87 ± 0.01 |

Prompt Processing: [chart]

Text Generation: [chart]


% ./main --version
version: 1631 (9fb13f9)
built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.1.0

# https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/tree/main
# https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip?ref=salesforce-research
% shasum -a 256 mixtral-8x7b-instruct-v0.1.Q* wikitext-2-raw/wiki.test.raw
d54b4f4ec06dbae558d25b2d1542417cdf9547907342db85eecd05b6e96e88f8  mixtral-8x7b-instruct-v0.1.Q2_K.gguf
bd2e1499e68195f1a6ff151e6fa5c6632acc150b80cca4a3772cbb7ca59d44cd  mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf
0c57465507f21bed4364fca37efd310bee92e25a4ce4f5678ef9b44e95830e4e  mixtral-8x7b-instruct-v0.1.Q4_0.gguf
9193684683657e90707087bd1ed19fd0b277ab66358d19edeadc26d6fdec4f53  mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
6c4dd1082dfa8d89f901039c7095f8b4343dca7a6782c82a72decb6a44475803  mixtral-8x7b-instruct-v0.1.Q5_0.gguf
af12961e014037ee8c5c9f3bf7cf9fd99cadc9dabd50f528a4248c4a8ee8fe77  mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf
56638f9853b8fff80ac1fd4a91434a1c15c21d4c910811c5458df9ef092615fd  mixtral-8x7b-instruct-v0.1.Q6_K.gguf
cdca4a8c09dfd722702f781d479695cda0d45e1bd1cd602ba1b6085ad921fc5f  mixtral-8x7b-instruct-v0.1.Q8_0.gguf
173c87a53759e0201f33e0ccf978e510c2042d7f2cb78229d9a50d79b9e7dd08  wikitext-2-raw/wiki.test.raw

% ./perplexity -m models/mixtral-8x7b-32k/mixtral-8x7b-instruct-v0.1.Q4_0.gguf -f wikitext-2-raw/wiki.test.raw

% ./llama-bench \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q2_K.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q4_0.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q5_0.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q6_K.gguf \
  -m ./models/mixtral-8x7b-instruct-v0.1.Q8_0.gguf \
  -p 512 -n 128 -ngl 99 2> /dev/null

@shuhongwu commented Dec 17, 2023

Just tested Mixtral 8x7B on an M2 Ultra 192GB with Q4_K_M; thanks for your hard work.
https://twitter.com/AlexWuKing/status/1736247587404210600
I also tested Q8 on the M2 Ultra, and I can confidently say that Mixtral 8x7B with Q8 can beat ChatGPT 3.5.

@moshemalawach

Is any BLAS backend available to speed up context processing with this model (cuBLAS/CLBlast/OpenBLAS)? I haven't been able to get it to work.

@toncho11 commented Dec 19, 2023

I have only 16 GB of memory.
Would it work on CPU? My GPU is only 4 GB.

@teleprint-me (Contributor)

@toncho11 Yes, you'll be fine. The full 32k context might not fit, though; something to keep in mind. Play around with the values and see what works for you.

teleprint-me pushed a commit to teleprint-me/llama.cpp that referenced this pull request Dec 21, 2023
* convert : support Mixtral as LLAMA arch

* convert : fix n_ff typo

* llama : model loading

* ggml : sync latest ggml_mul_mat_id

* llama : update graph to support MoE

* llama : fix cur -> cur_expert

* llama : first working version

* llama : fix expert weighting in the FFN

* ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)

* ggml : add n_as argument to ggml_mul_mat_id

* ggml : fix ggml_get_rows to take into account ne02 / ne11

* metal : add more general support for ggml_get_rows + tests

* llama : add basic support for offloading moe with CUDA

* metal : add/mul/div use general kernel when src1 not cont

* metal : reduce the kernel launches for ggml_mul_mat_id

* ggml : get_rows : support non-contiguos tensors with gaps, generalize up to 3D

* ggml : update get_rows f16 and q

* cuda : support non-contiguous src1 in get_rows

* llama : offload missing ffn_moe_silu

* metal : fix ggml_get_rows to work with non-cont src1

* metal : add indirect mat-vec kernels for all quantization types

* llama : do not quantize expert gating tensors

* llama : add n_expert and n_expert_used to hparams + change quants

* test-backend-ops : add moe test

* cuda : fix get_rows when ncols is odd

* convert : determine n_ctx correctly

* metal : fix ggml_mul_mat_id for F32

* test-backend-ops : make experts more evenly probable (test_moe)

* test-backend-ops : cleanup, add moe test for batches

* test-backend-ops : add cpy from f32 -> all types test

* test-backend-ops : fix dequantize block offset

* llama : fix hard-coded number of experts

* test-backend-ops : simplify and disable slow tests to avoid CI timeout

* test-backend-ops : disable MOE test with thread sanitizer

* cuda : fix mul_mat_id with multi gpu

* convert : use 1e6 rope_freq_base for mixtral

* convert : fix style

* convert : support safetensors format

* gguf-py : bump version

* metal : add cpy f16 -> f32 kernel

* metal : fix binary ops for ne10 % 4 != 0

* test-backend-ops : add one more sum_rows test

* ggml : do not use BLAS with ggml_mul_mat_id

* convert-hf : support for mixtral-instruct (ggerganov#4428)

* convert : typo fix, add additional hyperparameters, use LLaMA arch for Mixtral-instruct

* convert : use sentencepiece tokenizer for Mixtral-instruct

* convert : make flake8 happy

* metal : fix soft_max kernels

ref: ggerganov/ggml@1914017

* metal : limit kernels to not use more than the allowed threads

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Radek Pilar <github@mrkva.eu>
@Limezero

Could anyone clarify what the current state of Mixtral support in llama.cpp is?

I've seen a ton of conflicting information out there saying that the models are broken, k-quants have unusually high perplexity so we shouldn't use them, BLAS and prompt processing acceleration doesn't work, GPU support doesn't work, Mixtral generates worse results than a 7B, etc., so I'm hoping someone knowledgeable and up-to-date on this stuff could chip in.

@eugenepyvovarov

Could anyone clarify what the current state of Mixtral support in llama.cpp is?

I've seen a ton of conflicting information out there saying that the models are broken, k-quants have unusually high perplexity so we shouldn't use them, BLAS and prompt processing acceleration doesn't work, GPU support doesn't work, Mixtral generates worse results than a 7B, etc., so I'm hoping someone knowledgeable and up-to-date on this stuff could chip in.

Works great on an M1 Ultra 128GB, quantized.

@ggerganov (Owner)

I've seen a ton of conflicting information out there saying that ...

Whatever information you see out there, you can safely assume it is wrong unless a specific example is provided. There are many little details involved in using LLMs correctly, and the chance of getting something wrong is very high. My advice is to do your own tests and draw your own conclusions.

@Limezero

Whatever information you see out there, you can safely assume it is wrong unless a specific example is provided. There are many little details involved in using LLMs correctly, and the chance of getting something wrong is very high. My advice is to do your own tests and draw your own conclusions.

Are there no obvious "oh yeah we know X is broken/not implemented/performs worse than it should, someone is working on a PR" caveats with Mixtral support currently, then?

@ggerganov (Owner)

Other than what is already written in the OP, no.

@shuhongwu

Could anyone clarify what the current state of Mixtral support in llama.cpp is?
I've seen a ton of conflicting information out there saying that the models are broken, k-quants have unusually high perplexity so we shouldn't use them, BLAS and prompt processing acceleration doesn't work, GPU support doesn't work, Mixtral generates worse results than a 7B, etc., so I'm hoping someone knowledgeable and up-to-date on this stuff could chip in.

Works great on an M1 Ultra 128GB, quantized.

Yes, I tried it on an M2 Ultra with 192GB of unified memory (almost 150GB of it usable as GPU memory), tested Mixtral 8x7B Q8, and it gives me the impression that it is comparable to ChatGPT 3.5.

ggerganov mentioned this pull request Jan 6, 2024
@clemens98

Could anyone clarify what the current state of Mixtral support in llama.cpp is?

I've seen a ton of conflicting information out there saying that the models are broken, k-quants have unusually high perplexity so we shouldn't use them, BLAS and prompt processing acceleration doesn't work, GPU support doesn't work, Mixtral generates worse results than a 7B, etc., so I'm hoping someone knowledgeable and up-to-date on this stuff could chip in.

For me it matches perfectly:
K-quants perform extremely poorly (Q5_K_M was worse than Mistral 7B)
Q4_0 quantization works very well as long as GPU offloading is set to zero
CLBlast doesn't work
Don't know if ROCm works

@pudepiedj (Contributor)

Could anyone clarify what the current state of Mixtral support in llama.cpp is?
I've seen a ton of conflicting information out there saying that the models are broken, k-quants have unusually high perplexity so we shouldn't use them, BLAS and prompt processing acceleration doesn't work, GPU support doesn't work, Mixtral generates worse results than a 7B, etc., so I'm hoping someone knowledgeable and up-to-date on this stuff could chip in.

For me it matches perfectly: K-quants perform extremely poorly (Q5_K_M was worse than Mistral 7B), Q4_0 quantization works very well as long as GPU offloading is set to zero, CLBlast doesn't work, don't know if ROCm works

I have been running mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf on a 32GB M2 Max by increasing the GPU allocation with the sudo sysctl iogpu.wired_limit_mb=27500 trick and -ngl 99, with -c 4096 and -i -ins, and the performance has been very good. Slightly less good using -p prompting in a Python loop. Speed is around 24 t/s and the quality of responses is high. Here's what I use:

./bin/main -m ../models/Mixtral-8x7b/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 4096 -ngl 99 -n -1 -s 1 -i -ins -ctk q8_0 --override-kv llama.expert_used_count=int:3

@clemens98 commented Jan 7, 2024

I have been running mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf on a 32GB M2 Max by increasing the GPU allocation with the sudo sysctl iogpu.wired_limit_mb=27500 trick and -ngl 99, with -c 4096 and -i -ins, and the performance has been very good. Slightly less good using -p prompting in a Python loop. Speed is around 24 t/s and the quality of responses is high. Here's what I use:

./bin/main -m ../models/Mixtral-8x7b/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 4096 -ngl 99 -n -1 -s 1 -i -ins -ctk q8_0 --override-kv llama.expert_used_count=int:3

Is that CLBlast or the Apple equivalent?
I am using an RX 7900 XT and a Ryzen 5600.

The iogpu wired limit seems to be an Apple thing.

@pudepiedj (Contributor)

I have been running mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf on a 32GB M2 Max by increasing the GPU allocation with the sudo sysctl iogpu.wired_limit_mb=27500 trick and -ngl 99, with -c 4096 and -i -ins, and the performance has been very good. Slightly less good using -p prompting in a Python loop. Speed is around 24 t/s and the quality of responses is high. Here's what I use:

./bin/main -m ../models/Mixtral-8x7b/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 4096 -ngl 99 -n -1 -s 1 -i -ins -ctk q8_0 --override-kv llama.expert_used_count=int:3

Is that CLBlast or the Apple equivalent? I am using an RX 7900 XT and a Ryzen 5600.

The iogpu wired limit seems to be an Apple thing.

Yes, this is on Apple M2 Max silicon using the ggml Metal implementation.

@clemens98

Someone on Reddit said Mixtral acceleration isn't supported on AMD GPUs. Is that true?

@pudepiedj (Contributor)

Someone on Reddit said Mixtral acceleration isn't supported on AMD GPUs. Is that true?

Short answer, at least on my hardware: yes. But even if it were supported, the hardware can't deal with it; at least, my 16GB Intel i9 8-core CPU and 4GB AMD Radeon Pro 5500M GPU can't.

Lots of mat_mul kernels are not supported (see below), but even if they were, there are bigger problems.
Here's an ls -l ../models/Mixtral-8x7b report:

-rw-r--r--@ 1 edsil  staff  26441533376 13 Dec 08:57 mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
-rw-r--r--@ 1 edsil  staff  15644034176 11 Dec 16:48 mixtral-8x7b-v0.1.Q2_K.gguf
-rw-r--r--@ 1 edsil  staff  20363355584 12 Dec 10:50 mixtral-8x7b-v0.1.Q3_K_M.gguf

So mixtral-8x7b-v0.1.Q2_K.gguf is about 15GB, and if I try to load it on my 2019 16GB/4GB Intel/AMD Radeon Pro 5500M using

./bin/main -m ../models/Mixtral-8x7b/mixtral-8x7b-v0.1.Q2_K.gguf -ngl 7 -c 2048 -n 2048 -p "What is the meaning of life?"

it tries to use 14395.95 MiB out of a recommended 4080.00 MiB of VRAM! :)

llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Intel(R) UHD Graphics 630
ggml_metal_init: found device: AMD Radeon Pro 5500M
ggml_metal_init: picking default device: AMD Radeon Pro 5500M
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/edsil/llama.cpp/build/bin/ggml-metal.metal'
ggml_metal_init: GPU name:   AMD Radeon Pro 5500M
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = false
ggml_metal_init: hasUnifiedMemory              = false
ggml_metal_init: recommendedMaxWorkingSetSize  =  4278.19 MB
ggml_metal_init: maxTransferRate               = built-in GPU
ggml_metal_init: skipping kernel_mul_mm_f32_f32            (not supported)
ggml_metal_init: skipping kernel_mul_mm_f16_f32            (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_1_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_1_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q8_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q2_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q3_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q6_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq2_xxs_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq2_xs_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_f32_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_f16_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q4_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q4_1_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q5_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q5_1_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q8_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q2_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q3_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q4_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q5_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q6_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq2_xxs_f32     (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq2_xs_f32      (not supported)
llama_kv_cache_init:        CPU KV buffer size =   200.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =    56.00 MiB, (14395.95 /  4080.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
llama_kv_cache_init:      Metal KV buffer size =    56.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =     0.00 MiB, (14395.95 /  4080.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   192.01 MiB, (14587.96 /  4080.00)ggml_backend_metal_buffer_type_alloc_buffer: warning: current allocated size is greater than the recommended max working set size
llama_new_context_with_model: graph splits (measure): 5
llama_new_context_with_model:      Metal compute buffer size =   192.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   184.04 MiB

If I change the model to a Q8_0 GGUF version of llama-2-7b, it runs, albeit slowly.

@clemens98

Someone on Reddit said Mixtral acceleration isn't supported on AMD GPUs. Is that true?

Short answer, at least on my hardware: yes. But even if it were supported, the hardware can't deal with it; at least, my 16GB Intel i9 8-core CPU and 4GB AMD Radeon Pro 5500M GPU can't.

Lots of mat_mul kernels are not supported (see below), but even if they were, there are bigger problems. Here's an ls -l ../models/Mixtral-8x7b report:

-rw-r--r--@ 1 edsil  staff  26441533376 13 Dec 08:57 mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
-rw-r--r--@ 1 edsil  staff  15644034176 11 Dec 16:48 mixtral-8x7b-v0.1.Q2_K.gguf
-rw-r--r--@ 1 edsil  staff  20363355584 12 Dec 10:50 mixtral-8x7b-v0.1.Q3_K_M.gguf

So mixtral-8x7b-v0.1.Q2_K.gguf is about 15GB, and if I try to load it on my 2019 16GB/4GB Intel/AMD Radeon Pro 5500M using

./bin/main -m ../models/Mixtral-8x7b/mixtral-8x7b-v0.1.Q2_K.gguf -ngl 7 -c 2048 -n 2048 -p "What is the meaning of life?"

it tries to use 14395.95 MiB out of a recommended 4080.00 MiB of VRAM! :)

Very strange, my Q4_0 Mixtral uses 17GB with -ngl 13-15.

hodlen added a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
llama : restore prefix space in llama tokenizer (ggerganov#4081)

gguf : fix potential infinite loops while parsing (ggerganov#4100)

Co-authored-by: Bernhard Gstrein <gstrein@cs.uni-freiburg.de>

Respect tokenizer.ggml.add_bos_token value when tokenizing (ggerganov#4040)

* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.

* Respect add_bos_token GGUF metadata value

* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time

llama : fix data units (ggerganov#4101)

* llama : fix data units

ggml-ci

* Revert "llama : fix data units"

This reverts commit f5feac8.

* llama : disambiguate data units

ggml-ci

cuda : get_row_rounding F32 (ggerganov#4095)

* Fix ggerganov#4017

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

finetune : zero the loraB initial vectors (ggerganov#4082)

* finetune : zero the loraB initial vectors

Without this, the first iteration is starting out far from the base model, instead of exactly on it.
Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs
(though it departs from the paper in using a different distribution for the other vector, in some cases).

* tabs to spaces

* Use ggml_set_zero instead of adding a new function

finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (ggerganov#4079)

* Remove logically superfluous assertions and order by dimension

* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()

* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace

* Add openBLAS support for sgemm() in compute_forward_out_prod()

llama : add functions to get the model's metadata (ggerganov#4013)

* llama : add functions to get the model's metadata

* format -> std::to_string

* better documentation

train : move number of gpu layers argument parsing to common/train.cpp (ggerganov#4074)

- introduces help entry for the argument
 - cuts '--gpu-layers' form in order to simplify usage and documentation.

Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>

py : remove superfluous import statements (ggerganov#4076)

Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>

llava : fix compilation warning that fread return value is not used (ggerganov#4069)

common : improve yaml log escaping (ggerganov#4080)

* logging: improve escaping in yaml output

* logging: include review feedback

py : Falcon HF compatibility (ggerganov#4104)

Falcon HF compatibility

convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (ggerganov#4089)

Co-authored-by: Don Mahurin <@>

examples : add tokenize (ggerganov#4039)

tokenize : fix trailing whitespace

build : support ppc64le build for make and CMake (ggerganov#3963)

* build: support ppc64le build for make and CMake

* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

llama : increase max nodes (ggerganov#4115)

Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (ggerganov#4124)

* ggml-cuda.cu: Clean up warnings when compiling with clang

* ggml-cuda.cu: Move static items into anonymous namespace

* ggml-cuda.cu: Fix use of namespace start macro

* Revert "ggml-cuda.cu: Fix use of namespace start macro"

This reverts commit 26c1149.

* Revert "ggml-cuda.cu: Move static items into anonymous namespace"

This reverts commit e29757e.

scripts : Remove missed baichuan convert script (ggerganov#4127)

tokenize example: Respect normal add BOS token behavior (ggerganov#4126)

Allow building with Makefile

gguf-py : export chat templates (ggerganov#4125)

* gguf-py : export chat templates

* llama.cpp : escape new lines in gguf kv info prints

* gguf-py : bump version

* gguf-py : check chat_template type

* gguf-py : initialize chat_template

gitignore : tokenize

common : comma should be semicolon (ggerganov#4137)

server : relay error messages (ggerganov#4131)

finetune : add --n-gpu-layers flag info to --help (ggerganov#4128)

Revert "finetune : add --n-gpu-layers flag info to --help (ggerganov#4128)"

This reverts commit 05e8301.

speculative : fix prompt tokenization in speculative example (ggerganov#4025)

* Support special tokens and not adding BOS to prompt in speculative

* Adapt to new should_add_bos function

* Ensure tgt and dft have same add_bos setting

ci : add flake8 to github actions (python linting) (ggerganov#4129)

Disabled rules:

* E203 Whitespace before ':' - disabled because we often use 'C' Style where values are aligned

* E211 Whitespace before '(' (E211) - disabled because we often use 'C' Style where values are aligned

* E221 Multiple spaces before operator - disabled because we often use 'C' Style where values are aligned

* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard

* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' Style where values are aligned

* E241 Multiple spaces after ',' - disabled because we often use 'C' Style where values are aligned

* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard

* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard

* E266 Too many leading '#' for block comment - sometimes used as "section" separator

* E501 Line too long - disabled because it's broken so often it seems like a standard

* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use# noqa instead)

* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use# noqa instead)

main : Add ChatML functionality to main example (ggerganov#4046)

Co-authored-by: Sebastian Cramond <sebby37@users.noreply.github.com>

readme : update ROCm Windows instructions (ggerganov#4122)

* Update README.md

* Update README.md

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

finetune - update readme to mention llama support only (ggerganov#4148)

stablelm : simplify + speedup generation (ggerganov#4153)

docs : add llama-star arch idea

examples : fix typo in parallel example doc comment (ggerganov#4181)

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

readme : update hot topics

llama : KV cache view API + better KV cache management (ggerganov#4170)

* llama : keep track of used KV cells + better KV cache management

* llama : zero KV cache used upon clear

ggml-ci

* llama : allow exporting a view of the KV cache (ggerganov#4180)

* Allow exporting a view of the KV cache

* Allow dumping the sequences per cell in common

* Track max contiguous cells value and position as well

* Fix max contiguous empty cells index calculation

Make dump functions deal with lengths or sequences counts > 10 better

* Fix off by one error in dump_kv_cache_view

* Add doc comments for KV cache view functions

Eliminate cell sequence struct; use llama_seq_id directly

Minor cleanups

* common : add -dkvc arg for enabling kv cache dumps

---------

Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>

Fix incorrect format strings and uninitialized variables. (ggerganov#4133)

* Fix incorrect format strings and uninitialized variables.

* Address comments

* Add the missing include statement

readme : use PATH for Windows ROCm (ggerganov#4195)

* Update README.md to use PATH for Windows ROCm

* Update README.md

* Update README.md

main.swift : fix eos checking (ggerganov#4197)

llama_token_eos(const struct llama_model *) is currently getting struct llama_context type variable context as a parameter.

convert : fix tensors using grad in some models (ggerganov#4173)

ggml-cuda : support stablelm rope (ggerganov#4156)

* ggml-cuda : support stablelm rope

* remove unused freq_base kernel parameter

* add n_dims parameter to llm_build_k_shift, default to n_rot via overload

* llama : fix llm_build_k_shift args

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

llama : set metal log callback correctly (ggerganov#4204)

server : OAI API compatibility (ggerganov#4198)

* Add openai-compatible POST /v1/chat/completions API endpoint to server example

* fix code style

* Update server README.md

* Improve server README.md

* Fix server.cpp code style according to review

* server : some style changes

* server : indentation

* server : enable special tokens during tokenization by default

* server : minor code style

* server : change random string generator

* straightforward /v1/models endpoint

---------

Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>

readme : update hot topics

Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (ggerganov#4189)

llama : grammar `reserve` space in `decode_utf8` (ggerganov#4210)

* reserve space for codepoints

* improvement for the appended 0

scripts : Use mmap in torch load (ggerganov#4202)

* Use mmap in torch load, prefer .bin files when loading

* Revert .bin > .safetensors preference

metal : fix yarn (ggerganov#4220)

get the correct n_orig_ctx in metal

lookahead : add example for lookahead decoding (ggerganov#4207)

* lookahead : init

* lookahead : generate and store n-grams

* lookahead : use loop instead recursion to generate n-grams

* lookahead : initial working implementation

* lookahead : filter repeating n-grams

* lookahead : use deterministic init

* lookahead : add to Makefile

* lookahead : fix a bug in the seq_id of the lookahead tokens

* lookahead : add comments

---------

Co-authored-by: slaren <slarengh@gmail.com>

readme : update hot topics

lookahead : support `-n -1` infinite generation

ggml : fix -Warray-bounds warning with gcc (ggerganov#4231)

examples : iOS example with swift ui (ggerganov#4159)

* copy to llama.cpp as subdir

* attempt enabling metal, fails

* ggml metal compiles!

* Update README.md

* initial conversion to new format, utf8 errors?

* bug fixes, but now has an invalid memory access :(

* added O3, now has insufficient memory access

* begin sync with master

* update to match latest code, new errors

* fixed it!

* fix for loop conditionals, increase result size

* fix current workflow errors

* attempt a llama.swiftui workflow

* Update .github/workflows/build.yml

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

readme : add Amica to UI list (ggerganov#4230)

cmake : fix issue with version info not getting baked into LlamaConfig.cmake (ggerganov#3970)

* Split CPP generation from build-info query

* Remove blank lines

* Add BUILD_SHARED_LIBS option

ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (ggerganov#4240)

* ggml : use blas even if src0 is not F32

* llama : use n_threads_batch only when n_tokens >= 32

ggml-ci

* llama : revert n_threads_batch logic

ggml-ci

ggml : restore abort() in GGML_ASSERT (ggerganov#4242)

readme : add FreeChat (ggerganov#4248)

examples : add readme files

py : fix oai proxy (ggerganov#3972)

* fix oai proxy

fix generation not stoped while bot stop talking in chat mode

fix possible `slot_id` not exist

response for cors (and pre flight)

* oai proxy: workaround for some client (such as Chatbox)

* use stop as separator to replace hardcoded `\n`

llama : fix typical sampling (ggerganov#4261)

Typical sampling was broken because after copying new_candidates into canditates, the "sorted" bool is left at "true", but the new data is no longer sorted according to probability. Patch to set "sorted" to false.

Test: Generating with temp=0.0001 (approx. argmax)  should generate the same sequence at typical>=1.0 and typical=0.9999 (approx. disabled, but enters the typical sampling codepath).

convert.py : fix llama/llama2 conversion due to vocab_size=-1 (ggerganov#4258)

llama : fix alignment of general.name in print meta (ggerganov#4254)

* llama: fix alignment of general.name in print meta

This commit fixes the alignment of the general.name field in the
llm_load_print_meta function.

Currently the output looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name   = LLaMA v2
```
And with this commit it looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name     = LLaMA v2
```

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

* llama: fix alignment of special tokens

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

---------

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

readme : fix typo (ggerganov#4253)

llama.cpp uses GitHub Actions, not Gitlab Actions.

cmake : fix the metal file foder path (ggerganov#4217)

batched.swift : update README.md (ggerganov#4214)

docs: update how to run

docker : add finetune option (ggerganov#4211)

readme : fix (ggerganov#4135)

* fix: readme

* chore: resolve comments

* chore: resolve comments

main : pass LOG_TEE callback to llama.cpp log (ggerganov#4033)

* main : Call llama_log_set to use LOG_TEE

* tabs to spaces

llava : ShareGPT4V compatibility (vision encoder only loading) (ggerganov#4172)

* ShareGPT4 compatibility (vision encoder only loading)

Load only a CLIP vision encoder (as supplied by ShareGPT finetunes)
Corrects the argument parsing for --img_mean and --img_std (which were previously not parsed but attempted to access)
Defines defaults for img_mean and img_std which are equal to the llava 1.5 CLIP encoder, so you do not have to provide them

* Update convert-image-encoder-to-gguf.py

build : fix build info generation and cleanup Makefile (ggerganov#3920)

* cmake : fix joining of REAL_GIT_DIR

* fix includes with help from include-what-you-use

* make : remove unneeded deps and add test-rope target

* fix C includes in C++ source files

* Revert "fix includes with help from include-what-you-use"

This reverts commit 635e9fa.

make : fix Apple clang determination bug (ggerganov#4272)

Co-authored-by: Will Findley <findley@gmail.com>

server : add single-client multi-prompt support (ggerganov#4232)

* * add multiprompt support

* * cleanup

* * more cleanup

* * remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests

* * remove all references to mutex_multitasks

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* * change to set

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

server : add --log-disable to disable logging to file (ggerganov#4260)

* * add --log-disable to disable logging to file in the server example

* * typo fix

ggml : add ggml_soft_max_ext (ggerganov#4256)

* metal : implement soft_max_ext

* cuda : implement soft_max_ext

* ggml : implement soft_max_ext (CPU)

* batched-bench : print threads

ggml-ci

* metal : simplify soft_max encoding

ggml-ci

* cuda : use 512 threads for soft_max instead of 32

* ggml : update soft max cpu

* cuda : do warp-based block reduce

* cuda : increase max block size to 1024

* cuda : fix warp reduction initialization of shared mem

* metal : warp-based reduction for soft max kernel

* metal : warp-based reduce for rms_norm

* metal : simplify soft max kernel

ggml-ci

* alloc : fix build with debug

py : add requirements file for convert-hf-to-gguf.py (ggerganov#4277)

This commit adds a requirements file for the convert-hf-to-gguf.py
script, and also add the torch and transformers packages to it.

The motivation for this is that currently running convert-hf-to-gguf.py
will produce the following error:
```console
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt
Collecting numpy==1.24.4
Collecting sentencepiece==0.1.98
Collecting gguf>=0.1.0
Installing collected packages: sentencepiece, numpy, gguf
Successfully installed gguf-0.5.1 numpy-1.24.4 sentencepiece-0.1.98

(venv) $ python convert-hf-to-gguf.py --help
Traceback (most recent call last):
  File "llama.cpp/convert-hf-to-gguf.py", line 16, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
```
With this commit, and using requirements-hf-to-gguf.txt instead of
requirements.txt, the script can be run and shows the help output.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

llama : fix integer overflow during quantization (ggerganov#4284)

happens with multi-threaded quantization of Qwen-72B

ggml-ci

llama : add Qwen support (ggerganov#4281)

* enable qwen to llama.cpp

* llama : do not GPU split bias tensors

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

llama : support attention bias on LLaMA architecture (ggerganov#4283)

* Support attention_bias on LLaMA architecture

QKVO bias, should fix InternLM (ggerganov#3133) and works for LLaMAfied Qwen models (ggerganov#3743 (comment)).

* check existence of qkvo bias while loading llama models

Tested on LLaMA2, CUDA and CPU.

* Update llama.cpp

build : enable libstdc++ assertions for debug builds (ggerganov#4275)

swift : fix token_to_piece implementation (ggerganov#4278)

* Fix token_to_piece implementation in Swift

* Fix errors

llama : support optional tensors (ggerganov#4283)

llama : avoid using "optional" keyword (ggerganov#4283)

llama : pad KV cache size (ggerganov#4280)

* llama : pad KV cache size to 32

* metal : try to improve batched decoding

py : add grammar to oai like api (ggerganov#4294)

server : fix OpenAI API `stop` field to be optional (ggerganov#4299)

(cherry picked from commit Mozilla-Ocho/llamafile@e8c92bc)

ggml : fix soft max out-of-bounds access (ggerganov#4307)

ggml-ci

ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (ggerganov#4308)

* ggml : fix soft max out-of-bounds access

ggml-ci

* ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()

ggml-ci

grammar-parser : fix typo (ggerganov#4318)

preceeding -> preceding

swift : fix prompt tokenization logic (ggerganov#4321)

swift : fix concatenation method to avoid invalid UTF8 stringfication (ggerganov#4325)

simple : update error message for KV cache check (ggerganov#4324)

This commit updates the error message that is printed when the
KV cache is not big enough to hold all the prompt and generated
tokens. Specifically it removes the reference to n_parallel and
replaces it with n_len.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

swift : revert compiler checks for swift package (ggerganov#4332)

sampling : custom samplers order (ggerganov#4285)

* Samplers sequence order w parameter

* Cleaned commented code

* Fixed formatting

* Rewrote with unordered_map

* Revert and rewrite, too many problems and safeguards would be needed

* Fixed code style

* Code style fixes according to review

* More readable samplers input string, fixed help

* Style fix in sampler_queue

* Formatting fixes

* Fixing whitespaces

llama : allow overriding GGUF metadata when loading model (ggerganov#4092)

* feat: Allow overriding GGUF metadata when loading model

* Fix the one time GCC is stricter than clang about something

* Step1

* Refactor... basically everything!

* Nuke obsolete GetArrayLen struct

* simplify std::string specialization

* Various cleanups

Add informational output when overrides are applied

Warn user when an override with the wrong type is specified

* Fix broken logic for parsing bool KV overrides
Fix issue where overrides didn't apply when key missing in GGUF metadata
Resolve merge changes

* llama : rearrange model params

* Update new GET_KEY call

Add note that metadata KV overrides aren't reflected in initial metadata KV info dump

---------

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

grammar : pre-computed pieces + reserve mem + less string copies (ggerganov#4330)

* reserve space for codepoints

* improvement for the appended 0

* used precomputed token text for grammar sample

* reserve canidates_decoded

* reserve canidates_grammar

* remove candidates_decoded

* Revert "remove candidates_decoded"

This reverts commit 3773328.

* changed decode_utf8 to take src by ref

speculative : support `--color` (ggerganov#4343)

* speculative: add some colors

* minor : add braces

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

common : fix compile warning

server : recognize cache_prompt parameter in OAI API (ggerganov#4347)

train : fix ggerganov#4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (ggerganov#4351)

On commit b1108 (44c117f) xaedes added

    ggml_allocr * alloc = NULL;

    ... (many lines in between)

    if (alloc) {
        ggml_allocr_free(alloc);
    }

Which is correct, but it's easy to lose context after many lines in between.

On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.

    alloc = ggml_allocr_new(...)
    ... (short lines of code)
    ggml_allocr_free(alloc)

This happens a few times, but alloc is never set to NULL, and many lines below,
we still have

    if (alloc) {
        ggml_allocr_free(alloc);
    }

which causes a double-free.

llama : per-layer KV cache + quantum K cache (ggerganov#4309)

* per-layer KV

* remove unnecessary copies

* less code duplication, offload k and v separately

* llama : offload KV cache per-layer

* llama : offload K shift tensors

* llama : offload for rest of the model arches

* llama : enable offload debug temporarily

* llama : keep the KV related layers on the device

* llama : remove mirrors, perform Device -> Host when partial offload

* common : add command-line arg to disable KV cache offloading

* llama : update session save/load

* llama : support quantum K cache (ggerganov#4312)

* llama : support quantum K cache (wip)

* metal : add F32 -> Q8_0 copy kernel

* cuda : add F32 -> Q8_0 copy kernel

ggml-ci

* cuda : use mmv kernel for quantum cache ops

* llama : pass KV cache type through API

* llama : fix build

ggml-ci

* metal : add F32 -> Q4_0 copy kernel

* metal : add F32 -> Q4_1 copy kernel

* cuda : wip

* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels

* llama-bench : support type_k/type_v

* metal : use mm kernel only for quantum KV cache

* cuda : add comment

* llama : remove memory_f16 and kv_f16 flags

---------

Co-authored-by: slaren <slarengh@gmail.com>

* readme : add API change notice

---------

Co-authored-by: slaren <slarengh@gmail.com>

sync : ggml (new ops, tests, backend, etc.) (ggerganov#4359)

* sync : ggml (part 1)

* sync : ggml (part 2, CUDA)

* sync : ggml (part 3, Metal)

* ggml : build fixes

ggml-ci

* cuda : restore lost changes

* cuda : restore lost changes (StableLM rope)

* cmake : enable separable compilation for CUDA

ggml-ci

* ggml-cuda : remove device side dequantize

* Revert "cmake : enable separable compilation for CUDA"

This reverts commit 09e35d0.

* cuda : remove assert for rope

* tests : add test-backend-ops

* ggml : fix bug in ggml_concat

* ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`

* ci : try to fix macOS

* ggml-backend : remove backend self-registration

* ci : disable Metal for macOS cmake build

ggml-ci

* metal : fix "supports family" call

* metal : fix assert

* metal : print resource path

ggml-ci

---------

Co-authored-by: slaren <slarengh@gmail.com>

grammar : revert the replacement of llama_token_to_piece with id_to_token (ggerganov#4396)

Update README.md (ggerganov#4388)

Fix small typo.

ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (ggerganov#4424)

server : fix local model name in server (ggerganov#4420)

llama : document logits_all deprecation (ggerganov#4418)

llama_context_params.logits_all is a parameter for controlling
llama_eval. This documents that logits_all should not be used with
llama_decode and llama_batch.

build : target Windows 8 for standard mingw-w64 (ggerganov#4405)

* build : target Windows 8 for standard mingw-w64

* make : fix missing console.o deps

This was causing a link error with `make all` on Windows.

english : use `typos` to fix comments and logs (ggerganov#4354)

server : tweak default sampling parameters (ggerganov#4367)

* Set a more typical Top P setting as the default

* Update temp max

llama : add Mixtral support (ggerganov#4406)

* convert : support Mixtral as LLAMA arch

* convert : fix n_ff typo

* llama : model loading

* ggml : sync latest ggml_mul_mat_id

* llama : update graph to support MoE

* llama : fix cur -> cur_expert

* llama : first working version

* llama : fix expert weighting in the FFN

* ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)

* ggml : add n_as argument to ggml_mul_mat_id

* ggml : fix ggml_get_rows to take into account ne02 / ne11

* metal : add more general support for ggml_get_rows + tests

* llama : add basic support for offloading moe with CUDA

* metal : add/mul/div use general kernel when src1 not cont

* metal : reduce the kernel launches for ggml_mul_mat_id

* ggml : get_rows : support non-contiguos tensors with gaps, generalize up to 3D

* ggml : update get_rows f16 and q

* cuda : support non-contiguous src1 in get_rows

* llama : offload missing ffn_moe_silu

* metal : fix ggml_get_rows to work with non-cont src1

* metal : add indirect mat-vec kernels for all quantization types

* llama : do not quantize expert gating tensors

* llama : add n_expert and n_expert_used to hparams + change quants

* test-backend-ops : add moe test

* cuda : fix get_rows when ncols is odd

* convert : determine n_ctx correctly

* metal : fix ggml_mul_mat_id for F32

* test-backend-ops : make experts more evenly probable (test_moe)

* test-backend-ops : cleanup, add moe test for batches

* test-backend-ops : add cpy from f32 -> all types test

* test-backend-ops : fix dequantize block offset

* llama : fix hard-coded number of experts

* test-backend-ops : simplify and disable slow tests to avoid CI timeout

* test-backend-ops : disable MOE test with thread sanitizer

* cuda : fix mul_mat_id with multi gpu

* convert : use 1e6 rope_freq_base for mixtral

* convert : fix style

* convert : support safetensors format

* gguf-py : bump version

* metal : add cpy f16 -> f32 kernel

* metal : fix binary ops for ne10 % 4 != 0

* test-backend-ops : add one more sum_rows test

* ggml : do not use BLAS with ggml_mul_mat_id

* convert-hf : support for mixtral-instruct (ggerganov#4428)

* convert : typo fix, add additional hyperparameters, use LLaMA arch for Mixtral-instruct

* convert : use sentencepiece tokenizer for Mixtral-instruct

* convert : make flake8 happy

* metal : fix soft_max kernels

ref: ggerganov/ggml@1914017

* metal : limit kernels to not use more than the allowed threads

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Radek Pilar <github@mrkva.eu>
Successfully merging this pull request may close this issue: llama : add Mixtral support (#4381)