llama : pad KV cache size #4280

Merged: 2 commits merged into master on Dec 3, 2023

Conversation

@ggerganov (Owner) commented on Dec 1, 2023

Should result in a slight TG (text generation) speedup:

```
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla V100-PCIE-16GB, compute capability 7.0
```

| model | size | backend | ngl | test | t/s (master) | t/s (PR) | speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B F16 | 12.55 GiB | CUDA | 99 | pp 512 | 3457.68 ± 89.53 | 3444.83 ± 126.66 | 1.000 |
| llama 7B F16 | 12.55 GiB | CUDA | 99 | tg 128 | 53.01 ± 0.05 | 53.14 ± 0.07 | 1.002 |
| llama 7B F16 | 12.55 GiB | CUDA | 99 | tg 256 | 52.56 ± 0.03 | 52.84 ± 0.07 | 1.005 |
| llama 7B F16 | 12.55 GiB | CUDA | 99 | tg 512 | 51.83 ± 0.06 | 52.43 ± 0.06 | 1.012 |
| llama 7B Q8_0 | 6.67 GiB | CUDA | 99 | pp 512 | 2513.48 ± 31.58 | 2516.75 ± 34.50 | 1.000 |
| llama 7B Q8_0 | 6.67 GiB | CUDA | 99 | tg 128 | 78.91 ± 0.15 | 79.25 ± 0.23 | 1.004 |
| llama 7B Q8_0 | 6.67 GiB | CUDA | 99 | tg 256 | 78.16 ± 0.22 | 78.91 ± 0.08 | 1.010 |
| llama 7B Q8_0 | 6.67 GiB | CUDA | 99 | tg 512 | 76.54 ± 0.14 | 77.83 ± 0.16 | 1.017 |
| llama 7B Q4_0 | 3.56 GiB | CUDA | 99 | pp 512 | 2720.43 ± 34.16 | 2716.96 ± 32.91 | 1.000 |
| llama 7B Q4_0 | 3.56 GiB | CUDA | 99 | tg 128 | 115.78 ± 0.34 | 116.34 ± 0.95 | 1.005 |
| llama 7B Q4_0 | 3.56 GiB | CUDA | 99 | tg 256 | 114.17 ± 0.37 | 115.70 ± 0.31 | 1.013 |
| llama 7B Q4_0 | 3.56 GiB | CUDA | 99 | tg 512 | 111.05 ± 0.27 | 113.79 ± 0.23 | 1.025 |

build: 75ba5ba (1594)


Also, try to improve the batched decoding performance for quantum (i.e. quantized) models on Apple Silicon.
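For context, the "padding" here is just rounding the number of active KV cache cells used by the attention ops up to a multiple of 32, so the resulting matrix shapes hit friendlier kernel paths. A minimal C++ sketch of the idea (illustrative only; `n_ctx` and `cell_max` stand in for `cparams.n_ctx` and `llama_kv_cache_cell_max(kv_self)`, and the real code uses ggml's `GGML_PAD` macro):

```cpp
#include <algorithm>
#include <cstdint>

// Round x up to a multiple of n (the effect of GGML_PAD(x, n) in ggml.h).
static int32_t pad_to(int32_t x, int32_t n) {
    return ((x + n - 1) / n) * n;
}

// Sketch of the heuristic that picks how many KV cache cells take part in the
// attention computation for the current batch:
//   - before this PR: clamp cell_max to [32, n_ctx]
//   - with this PR:   additionally round cell_max up to a multiple of 32
static int32_t pick_kv_n(int32_t n_ctx, int32_t cell_max) {
    return std::min(n_ctx, std::max<int32_t>(32, pad_to(cell_max, 32)));
}
```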

```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q8_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.122 | 1049.89 | 1.899 | 67.39 | 2.021 | 126.65 |
| 128 | 128 | 2 | 384 | 0.120 | 1062.68 | 7.212 | 35.50 | 7.332 | 52.37 |
| 128 | 128 | 4 | 640 | 0.120 | 1067.81 | 7.207 | 71.04 | 7.327 | 87.35 |
| 128 | 128 | 8 | 1152 | 0.120 | 1065.87 | 7.380 | 138.75 | 7.500 | 153.59 |
| 2048 | 128 | 1 | 2176 | 1.665 | 1229.84 | 2.293 | 55.82 | 3.958 | 549.71 |
| 2048 | 128 | 2 | 2304 | 1.659 | 1234.55 | 7.568 | 33.83 | 9.227 | 249.72 |
| 2048 | 128 | 4 | 2560 | 1.658 | 1235.25 | 7.791 | 65.72 | 9.449 | 270.93 |
| 2048 | 128 | 8 | 3072 | 1.659 | 1234.77 | 8.187 | 125.08 | 9.845 | 312.03 |

PR

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.120 | 1062.85 | 1.898 | 67.42 | 2.019 | 126.81 |
| 128 | 128 | 2 | 384 | 0.121 | 1060.96 | 2.614 | 97.92 | 2.735 | 140.40 |
| 128 | 128 | 4 | 640 | 0.120 | 1069.25 | 3.983 | 128.54 | 4.103 | 155.98 |
| 128 | 128 | 8 | 1152 | 0.120 | 1066.80 | 6.810 | 150.37 | 6.930 | 166.24 |
| 2048 | 128 | 1 | 2176 | 1.665 | 1229.74 | 2.290 | 55.89 | 3.956 | 550.11 |
| 2048 | 128 | 2 | 2304 | 1.662 | 1232.19 | 3.085 | 82.98 | 4.747 | 485.35 |
| 2048 | 128 | 4 | 2560 | 1.658 | 1235.08 | 4.462 | 114.74 | 6.120 | 418.27 |
| 2048 | 128 | 8 | 3072 | 1.657 | 1235.68 | 7.299 | 140.29 | 8.956 | 343.00 |
```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q4_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.121 | 1060.20 | 1.335 | 95.90 | 1.455 | 175.89 |
| 128 | 128 | 2 | 384 | 0.120 | 1065.36 | 7.042 | 36.35 | 7.162 | 53.62 |
| 128 | 128 | 4 | 640 | 0.118 | 1083.25 | 7.062 | 72.50 | 7.180 | 89.13 |
| 128 | 128 | 8 | 1152 | 0.118 | 1080.54 | 7.322 | 139.85 | 7.441 | 154.83 |
| 2048 | 128 | 1 | 2176 | 1.675 | 1222.93 | 1.730 | 73.99 | 3.405 | 639.12 |
| 2048 | 128 | 2 | 2304 | 1.674 | 1223.57 | 7.451 | 34.36 | 9.124 | 252.51 |
| 2048 | 128 | 4 | 2560 | 1.670 | 1225.99 | 7.656 | 66.88 | 9.326 | 274.50 |
| 2048 | 128 | 8 | 3072 | 1.670 | 1226.32 | 8.036 | 127.43 | 9.706 | 316.50 |

PR

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.121 | 1055.03 | 1.342 | 95.37 | 1.463 | 174.93 |
| 128 | 128 | 2 | 384 | 0.121 | 1060.52 | 1.846 | 138.67 | 1.967 | 195.24 |
| 128 | 128 | 4 | 640 | 0.119 | 1075.47 | 2.680 | 191.02 | 2.799 | 228.62 |
| 128 | 128 | 8 | 1152 | 0.120 | 1068.20 | 4.398 | 232.85 | 4.518 | 255.01 |
| 2048 | 128 | 1 | 2176 | 1.680 | 1218.71 | 1.735 | 73.77 | 3.416 | 637.06 |
| 2048 | 128 | 2 | 2304 | 1.676 | 1221.80 | 2.315 | 110.60 | 3.991 | 577.30 |
| 2048 | 128 | 4 | 2560 | 1.674 | 1223.71 | 3.159 | 162.08 | 4.832 | 529.75 |
| 2048 | 128 | 8 | 3072 | 1.672 | 1224.88 | 4.921 | 208.07 | 6.593 | 465.92 |
```
make -j batched-bench && ./batched-bench ./models/codellama-34b/ggml-model-q8_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.532 | 240.41 | 7.370 | 17.37 | 7.902 | 32.40 |
| 128 | 128 | 2 | 384 | 0.532 | 240.57 | 24.895 | 10.28 | 25.428 | 15.10 |
| 128 | 128 | 4 | 640 | 0.532 | 240.65 | 25.136 | 20.37 | 25.668 | 24.93 |
| 128 | 128 | 8 | 1152 | 0.532 | 240.71 | 25.648 | 39.93 | 26.180 | 44.00 |
| 2048 | 128 | 1 | 2176 | 7.400 | 276.77 | 8.464 | 15.12 | 15.863 | 137.17 |
| 2048 | 128 | 2 | 2304 | 7.393 | 277.01 | 25.747 | 9.94 | 33.141 | 69.52 |
| 2048 | 128 | 4 | 2560 | 7.391 | 277.09 | 26.215 | 19.53 | 33.606 | 76.18 |
| 2048 | 128 | 8 | 3072 | 7.392 | 277.07 | 27.027 | 37.89 | 34.419 | 89.25 |

PR

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.532 | 240.63 | 7.371 | 17.36 | 7.903 | 32.39 |
| 128 | 128 | 2 | 384 | 0.532 | 240.73 | 13.211 | 19.38 | 13.743 | 27.94 |
| 128 | 128 | 4 | 640 | 0.531 | 240.95 | 24.350 | 21.03 | 24.881 | 25.72 |
| 128 | 128 | 8 | 1152 | 0.530 | 241.30 | 46.880 | 21.84 | 47.410 | 24.30 |
| 2048 | 128 | 1 | 2176 | 7.398 | 276.82 | 8.457 | 15.14 | 15.855 | 137.24 |
| 2048 | 128 | 2 | 2304 | 7.397 | 276.86 | 14.142 | 18.10 | 21.539 | 106.97 |
| 2048 | 128 | 4 | 2560 | 7.392 | 277.06 | 25.311 | 20.23 | 32.703 | 78.28 |
| 2048 | 128 | 8 | 3072 | 7.392 | 277.05 | 47.932 | 21.36 | 55.325 | 55.53 |
```
make -j batched-bench && ./batched-bench ./models/codellama-34b/ggml-model-q4_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.530 | 241.28 | 4.454 | 28.74 | 4.984 | 51.36 |
| 128 | 128 | 2 | 384 | 0.530 | 241.32 | 23.865 | 10.73 | 24.396 | 15.74 |
| 128 | 128 | 4 | 640 | 0.531 | 241.21 | 24.083 | 21.26 | 24.614 | 26.00 |
| 128 | 128 | 8 | 1152 | 0.530 | 241.60 | 24.660 | 41.53 | 25.190 | 45.73 |
| 2048 | 128 | 1 | 2176 | 7.475 | 274.00 | 5.536 | 23.12 | 13.010 | 167.26 |
| 2048 | 128 | 2 | 2304 | 7.466 | 274.32 | 24.595 | 10.41 | 32.061 | 71.86 |
| 2048 | 128 | 4 | 2560 | 7.465 | 274.34 | 25.176 | 20.34 | 32.641 | 78.43 |
| 2048 | 128 | 8 | 3072 | 7.464 | 274.37 | 26.181 | 39.11 | 33.645 | 91.31 |

PR

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.531 | 240.92 | 4.473 | 28.62 | 5.004 | 51.16 |
| 128 | 128 | 2 | 384 | 0.529 | 241.79 | 7.421 | 34.50 | 7.951 | 48.30 |
| 128 | 128 | 4 | 640 | 0.530 | 241.66 | 13.086 | 39.13 | 13.616 | 47.00 |
| 128 | 128 | 8 | 1152 | 0.530 | 241.64 | 24.512 | 41.78 | 25.041 | 46.00 |
| 2048 | 128 | 1 | 2176 | 7.471 | 274.11 | 5.564 | 23.00 | 13.036 | 166.93 |
| 2048 | 128 | 2 | 2304 | 7.473 | 274.07 | 8.379 | 30.55 | 15.851 | 145.35 |
| 2048 | 128 | 4 | 2560 | 7.464 | 274.40 | 14.202 | 36.05 | 21.666 | 118.16 |
| 2048 | 128 | 8 | 3072 | 7.472 | 274.10 | 25.890 | 39.55 | 33.362 | 92.08 |

@slaren (Collaborator) commented on Dec 1, 2023

3090 Ti / WSL2

Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6

| model | size | backend | ngl | test | 8d6d9f0 t/s | PR 75ba5ba t/s | speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B mostly F16 | 12.55 GiB | CUDA | 99 | pp 512 | 4982.86 ± 71.87 | 5069.56 ± 68.03 | 1.017 |
| llama 7B mostly F16 | 12.55 GiB | CUDA | 99 | tg 128 | 54.58 ± 0.55 | 54.65 ± 0.70 | 1.001 |
| llama 7B mostly F16 | 12.55 GiB | CUDA | 99 | tg 256 | 54.90 ± 0.20 | 53.97 ± 0.29 | 0.983 |
| llama 7B mostly F16 | 12.55 GiB | CUDA | 99 | tg 512 | 53.99 ± 0.34 | 54.36 ± 0.10 | 1.006 |
| llama 7B mostly Q8_0 | 6.67 GiB | CUDA | 99 | pp 512 | 3809.50 ± 77.34 | 3758.40 ± 87.26 | 0.986 |
| llama 7B mostly Q8_0 | 6.67 GiB | CUDA | 99 | tg 128 | 86.86 ± 0.33 | 85.46 ± 1.18 | 0.983 |
| llama 7B mostly Q8_0 | 6.67 GiB | CUDA | 99 | tg 256 | 86.44 ± 0.10 | 84.48 ± 1.04 | 0.977 |
| llama 7B mostly Q8_0 | 6.67 GiB | CUDA | 99 | tg 512 | 84.50 ± 0.92 | 85.21 ± 0.72 | 1.008 |
| llama 7B mostly Q4_0 | 3.56 GiB | CUDA | 99 | pp 512 | 3810.83 ± 84.89 | 3887.95 ± 11.28 | 1.020 |
| llama 7B mostly Q4_0 | 3.56 GiB | CUDA | 99 | tg 128 | 126.26 ± 0.98 | 127.93 ± 0.41 | 1.013 |
| llama 7B mostly Q4_0 | 3.56 GiB | CUDA | 99 | tg 256 | 126.53 ± 0.47 | 127.25 ± 0.18 | 1.005 |
| llama 7B mostly Q4_0 | 3.56 GiB | CUDA | 99 | tg 512 | 124.33 ± 0.51 | 124.98 ± 0.27 | 1.005 |

@ggerganov (Owner, Author) commented:

Looking for Apple Silicon runs of the following 7B benches, master vs PR:

```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q8_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q4_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

cc @slaren @jhen0409 and everyone else with a Mac

@jhen0409 (Collaborator) commented on Dec 2, 2023

M1 Max MBP (32c):

```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q8_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.247 | 517.85 | 3.205 | 39.94 | 3.452 | 74.16 |
| 128 | 128 | 2 | 384 | 0.248 | 516.60 | 10.249 | 24.98 | 10.497 | 36.58 |
| 128 | 128 | 4 | 640 | 0.246 | 519.61 | 10.387 | 49.29 | 10.633 | 60.19 |
| 128 | 128 | 8 | 1152 | 0.246 | 520.45 | 10.785 | 94.94 | 11.031 | 104.43 |
| 2048 | 128 | 1 | 2176 | 3.901 | 525.00 | 3.707 | 34.53 | 7.608 | 286.01 |
| 2048 | 128 | 2 | 2304 | 3.895 | 525.76 | 11.045 | 23.18 | 14.940 | 154.22 |
| 2048 | 128 | 4 | 2560 | 3.891 | 526.37 | 11.457 | 44.69 | 15.348 | 166.80 |
| 2048 | 128 | 8 | 3072 | 3.889 | 526.55 | 12.349 | 82.92 | 16.238 | 189.18 |

PR

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.248 | 515.87 | 3.191 | 40.11 | 3.439 | 74.43 |
| 128 | 128 | 2 | 384 | 0.248 | 516.49 | 5.758 | 44.46 | 6.006 | 63.94 |
| 128 | 128 | 4 | 640 | 0.247 | 518.64 | 10.320 | 49.61 | 10.567 | 60.57 |
| 128 | 128 | 8 | 1152 | 0.246 | 519.55 | 19.319 | 53.00 | 19.565 | 58.88 |
| 2048 | 128 | 1 | 2176 | 3.906 | 524.27 | 3.695 | 34.64 | 7.602 | 286.25 |
| 2048 | 128 | 2 | 2304 | 3.898 | 525.39 | 6.510 | 39.32 | 10.408 | 221.36 |
| 2048 | 128 | 4 | 2560 | 3.891 | 526.30 | 11.049 | 46.34 | 14.940 | 171.35 |
| 2048 | 128 | 8 | 3072 | 3.891 | 526.30 | 20.171 | 50.77 | 24.062 | 127.67 |
```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q4_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.249 | 513.28 | 2.092 | 61.19 | 2.341 | 109.35 |
| 128 | 128 | 2 | 384 | 0.249 | 514.12 | 10.313 | 24.82 | 10.562 | 36.36 |
| 128 | 128 | 4 | 640 | 0.248 | 516.87 | 10.473 | 48.89 | 10.720 | 59.70 |
| 128 | 128 | 8 | 1152 | 0.247 | 518.07 | 10.871 | 94.20 | 11.118 | 103.62 |
| 2048 | 128 | 1 | 2176 | 3.978 | 514.80 | 2.620 | 48.85 | 6.599 | 329.76 |
| 2048 | 128 | 2 | 2304 | 3.972 | 515.62 | 11.055 | 23.16 | 15.027 | 153.33 |
| 2048 | 128 | 4 | 2560 | 3.968 | 516.12 | 11.518 | 44.45 | 15.486 | 165.31 |
| 2048 | 128 | 8 | 3072 | 3.967 | 516.27 | 12.458 | 82.20 | 16.425 | 187.04 |

PR

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.255 | 501.64 | 2.124 | 60.27 | 2.379 | 107.61 |
| 128 | 128 | 2 | 384 | 0.249 | 515.07 | 3.555 | 72.01 | 3.803 | 100.96 |
| 128 | 128 | 4 | 640 | 0.248 | 515.99 | 6.051 | 84.61 | 6.299 | 101.60 |
| 128 | 128 | 8 | 1152 | 0.247 | 517.46 | 11.131 | 92.00 | 11.378 | 101.25 |
| 2048 | 128 | 1 | 2176 | 3.982 | 514.36 | 2.620 | 48.85 | 6.602 | 329.60 |
| 2048 | 128 | 2 | 2304 | 3.979 | 514.73 | 4.363 | 58.67 | 8.342 | 276.19 |
| 2048 | 128 | 4 | 2560 | 3.974 | 515.33 | 6.885 | 74.36 | 10.859 | 235.74 |
| 2048 | 128 | 8 | 3072 | 3.975 | 515.21 | 11.925 | 85.87 | 15.900 | 193.21 |

With this PR, q8_0 TG speed drops significantly at B=8.

@slaren (Collaborator) commented on Dec 2, 2023

M3 Max

```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q8_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master:

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.182 | 703.44 | 2.997 | 42.71 | 3.179 | 80.53 |
| 128 | 128 | 2 | 384 | 0.181 | 705.39 | 8.238 | 31.07 | 8.420 | 45.61 |
| 128 | 128 | 4 | 640 | 0.181 | 706.57 | 8.285 | 61.80 | 8.466 | 75.60 |
| 128 | 128 | 8 | 1152 | 0.185 | 691.87 | 8.621 | 118.78 | 8.806 | 130.82 |
| 2048 | 128 | 1 | 2176 | 3.327 | 615.60 | 3.851 | 33.24 | 7.178 | 303.15 |
| 2048 | 128 | 2 | 2304 | 3.190 | 641.97 | 9.112 | 28.10 | 12.302 | 187.29 |
| 2048 | 128 | 4 | 2560 | 3.430 | 597.02 | 9.608 | 53.29 | 13.038 | 196.35 |
| 2048 | 128 | 8 | 3072 | 3.613 | 566.77 | 10.194 | 100.45 | 13.808 | 222.49 |

PR:

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.183 | 700.89 | 3.008 | 42.55 | 3.191 | 80.23 |
| 128 | 128 | 2 | 384 | 0.182 | 705.07 | 3.203 | 79.94 | 3.384 | 113.47 |
| 128 | 128 | 4 | 640 | 0.181 | 706.02 | 4.287 | 119.42 | 4.469 | 143.22 |
| 128 | 128 | 8 | 1152 | 0.186 | 687.37 | 8.373 | 122.30 | 8.559 | 134.60 |
| 2048 | 128 | 1 | 2176 | 3.171 | 645.93 | 3.833 | 33.40 | 7.003 | 310.71 |
| 2048 | 128 | 2 | 2304 | 3.156 | 648.94 | 4.256 | 60.15 | 7.412 | 310.85 |
| 2048 | 128 | 4 | 2560 | 3.319 | 617.07 | 6.085 | 84.14 | 9.404 | 272.22 |
| 2048 | 128 | 8 | 3072 | 3.492 | 586.42 | 9.784 | 104.66 | 13.276 | 231.39 |
```
make -j batched-bench && ./batched-bench ./models/llama-7b-v2/ggml-model-q4_0.gguf 10880 1 99 0 128,2048 128 1,2,4,8
```

master:

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.182 | 703.85 | 1.969 | 65.00 | 2.151 | 119.01 |
| 128 | 128 | 2 | 384 | 0.182 | 705.21 | 8.237 | 31.08 | 8.418 | 45.61 |
| 128 | 128 | 4 | 640 | 0.181 | 706.64 | 8.289 | 61.77 | 8.470 | 75.56 |
| 128 | 128 | 8 | 1152 | 0.198 | 646.09 | 8.831 | 115.95 | 9.029 | 127.59 |
| 2048 | 128 | 1 | 2176 | 3.558 | 575.60 | 2.736 | 46.79 | 6.294 | 345.73 |
| 2048 | 128 | 2 | 2304 | 3.584 | 571.38 | 9.400 | 27.24 | 12.984 | 177.45 |
| 2048 | 128 | 4 | 2560 | 3.534 | 579.60 | 9.520 | 53.78 | 13.054 | 196.12 |
| 2048 | 128 | 8 | 3072 | 3.504 | 584.48 | 9.930 | 103.12 | 13.434 | 228.68 |

PR:

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 128 | 1 | 256 | 0.182 | 702.75 | 1.982 | 64.58 | 2.164 | 118.29 |
| 128 | 128 | 2 | 384 | 0.181 | 705.49 | 2.263 | 113.13 | 2.444 | 157.09 |
| 128 | 128 | 4 | 640 | 0.181 | 706.66 | 3.698 | 138.44 | 3.880 | 164.97 |
| 128 | 128 | 8 | 1152 | 0.182 | 703.27 | 7.024 | 145.79 | 7.206 | 159.87 |
| 2048 | 128 | 1 | 2176 | 3.169 | 646.36 | 2.755 | 46.45 | 5.924 | 367.33 |
| 2048 | 128 | 2 | 2304 | 3.199 | 640.28 | 3.182 | 80.44 | 6.381 | 361.07 |
| 2048 | 128 | 4 | 2560 | 3.279 | 624.66 | 4.917 | 104.13 | 8.196 | 312.36 |
| 2048 | 128 | 8 | 3072 | 3.333 | 614.53 | 8.237 | 124.31 | 11.570 | 265.51 |

ggerganov merged commit d7b800b into master on Dec 3, 2023 (37 checks passed).
YellowRoseCx added a commit to YellowRoseCx/koboldcpp-rocm that referenced this pull request Dec 12, 2023
commit 53b5ae02cb1b533b78302422951bcfdeca6e2738
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Tue Dec 12 12:08:29 2023 -0600

    mixtral fan service

commit 168b1d74e26d0321e2e89358303b6c33e8d7d33e
Merge: f13295b de15d4a6
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Tue Dec 12 12:00:52 2023 -0600

    Merge branch 'kcpp-rocm-mixtral2' into main2

commit de15d4a632939a685ec12fa17355298542facf15
Merge: 74acc54 ea4402b
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Tue Dec 12 11:45:19 2023 -0600

    Merge branch 'mixtral' into kcpp-rocm-mixtral

commit ea4402b
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Dec 12 17:03:38 2023 +0200

    test-backend-ops : add one more sum_rows test

commit a51bc0c
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Dec 12 15:55:42 2023 +0200

    metal : fix binary ops for ne10 % 4 != 0

commit 08eb991
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Dec 12 14:14:15 2023 +0200

    metal : add cpy f16 -> f32 kernel

commit a742d9f
Author: slaren <slarengh@gmail.com>
Date:   Tue Dec 12 12:46:33 2023 +0100

    gguf-py : bump version

commit 6a419f4
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Dec 12 13:04:33 2023 +0200

    convert : support safetensors format

commit 74acc54
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Dec 12 10:53:34 2023 +0800

    Revert "Hide hipBLAS (ROCm) if CuBLAS exists - vice versa"

    This reverts commit 4b854d4.

commit f1cbfab
Author: slaren <slarengh@gmail.com>
Date:   Mon Dec 11 20:02:55 2023 +0100

    convert : fix style

commit 7dc75e3
Author: slaren <slarengh@gmail.com>
Date:   Mon Dec 11 20:00:28 2023 +0100

    convert : use 1e6 rope_freq_base for mixtral

commit 296c945
Author: slaren <slarengh@gmail.com>
Date:   Mon Dec 11 16:53:25 2023 +0100

    cuda : fix mul_mat_id with multi gpu

commit 33e50f1
Author: slaren <slarengh@gmail.com>
Date:   Mon Dec 11 12:27:48 2023 +0100

    test-backend-ops : disable MOE test with thread sanitizer

commit ffda94c
Author: slaren <slarengh@gmail.com>
Date:   Mon Dec 11 12:15:31 2023 +0100

    test-backend-ops : simplify and disable slow tests to avoid CI timeout

commit 06581f2
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Dec 11 16:54:42 2023 +0800

    perf endpoint lets you monitor if the embedded horde worker has issues

commit fce971d
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Dec 11 16:17:10 2023 +0800

    do not build the clblast noavx2 binary if not on windows

commit 8cbaed1
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Dec 11 08:55:16 2023 +0200

    llama : fix hard-coded number of experts

commit 4b854d4
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Sun Dec 10 22:49:35 2023 -0600

    Hide hipBLAS (ROCm) if CuBLAS exists - vice versa

commit b002981
Author: slaren <slarengh@gmail.com>
Date:   Mon Dec 11 02:43:52 2023 +0100

    test-backend-ops : fix dequantize block offset

commit f1380d7
Author: slaren <slarengh@gmail.com>
Date:   Sun Dec 10 22:58:31 2023 +0100

    test-backend-ops : add cpy from f32 -> all types test

commit 54d254b
Author: slaren <slarengh@gmail.com>
Date:   Sun Dec 10 21:52:11 2023 +0100

    test-backend-ops : cleanup, add moe test for batches

commit e2cf3b7
Author: henk717 <henk@henk.tech>
Date:   Sun Dec 10 14:30:17 2023 +0100

    koboldcpp.sh - The Mamba Multitool (LostRuins#554)

    * .sh script V1

    * koboldcpp.sh polish

    * koboldcpp.sh dist generator

    * Include html's in dist

    * RWKV in Linux Dist

    * Lower dependency requirements

    * Eliminate wget dependency

    * More distinct binary name

    I know its technically amd64, but I don't want to cause confusion among nvidia users.

    * Use System OpenCL

    Unsure how this will behave in the pyinstaller build, but pocl ended up CPU only. With a bit of luck the pyinstaller uses the one from the actual system if compiled in a system without opencl, while conda now includes it for that specific system.

    * Add cblas dependency

    Missing this causes compile failures on some system's

    * ICD workaround

    Ideally we find a better solution, but conda forces ICD and needs this for the successful compile. However, pyinstaller then embeds the ICD causing it to be limited to the system it was compiled for. By temporarily removing the ICD pyinstaller can't find it and everything remains functional. Ideally we do this on a pyinstaller level, but I could not find any good options to do so yet.

    ---------

    Co-authored-by: root <root@DESKTOP-DQ1QRAG>

commit 54ba263
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 15:27:41 2023 +0200

    test-backend-ops : make experts more evenly probable (test_moe)

commit b0b83dd
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 14:30:38 2023 +0200

    metal : fix ggml_mul_mat_id for F32

commit 65923a8
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 14:17:46 2023 +0200

    convert : determine n_ctx correctly

commit 8614aa7
Author: slaren <slarengh@gmail.com>
Date:   Sun Dec 10 13:12:11 2023 +0100

    cuda : fix get_rows when ncols is odd

commit cefebb3
Author: slaren <slarengh@gmail.com>
Date:   Sun Dec 10 13:11:39 2023 +0100

    test-backend-ops : add moe test

commit e640cbe
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 13:57:54 2023 +0200

    llama : add n_expert and n_expert_used to hparams + change quants

commit d1259b7
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 13:00:13 2023 +0200

    llama : do not quantize expert gating tensors

commit 6cfb31f
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 10:59:13 2023 +0200

    metal : add indirect mat-vec kernels for all quantization types

commit 016f9bb
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 10 09:38:21 2023 +0200

    metal : fix ggml_get_rows to work with non-cont src1

commit 0710b0f
Author: slaren <slarengh@gmail.com>
Date:   Sat Dec 9 23:29:47 2023 +0100

    llama : offload missing ffn_moe_silu

commit 62b95f9
Author: slaren <slarengh@gmail.com>
Date:   Sat Dec 9 22:39:34 2023 +0100

    cuda : support non-contiguous src1 in get_rows

commit 2e4db48
Author: slaren <slarengh@gmail.com>
Date:   Sat Dec 9 22:38:22 2023 +0100

    ggml : update get_rows f16 and q

commit ac3f7d8
Author: slaren <slarengh@gmail.com>
Date:   Sat Dec 9 19:19:03 2023 +0100

    ggml : get_rows : support non-contiguos tensors with gaps, generalize up to 3D

commit 8c5b66e
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 15:30:34 2023 +0200

    metal : reduce the kernel launches for ggml_mul_mat_id

commit 7e2006b
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 14:24:58 2023 +0200

    metal : add/mul/div use general kernel when src1 not cont

commit 06dfde3
Author: slaren <slarengh@gmail.com>
Date:   Sat Dec 9 13:21:09 2023 +0100

    llama : add basic support for offloading moe with CUDA

commit 2cbcba8
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 14:18:42 2023 +0200

    metal : add more general support for ggml_get_rows + tests

commit 9064b1c
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 14:04:54 2023 +0200

    ggml : fix ggml_get_rows to take into account ne02 / ne11

commit ee8fb39
Author: slaren <slarengh@gmail.com>
Date:   Sat Dec 9 12:42:25 2023 +0100

    ggml : add n_as argument to ggml_mul_mat_id

commit 7372b62
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 13:18:58 2023 +0200

    ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)

commit 8b185b7
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 13:01:42 2023 +0200

    llama : fix expert weighting in the FFN

commit 7ea3695
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 12:45:15 2023 +0200

    llama : first working version

commit af1a096
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 12:07:39 2023 +0200

    llama : fix cur -> cur_expert

commit aedfad1
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 11:47:40 2023 +0200

    llama : update graph to support MoE

commit 861cd67
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 11:19:46 2023 +0200

    ggml : sync latest ggml_mul_mat_id

commit a3eefe9
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 11:14:03 2023 +0200

    llama : model loading

commit d38e41e
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 10:59:37 2023 +0200

    convert : fix n_ff typo

commit dff8cbe
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Dec 9 10:51:58 2023 +0200

    convert : support Mixtral as LLAMA arch

commit 7a69152
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Dec 8 21:06:32 2023 +0800

    lowvram var defaults

commit 7418bca
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Dec 8 19:20:30 2023 +0800

    up ver

commit c47bc28
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Dec 8 18:35:45 2023 +0800

    slight refactor for noscript ui

commit 7469f20
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Dec 8 18:16:14 2023 +0800

    use lowvram flag for offload qkv

commit ec21fa7
Merge: 930cdfb fe680e3
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Dec 8 17:42:26 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	.github/workflows/build.yml
    #	.gitignore
    #	CMakeLists.txt
    #	Makefile
    #	Package.swift
    #	README.md
    #	ggml-cuda.cu
    #	llama.cpp
    #	llama.h
    #	scripts/sync-ggml.sh
    #	tests/CMakeLists.txt

commit 930cdfb
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Dec 8 16:53:30 2023 +0800

    updated lite, added patch that links to noscript mode

commit fe680e3
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Thu Dec 7 22:26:54 2023 +0200

    sync : ggml (new ops, tests, backend, etc.) (ggerganov#4359)

    * sync : ggml (part 1)

    * sync : ggml (part 2, CUDA)

    * sync : ggml (part 3, Metal)

    * ggml : build fixes

    ggml-ci

    * cuda : restore lost changes

    * cuda : restore lost changes (StableLM rope)

    * cmake : enable separable compilation for CUDA

    ggml-ci

    * ggml-cuda : remove device side dequantize

    * Revert "cmake : enable separable compilation for CUDA"

    This reverts commit 09e35d0.

    * cuda : remove assert for rope

    * tests : add test-backend-ops

    * ggml : fix bug in ggml_concat

    * ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`

    * ci : try to fix macOS

    * ggml-backend : remove backend self-registration

    * ci : disable Metal for macOS cmake build

    ggml-ci

    * metal : fix "supports family" call

    * metal : fix assert

    * metal : print resource path

    ggml-ci

    ---------

    Co-authored-by: slaren <slarengh@gmail.com>

commit bcc0eb4
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Thu Dec 7 13:03:17 2023 +0200

    llama : per-layer KV cache + quantum K cache (ggerganov#4309)

    * per-layer KV

    * remove unnecessary copies

    * less code duplication, offload k and v separately

    * llama : offload KV cache per-layer

    * llama : offload K shift tensors

    * llama : offload for rest of the model arches

    * llama : enable offload debug temporarily

    * llama : keep the KV related layers on the device

    * llama : remove mirrors, perform Device -> Host when partial offload

    * common : add command-line arg to disable KV cache offloading

    * llama : update session save/load

    * llama : support quantum K cache (ggerganov#4312)

    * llama : support quantum K cache (wip)

    * metal : add F32 -> Q8_0 copy kernel

    * cuda : add F32 -> Q8_0 copy kernel

    ggml-ci

    * cuda : use mmv kernel for quantum cache ops

    * llama : pass KV cache type through API

    * llama : fix build

    ggml-ci

    * metal : add F32 -> Q4_0 copy kernel

    * metal : add F32 -> Q4_1 copy kernel

    * cuda : wip

    * cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels

    * llama-bench : support type_k/type_v

    * metal : use mm kernel only for quantum KV cache

    * cuda : add comment

    * llama : remove memory_f16 and kv_f16 flags

    ---------

    Co-authored-by: slaren <slarengh@gmail.com>

    * readme : add API change notice

    ---------

    Co-authored-by: slaren <slarengh@gmail.com>

commit 81bc921
Author: Hongyu Ouyang <96765450+casavaca@users.noreply.github.com>
Date:   Thu Dec 7 02:25:22 2023 -0800

    train : fix ggerganov#4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (ggerganov#4351)

    On commit b1108 (44c117f) xaedes added

        ggml_allocr * alloc = NULL;

        ... (many lines in between)

        if (alloc) {
            ggml_allocr_free(alloc);
        }

    Which is correct, but it's easy to lose context after many lines in between.

    On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.

        alloc = ggml_allocr_new(...)
        ... (short lines of code)
        ggml_allocr_free(alloc)

    This happens a few times, but alloc is never set to NULL, and many lines below,
    we still have

        if (alloc) {
            ggml_allocr_free(alloc);
        }

    which causes a double-free.

commit 05cd6e5
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Dec 6 20:21:59 2023 +0200

    server : recognize cache_prompt parameter in OAI API (ggerganov#4347)

commit c751152
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Dec 7 00:52:25 2023 +0800

    noscript mode is done

commit 12002d8
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Dec 6 17:51:08 2023 +0800

    very basic noscript mode

commit caa9249
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Dec 6 10:41:03 2023 +0200

    common : fix compile warning

commit da5eaef
Author: stduhpf <stephduh@live.fr>
Date:   Wed Dec 6 09:08:17 2023 +0100

    speculative : support `--color` (ggerganov#4343)

    * speculative: add some colors

    * minor : add braces

    ---------

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit 5f6e0c0
Author: Marcus Dunn <51931484+MarcusDunn@users.noreply.github.com>
Date:   Tue Dec 5 10:55:12 2023 -1000

    grammar : pre-computed pieces + reserve mem + less string copies (ggerganov#4330)

    * reserve space for codepoints

    * improvement for the appended 0

    * used precomputed token text for grammar sample

    * reserve canidates_decoded

    * reserve canidates_grammar

    * remove candidates_decoded

    * Revert "remove candidates_decoded"

    This reverts commit 3773328.

    * changed decode_utf8 to take src by ref

commit 5aa365d
Author: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Date:   Tue Dec 5 10:19:18 2023 -0700

    llama : allow overriding GGUF metadata when loading model (ggerganov#4092)

    * feat: Allow overriding GGUF metadata when loading model

    * Fix the one time GCC is stricter than clang about something

    * Step1

    * Refactor... basically everything!

    * Nuke obsolete GetArrayLen struct

    * simplify std::string specialization

    * Various cleanups

    Add informational output when overrides are applied

    Warn user when an override with the wrong type is specified

    * Fix broken logic for parsing bool KV overrides
    Fix issue where overrides didn't apply when key missing in GGUF metadata
    Resolve merge changes

    * llama : rearrange model params

    * Update new GET_KEY call

    Add note that metadata KV overrides aren't reflected in initial metadata KV info dump

    ---------

    Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit b6f952f
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Dec 5 21:08:10 2023 +0800

    improved exit logic

commit 52c8bc3
Author: MaggotHATE <clay1326@gmail.com>
Date:   Tue Dec 5 15:05:51 2023 +0500

    sampling : custom samplers order (ggerganov#4285)

    * Samplers sequence order w parameter

    * Cleaned commented code

    * Fixed formatting

    * Rewrote with unordered_map

    * Revert and rewrite, too many problems and safeguards would be needed

    * Fixed code style

    * Code style fixes according to review

    * More readable samplers input string, fixed help

    * Style fix in sampler_queue

    * Formatting fixes

    * Fixing whitespaces

commit e4b76bb
Author: kchro3 <62481661+kchro3@users.noreply.github.com>
Date:   Mon Dec 4 23:29:46 2023 -0800

    swift : revert compiler checks for swift package (ggerganov#4332)

commit 23b5e12
Author: Daniel Bevenius <daniel.bevenius@gmail.com>
Date:   Mon Dec 4 17:04:21 2023 +0100

    simple : update error message for KV cache check (ggerganov#4324)

    This commit updates the error message that is printed when the
    KV cache is not big enough to hold all the prompt and generated
    tokens. Specifically it removes the reference to n_parallel and
    replaces it with n_len.

    Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

commit d208995
Author: Miwa / Ensan <63481257+ensan-hcl@users.noreply.github.com>
Date:   Tue Dec 5 01:03:49 2023 +0900

    swift : fix concatenation method to avoid invalid UTF8 stringfication (ggerganov#4325)

commit 5c9f90c
Author: Miwa / Ensan <63481257+ensan-hcl@users.noreply.github.com>
Date:   Mon Dec 4 22:43:45 2023 +0900

    swift : fix prompt tokenization logic (ggerganov#4321)

commit a5a5839
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Dec 4 21:10:42 2023 +0800

    handle accidentally selecting a kcpps file as model instead

commit 4fa44e8
Author: Ikko Eltociear Ashimine <eltociear@gmail.com>
Date:   Mon Dec 4 16:57:35 2023 +0900

    grammar-parser : fix typo (ggerganov#4318)

    preceeding -> preceding

commit 8602f5a
Merge: ac36aee fbbc428
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Sun Dec 3 22:00:14 2023 +0800

    Merge branch 'master' into concedo_experimental

commit fbbc428
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 3 15:56:35 2023 +0200

    ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (ggerganov#4308)

    * ggml : fix soft max out-of-bounds access

    ggml-ci

    * ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()

    ggml-ci

commit ac36aee
Merge: 48544cd 33e171d
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Sun Dec 3 21:56:29 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	CMakeLists.txt
    #	Makefile

commit adf3de4
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 3 15:56:22 2023 +0200

    ggml : fix soft max out-of-bounds access (ggerganov#4307)

    ggml-ci

commit 48544cd
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Sun Dec 3 21:46:50 2023 +0800

    Revert "Revert "ggml : add ggml_soft_max_ext (ggerganov#4256)""

    This reverts commit a8e66ef.

commit 33e171d
Author: Ed Lee <edilee@mozilla.com>
Date:   Sun Dec 3 01:10:43 2023 -0800

    server : fix OpenAI API `stop` field to be optional (ggerganov#4299)

    (cherry picked from commit Mozilla-Ocho/llamafile@e8c92bc)

commit 6949b50
Author: Rickard Edén <rickardeden@gmail.com>
Date:   Sun Dec 3 10:03:25 2023 +0100

    py : add grammar to oai like api (ggerganov#4294)

commit d7b800b
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sun Dec 3 10:58:16 2023 +0200

    llama : pad KV cache size (ggerganov#4280)

    * llama : pad KV cache size to 32

    * metal : try to improve batched decoding

commit 6570a20
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Sun Dec 3 15:44:53 2023 +0800

    token count includes ids

commit 5a7d312
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Fri Dec 1 20:39:12 2023 +0200

    llama : avoid using "optional" keyword (ggerganov#4283)

commit d5a1cbd
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Fri Dec 1 20:35:03 2023 +0200

    llama : support optional tensors (ggerganov#4283)

commit b220222
Author: Miwa / Ensan <63481257+ensan-hcl@users.noreply.github.com>
Date:   Sat Dec 2 03:19:45 2023 +0900

    swift : fix token_to_piece implementation (ggerganov#4278)

    * Fix token_to_piece implementation in Swift

    * Fix errors

commit 511f52c
Author: Jared Van Bortel <jared@nomic.ai>
Date:   Fri Dec 1 13:18:35 2023 -0500

    build : enable libstdc++ assertions for debug builds (ggerganov#4275)

commit 03562f3
Author: CausalLM <148736309+CausalLM@users.noreply.github.com>
Date:   Sat Dec 2 02:17:06 2023 +0800

    llama : support attention bias on LLaMA architecture (ggerganov#4283)

    * Support attention_bias on LLaMA architecture

    QKVO bias, should fix InternLM (ggerganov#3133) and works for LLaMAfied Qwen models (ggerganov#3743 (comment)).

    * check existence of qkvo bias while loading llama models

    Tested on LLaMA2, CUDA and CPU.

    * Update llama.cpp

commit 37c746d
Author: Shijie <821898965@qq.com>
Date:   Sat Dec 2 02:16:31 2023 +0800

    llama : add Qwen support (ggerganov#4281)

    * enable qwen to llama.cpp

    * llama : do not GPU split bias tensors

    ---------

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit 880f579
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Fri Dec 1 18:42:11 2023 +0200

    llama : fix integer overflow during quantization (ggerganov#4284)

    happens with multi-threaded quantization of Qwen-72B

    ggml-ci
wordshk pushed a commit to wordshk/llama.cpp that referenced this pull request Feb 22, 2024
I found that parallel (in examples/parallel) was unusable when -np > 1.
I bisected the issue down to d7b800b

I don't really understand anything about kv-cache, just that the change
caused parallel to emit nonsense on my M2 Mac Studio (Apple M2 Max,
macOS 14.1.2 (23B92)). The comments around it say kv_self.n is a
heuristic (and seems to have comments suggesting other possible values
for assignment), so I presume that it shouldn't be a problem to remove
the GGML_PAD(). Empirically it seems to work fine. That said, it does
sound like the bug could run deeper, but it is beyond my ability to
understand what the root cause might be.

It apparently reproduces on various models not only tinyllama, but this
one is small so it should be more convenient to reproduce. While
tinyllama isn't known for the quality of its output, there is still an
obvious difference between the nonsense output and the normal output.

Reproduction:

`./parallel -c 99999 -n 30 -ns 10 -np 2 -m ~/Downloads/tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf`

Before fix (example bad output):

```
Input:    If you could have any superpower, what would it be?
Response: . In the 812
```

After fix (example expected output):

```
Input:    If you could have any superpower, what would it be?
Response: I would choose the power of being able to control time. The power
```

----

After typing the above I realized when running larger models, the
problem is less apparent, but still exists. For example, I tried
mixtral:

`./parallel -c 99999 -n 50 -ns 50 -np 2 -m ~/Downloads/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf`

And one of the outputs was:

```
Input:    Recommend some interesting books to read.
Response: I recommend the book "Surelyourecommend would suggest starting
with the book "The Foundation for Self-Help by Dr. Micahelle myself to
anywhere in the world"
```

The problem with the above response is obvious, but here's another that
isn't so obvious if you just glance at it:

```
Input:    I want to learn how to play the piano.
Response: That's great! I could recommend a personalize piano lessons
with a piano teacher. This will allow you to learn at your own pace. You
can practice scales and chords,
```

Note that "a personalize piano lessons" is not grammatical English, a
mistake that mixtral should not make. I didn't notice any such errors
when testing with this patch applied.
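
For reference, the fix described in the commit message above amounts to dropping the rounding that this PR added to the KV heuristic. A hedged before/after sketch in C++ (names are illustrative; the real code lives in llama.cpp's decode path):

```cpp
#include <algorithm>
#include <cstdint>

// Heuristic with the padding from this PR (the commit the issue was bisected to):
static int32_t kv_n_with_pr_padding(int32_t n_ctx, int32_t cell_max) {
    const int32_t padded = ((cell_max + 31) / 32) * 32;     // effect of GGML_PAD(cell_max, 32)
    return std::min(n_ctx, std::max<int32_t>(32, padded));
}

// Heuristic after removing GGML_PAD(), as the commit above does:
static int32_t kv_n_without_padding(int32_t n_ctx, int32_t cell_max) {
    return std::min(n_ctx, std::max<int32_t>(32, cell_max));
}
```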
hodlen added a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
llama : restore prefix space in llama tokenizer (ggerganov#4081)

gguf : fix potential infinite loops while parsing (ggerganov#4100)

Co-authored-by: Bernhard Gstrein <gstrein@cs.uni-freiburg.de>

Respect tokenizer.ggml.add_bos_token value when tokenizing (ggerganov#4040)

* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.

* Respect add_bos_token GGUF metadata value

* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time

llama : fix data units (ggerganov#4101)

* llama : fix data units

ggml-ci

* Revert "llama : fix data units"

This reverts commit f5feac8.

* llama : disambiguate data units

ggml-ci

cuda : get_row_rounding F32 (ggerganov#4095)

* Fix ggerganov#4017

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

finetune : zero the loraB initial vectors (ggerganov#4082)

* finetune : zero the loraB initial vectors

Without this, the first iteration is starting out far from the base model, instead of exactly on it.
Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs
(though it departs from the paper in using a different distribution for the other vector, in some cases).

* tabs to spaces

* Use ggml_set_zero instead of adding a new function

finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (ggerganov#4079)

* Remove logically superfluous assertions and order by dimension

* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()

* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace

* Add openBLAS support for sgemm() in compute_forward_out_prod()

llama : add functions to get the model's metadata (ggerganov#4013)

* llama : add functions to get the model's metadata

* format -> std::to_string

* better documentation

train : move number of gpu layers argument parsing to common/train.cpp (ggerganov#4074)

- introduces help entry for the argument
 - cuts '--gpu-layers' form in order to simplify usage and documentation.

Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>

py : remove superfluous import statements (ggerganov#4076)

Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>

llava : fix compilation warning that fread return value is not used (ggerganov#4069)

common : improve yaml log escaping (ggerganov#4080)

* logging: improve escaping in yaml output

* logging: include review feedback

py : Falcon HF compatibility (ggerganov#4104)

Falcon HF compatibility

convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (ggerganov#4089)

Co-authored-by: Don Mahurin <@>

examples : add tokenize (ggerganov#4039)

tokenize : fix trailing whitespace

build : support ppc64le build for make and CMake (ggerganov#3963)

* build: support ppc64le build for make and CMake

* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

llama : increase max nodes (ggerganov#4115)

Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (ggerganov#4124)

* ggml-cuda.cu: Clean up warnings when compiling with clang

* ggml-cuda.cu: Move static items into anonymous namespace

* ggml-cuda.cu: Fix use of namespace start macro

* Revert "ggml-cuda.cu: Fix use of namespace start macro"

This reverts commit 26c1149.

* Revert "ggml-cuda.cu: Move static items into anonymous namespace"

This reverts commit e29757e.

scripts : Remove missed baichuan convert script (ggerganov#4127)

tokenize example: Respect normal add BOS token behavior (ggerganov#4126)

Allow building with Makefile

gguf-py : export chat templates (ggerganov#4125)

* gguf-py : export chat templates

* llama.cpp : escape new lines in gguf kv info prints

* gguf-py : bump version

* gguf-py : check chat_template type

* gguf-py : initialize chat_template

gitignore : tokenize

common : comma should be semicolon (ggerganov#4137)

server : relay error messages (ggerganov#4131)

finetune : add --n-gpu-layers flag info to --help (ggerganov#4128)

Revert "finetune : add --n-gpu-layers flag info to --help (ggerganov#4128)"

This reverts commit 05e8301.

speculative : fix prompt tokenization in speculative example (ggerganov#4025)

* Support special tokens and not adding BOS to prompt in speculative

* Adapt to new should_add_bos function

* Ensure tgt and dft have same add_bos setting

ci : add flake8 to github actions (python linting) (ggerganov#4129)

Disabled rules:

* E203 Whitespace before ':' - disabled because we often use 'C' Style where values are aligned

* E211 Whitespace before '(' (E211) - disabled because we often use 'C' Style where values are aligned

* E221 Multiple spaces before operator - disabled because we often use 'C' Style where values are aligned

* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard

* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' Style where values are aligned

* E241 Multiple spaces after ',' - disabled because we often use 'C' Style where values are aligned

* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard

* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard

* E266 Too many leading '#' for block comment - sometimes used as "section" separator

* E501 Line too long - disabled because it's broken so often it seems like a standard

* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use# noqa instead)

* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use# noqa instead)

main : Add ChatML functionality to main example (ggerganov#4046)

Co-authored-by: Sebastian Cramond <sebby37@users.noreply.github.com>

readme : update ROCm Windows instructions (ggerganov#4122)

* Update README.md

* Update README.md

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

finetune - update readme to mention llama support only (ggerganov#4148)

stablelm : simplify + speedup generation (ggerganov#4153)

docs : add llama-star arch idea

examples : fix typo in parallel example doc comment (ggerganov#4181)

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

readme : update hot topics

llama : KV cache view API + better KV cache management (ggerganov#4170)

* llama : keep track of used KV cells + better KV cache management

* llama : zero KV cache used upon clear

ggml-ci

* llama : allow exporting a view of the KV cache (ggerganov#4180)

* Allow exporting a view of the KV cache

* Allow dumping the sequences per cell in common

* Track max contiguous cells value and position as well

* Fix max contiguous empty cells index calculation

Make dump functions deal with lengths or sequences counts > 10 better

* Fix off by one error in dump_kv_cache_view

* Add doc comments for KV cache view functions

Eliminate cell sequence struct; use llama_seq_id directly

Minor cleanups

* common : add -dkvc arg for enabling kv cache dumps

---------

Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>

Fix incorrect format strings and uninitialized variables. (ggerganov#4133)

* Fix incorrect format strings and uninitialized variables.

* Address comments

* Add the missing include statement

readme : use PATH for Windows ROCm (ggerganov#4195)

* Update README.md to use PATH for Windows ROCm

* Update README.md

* Update README.md

main.swift : fix eos checking (ggerganov#4197)

llama_token_eos(const struct llama_model *) is currently getting struct llama_context type variable context as a parameter.

convert : fix tensors using grad in some models (ggerganov#4173)

ggml-cuda : support stablelm rope (ggerganov#4156)

* ggml-cuda : support stablelm rope

* remove unused freq_base kernel parameter

* add n_dims parameter to llm_build_k_shift, default to n_rot via overload

* llama : fix llm_build_k_shift args

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

llama : set metal log callback correctly (ggerganov#4204)

server : OAI API compatibility (ggerganov#4198)

* Add openai-compatible POST /v1/chat/completions API endpoint to server example

* fix code style

* Update server README.md

* Improve server README.md

* Fix server.cpp code style according to review

* server : some style changes

* server : indentation

* server : enable special tokens during tokenization by default

* server : minor code style

* server : change random string generator

* straightforward /v1/models endpoint

---------

Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>

readme : update hot topics

Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (ggerganov#4189)

llama : grammar `reserve` space in `decode_utf8` (ggerganov#4210)

* reserve space for codepoints

* improvement for the appended 0

scripts : Use mmap in torch load (ggerganov#4202)

* Use mmap in torch load, prefer .bin files when loading

* Revert .bin > .safetensors preference

metal : fix yarn (ggerganov#4220)

get the correct n_orig_ctx in metal

lookahead : add example for lookahead decoding (ggerganov#4207)

* lookahead : init

* lookahead : generate and store n-grams

* lookahead : use loop instead recursion to generate n-grams

* lookahead : initial working implementation

* lookahead : filter repeating n-grams

* lookahead : use deterministic init

* lookahead : add to Makefile

* lookahead : fix a bug in the seq_id of the lookahead tokens

* lookahead : add comments

---------

Co-authored-by: slaren <slarengh@gmail.com>

readme : update hot topics

lookahead : support `-n -1` infinite generation

ggml : fix -Warray-bounds warning with gcc (ggerganov#4231)

examples : iOS example with swift ui (ggerganov#4159)

* copy to llama.cpp as subdir

* attempt enabling metal, fails

* ggml metal compiles!

* Update README.md

* initial conversion to new format, utf8 errors?

* bug fixes, but now has an invalid memory access :(

* added O3, now has insufficient memory access

* begin sync with master

* update to match latest code, new errors

* fixed it!

* fix for loop conditionals, increase result size

* fix current workflow errors

* attempt a llama.swiftui workflow

* Update .github/workflows/build.yml

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

readme : add Amica to UI list (ggerganov#4230)

cmake : fix issue with version info not getting baked into LlamaConfig.cmake (ggerganov#3970)

* Split CPP generation from build-info query

* Remove blank lines

* Add BUILD_SHARED_LIBS option

ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (ggerganov#4240)

* ggml : use blas even if src0 is not F32

* llama : use n_threads_batch only when n_tokens >= 32

ggml-ci

* llama : revert n_threads_batch logic

ggml-ci

ggml : restore abort() in GGML_ASSERT (ggerganov#4242)

readme : add FreeChat (ggerganov#4248)

examples : add readme files

py : fix oai proxy (ggerganov#3972)

* fix oai proxy

fix generation not stoped while bot stop talking in chat mode

fix possible `slot_id` not exist

response for cors (and pre flight)

* oai proxy: workaround for some client (such as Chatbox)

* use stop as separator to replace hardcoded `\n`

llama : fix typical sampling (ggerganov#4261)

Typical sampling was broken because after copying new_candidates into canditates, the "sorted" bool is left at "true", but the new data is no longer sorted according to probability. Patch to set "sorted" to false.

Test: Generating with temp=0.0001 (approx. argmax)  should generate the same sequence at typical>=1.0 and typical=0.9999 (approx. disabled, but enters the typical sampling codepath).

convert.py : fix llama/llama2 conversion due to vocab_size=-1 (ggerganov#4258)

llama : fix alignment of general.name in print meta (ggerganov#4254)

* llama: fix alignment of general.name in print meta

This commit fixes the alignment of the general.name field in the
llm_load_print_meta function.

Currently the output looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name   = LLaMA v2
```
And with this commit it looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name     = LLaMA v2
```

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

* llama: fix alignment of special tokens

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

---------

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

readme : fix typo (ggerganov#4253)

llama.cpp uses GitHub Actions, not Gitlab Actions.

cmake : fix the metal file foder path (ggerganov#4217)

batched.swift : update README.md (ggerganov#4214)

docs: update how to run

docker : add finetune option (ggerganov#4211)

readme : fix (ggerganov#4135)

* fix: readme

* chore: resolve comments

* chore: resolve comments

main : pass LOG_TEE callback to llama.cpp log (ggerganov#4033)

* main : Call llama_log_set to use LOG_TEE

* tabs to spaces

llava : ShareGPT4V compatibility (vision encoder only loading) (ggerganov#4172)

* ShareGPT4 compatibility (vision encoder only loading)

Load only a CLIP vision encoder (as supplied by ShareGPT finetunes)
Corrects the argument parsing for --img_mean and --img_std (which were previously not parsed but attempted to access)
Defines defaults for img_mean and img_std which are equal to the llava 1.5 CLIP encoder, so you do not have to provide them

* Update convert-image-encoder-to-gguf.py

build : fix build info generation and cleanup Makefile (ggerganov#3920)

* cmake : fix joining of REAL_GIT_DIR

* fix includes with help from include-what-you-use

* make : remove unneeded deps and add test-rope target

* fix C includes in C++ source files

* Revert "fix includes with help from include-what-you-use"

This reverts commit 635e9fa.

make : fix Apple clang determination bug (ggerganov#4272)

Co-authored-by: Will Findley <findley@gmail.com>

server : add single-client multi-prompt support (ggerganov#4232)

* * add multiprompt support

* * cleanup

* * more cleanup

* * remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests

* * remove all references to mutex_multitasks

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* * change to set

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

server : add --log-disable to disable logging to file (ggerganov#4260)

* * add --log-disable to disable logging to file in the server example

* * typo fix

ggml : add ggml_soft_max_ext (ggerganov#4256)

* metal : implement soft_max_ext

* cuda : implement soft_max_ext

* ggml : implement soft_max_ext (CPU)

* batched-bench : print threads

ggml-ci

* metal : simplify soft_max encoding

ggml-ci

* cuda : use 512 threads for soft_max instead of 32

* ggml : update soft max cpu

* cuda : do warp-based block reduce

* cuda : increase max block size to 1024

* cuda : fix warp reduction initialization of shared mem

* metal : warp-based reduction for soft max kernel

* metal : warp-based reduce for rms_norm

* metal : simplify soft max kernel

ggml-ci

* alloc : fix build with debug

py : add requirements file for convert-hf-to-gguf.py (ggerganov#4277)

This commit adds a requirements file for the convert-hf-to-gguf.py
script, and also add the torch and transformers packages to it.

The motivation for this is that currently running convert-hf-to-gguf.py
will produce the following error:
```console
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt
Collecting numpy==1.24.4
Collecting sentencepiece==0.1.98
Collecting gguf>=0.1.0
Installing collected packages: sentencepiece, numpy, gguf
Successfully installed gguf-0.5.1 numpy-1.24.4 sentencepiece-0.1.98

(venv) $ python convert-hf-to-gguf.py --help
Traceback (most recent call last):
  File "llama.cpp/convert-hf-to-gguf.py", line 16, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
```
With this commit, and using requirements-hf-to-gguf.txt instead of
requirements.txt, the script can be run and shows the help output.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

llama : fix integer overflow during quantization (ggerganov#4284)

happens with multi-threaded quantization of Qwen-72B

ggml-ci
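
As an aside, a hedged illustration of the class of bug (the sizes are made up and this is not the actual diff): once per-thread element counts or byte offsets for a 72B model are accumulated in 32-bit integers, they can silently wrap, so the counters have to be 64-bit.

```cpp
// Hedged illustration: a 32-bit counter silently wraps for very large tensors,
// a 64-bit counter does not. Unsigned arithmetic is used so the wrap is well defined.
#include <cstdint>
#include <cstdio>

int main() {
    const int64_t ne0   = 8192;   // illustrative row size
    const int64_t nrows = 600000; // illustrative row count for a very large tensor

    const uint32_t wrapped = (uint32_t) ne0 * (uint32_t) nrows; // exceeds 2^32 and wraps
    const int64_t  correct = ne0 * nrows;                       // 4'915'200'000 elements

    printf("32-bit count: %u\n64-bit count: %lld\n", (unsigned) wrapped, (long long) correct);
    return 0;
}
```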

llama : add Qwen support (ggerganov#4281)

* enable qwen to llama.cpp

* llama : do not GPU split bias tensors

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

llama : support attention bias on LLaMA architecture (ggerganov#4283)

* Support attention_bias on LLaMA architecture

Adds QKVO bias support; this should fix InternLM (ggerganov#3133) and also works for LLaMAfied Qwen models (ggerganov#3743 (comment)).

* check existence of qkvo bias while loading llama models

Tested on LLaMA2, CUDA and CPU.

* Update llama.cpp
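
A hedged fragment of the pattern described here (field names are illustrative, not the exact diff): the bias tensors are optional, so the graph only adds them when they were found in the model file, which keeps existing LLaMA models unaffected.

```cpp
// Hedged fragment: apply the attention bias only when the optional tensor exists.
#include "ggml.h"

struct attn_weights {                  // illustrative stand-in for the per-layer model struct
    struct ggml_tensor * wq = nullptr; // required Q projection
    struct ggml_tensor * bq = nullptr; // optional Q bias (may be absent from the GGUF)
};

static struct ggml_tensor * build_q(struct ggml_context * ctx, const attn_weights & w, struct ggml_tensor * cur) {
    struct ggml_tensor * q = ggml_mul_mat(ctx, w.wq, cur);
    if (w.bq) {
        q = ggml_add(ctx, q, w.bq);    // same pattern for K (bk), V (bv) and the output projection (bo)
    }
    return q;
}
```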

build : enable libstdc++ assertions for debug builds (ggerganov#4275)

swift : fix token_to_piece implementation (ggerganov#4278)

* Fix token_to_piece implementation in Swift

* Fix errors

llama : support optional tensors (ggerganov#4283)

llama : avoid using "optional" keyword (ggerganov#4283)

llama : pad KV cache size (ggerganov#4280)

* llama : pad KV cache size to 32

* metal : try to improve batched decoding
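
For context, a hedged sketch of the padding idea (the helper and the way the 32-cell constant is applied are illustrative): rather than sizing the attention ops to the exact number of used KV cells, round that number up to a multiple of 32 so the resulting matrix shapes stay kernel-friendly; this is where the small TG speedup comes from.

```cpp
// Hedged sketch: round the active KV cell count up to a multiple of 32.
#include <algorithm>
#include <cstdint>
#include <cstdio>

static uint32_t pad_to(uint32_t x, uint32_t n) {
    return ((x + n - 1) / n) * n; // smallest multiple of n that is >= x
}

int main() {
    const uint32_t kv_size  = 4096; // total cache capacity (illustrative)
    const uint32_t cell_max = 77;   // highest used cell + 1 at this decode step (illustrative)

    // pad to 32, but never beyond the cache capacity
    const uint32_t n_kv = std::min(kv_size, std::max(32u, pad_to(cell_max, 32)));

    printf("attention ops see n_kv = %u instead of %u cells in use\n", (unsigned) n_kv, (unsigned) cell_max);
    return 0;
}
```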

py : add grammar to oai like api (ggerganov#4294)

server : fix OpenAI API `stop` field to be optional (ggerganov#4299)

(cherry picked from commit Mozilla-Ocho/llamafile@e8c92bc)

ggml : fix soft max out-of-bounds access (ggerganov#4307)

ggml-ci

ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (ggerganov#4308)

* ggml : fix soft max out-of-bounds access

ggml-ci

* ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()

ggml-ci

grammar-parser : fix typo (ggerganov#4318)

preceeding -> preceding

swift : fix prompt tokenization logic (ggerganov#4321)

swift : fix concatenation method to avoid invalid UTF-8 stringification (ggerganov#4325)

simple : update error message for KV cache check (ggerganov#4324)

This commit updates the error message that is printed when the
KV cache is not big enough to hold all the prompt and generated
tokens. Specifically it removes the reference to n_parallel and
replaces it with n_len.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

swift : revert compiler checks for swift package (ggerganov#4332)

sampling : custom samplers order (ggerganov#4285)

* Samplers sequence order via parameter

* Cleaned commented code

* Fixed formatting

* Rewrote with unordered_map

* Revert and rewrite, too many problems and safeguards would be needed

* Fixed code style

* Code style fixes according to review

* More readable samplers input string, fixed help

* Style fix in sampler_queue

* Formatting fixes

* Fixing whitespaces
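
A hedged, plain-C++ sketch of the mechanism (the letter mapping is illustrative; check common/sampling for the exact codes): a short user-supplied string decides which sampling stages run and in what order.

```cpp
// Hedged sketch of a configurable sampler order: a short string selects which
// sampling stages run and in what sequence.
#include <cstdio>
#include <string>

static void apply_sampler(char code) {
    switch (code) {
        case 'k': printf("top-k\n");       break;
        case 'p': printf("top-p\n");       break;
        case 'm': printf("min-p\n");       break;
        case 'f': printf("tail-free\n");   break;
        case 'y': printf("typical-p\n");   break;
        case 't': printf("temperature\n"); break;
        default:  printf("unknown sampler '%c', skipped\n", code); break;
    }
}

int main() {
    const std::string order = "kfypmt"; // user-supplied order, e.g. via a CLI flag
    for (char c : order) {
        apply_sampler(c); // each stage would filter/re-weight the candidate list here
    }
    return 0;
}
```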

llama : allow overriding GGUF metadata when loading model (ggerganov#4092)

* feat: Allow overriding GGUF metadata when loading model

* Fix the one time GCC is stricter than clang about something

* Step1

* Refactor... basically everything!

* Nuke obsolete GetArrayLen struct

* simplify std::string specialization

* Various cleanups

Add informational output when overrides are applied

Warn user when an override with the wrong type is specified

* Fix broken logic for parsing bool KV overrides
Fix issue where overrides didn't apply when key missing in GGUF metadata
Resolve merge changes

* llama : rearrange model params

* Update new GET_KEY call

Add note that metadata KV overrides aren't reflected in initial metadata KV info dump

---------

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
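
A hedged sketch of the override mechanism (struct and key names are illustrative, not the exact public API): each override is a key plus a typed value, and when the loader reads a metadata key it first checks for a matching override, warning if the types disagree.

```cpp
// Hedged sketch of typed metadata overrides: a key plus a tagged value that
// takes precedence over what the GGUF file contains.
#include <cstdint>
#include <cstdio>
#include <cstring>

enum kv_override_type { KV_OVERRIDE_INT, KV_OVERRIDE_FLOAT, KV_OVERRIDE_BOOL };

struct kv_override {
    char             key[128];
    kv_override_type type;
    union {
        int64_t val_i64;
        double  val_f64;
        bool    val_bool;
    };
};

// Returns true and fills *out if an override for `key` exists.
static bool find_override(const kv_override * ovrd, int n, const char * key, kv_override * out) {
    for (int i = 0; i < n; ++i) {
        if (strcmp(ovrd[i].key, key) == 0) { *out = ovrd[i]; return true; }
    }
    return false;
}

int main() {
    kv_override overrides[1] = {};
    snprintf(overrides[0].key, sizeof(overrides[0].key), "llama.expert_used_count"); // illustrative key
    overrides[0].type    = KV_OVERRIDE_INT;
    overrides[0].val_i64 = 2;

    kv_override hit;
    if (find_override(overrides, 1, "llama.expert_used_count", &hit) && hit.type == KV_OVERRIDE_INT) {
        printf("override applied: %lld (the loader would warn on a type mismatch)\n", (long long) hit.val_i64);
    }
    return 0;
}
```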

grammar : pre-computed pieces + reserve mem + less string copies (ggerganov#4330)

* reserve space for codepoints

* improvement for the appended 0

* used precomputed token text for grammar sample

* reserve candidates_decoded

* reserve candidates_grammar

* remove candidates_decoded

* Revert "remove candidates_decoded"

This reverts commit 3773328.

* changed decode_utf8 to take src by ref
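
A hedged sketch of the two micro-optimizations named above: reserve vector capacity before the candidate loop, and pass the decoded text by reference so no per-candidate string copy is made (the UTF-8 decoding itself is stubbed out here).

```cpp
// Hedged sketch: reserve capacity once, and pass by const reference to avoid
// per-candidate string copies.
#include <cstdint>
#include <string>
#include <vector>

// Taking the input by reference avoids copying the piece text.
static std::vector<uint32_t> decode_utf8_sketch(const std::string & text) {
    std::vector<uint32_t> cps;
    cps.reserve(text.size() + 1);                  // upper bound, plus room for the appended 0
    for (unsigned char c : text) cps.push_back(c); // placeholder: real code decodes multi-byte sequences
    cps.push_back(0);
    return cps;
}

int main() {
    const std::vector<std::string> pieces = {"hel", "lo", "!"};

    std::vector<std::vector<uint32_t>> candidates_decoded;
    candidates_decoded.reserve(pieces.size());     // avoid reallocations inside the loop

    for (const std::string & piece : pieces) {     // by reference: no copy per candidate
        candidates_decoded.push_back(decode_utf8_sketch(piece));
    }
    return 0;
}
```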

speculative : support `--color` (ggerganov#4343)

* speculative: add some colors

* minor : add braces

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

common : fix compile warning

server : recognize cache_prompt parameter in OAI API (ggerganov#4347)

train : fix ggerganov#4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (ggerganov#4351)

On commit b1108 (44c117f) xaedes added

    ggml_allocr * alloc = NULL;

    ... (many lines in between)

    if (alloc) {
        ggml_allocr_free(alloc);
    }

Which is correct, but it's easy to lose context after many lines in between.

On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.

    alloc = ggml_allocr_new(...)
    ... (short lines of code)
    ggml_allocr_free(alloc)

This happens a few times, but alloc is never set to NULL, and many lines below,
we still have

    if (alloc) {
        ggml_allocr_free(alloc);
    }

which causes a double-free.
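
A hedged sketch of one way to make that pattern safe (not the actual patch): reset the pointer every time it is freed, so the final guarded free becomes a no-op; alternatively the trailing `if (alloc)` block can simply be removed.

```cpp
// Hedged sketch of the double-free pattern and one fix: always reset the
// pointer after freeing, so a later guarded free is a no-op.
#include "ggml-alloc.h"

static void train_sketch(void) {
    ggml_allocr * alloc = NULL;

    alloc = ggml_allocr_new_measure(/*alignment =*/ 32); // illustrative constructor call
    // ... use the measuring allocator ...
    ggml_allocr_free(alloc);
    alloc = NULL;            // the fix: without this, the guard below frees it again

    // ... many lines later ...
    if (alloc) {
        ggml_allocr_free(alloc);
    }
}
```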

llama : per-layer KV cache + quantum K cache (ggerganov#4309)

* per-layer KV

* remove unnecessary copies

* less code duplication, offload k and v separately

* llama : offload KV cache per-layer

* llama : offload K shift tensors

* llama : offload for rest of the model arches

* llama : enable offload debug temporarily

* llama : keep the KV related layers on the device

* llama : remove mirrors, perform Device -> Host when partial offload

* common : add command-line arg to disable KV cache offloading

* llama : update session save/load

* llama : support quantum K cache (ggerganov#4312)

* llama : support quantum K cache (wip)

* metal : add F32 -> Q8_0 copy kernel

* cuda : add F32 -> Q8_0 copy kernel

ggml-ci

* cuda : use mmv kernel for quantum cache ops

* llama : pass KV cache type through API

* llama : fix build

ggml-ci

* metal : add F32 -> Q4_0 copy kernel

* metal : add F32 -> Q4_1 copy kernel

* cuda : wip

* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels

* llama-bench : support type_k/type_v

* metal : use mm kernel only for quantum KV cache

* cuda : add comment

* llama : remove memory_f16 and kv_f16 flags

---------

Co-authored-by: slaren <slarengh@gmail.com>

* readme : add API change notice

---------

Co-authored-by: slaren <slarengh@gmail.com>
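
A hedged, self-contained sketch of the per-layer, typed cache allocation (sizes and names are illustrative; offloading and the graph changes are omitted): each layer gets its own K and V tensors, and K can use a quantized type such as Q8_0 to shrink the cache.

```cpp
// Hedged sketch: allocate one K and one V tensor per layer, with a
// configurable (possibly quantized) type for K.
#include <cstdio>
#include <vector>
#include "ggml.h"

int main() {
    const int     n_layer  = 4;              // illustrative
    const int64_t n_embd   = 64;             // must be a multiple of the Q8_0 block size (32)
    const int64_t kv_size  = 256;            // number of cache cells (illustrative)
    const ggml_type type_k = GGML_TYPE_Q8_0; // quantized K cache
    const ggml_type type_v = GGML_TYPE_F16;

    struct ggml_init_params params = { /*.mem_size =*/ 64*1024*1024, /*.mem_buffer =*/ NULL, /*.no_alloc =*/ false };
    struct ggml_context * ctx = ggml_init(params);

    std::vector<ggml_tensor *> k_l, v_l;
    for (int il = 0; il < n_layer; ++il) {
        k_l.push_back(ggml_new_tensor_1d(ctx, type_k, n_embd*kv_size));
        v_l.push_back(ggml_new_tensor_1d(ctx, type_v, n_embd*kv_size));
    }

    printf("K cache bytes per layer: %zu (vs %zu for F16)\n",
           ggml_nbytes(k_l[0]), (size_t) (n_embd*kv_size*2));

    ggml_free(ctx);
    return 0;
}
```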

sync : ggml (new ops, tests, backend, etc.) (ggerganov#4359)

* sync : ggml (part 1)

* sync : ggml (part 2, CUDA)

* sync : ggml (part 3, Metal)

* ggml : build fixes

ggml-ci

* cuda : restore lost changes

* cuda : restore lost changes (StableLM rope)

* cmake : enable separable compilation for CUDA

ggml-ci

* ggml-cuda : remove device side dequantize

* Revert "cmake : enable separable compilation for CUDA"

This reverts commit 09e35d0.

* cuda : remove assert for rope

* tests : add test-backend-ops

* ggml : fix bug in ggml_concat

* ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`

* ci : try to fix macOS

* ggml-backend : remove backend self-registration

* ci : disable Metal for macOS cmake build

ggml-ci

* metal : fix "supports family" call

* metal : fix assert

* metal : print resource path

ggml-ci

---------

Co-authored-by: slaren <slarengh@gmail.com>

grammar : revert the replacement of llama_token_to_piece with id_to_token (ggerganov#4396)

Update README.md (ggerganov#4388)

Fix small typo.

ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (ggerganov#4424)

server : fix local model name in server (ggerganov#4420)

llama : document logits_all deprecation (ggerganov#4418)

llama_context_params.logits_all is a parameter for controlling
llama_eval. This documents that logits_all should not be used with
llama_decode and llama_batch.
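
A hedged fragment of the llama_decode-era alternative (assuming the llama_batch layout of this period): logits are requested per token through the batch's `logits` flags rather than through the context-wide logits_all.

```cpp
// Hedged fragment: with llama_decode, logits are requested per token through
// the batch's `logits` flags instead of the deprecated logits_all.
#include "llama.h"

static void request_logits_for_last_token_only(struct llama_batch & batch) {
    for (int32_t i = 0; i < batch.n_tokens; ++i) {
        batch.logits[i] = (i == batch.n_tokens - 1); // nonzero => compute logits for this position
    }
}
```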

build : target Windows 8 for standard mingw-w64 (ggerganov#4405)

* build : target Windows 8 for standard mingw-w64

* make : fix missing console.o deps

This was causing a link error with `make all` on Windows.

english : use `typos` to fix comments and logs (ggerganov#4354)

server : tweak default sampling parameters (ggerganov#4367)

* Set a more typical Top P setting as the default

* Update temp max

llama : add Mixtral support (ggerganov#4406)

* convert : support Mixtral as LLAMA arch

* convert : fix n_ff typo

* llama : model loading

* ggml : sync latest ggml_mul_mat_id

* llama : update graph to support MoE

* llama : fix cur -> cur_expert

* llama : first working version

* llama : fix expert weighting in the FFN

* ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)

* ggml : add n_as argument to ggml_mul_mat_id

* ggml : fix ggml_get_rows to take into account ne02 / ne11

* metal : add more general support for ggml_get_rows + tests

* llama : add basic support for offloading moe with CUDA

* metal : add/mul/div use general kernel when src1 not cont

* metal : reduce the kernel launches for ggml_mul_mat_id

* ggml : get_rows : support non-contiguous tensors with gaps, generalize up to 3D

* ggml : update get_rows f16 and q

* cuda : support non-contiguous src1 in get_rows

* llama : offload missing ffn_moe_silu

* metal : fix ggml_get_rows to work with non-cont src1

* metal : add indirect mat-vec kernels for all quantization types

* llama : do not quantize expert gating tensors

* llama : add n_expert and n_expert_used to hparams + change quants

* test-backend-ops : add moe test

* cuda : fix get_rows when ncols is odd

* convert : determine n_ctx correctly

* metal : fix ggml_mul_mat_id for F32

* test-backend-ops : make experts more evenly probable (test_moe)

* test-backend-ops : cleanup, add moe test for batches

* test-backend-ops : add cpy from f32 -> all types test

* test-backend-ops : fix dequantize block offset

* llama : fix hard-coded number of experts

* test-backend-ops : simplify and disable slow tests to avoid CI timeout

* test-backend-ops : disable MOE test with thread sanitizer

* cuda : fix mul_mat_id with multi gpu

* convert : use 1e6 rope_freq_base for mixtral

* convert : fix style

* convert : support safetensors format

* gguf-py : bump version

* metal : add cpy f16 -> f32 kernel

* metal : fix binary ops for ne10 % 4 != 0

* test-backend-ops : add one more sum_rows test

* ggml : do not use BLAS with ggml_mul_mat_id

* convert-hf : support for mixtral-instruct (ggerganov#4428)

* convert : typo fix, add additional hyperparameters, use LLaMA arch for Mixtral-instruct

* convert : use sentencepiece tokenizer for Mixtral-instruct

* convert : make flake8 happy

* metal : fix soft_max kernels

ref: ggerganov/ggml@1914017

* metal : limit kernels to not use more than the allowed threads

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Radek Pilar <github@mrkva.eu>
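
To make the routing part of the graph changes above concrete, a hedged plain-C++ sketch (one common formulation with illustrative numbers, not the ggml graph itself): soft-max the gate logits, keep the top n_expert_used experts, renormalize their weights, and mix those experts' FFN outputs.

```cpp
// Hedged sketch of MoE routing: soft-max the gate logits, keep the top
// n_expert_used experts, renormalize their weights, and mix their outputs.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    const int n_expert      = 8; // Mixtral-style counts
    const int n_expert_used = 2;

    std::vector<float> gate_logits = {0.1f, 2.0f, -1.0f, 0.5f, 1.5f, -0.3f, 0.0f, 0.2f};

    // soft max over all experts
    std::vector<float> w(n_expert);
    const float mx = *std::max_element(gate_logits.begin(), gate_logits.end());
    float sum = 0.0f;
    for (int e = 0; e < n_expert; ++e) { w[e] = std::exp(gate_logits[e] - mx); sum += w[e]; }
    for (float & x : w) x /= sum;

    // pick the top n_expert_used experts
    std::vector<int> ids(n_expert);
    std::iota(ids.begin(), ids.end(), 0);
    std::partial_sort(ids.begin(), ids.begin() + n_expert_used, ids.end(),
                      [&](int a, int b) { return w[a] > w[b]; });

    // renormalize the selected weights so they sum to 1
    float sel_sum = 0.0f;
    for (int i = 0; i < n_expert_used; ++i) sel_sum += w[ids[i]];

    for (int i = 0; i < n_expert_used; ++i) {
        const float weight = w[ids[i]] / sel_sum;
        printf("expert %d contributes with weight %.3f\n", ids[i], weight);
        // real code: out += weight * expert_ffn[ids[i]](x);
    }
    return 0;
}
```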