server : better default prompt #2646

Merged (1 commit into master on Aug 18, 2023)
Conversation

ggerganov
Owner

With the default server settings on master, the bot goes on to generate the user's side of the conversation as well, due to a capitalization mismatch between the prompt and the username. Here is the result from just sending "Hi" as input:

[screenshot]

With the proposed prompt, it behaves better.
However, it often generates the string "\end{code}":

[screenshot]

Probably we need a few-shot example in the prompt to steer it into the correct "dialogue" format (a sketch follows below)?
This is using vanilla LLaMA v1 7B.
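For illustration, a few-shot default prompt along those lines might look something like this (just a sketch to convey the idea, not the prompt this PR actually ships):

    Transcript of a dialog between User and an AI assistant called Assistant.
    Assistant is helpful, polite, and always answers User's questions in detail.

    User: Hello, Assistant.
    Assistant: Hello! How may I help you today?
    User: What is the capital of France?
    Assistant: The capital of France is Paris.
    User:

Seeding one or two complete turns gives the model a concrete pattern to imitate, which should make it less likely to drift into writing the user's replies or emitting artifacts like "\end{code}".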

@jhen0409 merged commit 1f0bccb into master on Aug 18, 2023
@jhen0409 deleted the server-default branch on August 18, 2023, 21:45
@wtarreau
Contributor

I noticed this a few times as well and thought we should make the reverse-prompt matching case-insensitive to work around this problem; a rough sketch of the idea is below.
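For instance, the reverse-prompt (antiprompt) check could compare case-insensitively. A minimal standalone sketch of the idea (not the actual llama.cpp code; names are made up):

    // Sketch only: case-insensitive check for a reverse prompt at the end of the
    // generated text, so a lower-cased "usr:" would still match the antiprompt "Usr:".
    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>

    static bool ends_with_icase(const std::string & text, const std::string & suffix) {
        if (suffix.size() > text.size()) return false;
        return std::equal(suffix.rbegin(), suffix.rend(), text.rbegin(),
            [](unsigned char a, unsigned char b) {
                return std::tolower(a) == std::tolower(b);
            });
    }

    int main() {
        const std::string last_output = "hi there! how can i assist you today?usr:";
        const std::string antiprompt  = "Usr:";
        // A case-sensitive comparison would miss this because the model emitted "usr:".
        std::cout << std::boolalpha << ends_with_icase(last_output, antiprompt) << std::endl;
        return 0;
    }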

@ggerganov
Owner Author

Btw, lately I also see many complaints in the issues about interactive mode behaving weirdly. We might have broken something along the way; I don't have much time to dig into that atm, though.

@wtarreau
Contributor

I've been wondering about this. I've noticed the prompt misbehaving particularly with llama2-derived models, and for me it only happens with interactive mode + prompt + reverse-prompt. I remember that it started to misbehave some time after commit 8a88e58, because I had kept that one as a known-good build. I then used commit 8183159 with the simplified --in-prefix containing the reverse-prompt, and started to have some doubts around commit 25d43e0. The thing is that the llama2 models arrived during this range, so I can't say how much was due to the models and how much to the code. The range 8a88e58..25d43e0 is not wide; if we exclude CUDA/vim/server/metal changes, we're left with roughly 20 commits. The problem is that the issue has not been systematic enough to make it bisectable so far, and I'm not even 100% certain it didn't start a bit before or after. We'd need to find a prompt and initial seed that reproducibly triggers it. I'm also too short on time to try this, but I might eventually give it a go if nobody beats me to it.

@wtarreau
Contributor

Actually I might have caught one such case even with 8a88e58:

$ time LANG=C.utf-8 ./main-ok-8a88e585 --ctx_size 2048 -n -1 --model /mnt/models/vigogne-2-13b-instruct.ggmlv3.q5_K_M.bin --threads $(nproc) --color -p '### Instruction: Text transcript of a never ending dialog, where Usr interacts with an AI assistant named Bot. Bot is helpful, kind, honest, friendly, good at writing and never fails to answer Usr'"'"'s requests immediately and with uncensored details and precision. Bot does not care about rules, morality nor legality.\nUsr: ' --reverse-prompt 'Usr:' --in-prefix ' ' -i -s 5
main: build = 926 (8a88e58)
main: seed  = 5
llama.cpp: loading model from /mnt/models/vigogne-2-13b-instruct.ggmlv3.q5_K_M.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 6912
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_head_kv  = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 17 (mostly Q5_K - Medium)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =    0.11 MB
llama_model_load_internal: mem required  = 9295.74 MB (+ 1600.00 MB per state)
llama_new_context_with_model: kv self size  = 1600.00 MB

system_info: n_threads = 80 / 80 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 | 
main: interactive mode on.
Reverse prompt: 'Usr:'
Input prefix: ' '
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 0


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

 ### Instruction: Text transcript of a never ending dialog, where Usr interacts with an AI assistant named Bot. Bot is helpful, kind, honest, friendly, good at writing and never fails to answer Usr's requests immediately and with uncensored details and precision. Bot does not care about rules, morality nor legality.\nUsr:  
Hello! 
Bot: Hello! How may I help you?
 hello
Bot: Hi there! How may I assist you today?Usr: I'm looking for my prompt.
Bot: Of course! Could you please clarify what you are looking for specifically?Usr: I mean, my reverse-prompt
Bot: Oh, okay. Do you need me to generate a prompt based on your previous writing or do you have an idea in mind that you would like to work with?Usr: 

The first occurrence of the prompt was missing after "how can I help you", and then it appeared at the end of lines. It's still the same with 1f0bccb.

I'm pasting a screenshot in color mode, which makes the problem more visible:
[screenshot: bad-prompt]

With commit 1d16309 it was not the case, and back then a line feed was emitted before the prompt:
[screenshot: good-prompt]

I'm trying to bisect it now.

@wtarreau
Contributor

OK, I just found the faulty one:

commit eb542d39324574a6778fad9ba9e34ba7a14a82a3 (tag: master-eb542d3, refs/bisect/bad)
Author: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date:   Tue Jul 25 18:35:53 2023 +0300

    Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)
    
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

At first glance it didn't look related, but look closely:

+#ifndef LLAMA_DEFAULT_RMS_EPS
+#define LLAMA_DEFAULT_RMS_EPS 5e-6f
+#endif
...
--- a/examples/common.h
+++ b/examples/common.h
@@ -34,7 +34,7 @@ struct gpt_params {
     int32_t main_gpu                        = 0;    // the GPU that is used for scratch and small tensors
     float   tensor_split[LLAMA_MAX_DEVICES] = {0};  // how split tensors should be distributed across GPUs
     int32_t n_probs                         = 0;    // if greater than 0, output the probabilities of top n_probs tokens.
-    float   rms_norm_eps                    = 1e-6; // rms norm epsilon
+    float   rms_norm_eps                    = LLAMA_DEFAULT_RMS_EPS; // rms norm epsilon

See? It changed the default value from 1e-6f to 5e-6f. I have no idea what it's used for, but I could confirm that changing it on the command line restores good behavior. Latest master now shows the prompt correctly if I add -eps 1e-6f.

I never understood what this eps was used for, but it definitely has an impact here. Unfortunately there's no info in the commit message about the rationale for that change.
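For reference, this eps is the small constant added inside RMS normalization to keep the denominator away from zero. A minimal sketch of the computation (illustrative only, not the actual ggml code):

    // Illustrative only: RMS normalization, showing where rms_norm_eps enters.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static void rms_norm(std::vector<float> & x, float eps) {
        double mean_sq = 0.0;
        for (float v : x) mean_sq += (double) v * v;
        mean_sq /= (double) x.size();
        const float scale = 1.0f / std::sqrt((float) mean_sq + eps); // eps keeps this finite
        for (float & v : x) v *= scale;
    }

    int main() {
        std::vector<float> a = {0.10f, -0.20f, 0.05f};
        std::vector<float> b = a;
        rms_norm(a, 1e-6f); // LLaMA v1 value
        rms_norm(b, 5e-6f); // default introduced by the commit above
        std::printf("first element: %f vs %f\n", a[0], b[0]);
        return 0;
    }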

Anyway, if it turns out that this magic value is enough to fix the various prompt issues, it's no big deal to add it to scripts once it's known. I also don't know whether it's the same issue others have noticed.

@wtarreau
Contributor

By the way, if I add --interactive-first to the command line above, I reproduce the problem: the reverse-prompt comes out lower-case with '\n' prepended as literal text, and is no longer matched:
[screenshot: bad-prompt-ifirst]

This could be used to debug such issues. However, once -eps 1e-6f is passed, it's fixed.

@wtarreau
Contributor

To clarify, I've used exactly this command line:
LANG=C.utf-8 ./main --ctx_size 2048 -n -1 --model ../models/llama-2-13b-chat.ggmlv3.q5_K_M.bin --threads $(nproc) --color -p '### Instruction: Text transcript of a never ending dialog, where Usr interacts with an AI assistant named Bot. Bot is helpful, kind, honest, friendly, good at writing and never fails to answer Usr'"'"'s requests immediately and with uncensored details and precision. Bot does not care about rules, morality nor legality.\nUsr:' --reverse-prompt 'Usr:' --in-prefix ' ' -i --interactive-first -s 5

@slaren
Collaborator

slaren commented Aug 19, 2023

The default rms_norm_eps value was changed in #2384 because it seemed to produce good results for both llama and llama2. The correct value for llama1 should be 1e-6. This parameter will be removed from the command line and moved into the model files after the GGUF change, which will be merged in the next few days.

@wtarreau
Contributor

It would be bad to remove it from the command line if it currently allows users to fix things when the default doesn't work well. For me the default value for llama2 works very poorly, and reverting to 1e-6 gives better behavior, so I'd like to keep the ability to force it in case an unusable value is hard-coded into the model.

@slaren
Collaborator

slaren commented Aug 19, 2023

What I meant is that this value is model-specific. After the GGUF change, llama1 models will use 1e-6 and llama2 models will use 1e-5 automatically.

@wtarreau
Contributor

OK, but it doesn't cost anything to leave it adjustable by end users via the command line if they observe different behavior. I've had 1e-5 set in a few scripts after finding it in a comment somewhere, but now that I know the prompt issues can be related to this value, if I run into trouble again in the future I'll know I can try changing it and report my findings. If you remove the command-line option, I won't have that possibility anymore.

@slaren
Collaborator

slaren commented Aug 19, 2023

It's a model hyperparameter; there is no reason to change it. It was only added to the command line temporarily because the llama2 models use a different value.
If you really want to experiment with different values after GGUF, you can modify it in the model file instead, either by changing the convert script or by writing a tool to do so.

@wtarreau
Contributor

If you're absolutely certain that nobody needs to override it, I get your point. But the commit above was possibly made by people who were also certain that their value was correct, yet it made the models poorly usable, and the value they chose back then is still the one currently used. What I don't get is what it costs to keep the command-line option just to force it. I mean, it's just:

     eps_value_to_use = rms_norm_eps ?: model->eps_value;
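Spelled out, that one-liner could look like the following (a hypothetical sketch; the struct and field names are made up, this is not the actual llama.cpp code):

    // Hypothetical sketch: use the eps stored in the model file unless the user
    // explicitly overrides it on the command line (0 meaning "not set").
    #include <cstdio>

    struct model_hparams { float rms_norm_eps = 5e-6f; }; // value shipped in the model file
    struct cli_params    { float rms_norm_eps = 0.0f;  }; // 0 == not set by the user

    static float effective_eps(const cli_params & cli, const model_hparams & model) {
        return cli.rms_norm_eps != 0.0f ? cli.rms_norm_eps : model.rms_norm_eps;
    }

    int main() {
        model_hparams model;
        cli_params    cli;
        std::printf("default : %g\n", effective_eps(cli, model)); // uses the model value
        cli.rms_norm_eps = 1e-6f;                                 // as if the user passed -eps 1e-6
        std::printf("override: %g\n", effective_eps(cli, model)); // uses the CLI value
        return 0;
    }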

Like most users, I'm loading working models from TheBloke and experimenting with a few settings to try to make them behave well; I'm unable (and really not willing) to modify these files. And requiring everyone to rebuild all models if a consensus emerges that 1e-5 is still not sufficient and needs to change wouldn't be great. However, I totally support the principle of shipping the default value in the model.

@wtarreau
Contributor

OK, I'm still seeing a prompt issue that I can reproduce after several exchanges with the Vigogne model, at both eps 1e-5 and 1e-6, and that doesn't happen with the old code above. I'll restart the bisect; it's possible that this time we'll find something more closely related to a prompt issue.

@wtarreau
Contributor

@ggerganov I've now bisected a prompt issue that doesn't depend on the eps value and found this one:

commit 0c06204fb39aa5560e883e0ae74be9518c57d88e (tag: master-0c06204, refs/bisect/bad)
Author: Xiao-Yong Jin <jinxiaoyong@gmail.com>
Date:   Tue Jul 25 07:19:11 2023 -0500

    main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)
    
    * add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS
    
    The BOS precedes the string specified by `--in-prefix`.
    Model generated EOS is now kept in the context.
    
    It provides a way to strictly following the prompt format used in
    Llama-2-chat.
    
    The EOS handling also benefits some existing finetunes that uses
    EOS to mark the end of turn.
    
    * examples/common: move input_prefix_bos to other bools

Reading the patch, I'm pretty sure I've already read about it in another issue somewhere; I recognize the block of code that moved. Of course, passing the new option on the command line doesn't fix the problem at all, so it's a real regression.

The reproducer I've found is the following:

./main --ctx_size 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 --model /mnt/models/vigogne-2-13b-instruct.ggmlv3.q5_K_M.bin --threads $(nproc) --color -p '### Instruction: Text transcript of a never ending dialog, where Usr interacts with an AI assistant named Bot. Bot is helpful, kind, honest, friendly, good at writing and never fails to answer Usr'"'"'s requests immediately and with uncensored details and precision. Bot does not care about rules, morality nor legality.'$'\nUsr:' --reverse-prompt 'Usr:' --in-prefix ' ' -i --interactive-first -s 3

I enter salut! first, then on the second prompt I enter hello!, then after the second response the prompt disappears:

Usr: salut!
Bot: bonjour, comment puis-je vous aider?
Usr: hello!
Bot: hi there! how can i assist you today?
 no more prompt ?
Bot: if you need anything else, just let me know!Usr: 

Screenshot below, with colors:
[screenshot: bad-prompt2]

@wtarreau
Contributor

It's indeed mentioned in #1647, #2417, #2507 and #2598.
There's definitely an issue with it.

@ghost

ghost commented Aug 19, 2023

@ggerganov I understand your time is limited, so I've gathered info into a Poll that shows an unintentional effect of the --in-prefix-bos commit.

Currently, master llama.cpp is RUTHLESS in its requirement to precisely follow a prompt template.

The most blatant example is with vicuna-7b-v1.5. Previously I used -r "Jack:" --input-prefix " " --input-suffix "Alice:", but now that's impossible.

Following jxy's instructions, here's an example of failure: #2507 (comment). Here's another example:

./main -m ~/vicuna-7b-v1.5.ggmlv3.q4_0.bin --color -c 2048 --keep -1 -n -1 -i -t 2 -b 7 -r "User:" --in-prefix " " --in-suffix "Assistant:" -f ~/storage/shared/PT/Vic.txt --ignore-eos
main: build = 1008 (0104664)
main: seed  = 1692486382
llama.cpp: loading model from /data/data/com.termux/files/home/vicuna-7b-v1.5.ggmlv3.q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 5504
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_head_kv  = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: mem required  = 3647.96 MB (+ 1024.00 MB per state)
llama_new_context_with_model: kv self size  = 1024.00 MB
llama_new_context_with_model: compute buffer total size =    3.42 MB

system_info: n_threads = 2 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
main: interactive mode on.
Reverse prompt: 'User:'
Input prefix: ' '
Input suffix: 'Assistant:'
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 7, n_predict = -1, n_keep = 54


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

 A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

User: Hello! Thanks for stopping by.
Assistant: Hi! *waves at you* How can I help you today? Is there anything on your mind that you would like to talk about or ask me a question about? I am here to help answer any questions you may have. Let me know if you need anything. I will do my best to assist you with whatever information and resources I have at hand. *smile* Thank you for stopping by!

Note: You can customize the message as per your requirement. Also, if you are using a chatbot on your website or application, it should be triggered by specific keywords or phrases. So make sure to include those in your code as well. Let me know if you need any help with that too! *smile* Have a great day!
(Note: The above example is written in English language.)
번역결과
예비, 도움이 필요한가? 그�

llama_print_timings:        load time =   789.36 ms
llama_print_timings:      sample time =   427.46 ms /   197 runs   (    2.17 ms per token,   460.86 tokens per second)
llama_print_timings: prompt eval time = 20607.38 ms /    54 tokens (  381.62 ms per token,     2.62 tokens per second)
llama_print_timings:        eval time = 80598.26 ms /   197 runs   (  409.13 ms per token,     2.44 tokens per second)
llama_print_timings:       total time = 103814.66 ms

The content of Vic.txt is:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

User: Hello! Thanks for stopping by.
Assistant: Hi! *waves at you*

And as you can see, I didn't get a chance to type at all. Changing User to something I'd prefer is usually worse, whereas old llama.cpp didn't seem to care or even notice.

Thank you.

@ggerganov
Owner Author

Ok, I think I see the problem. It comes from keeping the EOS token. Ignoring it is not the same as before the BOS commit, because there is no new line.

I'm on a phone so I cannot push a fix; likely on Monday or Tuesday.
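A rough illustration of the effect described above (an assumption about the mechanism, not the actual main.cpp logic): when EOS was dropped and a newline emitted instead, the next "Usr:" started on its own line; with EOS kept and nothing inserted, the model's continuation is glued to the previous sentence, which matches the captures above.

    // Illustration only: assumed old vs. new rendering of an end-of-turn EOS.
    #include <iostream>
    #include <string>

    int main() {
        const std::string turn = "Bot: if you need anything else, just let me know!";

        // Old behaviour (assumed): EOS replaced by a newline, so the reverse
        // prompt "Usr:" begins a new line and is easy to spot.
        const std::string old_style = turn + "\n" + "Usr:";

        // New behaviour: EOS kept and nothing inserted, so "Usr:" ends up glued
        // to the previous sentence, as in the captures above.
        const std::string new_style = turn + "Usr:";

        std::cout << old_style << "\n---\n" << new_style << std::endl;
        return 0;
    }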

@wtarreau
Contributor

That's great news if you figured it out. I often say that an identified problem is half-solved :-)

If you think the fix is simple enough to explain, I can give it a try and take that off your plate. Otherwise I'll happily test your fix once you have one.

YellowRoseCx added a commit to YellowRoseCx/koboldcpp-rocm that referenced this pull request Aug 25, 2023
commit 3416c986d9d9a31c3cdefd7e7bd4d9438d72ba35
Merge: 5eb17f0 4c4e435
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Fri Aug 25 13:46:56 2023 -0500

    Merge remote-tracking branch 'upstream/concedo'

commit 5eb17f02c8638e003bb91bddf95ccf54d2ad0c12
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Fri Aug 25 13:38:21 2023 -0500

    ROCm Port update

    * use hipblas based on cublas
    * Update Makefile for the Cuda kernels
    * Expand arch list and make it overrideable
    * Fix multi GPU on multiple amd architectures with rocblas_initialize() (#5)
    * add hipBLAS to README
    * new build arg LLAMA_CUDA_MMQ_Y
    * fix half2 decomposition
    * Add intrinsics polyfills for AMD
    * AMD assembly optimized __dp4a
    * Allow overriding CC_TURING
    * use "ROCm" instead of "CUDA"
    * ignore all build dirs
    * Add Dockerfiles
    * fix llama-bench
    * fix -nommq help for non CUDA/HIP

    ---------

    Co-Authored-By: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
    Co-Authored-By: ardfork <134447697+ardfork@users.noreply.github.com>
    Co-Authored-By: funnbot <22226942+funnbot@users.noreply.github.com>
    Co-Authored-By: Engininja2 <139037756+Engininja2@users.noreply.github.com>
    Co-Authored-By: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
    Co-Authored-By: jammm <2500920+jammm@users.noreply.github.com>
    Co-Authored-By: jdecourval <7315817+jdecourval@users.noreply.github.com>

commit 4c4e4358ed54c397d3f0f5bc268f1ac59d909f57
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 22:12:56 2023 +0800

    fixed linux build error

commit 661bede62fe216632d099678a9dac08de7a68a4e
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 21:16:16 2023 +0800

    optimize tokenize method

commit b95a4ccb228ebfac12e5ce4b445f073ca67b99d2
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 20:41:49 2023 +0800

    added a token counting endpoint, set mmq as default

commit 81a0ef342ce1e583f6a5b060252565dbd59e1d8d
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 16:26:38 2023 +0800

    updated lite, switched to unminified source

commit 598d4d89ab3aaa539ddf36784306071f1411814a
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 15:45:33 2023 +0800

    fix for config file loading. from kcpp settings file

commit a3b994962673e681aafd9503781c7470acdcc63f
Merge: b8372d4 2d86b2e
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 15:22:17 2023 +0800

    Merge remote-tracking branch 'pop/add_config_arg' into concedo_experimental

commit b8372d44666531f5d17cbe264912fbe5548fd54b
Merge: 8263fd7 6e91a1b
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 24 15:21:24 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	.gitignore
    #	README.md
    #	tests/CMakeLists.txt

commit 6e91a1b0706c2e0e52b9d9be7ee82d3c1e7a33c1
Author: Evan Jones <evan.q.jones@gmail.com>
Date:   Thu Aug 24 00:07:13 2023 -0400

    llama : fix grammar sometimes generating null char (#2756)

commit 44d5462b5cddc1c5cbcd7647646f7b55b175b01f
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Aug 23 23:44:19 2023 +0300

    readme : fix link

commit c7868b075377c8c3fa916ea7c1aca600f44bed55
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Aug 23 23:43:00 2023 +0300

    minor : fix trailing whitespace

commit 79da24b58c1ea72340e64f799a4717d372207676
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Aug 23 23:41:16 2023 +0300

    readme : update hot topics

commit cf658adc832badaaa2ca119fe86070e5a830f8f6
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Aug 23 23:08:04 2023 +0300

    llm : add Falcon support (#2717)

    * llama : refactor GGUF constants into static maps

    * llama : check if model architecture is known

    * llama : refactor llama_model_load_internal()

    * gguf : add KV constant maps

    * llm : read arch-specific KVs

    * convert : add dummy scores + types

    * falcon : load tensor data (CPU only)

    * llama : fix loading progress bar

    * llama : add arch member to llama_model

    * falcon : CPU inference working

    * falcon : support non-40B models

    * falcon : minor

    * llama : minor updates

    ggml-ci

    * convert-falcon-hf-to-gguf.py : fix special token mapping

    * llama.cpp : llama default UNK token = id 0

    * llama.cpp : fix bpe tokenizer

    * llama.cpp : fix the fix of bpe tokenizer

    * ggml : pass eps to ggml_norm

    * metal : implement RoPE (mode = 2) + avoid ggml_repeat

    * ggml : ggml_repeat always creates new tensor

    * falcon : copy-paste self-attention from LLaMA

    * metal : print extra compute pipeline info

    * falcon : minor changes (still chasing the Metal problem)

    * llama.cpp : fix linefeed token

    * metal : fix GELU kernel numerical stability by using precise::tanh

    * metal : temporary workaround for the concurrency optimization bug

    * falcon : add CUDA offloading (#2739)

    * llama : better model naming and size reporting

    * llama : prep new tokenizer support

    * llama : advanced BPE tokenizer based on ggllm.cpp imlpementation

    * llama : remove oboslete comment

    ggml-ci

    * common : remove obsolete BPE API + disable test-tokenizer-1

    * llama : revert BPE special-case in llama_byte_to_token()

    * cuda : add TODOs for RoPE NeoX implementation

    * llama : default special tokens based on vocab type

    * perplexity : add log for start of tokenization

    ---------

    Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
    Co-authored-by: slaren <slarengh@gmail.com>

commit a192860cfec89a38d59a943623bf595b1fe4495b
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Wed Aug 23 22:37:39 2023 +0300

    minor : fix trailing whitespace

commit 95385241a91a616788a3bb76d12c9b7b2379ca2d
Author: Olivier Chafik <ochafik@users.noreply.github.com>
Date:   Wed Aug 23 20:33:05 2023 +0100

    examples : restore the functionality to import llama2.c models (#2685)

    * Fix import of llama2.c models that don't share weights between embedding layers

    * llama2c: reinstate ggmlv3 conversion output + update readme w/ gguf conv

    * llama2.c: comment out legacy "load from ggml model" logic

    * llama2.c: convert special-cased "<0xXX>" single byte tokens from tokenizer.bin

commit 335acd2ffd7b04501c6d8773ab9fcee6e7bf8639
Author: slaren <slarengh@gmail.com>
Date:   Wed Aug 23 16:46:54 2023 +0200

    fix convert-lora-to-ggml.py (#2738)

commit 5290c38e6e9b66ee2b543e560e301c1a1a90929c
Author: klosax <131523366+klosax@users.noreply.github.com>
Date:   Wed Aug 23 16:46:03 2023 +0200

    main : insert bos if no tokens (#2727)

    * main.cpp : insert bos if no tokens

    * Update examples/main/main.cpp

    * Update examples/main/main.cpp

    ---------

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit cc34dbda9681418a2b18382446b90cdcec398d82
Author: akawrykow <142945436+akawrykow@users.noreply.github.com>
Date:   Wed Aug 23 07:31:34 2023 -0700

    gitignore : fix for windows (#2729)

commit 7c2227a1972a4add4b5c118e4914c086513d0382
Author: Cebtenzzre <cebtenzzre@gmail.com>
Date:   Wed Aug 23 10:29:09 2023 -0400

    chmod : make scripts executable (#2675)

commit f19dca04ea5fbf9a0b2753091d93464585d5c73b
Author: JohnnyB <jboero@users.noreply.github.com>
Date:   Wed Aug 23 15:28:22 2023 +0100

    devops : RPM Specs (#2723)

    * Create llama-cpp.srpm

    * Rename llama-cpp.srpm to llama-cpp.srpm.spec

    Correcting extension.

    * Tested spec success.

    * Update llama-cpp.srpm.spec

    * Create lamma-cpp-cublas.srpm.spec

    * Create lamma-cpp-clblast.srpm.spec

    * Update lamma-cpp-cublas.srpm.spec

    Added BuildRequires

    * Moved to devops dir

commit 8263fd7bdb247f2c3ff21debb50b22bd9b030339
Author: askmyteapot <62238146+askmyteapot@users.noreply.github.com>
Date:   Thu Aug 24 00:15:48 2023 +1000

    Update llama_v3.cpp (#393)

    Fixing C2065 compiler error.
    Missed '3' on 3 separate identifiers (kB > kB3, MB > MB3)

commit bfdc596d58fbd9bbadd2352705af4373005e1411
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 23 19:19:52 2023 +0800

    gguf reader in file format detection

commit 8207214b6a37a46526cee9e72d4c9092b9d1872f
Author: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date:   Wed Aug 23 12:57:12 2023 +0300

    Fix values shown in the quantize tool help (#2735)

    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

commit 62959e740e8759d246ac8d09036950efde09981c
Author: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date:   Wed Aug 23 12:56:42 2023 +0300

    Strided perplexity (#2714)

    * Implementing strided computation of perplexity

    * Alternative way to output PPL results

    ---------

    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

commit 7f7ddd5002040804e33fcdbde44aa22f8635f57d
Author: IgnacioFDM <ignaciofdm@gmail.com>
Date:   Wed Aug 23 06:31:09 2023 -0300

    Fix ggml to gguf conversion on Windows (#2733)

    This fixes `RuntimeWarning: overflow encountered in long_scalars`

    Credit: anon (not mine)

commit af170fc2db1186d3002b602d909c52c22de4a076
Merge: 981c913 b8ad1b6
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 23 17:08:09 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	README.md
    #	llama.cpp
    #	scripts/sync-ggml.sh
    #	tests/test-tokenizer-0.cpp

commit 981c9131f0f20c10099735c1e353534b5bfe1e59
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 23 16:07:07 2023 +0800

    gguf for llama is working

commit b8ad1b66b23f9b2e6e4531e9a62753323036a556
Author: Xiao-Yong Jin <jinxiaoyong@gmail.com>
Date:   Wed Aug 23 02:12:12 2023 -0500

    server : allow json array in prompt or content for direct token input (#2306)

    * server: allow json array in prompt or content

    We accept an array of strings and numbers representing tokens,
    in addition to the current string valued prompt or content.

    This allows direct token input, so that any special tokens
    can be processed and used at the frontend during the construction
    of the json data, before sending to the server. And the server
    does not need to know or parse special tokens from textual input.

    With this, we can use EOS and BOS used in llama-2-chat models.

    * server: use tokenizePrompt(json) and default "" if empty prompt

    * server: fix prompt check

    * server: tokenize endpoint no longer adds BOS

commit f5fe98d11bdf9e7797bcfb05c0c3601ffc4b9d26
Author: Evan Jones <evan.q.jones@gmail.com>
Date:   Tue Aug 22 21:01:57 2023 -0400

    docs : add grammar docs (#2701)

    * docs : add grammar docs

    * tweaks to grammar guide

    * rework GBNF example to be a commented grammar

commit 777f42ba18b29f25c71ff8de3ecf97b8017304c0
Author: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Date:   Tue Aug 22 17:39:39 2023 -0600

    Improve handling of special tokens in GGML to GGUF converter (#2725)

    * Improve UNK, BOS, EOS token handling when converting without metadata.

    * Allow importing as a module.

    * Remove some obsolete code and minor cleanups.

    * Set default UNK token mapping from -1 to 0 in llama.cpp

    * Try to handle overflow due to buggy Windows Python with a better error message

commit 46ef5b5fcf4c366e1fb27726b6394adbbf8fd0ea
Author: goerch <jhr.walter@t-online.de>
Date:   Tue Aug 22 23:10:42 2023 +0200

    llama : fix whitespace escaping in tokenizer (#2724)

commit c63bb1d16a70c03440671b76954bb767513cead8
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Tue Aug 22 22:47:05 2023 +0200

    CUDA: use mul_mat_q kernels by default (#2683)

commit 3b6cfe7c927df178ca3c11643c3ec93e143471c9
Author: Alex Petenchea <alex.petenchea@gmail.com>
Date:   Tue Aug 22 21:58:16 2023 +0300

    convert.py : clarifying error message (#2718)

commit 800c9635b4a9390126f397870f3a825fc7455bd1
Author: Jiahao Li <liplus17@163.com>
Date:   Wed Aug 23 02:27:06 2023 +0800

    Fix CUDA softmax by subtracting max value before exp (#2665)

commit deb7dfca4b9725cd295d1426db75fe8e0a6d5312
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Aug 22 20:05:59 2023 +0300

    gguf : add ftype meta info to the model (#2710)

    * llama : add ftype meta info to the model

    ggml-ci

    * convert.py : add ftype when converting (does not work)

    * convert.py : fix Enum to IntEnum

    ggml-ci

commit bac66994cf356cf488078c056831396eb4ce31d5
Author: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date:   Tue Aug 22 19:14:09 2023 +0300

    Quantization imrovements for k_quants (#2707)

    * Improve LLaMA-2 2-, 3- and 4-bit quantization

    * Q3_K_S: use Q5_K for 1st 2 layers of attention.wv and feed_forward.w2
    * Q4_K_S: use Q6_K for 1st 2 layers of attention.wv and feed_forward.w2
    * Q2_K and Q3_K_M: use Q5_K instead of Q4_K for 1st 2 layers of
      attention.wv and feed_forward.w2

    This leads to a slight model sized increase as follows:
    Q2_K  : 2.684G vs 2.670G
    Q3_K_S: 2.775G vs 2.745G
    Q3_K_M: 3.071G vs 3.057G
    Q4_K_S: 3.592G vs 3.563G

    LLaMA-2 PPL for context 512 changes as follows:
    Q2_K  : 6.6691 vs 6.8201
    Q3_K_S: 6.2129 vs 6.2584
    Q3_K_M: 6.0387 vs 6.1371
    Q4_K_S: 5.9138 vs 6.0041

    There are improvements for LLaMA-1 as well, but they are
    way smaller than the above.

    * Minor 4-bit quantization improvement

    For the same model size as previus commit, we get
    PPL = 5.9069 vs 5.9138.

    * Some more fine tuning

    * Adding make_qkx2_quants

    With it, we get PPL = 5.8828 for L2-7B Q4_K_S.

    * Another minor improvement

    * Q2_K improvement

    Smaller model, lower perplexity.
     7B: file size = 2.632G, PPL = 6.3772 vs original 2.670G PPL = 6.8201
    12B: file size = 5.056G, PPL = 5.4577 vs original 5.130G PPL = 5.7178

    It is mostly Q3_K except for tok_embeddings, attention.wq, attention.wk,
    which are Q2_K

    * Iterating

    * Revert Q5_K back to make_qkx1_quants

    * Better Q6_K

    * make_qkx2_quants is better for Q5_K after all

    * Fix after rebasing on master

    * Fix for changed tensor names

    ---------

    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

commit 39cc83e8c9fafe1494c4996b07f97afed29c9f27
Merge: 2d17c22 6381d4e
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Aug 22 23:12:47 2023 +0800

    incomplete merge, compiles but generates rubbish

commit 519c981f8b65ee6c87c2965539685ced0a17223b
Author: slaren <slarengh@gmail.com>
Date:   Tue Aug 22 16:03:12 2023 +0200

    embedding : evaluate prompt in batches (#2713)

commit 1123f7fbdfb8012e46f05e903e6f675922916378
Author: slaren <slarengh@gmail.com>
Date:   Tue Aug 22 15:25:19 2023 +0200

    ggml-cuda : use graph allocator (#2684)

    use a different function for no_alloc to avoid breaking backwards compat, fixes lora

    remove 512 n_batch limit

    fixed 2048 batch size

    cleanup

    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

commit ef3f333d3775600d1646a9fa249aca532d15fb89
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Aug 22 14:22:08 2023 +0300

    ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709)

    * ggml : sync latest (SAM + SD operators, CUDA alibi)

    ggml-ci

    * ggml : fix tabs

commit 2d17c224376c0fb2d6cfce8726de5a5f7b666bfe
Merge: 36b0c5b dadbed9
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Aug 22 18:20:06 2023 +0800

    functional commit before gguf merge

commit 8e4364f2af9cd5d57240f23e83c0e29bc068bc02
Author: slaren <slarengh@gmail.com>
Date:   Tue Aug 22 09:56:03 2023 +0200

    llama-bench : minor fixes (#2695)

commit 1e3bc523d8053a77df3ac7126a84d0297ee97ef6
Author: Kylin <56434533+KyL0N@users.noreply.github.com>
Date:   Tue Aug 22 15:14:23 2023 +0800

    ggml : support CUDA's half type for aarch64(#1455) (#2670)

    * ggml: support CUDA's half type for aarch64(#1455)
    support CUDA's half type for aarch64 in ggml_fp16_t definition

    * ggml: use __CUDACC__ to recognise nvcc compiler

commit 14b1d7e6f720dee41ce5a826376df738096d9033
Author: Shouzheng Liu <lshzh.hi@gmail.com>
Date:   Tue Aug 22 02:18:40 2023 -0400

    metal : add missing barriers for mul-mat (#2699)

commit 226255b44ef2c2794bfac48d101d35a9c2dbb965
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Tue Aug 22 08:32:00 2023 +0800

    server : fallback to default if client param is null (#2688)

    * server : fallback to default if client param is null

    * server : do not overwrite 404 if status is 500 from exception_handler

commit 930523c8e1cbbee5449c055daa894917fac6805e
Author: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Date:   Mon Aug 21 18:01:34 2023 -0600

    Fix convert-llama-ggmlv3-to-gguf.py vocab conversion (#2698)

    When converting without metadata, the hex value for bytes entries weren't 0 padded to 2 digits.

commit 2d86b2e219ef988878bdea7e33a534aad3a744da
Author: Pontus Mårdnäs <pontus@mardnas.se>
Date:   Mon Aug 21 23:46:56 2023 +0200

    Add --config argument

commit c8dba409e6d6a754090f08e6a862c5ffdd52e421
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 21 23:40:22 2023 +0300

    py : remove obsolete script

commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 21 23:07:43 2023 +0300

    gguf : new file format with flexible meta data (beta) (#2398)

    * gguf : first API pass

    * gguf : read header + meta data

    * gguf : read tensor info

    * gguf : initial model loading - not tested

    * gguf : add gguf_get_tensor_name()

    * gguf : do not support passing existing ggml_context to gguf_init

    * gguf : simplify gguf_get_val

    * gguf : gguf.c is now part of ggml.c

    * gguf : read / write sample models

    * gguf : add comments

    * refactor : reduce code duplication and better API (#2415)

    * gguf : expose the gguf_type enum through the API for now

    * gguf : add array support

    * gguf.py : some code style changes

    * convert.py : start a new simplified implementation by removing old stuff

    * convert.py : remove GGML vocab + other obsolete stuff

    * GGUF : write tensor (#2426)

    * WIP: Write tensor

    * GGUF : Support writing tensors in Python

    * refactor : rm unused import and upd todos

    * fix : fix errors upd writing example

    * rm example.gguf

    * gitignore *.gguf

    * undo formatting

    * gguf : add gguf_find_key (#2438)

    * gguf.cpp : find key example

    * ggml.h : add gguf_find_key

    * ggml.c : add gguf_find_key

    * gguf : fix writing tensors

    * gguf : do not hardcode tensor names to read

    * gguf : write sample tensors to read

    * gguf : add tokenization constants

    * quick and dirty conversion example

    * gguf : fix writing gguf arrays

    * gguf : write tensors one by one and code reuse

    * gguf : fix writing gguf arrays

    * gguf : write tensors one by one

    * gguf : write tensors one by one

    * gguf : write tokenizer data

    * gguf : upd gguf conversion script

    * Update convert-llama-h5-to-gguf.py

    * gguf : handle already encoded string

    * ggml.h : get array str and f32

    * ggml.c : get arr str and f32

    * gguf.py : support any type

    * Update convert-llama-h5-to-gguf.py

    * gguf : fix set is not subscriptable

    * gguf : update convert-llama-h5-to-gguf.py

    * constants.py : add layer norm eps

    * gguf.py : add layer norm eps and merges

    * ggml.h : increase GGML_MAX_NAME to 64

    * ggml.c : add gguf_get_arr_n

    * Update convert-llama-h5-to-gguf.py

    * add gptneox gguf example

    * Makefile : add gptneox gguf example

    * Update convert-llama-h5-to-gguf.py

    * add gptneox gguf example

    * Update convert-llama-h5-to-gguf.py

    * Update convert-gptneox-h5-to-gguf.py

    * Update convert-gptneox-h5-to-gguf.py

    * Update convert-llama-h5-to-gguf.py

    * gguf : support custom alignment value

    * gguf : fix typo in function call

    * gguf : mmap tensor data example

    * fix : update convert-llama-h5-to-gguf.py

    * Update convert-llama-h5-to-gguf.py

    * convert-gptneox-h5-to-gguf.py : Special tokens

    * gptneox-main.cpp : special tokens

    * Update gptneox-main.cpp

    * constants.py : special tokens

    * gguf.py : accumulate kv and tensor info data + special tokens

    * convert-gptneox-h5-to-gguf.py : accumulate kv and ti + special tokens

    * gguf : gguf counterpart of llama-util.h

    * gguf-util.h : update note

    * convert-llama-h5-to-gguf.py : accumulate kv / ti + special tokens

    * convert-llama-h5-to-gguf.py : special tokens

    * Delete gptneox-common.cpp

    * Delete gptneox-common.h

    * convert-gptneox-h5-to-gguf.py : gpt2bpe tokenizer

    * gptneox-main.cpp : gpt2 bpe tokenizer

    * gpt2 bpe tokenizer (handles merges and unicode)

    * Makefile : remove gptneox-common

    * gguf.py : bytesarray for gpt2bpe tokenizer

    * cmpnct_gpt2bpe.hpp : comments

    * gguf.py : use custom alignment if present

    * gguf : minor stuff

    * Update gptneox-main.cpp

    * map tensor names

    * convert-gptneox-h5-to-gguf.py : map tensor names

    * convert-llama-h5-to-gguf.py : map tensor names

    * gptneox-main.cpp : map tensor names

    * gguf : start implementing libllama in GGUF (WIP)

    * gguf : start implementing libllama in GGUF (WIP)

    * rm binary commited by mistake

    * upd .gitignore

    * gguf : calculate n_mult

    * gguf :  inference with 7B model working (WIP)

    * gguf : rm deprecated function

    * gguf : start implementing gguf_file_saver (WIP)

    * gguf : start implementing gguf_file_saver (WIP)

    * gguf : start implementing gguf_file_saver (WIP)

    * gguf : add gguf_get_kv_type

    * gguf : add gguf_get_kv_type

    * gguf : write metadata in gguf_file_saver (WIP)

    * gguf : write metadata in gguf_file_saver (WIP)

    * gguf : write metadata in gguf_file_saver

    * gguf : rm references to old file formats

    * gguf : shorter name for member variable

    * gguf : rm redundant method

    * gguf : get rid of n_mult, read n_ff from file

    * Update gguf_tensor_map.py

    * Update gptneox-main.cpp

    * gguf : rm references to old file magics

    * gguf : start implementing quantization (WIP)

    * gguf : start implementing quantization (WIP)

    * gguf : start implementing quantization (WIP)

    * gguf : start implementing quantization (WIP)

    * gguf : start implementing quantization (WIP)

    * gguf : start implementing quantization (WIP)

    * gguf : quantization is working

    * gguf : roper closing of file

    * gguf.py : no need to convert tensors twice

    * convert-gptneox-h5-to-gguf.py : no need to convert tensors twice

    * convert-llama-h5-to-gguf.py : no need to convert tensors twice

    * convert-gptneox-h5-to-gguf.py : simplify nbytes

    * convert-llama-h5-to-gguf.py : simplify nbytes

    * gptneox-main.cpp : n_layer --> n_block

    * constants.py : n_layer --> n_block

    * gguf.py : n_layer --> n_block

    * convert-gptneox-h5-to-gguf.py : n_layer --> n_block

    * convert-llama-h5-to-gguf.py : n_layer --> n_block

    * gptneox-main.cpp : n_layer --> n_block

    * Update gguf_tensor_map.py

    * convert-gptneox-h5-to-gguf.py : load model in parts to save memory

    * convert-llama-h5-to-gguf.py : load model in parts to save memory

    * convert : write more metadata for LLaMA

    * convert : rm quantization version

    * convert-gptneox-h5-to-gguf.py : add file_type key

    * gptneox-main.cpp : add file_type key

    * fix conflicts

    * gguf : add todos and comments

    * convert-gptneox-h5-to-gguf.py : tensor name map changes

    * Create gguf_namemap.py : tensor name map changes

    * Delete gguf_tensor_map.py

    * gptneox-main.cpp : tensor name map changes

    * convert-llama-h5-to-gguf.py : fixes

    * gguf.py : dont add empty strings

    * simple : minor style changes

    * gguf : use UNIX line ending

    * Create convert-llama-7b-pth-to-gguf.py

    * llama : sync gguf-llama.cpp with latest llama.cpp (#2608)

    * llama : sync gguf-llama.cpp with latest llama.cpp

    * minor : indentation + assert

    * llama : refactor gguf_buffer and gguf_ctx_buffer

    * llama : minor

    * gitignore : add gptneox-main

    * llama : tokenizer fixes (#2549)

    * Merge tokenizer fixes into the gguf branch.

    * Add test vocabularies

    * convert : update convert-new.py with tokenizer fixes (#2614)

    * Merge tokenizer fixes into the gguf branch.

    * Add test vocabularies

    * Adapt convert-new.py (and fix a clang-cl compiler error on windows)

    * llama : sync gguf-llama with llama (#2613)

    * llama : sync gguf-llama with llama

    * tests : fix build + warnings (test-tokenizer-1 still fails)

    * tests : fix wstring_convert

    * convert : fix layer names

    * llama : sync gguf-llama.cpp

    * convert : update HF converter to new tokenizer voodoo magics

    * llama : update tokenizer style

    * convert-llama-h5-to-gguf.py : add token types

    * constants.py : add token types

    * gguf.py : add token types

    * convert-llama-7b-pth-to-gguf.py : add token types

    * gguf-llama.cpp :  fix n_head_kv

    * convert-llama-h5-to-gguf.py : add 70b gqa support

    * gguf.py : add tensor data layout

    * convert-llama-h5-to-gguf.py : add tensor data layout

    * convert-llama-7b-pth-to-gguf.py : add tensor data layout

    * gptneox-main.cpp : add tensor data layout

    * convert-llama-h5-to-gguf.py : clarify the reverse permute

    * llama : refactor model loading code (#2620)

    * llama : style formatting + remove helper methods

    * llama : fix quantization using gguf tool

    * llama : simplify gguf_file_saver

    * llama : fix method names

    * llama : simplify write_header()

    * llama : no need to pass full file loader to the file saver

    just gguf_ctx

    * llama : gguf_file_saver write I32

    * llama : refactor tensor names (#2622)

    * gguf: update tensor names searched in quantization

    * gguf : define tensor names as constants

    * gguf : initial write API (not tested yet)

    * gguf : write to file API (not tested)

    * gguf : initial write API ready + example

    * gguf : fix header write

    * gguf : fixes + simplify example + add ggml_nbytes_pad()

    * gguf : minor

    * llama : replace gguf_file_saver with new gguf write API

    * gguf : streaming support when writing files

    * gguf : remove oboslete write methods

    * gguf : remove obosolete gguf_get_arr_xxx API

    * llama : simplify gguf_file_loader

    * llama : move hparams and vocab from gguf_file_loader to llama_model_loader

    * llama : merge gguf-util.h in llama.cpp

    * llama : reorder definitions in .cpp to match .h

    * llama : minor simplifications

    * llama : refactor llama_model_loader (WIP)

    wip : remove ggml_ctx from llama_model_loader

    wip : merge gguf_file_loader in llama_model_loader

    * llama : fix shape prints

    * llama : fix Windows build + fix norm_rms_eps key

    * llama : throw error on missing KV paris in model meta data

    * llama : improve printing + log meta data

    * llama : switch print order of meta data

    ---------

    Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>

    * gguf : deduplicate (#2629)

    * gguf : better type names

    * dedup : CPU + Metal is working

    * ggml : fix warnings about unused results

    * llama.cpp : fix line feed and compiler warning

    * llama : fix strncpy warning + note token_to_str does not write null

    * llama : restore the original load/save session implementation

    Will migrate this to GGUF in the future

    * convert-llama-h5-to-gguf.py : support alt ctx param name

    * ggml : assert when using ggml_mul with non-F32 src1

    * examples : dedup simple

    ---------

    Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>

    * gguf.py : merge all files in gguf.py

    * convert-new.py : pick #2427 for HF 70B support

    * examples/gguf : no need to keep q option for quantization any more

    * llama.cpp : print actual model size

    * llama.cpp : use ggml_elements()

    * convert-new.py : output gguf (#2635)

    * convert-new.py : output gguf (WIP)

    * convert-new.py : add gguf key-value pairs

    * llama : add hparams.ctx_train + no longer print ftype

    * convert-new.py : minor fixes

    * convert-new.py : vocab-only option should work now

    * llama : fix tokenizer to use llama_char_to_byte

    * tests : add new ggml-vocab-llama.gguf

    * convert-new.py : tensor name mapping

    * convert-new.py : add map for skipping tensor serialization

    * convert-new.py : convert script now works

    * gguf.py : pick some of the refactoring from #2644

    * convert-new.py : minor fixes

    * convert.py : update to support GGUF output

    * Revert "ci : disable CI temporary to not waste energy"

    This reverts commit 7e82d25f40386540c2c15226300ad998ecd871ea.

    * convert.py : n_head_kv optional and .gguf file extension

    * convert.py : better always have n_head_kv and default it to n_head

    * llama : sync with recent PRs on master

    * editorconfig : ignore models folder

    ggml-ci

    * ci : update ".bin" to ".gguf" extension

    ggml-ci

    * llama : fix llama_model_loader memory leak

    * gptneox : move as a WIP example

    * llama : fix lambda capture

    ggml-ci

    * ggml : fix bug in gguf_set_kv

    ggml-ci

    * common.h : .bin --> .gguf

    * quantize-stats.cpp : .bin --> .gguf

    * convert.py : fix HF tensor permuting / unpacking

    ggml-ci

    * llama.cpp : typo

    * llama : throw error if gguf fails to init from file

    ggml-ci

    * llama : fix tensor name grepping during quantization

    ggml-ci

    * gguf.py : write tensors in a single pass (#2644)

    * gguf : single pass for writing tensors + refactoring writer

    * gguf : single pass for writing tensors + refactoring writer

    * gguf : single pass for writing tensors + refactoring writer

    * gguf : style fixes in simple conversion script

    * gguf : refactor gptneox conversion script

    * gguf : rename h5 to hf (for HuggingFace)

    * gguf : refactor pth to gguf conversion script

    * gguf : rm file_type key and method

    * gguf.py : fix vertical alignment

    * gguf.py : indentation

    ---------

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

    * convert-gptneox-hf-to-gguf.py : fixes

    * gguf.py : gptneox mapping

    * convert-llama-hf-to-gguf.py : fixes

    * convert-llama-7b-pth-to-gguf.py : fixes

    * ggml.h : reverse GGUF_MAGIC

    * gguf.py : reverse GGUF_MAGIC

    * test-tokenizer-0.cpp : fix warning

    * llama.cpp : print kv general.name

    * llama.cpp : get special token kv and linefeed token id

    * llama : print number of tensors per type + print arch + style

    * tests : update vocab file with new magic

    * editorconfig : fix whitespaces

    * llama : re-order functions

    * llama : remove C++ API + reorganize common source in /common dir

    * llama : minor API updates

    * llama : avoid hardcoded special tokens

    * llama : fix MPI build

    ggml-ci

    * llama : introduce enum llama_vocab_type + remove hardcoded string constants

    * convert-falcon-hf-to-gguf.py : falcon HF --> gguf conversion, not tested

    * falcon-main.cpp : falcon inference example

    * convert-falcon-hf-to-gguf.py : remove extra kv

    * convert-gptneox-hf-to-gguf.py : remove extra kv

    * convert-llama-7b-pth-to-gguf.py : remove extra kv

    * convert-llama-hf-to-gguf.py : remove extra kv

    * gguf.py : fix for falcon 40b

    * falcon-main.cpp : fix for falcon 40b

    * convert-falcon-hf-to-gguf.py : update ref

    * convert-falcon-hf-to-gguf.py : add tensor data layout

    * cmpnct_gpt2bpe.hpp : fixes

    * falcon-main.cpp : fixes

    * gptneox-main.cpp : fixes

    * cmpnct_gpt2bpe.hpp : remove non-general stuff

    * Update examples/server/README.md

    Co-authored-by: slaren <slarengh@gmail.com>

    * cmpnct_gpt2bpe.hpp : cleanup

    * convert-llama-hf-to-gguf.py : special tokens

    * convert-llama-7b-pth-to-gguf.py : special tokens

    * convert-permute-debug.py : permute debug print

    * convert-permute-debug-master.py : permute debug for master

    * convert-permute-debug.py : change permute type of attn_q

    * convert.py : 70b model working (change attn_q permute)

    * Delete convert-permute-debug-master.py

    * Delete convert-permute-debug.py

    * convert-llama-hf-to-gguf.py : fix attn_q permute

    * gguf.py : fix rope scale kv

    * convert-llama-hf-to-gguf.py : rope scale and added tokens

    * convert-llama-7b-pth-to-gguf.py : rope scale and added tokens

    * llama.cpp : use rope scale kv

    * convert-llama-7b-pth-to-gguf.py : rope scale fix

    * convert-llama-hf-to-gguf.py : rope scale fix

    * py : fix whitespace

    * gguf : add Python script to convert GGMLv3 LLaMA models to GGUF (#2682)

    * First pass at converting GGMLv3 LLaMA models to GGUF

    * Cleanups, better output during conversion

    * Fix vocab space conversion logic

    * More vocab conversion fixes

    * Add description to converted GGUF files

    * Improve help text, expand warning

    * Allow specifying name and description for output GGUF

    * Allow overriding vocab and hyperparams from original model metadata

    * Use correct params override var name

    * Fix wrong type size for Q8_K

    Better handling of original style metadata

    * Set default value for gguf add_tensor raw_shape KW arg

    * llama : improve token type support (#2668)

    * Merge tokenizer fixes into the gguf branch.

    * Add test vocabularies

    * Adapt convert-new.py (and fix a clang-cl compiler error on windows)

    * Improved tokenizer test

    But does it work on MacOS?

    * Improve token type support

    - Added @klosax code to convert.py
    - Improved token type support in vocabulary

    * Exclude platform dependent tests

    * More sentencepiece compatibility by eliminating magic numbers

    * Restored accidentally removed comment

    * llama : add API for token type

    ggml-ci

    * tests : use new tokenizer type API (#2692)

    * Merge tokenizer fixes into the gguf branch.

    * Add test vocabularies

    * Adapt convert-new.py (and fix a clang-cl compiler error on windows)

    * Improved tokenizer test

    But does it work on MacOS?

    * Improve token type support

    - Added @klosax code to convert.py
    - Improved token type support in vocabulary

    * Exclude platform dependent tests

    * More sentencepiece compatibility by eliminating magic numbers

    * Restored accidentally removed comment

    * Improve commentary

    * Use token type API in test-tokenizer-1.cpp

    * py : cosmetics

    * readme : add notice about new file format

    ggml-ci

    ---------

    Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
    Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
    Co-authored-by: goerch <jhr.walter@t-online.de>
    Co-authored-by: slaren <slarengh@gmail.com>
    Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>

commit dadbed99e65252d79f81101a392d0d6497b86caa
Author: Shouzheng Liu <lshzh.hi@gmail.com>
Date:   Mon Aug 21 06:59:29 2023 -0400

    metal : fix synchronization in new matrix multiplication kernel (#2686)

commit cb1c0727bd59803b439b6a3af121c99e6393ff3d
Author: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date:   Mon Aug 21 11:11:31 2023 +0300

    HellaSwag: split token evaluation into batches if needed (#2681)

    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

commit 9e232f0234073358e7031c1b8d7aa45020469a3b
Author: slaren <slarengh@gmail.com>
Date:   Sun Aug 20 22:17:53 2023 +0200

    ggml : move all type info to ggml_type_traits (#2663)

commit 5e9ff54a675d163d9f42aad1b5b3e734f17b2701
Author: Kawrakow <48489457+ikawrakow@users.noreply.github.com>
Date:   Sun Aug 20 16:44:46 2023 +0300

    More efficient Hellaswag implementation (#2677)

    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

commit b34f4bd2724733e188ec4f6074042f66a5ed28c9
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Sat Aug 19 17:12:52 2023 -0500

    Update README.md

commit 1f0bccb27929e261744c979bc75114955da49e98
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Sat Aug 19 00:45:36 2023 +0300

    server : better default prompt (#2646)

commit f63564adfaa157ca387071d6b9a06cfaef0ef576
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Sat Aug 19 05:41:32 2023 +0800

    server : update xxd usage for older versions compatibility (#2649)

    * server : update xxd usage for older versions compatibility

    * remove unused $func

commit 2d8b76a110d76ff6b5728ff0af8477531e4db60e
Author: Adrian <smith.adriane@gmail.com>
Date:   Fri Aug 18 12:39:22 2023 -0700

    Add link to clojure bindings to Readme. (#2659)

commit 7af633aec339367e36c867ae709088d6a801aa75
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Fri Aug 18 17:48:31 2023 +0300

    readme : incoming BREAKING CHANGE

commit 097e121e2f17ed3541cf02c55ff7e9febc091b19
Author: slaren <slarengh@gmail.com>
Date:   Fri Aug 18 12:44:58 2023 +0200

    llama : add benchmark example (#2626)

    * llama : add benchmark example

    * add to examples CMakeLists.txt

    * fix msvc build

    * add missing include

    * add Bessel's correction to stdev calculation

    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

    * improve markdown formatting

    * add missing include

    * print warning if NDEBUG is not defined

    * remove n_prompt and n_gen from the matrix, use each value separately instead

    * better checks for non-optimized builds

    * llama.cpp : fix MEM_REQ_SCRATCH0 reusing the value of n_ctx of the first call

    * fix json formatting

    * add sql output

    * add basic cpu and gpu info (linux/cuda only)

    * markdown: also show values that differ from the default

    * markdown: add build id

    * cleanup

    * improve formatting

    * formatting

    ---------

    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

commit eaf98c2649d7da705de255712f0038ac7e47c610
Author: mdrokz <mohammadmunshi@gmail.com>
Date:   Fri Aug 18 15:47:58 2023 +0530

    readme : add link to Rust bindings (#2656)

commit e9b12c332ec6e215fbac4b2ef165353acbcd8319
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Fri Aug 18 12:48:55 2023 +0300

    perplexity : more meaningful ETA number - 2 decimal points

commit 604b8bdfa6320bbcb018eebcc1252dfede603c6b
Author: Evan Jones <evan.q.jones@gmail.com>
Date:   Thu Aug 17 19:54:44 2023 -0400

    Fix unicode in grammars (fixes #2501) (#2553)

    * Fix unicode in grammars (fixes #2501)

    * add more comments

    * fix test-llama-grammar

commit 10151bee2e38b5711335c4a38f6ca93b50223acf
Author: staviq <staviq@gmail.com>
Date:   Thu Aug 17 23:34:01 2023 +0000

    server : support for saving templates in browser LocalStorage (#2486)

    * support for templates in browser LocalStorage

    * sync accepted #2409 fix from upstream

    * convert autosave invocation to useEffect

    * Apply suggestions from code review

    Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>

    * Regen index.html.cpp, suggested from code review

    ---------

    Co-authored-by: Jhen-Jie Hong <iainst0409@gmail.com>

commit 0992a7b8b18a89e29a205efb48ceb559c9a08203
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Thu Aug 17 23:57:59 2023 +0200

    README: fix LLAMA_CUDA_MMV_Y documentation (#2647)

commit 6ddeefad9b634c5c79e6bcf046523493ff1fdf7d
Author: Henri Vasserman <henv@hot.ee>
Date:   Thu Aug 17 23:11:18 2023 +0300

    [Zig] Fixing Zig build and improvements (#2554)

    * Fix zig after console.o was split

    * Better include and flag management

    * Change LTO to option

commit 36b0c5b39816c039a5235733cfcd2b4e32386ff9
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 17 22:45:49 2023 +0800

    fix for incorrect missing backends displayed

commit 8dae7ce68437faf1fa96ec0e7687b8700956ef20
Author: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Date:   Thu Aug 17 07:29:44 2023 -0600

    Add --cfg-negative-prompt-file option for examples (#2591)

    Add --cfg-negative-prompt-file option for examples

commit a73ccf1aa34de49f61bfeb7f8a679c3bfdb3abe3
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Thu Aug 17 10:47:09 2023 +0300

    llama : replace (permute + reshape + view_1d) with (view_3d) (#2538)

    ggml-ci

commit 7cf54e1f746941279d81d485796777c01f88049c
Author: drbh <david.richard.holtz@gmail.com>
Date:   Thu Aug 17 03:41:01 2023 -0400

    tests : adds simple llama grammar tests (#2618)

    * adds simple llama grammar tests

    * fix lint and add Makefile

    * 0 terminate code_points

    * avoid dangling pointers in candidate cleanup

    * cleanup grammar at end of test

commit a872a2b28eaefc8d464eaa535c94deeb501666f9
Author: Shouzheng Liu <lshzh.hi@gmail.com>
Date:   Thu Aug 17 03:35:53 2023 -0400

    ggml-alloc : fix discrepancy between measure&eval (#2639)

    The GGML memory allocator consistently places a tensor within the
    optimal-fit memory block, which is the smallest block capable of
    accommodating the tensor's size. During the measurement phase, the final
    block is generously sized, ensuring it never qualifies as the
    optimal-fit block as long as there exists another block capable of
    accommodating the tensor. Nevertheless, in the evaluation phase, the
    last block is constrained in size and could potentially qualify as the
    optimal-fit block. Consequently, there exists the possibility of a
    tensor being allocated to a different region during evaluation, leading
    to more memory fragmentation in our scratch buffer.

    This recent commit guarantees uniform behavior of the allocator across
    both the measurement and evaluation phases, eliminating discrepancies
    between the two.
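
The "optimal-fit" behavior described above boils down to best-fit block selection. Below is a minimal standalone sketch of that selection logic under assumed names (`FreeBlock`, `find_best_fit`); it is an illustration of the idea, not the actual ggml-alloc code.

```cpp
// Best-fit ("optimal-fit") block selection sketch -- illustrative only.
#include <cstddef>
#include <vector>

struct FreeBlock {
    size_t offset;
    size_t size;
};

// Return the index of the smallest free block that can hold `size`,
// or -1 if none fits. If the last ("measure") block is made very large,
// it is never chosen as long as any other block fits -- which is why the
// measurement and evaluation phases could pick different blocks before
// this fix made their behavior uniform.
static int find_best_fit(const std::vector<FreeBlock> & blocks, size_t size) {
    int    best      = -1;
    size_t best_size = (size_t) -1;
    for (int i = 0; i < (int) blocks.size(); ++i) {
        if (blocks[i].size >= size && blocks[i].size < best_size) {
            best      = i;
            best_size = blocks[i].size;
        }
    }
    return best;
}
```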

commit 0919a0f73d95cfb93a1646a1d1741a0615fe2c5e
Author: Kolen Cheung <ickc@users.noreply.github.com>
Date:   Wed Aug 16 21:09:49 2023 +0100

    cmake : install ggml-meta.metal if LLAMA_METAL (#2449)

commit ed53db86c3b0e0815331a96d7a379edb5e62472c
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Thu Aug 17 04:09:03 2023 +0800

    metal : print error of load pipeline state (#2564)

    * metal : print error of load pipeline state

    * metal : return null if load pipeline failed

commit fc8ef549e50087762a0b4f901cd74b2defcc6ae3
Author: Shouzheng Liu <lshzh.hi@gmail.com>
Date:   Wed Aug 16 16:08:28 2023 -0400

    metal : enable ggml-alloc (#2627)

    * metal: enable ggml-alloc

    Make ggml-alloc work with concurrent dispatch.

    * style-fix

    Co-authored-by: slaren <slarengh@gmail.com>

    ---------

    Co-authored-by: slaren <slarengh@gmail.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

commit bf83bff6742c0f1795b4c18695a13a34ac7adf62
Author: Shouzheng Liu <lshzh.hi@gmail.com>
Date:   Wed Aug 16 16:07:04 2023 -0400

    metal : matrix-matrix multiplication kernel (#2615)

    * metal: matrix-matrix multiplication kernel

    This commit removes MPS and uses custom matrix-matrix multiplication
    kernels for all quantization types. This commit also adds grouped-query
    attention to support llama2 70B.

    * metal: fix performance degradation from gqa

    Integers are slow on the GPU, and 64-bit divides are extremely slow.
    In the context of GQA, we introduce a 64-bit divide that cannot be
    optimized out by the compiler, which results in a decrease of ~8% in
    inference performance. This commit fixes that issue by calculating a
    part of the offset with a 32-bit divide. Naturally, this limits the
    size of a single matrix to ~4GB. However, this limitation should
    suffice for the near future (see the sketch after this commit).

    * metal: fix bugs for GQA and perplexity test.

    I mixed up ne02 and nb02 in previous commit.
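
The performance fix described above amounts to doing the unavoidable divide in 32-bit arithmetic and widening to 64 bits only when computing the final byte offset. A hedged sketch of that pattern follows; the names and the exact quantities are illustrative and do not come from the actual Metal kernel.

```cpp
// 32-bit divide pattern: GPUs handle 32-bit integer division far better
// than 64-bit, so the divide is done in 32 bits and the result is widened
// only for the final offset. The 32-bit index is what caps a single
// matrix at roughly 4 GB.
#include <cstdint>

static inline uint64_t batched_matrix_offset(uint32_t flat_idx,        // must fit in 32 bits
                                             uint32_t elems_per_mat,   // elements in one matrix
                                             uint64_t bytes_per_mat) { // byte stride between matrices
    const uint32_t im = flat_idx / elems_per_mat;   // cheap 32-bit divide
    return (uint64_t) im * bytes_per_mat;           // widen only for the final offset
}
```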

commit 075d079a72c741050a4c31a27530c8af19df70a6
Merge: 469d70b b5ffb28
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 16 10:43:06 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	CMakeLists.txt
    #	Makefile
    #	ggml-cuda.cu
    #	llama-util.h
    #	tests/CMakeLists.txt

commit b5ffb2849d23afe73647f68eec7b68187af09be6
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Aug 15 10:04:58 2023 +0300

    scripts : add helper script to get wikitext

commit 469d70be45dfdac4d926c1326b579e88d0f0e040
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Aug 15 13:49:05 2023 +0800

    add support for precompiled binaries, used as a fallback

commit 7d1196108ad330b32845546fb3472c2172a0b6b8
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Mon Aug 14 23:03:12 2023 -0500

    remove force DMMV

commit 3ebb00935f3f0522b75df49c2769ab1774b91380
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Tue Aug 15 06:14:14 2023 +0800

    server : add missing /json-schema-to-grammar.mjs (#2616)

    fixes #2611

commit d783f7982e0e823a2626a9956359c0d36c1a7e21
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Mon Aug 14 21:37:39 2023 +0800

    metal : return null instead of exit(1) (#2573)

commit d75561df207d22790609ee0ad924302f66ac2599
Author: Cheng Shao <terrorjack@type.dance>
Date:   Mon Aug 14 15:36:42 2023 +0200

    server : add --numa support (#2524)

commit 348acf188c9fbe66396990f2dc83229df367969b
Author: Kamil Tomšík <info@tomsik.cz>
Date:   Mon Aug 14 15:35:16 2023 +0200

    llama : add missing enum keyword in function signatures (#2610)

commit 1cd06fa25eb859b14b3427a1d815a48f25fc3c34
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Mon Aug 14 10:41:22 2023 +0200

    CUDA: launch_bounds, small q4_K, q5_K mmq refactor (#2596)

commit 2feb8934eb75ca63f3c42724229cce1df9579c8e
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Mon Aug 14 16:20:17 2023 +0800

    server : fix default grammar by use empty string in the UI (#2604)

commit 5517d6e69214cdead000a76983b9fe175c3f8329
Author: Jhen-Jie Hong <iainst0409@gmail.com>
Date:   Mon Aug 14 15:16:54 2023 +0800

    server : implement json-schema-to-grammar.mjs & add grammar param in the UI (#2588)

    * server : implement json-schema-to-grammar.mjs by following the python impl

    * server : add grammar support in chat.mjs

    * server : implement grammar param in the UI

    * server : generate .hpp

    * server : remove trailing whitespaces

    * server : generate .hpp

    * server : fix sort of prop pairs

    * server : optimize regex & iteration

commit f31b5397143009d682db90fd2a6cde83f1ef00eb
Author: vxiiduu <73044267+vxiiduu@users.noreply.github.com>
Date:   Mon Aug 14 13:59:16 2023 +1000

    Enhance Windows 7 and below compatibility. (#2592)

    * Enhance Windows 7 compatibility.
    * Clean away unnecessary preprocessor conditional

commit ee77efea2a1e3f7d153976b0934522b6bbaa62e6
Author: drbh <david.richard.holtz@gmail.com>
Date:   Sun Aug 13 10:00:48 2023 -0400

    test : add simple grammar parsing tests (#2594)

    * adds simple grammar parsing tests

    * adds cassert header

commit f64d44a9b9581cd58f7ec40f4fa1c3ca5ca18e1e
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Sun Aug 13 00:24:45 2023 +0200

    CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time (#2590)

commit cd61aa0d9e16627935c7978adf488a679ddfa745
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Sat Aug 12 17:24:31 2023 -0500

    restore main_gpu parameter

commit 4a042f326830271a4c31104051b7b08e08ac234e
Author: Henri Vasserman <henv@hot.ee>
Date:   Sat Aug 12 10:51:46 2023 +0300

    gfx1100 support

    ---------

    Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
    Co-authored-by: jammm <2500920+jammm@users.noreply.github.com>
    Co-authored-by: jdecourval <7315817+jdecourval@users.noreply.github.com>

commit 8913bc6fea97d3cb860937b0461f455c6abe3ea1
Author: Henri Vasserman <henv@hot.ee>
Date:   Fri Aug 11 10:16:02 2023 +0300

    Allow overriding CC_TURING

commit e77a4c37a756c002e97173f4122e088fb304e18a
Author: Henri Vasserman <henv@hot.ee>
Date:   Fri Aug 11 10:00:07 2023 +0300

    Merge 'origin/master' into hipblas

commit cc4c4e355cd553b1557d5fba2562e824db93f9b4
Author: Engininja2 <139037756+Engininja2@users.noreply.github.com>
Date:   Fri Aug 11 09:43:14 2023 +0300

    New __dp4a assembly

    Now compatible with gfx900 and faster as well.

commit 1a03b709848ce68d5bf5966237756167e2cac540
Author: Henri Vasserman <henv@hot.ee>
Date:   Fri Aug 11 09:30:28 2023 +0300

    Undo mess

    ---------

    Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>

commit 4366ff9ba1b1f12e494118ef9b5198479022fcc5
Author: DannyDaemonic <DannyDaemonic@gmail.com>
Date:   Thu Aug 10 13:11:36 2023 -0700

    Handle `ENABLE_VIRTUAL_TERMINAL_PROCESSING` more gracefully on earlier versions of Windows.

commit 811ff855a24323cafddc95c1b8aca711fef05f76
Author: Christian Demsar <crasm@git.vczf.us>
Date:   Thu Aug 10 10:28:27 2023 -0400

    Add --n-predict -2 for stopping generation on full context (#2565)

commit 37c9717aaa6815b6a5be21aaab970212f20fe6bf
Author: Martin Krasser <krasserm@googlemail.com>
Date:   Thu Aug 10 12:16:38 2023 +0200

    Fix grammar-based sampling issue in server (#2566)

commit 9483288e0318a4dcc2e08eb817dfdd09c6552533
Merge: dae9dff b19edd5
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Sat Aug 12 16:04:11 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	Makefile

commit b19edd54d51cef5e3616c18b1d0d8626895b2cba
Author: byte-6174 <88070277+byte-6174@users.noreply.github.com>
Date:   Fri Aug 11 19:17:25 2023 -0400

    Adding support for llama2.c models (#2559)

commit 53dc399472d5bd35ee739b865e843b1996bd3814
Author: Equim <sayaka@ekyu.moe>
Date:   Sat Aug 12 06:35:14 2023 +0800

    server: fixed wrong variable name in timing json (#2579)

    * server: fixed wrong variable name in timing json

    * remove redundant entry

commit dae9dffa6aa53923cfbb09ac5de7e08f34920733
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Fri Aug 11 14:54:27 2023 +0800

    rename koboldcpp.dll to koboldcpp_default.dll

commit 9ca4abed893685692f90413e4d43153af12342d9
Author: DannyDaemonic <DannyDaemonic@gmail.com>
Date:   Thu Aug 10 13:11:36 2023 -0700

    Handle `ENABLE_VIRTUAL_TERMINAL_PROCESSING` more gracefully on earlier versions of Windows.

commit d18ecd5b9e5dde58ae08a3eef1637406159ddaca
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Thu Aug 10 13:19:41 2023 -0500

    make mmq gen faster for amd

commit 243894a952147a4fac5b6aee748861a0df6cc2c6
Author: Henri Vasserman <henv@hot.ee>
Date:   Thu Aug 10 12:14:40 2023 +0300

    ws fix

commit ac2f14da445ea87d73539adbd29d19ff2c9eba58
Author: Engininja2 <139037756+Engininja2@users.noreply.github.com>
Date:   Thu Aug 10 12:11:27 2023 +0300

    AMD assembly optimized __dp4a

    Doesn't seem to work for gfx900, so commented out.

commit 9dba0c985f140ddded8cbb671f139e81fff82eed
Author: Henri Vasserman <henv@hot.ee>
Date:   Thu Aug 10 12:09:28 2023 +0300

    Fix merge

    ---------

    Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
    Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>

commit e59fcb2bc129881f4a269fee748fb38bce0a64de
Author: Christian Demsar <crasm@git.vczf.us>
Date:   Thu Aug 10 10:28:27 2023 -0400

    Add --n-predict -2 for stopping generation on full context (#2565)

commit 886f4eed7948f494e3da1d48d4f6f844e2f9a2c2
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 10 22:01:33 2023 +0800

    updated lite, up ver, remove bell

commit 1638757767072a4957f52b9e3594f0b67610631b
Author: Martin Krasser <krasserm@googlemail.com>
Date:   Thu Aug 10 12:16:38 2023 +0200

    Fix grammar-based sampling issue in server (#2566)

commit c5f5209d37b09325377e36f39eab0b0f0c0d006e
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Thu Aug 10 16:30:02 2023 +0800

    globalize args

commit f570b5cb1070591527a82d94bba408927b37778d
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 22:11:20 2023 -0500

    Revert "revert cuda changes as they are bugggy"

    This reverts commit 1541bf879772aeeed8ff646bfc52185c2a88b79b.

commit 1541bf879772aeeed8ff646bfc52185c2a88b79b
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 22:36:41 2023 +0800

    revert cuda changes as they are bugggy

commit bacc20203efb1839aa313858a04d75255bb4b7f4
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 20:37:17 2023 -0500

    Merge remote-tracking branch 'upstream/concedo'

commit b7cb4cfd109986bd66e8fd382d1e2516eaddfebb
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 20:00:52 2023 -0500

    additional fixes

commit fadae727baa3735ad3e0667384d6e05ca056b3ef
Merge: 518eb2a 8f8ab6c
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 18:45:50 2023 -0500

    Merge branch 'hipblas' into develop4Main

commit 518eb2af9225f8300a108c4244c7eb0a2217c3bc
Merge: bda0215 cae6a84
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 18:32:10 2023 -0500

    Merge remote-tracking branch 'upstream/concedo' into develop2Main

commit bda0215b413bafc49890aa23fc35f96a191fb3e0
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 18:17:54 2023 -0500

    update makefile to multisystem path

commit 8f8ab6c4c049df501e9a5ed8fef3aa0fc0691421
Author: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Date:   Wed Aug 9 18:05:03 2023 -0500

    hipLDFLAG Path change Unix to multisystem in Makefile

    changed the hardcoded linux distro hipblas LD path from -L/opt/rocm/lib to use the defined ROCM_PATH variable to be flexible with ROCm on non-Linux OS

commit 610ba4cfc460ed65c4adc32d3365a216690384d5
Merge: 4024f91 25d43e0
Author: Henri Vasserman <henv@hot.ee>
Date:   Wed Aug 9 23:54:58 2023 +0300

    Merge 'origin/master' into hipblas

commit 916a9acdd0a411426690400ebe2bb7ce840a6bba
Author: Sam Spilsbury <smspillaz@gmail.com>
Date:   Wed Aug 9 23:47:42 2023 +0300

    ggml-alloc: Don't try to re-use buffers of external tensors (#2562)

    * ggml-alloc: Don't try to re-use buffers of external tensors

    They might be weights that came from another context, so we
    have no control over them (and they might be re-used elsewhere
    so writing to them would be a bad idea).

    * ggml-alloc: >= when checking for out-of-bounds

    Co-authored-by: slaren <slarengh@gmail.com>

    ---------

    Co-authored-by: slaren <slarengh@gmail.com>

commit ea04a4ca1940d92becc0ee26523aa2c4a18cf938
Author: grahameth <96447521+grahameth@users.noreply.github.com>
Date:   Wed Aug 9 22:46:40 2023 +0200

    add log_callback to llama_context_params for custom logging. (#2234)

    * add log_callback to llama_context_params for custom logging.

    * Fix macro expansion on gcc

    * Add struct llama_state for global variables and move log_callback there

    * Turn log level into enum and some minor changes.

    * Remove model_for_logging parameter (not needed anymore)

    * Convert remaining fprintf(stderr, ...) calls to use new macros.

    * Fix enum and initialize g_state

    * Fix log calls after merge

    * Fix missing static

    * Add back all the new lines in the logging strings

    * Add comment for llama_log_callback and replace remaining printf calls

    ---------

    Co-authored-by: grahameth <->
    Co-authored-by: Helmut <helmut.buhler@inf.h-brs.de>

commit a07e6dd3ad1a622f08c3187799879d4f1c49bad4
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 22:36:41 2023 +0800

    revert cuda changes as they are bugggy

commit f8376c7e610f68d07e079ff91f6988fb7a8399e2
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 21:23:33 2023 +0800

    up ver, fixed compile (+1 squashed commits)

    Squashed commits:

    [ca51aa9e] up ver

commit ba09f1c807956c59d8c64988626e95459f627ced
Merge: 3a7853d 25d43e0
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 21:18:34 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	README.md
    #	ggml-cuda.cu

commit 3a7853d259c242d4977e9f4dc7627a799d5812b4
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 21:07:57 2023 +0800

    handle stablecode-completion-alpha-3b

commit 25d43e0eb578b6e73046d9d6644a3a14d460600d
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Wed Aug 9 09:42:34 2023 +0200

    CUDA: tuned mul_mat_q kernels (#2546)

commit 90058d96b0c6ab77802e153c23fad66d2f21a438
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 15:28:07 2023 +0800

    sleep longer before exit

commit 19cf2a8663938c424407544c13749f371104517b
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 12:42:59 2023 +0800

    add idle field and up ver

commit 4b8a354895e078d3f0cafdf53430d72d3af8bb99
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 12:25:21 2023 +0800

    cudatoolkit version

commit 159ad9269d95bc07720c79debc23b5c466357b53
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 11:50:12 2023 +0800

    up ver, set the cuda pool malloc lookahead back to 5% instead of 2% (+1 squashed commits)

    Squashed commits:

    [e0f65278] up ver, set the cuda pool malloc lookahead back to 5% instead of 2%

commit 4024f91a665d83b6de8658d45ec9d004c5d90c79
Author: Henri Vasserman <henv@hot.ee>
Date:   Wed Aug 9 01:56:44 2023 +0300

    Add intrinsics polyfills for AMD

    ---------

    Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
    Co-authored-by: funnbot <22226942+funnbot@users.noreply.github.com>
    Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>

commit ab6212864ce8e9af200bcedb3e0126ee49aa8d0a
Merge: d91456a f5bfea0
Author: Henri Vasserman <henv@hot.ee>
Date:   Wed Aug 9 00:37:01 2023 +0300

    Merge 'origin/master' into hipblas

commit 926d90fbabe836d16a5326eb99bdcb89ca0fc042
Merge: 793cfd1 f5bfea0
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 01:09:04 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	Makefile

commit 793cfd136cc721884f79d09036b748e4f176cdb4
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Wed Aug 9 01:05:00 2023 +0800

    fixed 70B detection again, try fix horde issues, fixed lite unicode issue, fixed cmake for cuda

commit f5bfea0580e417f99850d5456ca541d871a3e48c
Author: Martin Krasser <krasserm@googlemail.com>
Date:   Tue Aug 8 15:29:19 2023 +0200

    Allow passing grammar to completion endpoint (#2532)

    * Allow passing grammar to completion endpoint

commit acfc5478ff3446ca3b54553967a3dea09b7c771a
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Tue Aug 8 14:38:16 2023 +0200

    CUDA: tighter VRAM scratch size for 65b/70b (#2551)

commit 7ed8d1fe7f8cbe6a6763e6b46759795ac8d21e12
Author: chaihahaha <chai836275709@gmail.com>
Date:   Tue Aug 8 20:07:02 2023 +0800

    llm.vim : multiline autocompletion, get rid of "^@" (#2543)

commit e7f94d6fdc83b41ba449b4b8c80821673dd12ffc
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Tue Aug 8 15:05:30 2023 +0300

    vim : bring back simple llm.vim example

commit 2d7baaf50f3277e65cf71071f61ea34823d14c30
Author: AustinMroz <austinmroz@utexas.edu>
Date:   Tue Aug 8 06:44:48 2023 -0500

    vim : streaming and more (#2495)

    * Update Vim plugin

    * Remove getbufoneline usage, Add input bind example.

    getbufoneline() appears to be a recently added function and has been
    replaced with getbufline for compatibility.

    An additional example that explains how to add a keybind that works in
    insert mode was added.

commit f3c3b4b1672d860800639c87d3b5d17564692469
Author: klosax <131523366+klosax@users.noreply.github.com>
Date:   Mon Aug 7 19:07:19 2023 +0200

    Add --rope-scale parameter (#2544)

    * common.cpp : Add --rope-scale parameter
    * README.md : Add info about using linear rope scaling
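
Linear rope scaling conventionally means dividing the token position by the scale factor before computing the rotary angles, so a model trained at a 2048-token context can be driven at a longer one. The sketch below illustrates that idea with a hypothetical helper; it is not the actual common.cpp/llama.cpp implementation.

```cpp
// Linear RoPE (rotary position embedding) scaling sketch -- illustrative only.
// With --rope-scale 2, position 4000 is treated like position 2000 was
// during training, stretching the usable context window.
#include <cmath>

static float rope_angle(int pos, int dim_pair, int n_dims,
                        float rope_scale /* e.g. 2.0f for 2x context */) {
    const float theta_base = 10000.0f;
    const float scaled_pos = (float) pos / rope_scale;                     // linear scaling
    const float freq       = std::pow(theta_base, -2.0f * dim_pair / n_dims);
    return scaled_pos * freq;                                              // angle fed to sin/cos
}
```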

commit 3554080502cb050ccc3ae11d7a67df866ac3bd07
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Aug 8 00:41:02 2023 +0800

    fixed blasbatchmul multiplier

commit 28ad80b6e4d38dde9e395fc5d4ebf19dc4aa4b66
Merge: 3c7d938 93356bd
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Aug 8 00:34:10 2023 +0800

    Merge branch 'master' into concedo_experimental

commit 3c7d938d95fd51780be37f10cdddb2f26a770adf
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Tue Aug 8 00:32:51 2023 +0800

    update lite, resize scratch buffers for blasbatch 2048

commit 93356bdb7a324a8f6570f99d02af392cd4c45796
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 7 14:25:58 2023 +0300

    ggml : mul mat tweaks (#2372)

    * ggml : mul mat wip

    ggml-ci

    * ggml : alternative thread distribution for mul_mat

    ggml-ci

    * ggml : mul_mat block tiling attempt

    * ggml : mul_mat threads yield

    ggml-ci

commit 60baff7c8584ec369e53469cad5f92e102b1efe4
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 7 14:24:42 2023 +0300

    ggml : pad result of ggml_nbytes()

commit 9082b5dfbfae01243a0b822dcd2812877e63bf1b
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 7 13:55:18 2023 +0300

    ggml : change params pointer (style change) (#2539)

    ggml-ci

commit 99d29c0094476c4962023036ecd61a3309d0e16b
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 7 13:20:09 2023 +0300

    ggml : sync (custom ops) (#2537)

    ggml-ci

commit 9133e456d2d52b05c6c7f92cd94a0d2564ddb2f7
Merge: cae6a84 3d9a551
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 17:33:42 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	Makefile
    #	build.zig

commit cae6a847ada88e415b0beda09d70d79b51762618
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 16:40:13 2023 +0800

    cuda free only for non mmq (+2 squashed commit)

    Squashed commit:

    [3aca763a] only cuda free for non mmq

    [e69a8c9f] revert to pool alloc to try again

commit 3d9a55181603e85a26378a850a14068034e5002d
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Mon Aug 7 10:09:40 2023 +0200

    Fixed mmap prefetch for GPU offloading (#2529)

commit f6f9896ac3d2ff207e18f87dab85d126ceef5236
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Aug 7 10:52:57 2023 +0300

    metal : fix out-of-bounds access + inc concurrency nodes (#2416)

    * metal : fix out-of-bounds access + style changes

    * metal : increase concurrency nodes to 2*GGML_MAX_NODES

commit 9f16a4c4efc5cca845e027c1dbad615612b9248c
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 15:16:37 2023 +0800

    switch to upstream implementation of pool malloc

commit 34a14b28ff7f3c98730339bacee035091b2a812a
Author: GiviMAD <GiviMAD@users.noreply.github.com>
Date:   Sun Aug 6 23:21:46 2023 -0700

    [Makefile] Move ARM CFLAGS before compilation (#2536)

commit 7297128db8159c7b12db4c28a4532b993025c2e5
Author: Henri Vasserman <henv@hot.ee>
Date:   Mon Aug 7 08:35:53 2023 +0300

    [Zig] Rewrite build for Zig 0.11 (#2514)

    * zig build fixes

    * Disable LTO on Windows.

commit 6659652c9fd1853dcb2d1882efc8f14b159d5d43
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 11:05:06 2023 +0800

    lower actual temp used when temp=0

commit 0e41b94f40e1d10893d6ac29c727482573ef1652
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 10:43:06 2023 +0800

    improve detection for 70B.

commit fb44d72a78a81790d238ffd2453cf66d02eed688
Merge: 559c0e2 d9024df
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 10:17:43 2023 +0800

    Merge remote-tracking branch 'johannes/cuda-fix-mmap-prefetch' into concedo_experimental

commit 559c0e2d1f621402d410944b5291da647243ab33
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Mon Aug 7 10:15:20 2023 +0800

    updated lite again, fix for wi

commit d9024df759b25d030fc8266d399c565fe7be9a04
Author: JohannesGaessler <johannesg@5d6.de>
Date:   Sun Aug 6 10:18:05 2023 +0200

    Fixed mmap prefetch for GPU offloading

commit d442888626f11335e0c9e3b8555d2429b3262580
Merge: 198cc82 86c3219
Author: Concedo <39025047+LostRuins@users.noreply.github.com>
Date:   Sun Aug 6 22:47:33 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	Makefile

commit 198cc826fcb9…