Thanks a lot for this amazing project! It's insane seeing Llama 2 run at 246 ms per token (4.06 tokens per second) on my old i7-8550U!
The UI is functional, but I noticed a markdown parsing issue when a code block contains comments: the comment lines are rendered as h3 headings instead of being kept as plain comments inside the block.
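For illustration, here's a minimal sketch of the kind of snippet that triggers it for me (a made-up example, not actual model output). The leading comment starts with `#`, which the UI seems to match against markdown heading syntax instead of leaving it verbatim inside the fence:

```python
# Compute the nth Fibonacci number iteratively
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

When the assistant streams this back, the comment line shows up as a large heading above the code rather than as part of the block.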
Also, syntax highlighting would be really nice to have, especially for WizardCoder.
Thanks for reporting this UX issue. llamafile-server came from the llama.cpp examples folder, and we very much want to turn it into something that's more than just an example. Code block formatting is one of many tricky challenges we're facing (see also #46), since sometimes it doesn't work at all. It will improve over time, and I'd welcome contributions from anyone willing to help it improve sooner.