Extend the ability to support third party LLM through customizable API URL #143
Add api_url and max_tokens to the settings. These settings extend aichat to support third-party LLM services that expose an OpenAI-like API, such as llama-cpp-python[server], LocalAI, etc.
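For context, "OpenAI-like" here means the service accepts the standard /v1/chat/completions request shape, so any backend exposing it can sit behind the new api_url setting. A minimal Python sketch of such a request is shown below; the URL, model name, and prompt are placeholders and not part of this change:

```python
import requests

# Standard OpenAI-style chat completion request against a local server
# (URL and model name are placeholders for whatever service you run).
resp = requests.post(
    "http://localhost:3000/v1/chat/completions",
    json={
        "model": "codellama_13b",
        "messages": [{"role": "user", "content": "Write a hello world in Rust."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```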
Example:
Start the local API service first. Below is an example of serving a local codellama model on a MacBook laptop for use with aichat:

python3 -m pip install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --n_gpu_layers 1 --model ~/models/codellama_13b.gguf --port 3000
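With the server listening on port 3000, the new settings can point aichat at it. The snippet below is only a sketch: the file layout and the api_key line follow aichat's existing settings, whether api_url expects the base URL or the full completions path depends on how this PR wires it, and the values are placeholders.

```yaml
# Hypothetical settings sketch; only api_url and max_tokens are added by this PR.
api_key: sk-placeholder               # local servers typically ignore the key
api_url: http://localhost:3000/v1     # assumed: base URL of the OpenAI-like service
max_tokens: 1024                      # completion limit forwarded to the backend
```

After that, running aichat as usual should send requests to the local codellama server instead of api.openai.com.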