Releases: acon96/home-llm

v0.2.12

11 Apr 04:38

Fix cover ICL examples, allow setting the number of ICL examples, add min P and typical P sampler options, recommend models during setup, add JSON mode for the Ollama backend, and fix missing default options
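
Min P sampling is easiest to see in isolation: tokens whose probability falls below `min_p` times the most likely token's probability are excluded before sampling. A minimal sketch of the idea (illustrative, not the integration's actual code):

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float) -> np.ndarray:
    """Zero out tokens below min_p * max(probs), then renormalize."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

# With min_p=0.1 the threshold is 0.1 * 0.70 = 0.07, so only the
# first two tokens survive.
probs = np.array([0.70, 0.20, 0.06, 0.03, 0.01])
print(min_p_filter(probs, min_p=0.1))
```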

v0.2.11

07 Apr 00:31
da05917

Add prompt caching, expose llama.cpp runtime settings, build llama-cpp-python wheels using GitHub Actions, and install wheels directly from GitHub
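
For reference, prompt caching in llama-cpp-python can be switched on roughly like this (a sketch assuming the bundled version exposes `LlamaCache`; the model path is a placeholder):

```python
from llama_cpp import Llama, LlamaCache

llm = Llama(model_path="./model.q4_k_m.gguf", n_ctx=2048)
llm.set_cache(LlamaCache())  # keep KV state so shared prompt prefixes are not re-evaluated

# The long, mostly static system prompt is evaluated once; later requests
# that share that prefix only pay for the new conversation turns.
out = llm("<system prompt + conversation>", max_tokens=128)
```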

v0.2.10

24 Mar 15:19

Allow configuring the model parameters during initial setup, attempt to auto-detect defaults for recommended models, and fix lights so they can be set to maximum brightness
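
The auto-detection amounts to a lookup keyed on the chosen model's name; a hedged sketch (the table contents here are invented, and the component's real mapping may differ):

```python
# Illustrative defaults table, not the component's actual values.
RECOMMENDED_DEFAULTS = {
    "home-3b": {"prompt_format": "chatml", "context_length": 2048},
    "mistral": {"prompt_format": "mistral", "context_length": 8192},
}

def detect_defaults(model_name: str) -> dict:
    key = model_name.lower()
    for prefix, defaults in RECOMMENDED_DEFAULTS.items():
        if key.startswith(prefix):
            return defaults
    return {}  # unknown model: leave the options for the user to fill in
```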

v0.2.9

21 Mar 03:28
1ab0d82

Fix HuggingFace download, fix llama.cpp wheel installation, fix light color changing, and add in-context learning support
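
In-context learning here means prepending worked request/response pairs to the system prompt so the model learns the expected output shape from examples. A minimal sketch (the example text and prompt layout are illustrative):

```python
# Each example pairs a user request with the JSON service call the model
# should produce (format illustrative).
ICL_EXAMPLES = [
    ("turn on the kitchen light",
     '{"service": "light.turn_on", "target_device": "light.kitchen"}'),
]

def build_system_prompt(base: str, examples=ICL_EXAMPLES) -> str:
    shots = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in examples)
    return f"{base}\nExamples:\n{shots}"
```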

v0.2.8

06 Mar 02:48
316459b

Fix Ollama model names containing colons
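
Ollama references models as `name:tag` (for example `mistral:7b-instruct`), so naive colon handling breaks on such names. A sketch of the safe parse (hypothetical helper, not the integration's function):

```python
def split_model_name(model: str) -> tuple[str, str]:
    """Split an Ollama model reference into (name, tag); Ollama defaults the tag to 'latest'."""
    name, sep, tag = model.partition(":")
    return name, tag if sep else "latest"

print(split_model_name("mistral:7b-instruct"))  # ('mistral', '7b-instruct')
print(split_model_name("llama2"))               # ('llama2', 'latest')
```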

v0.2.7

05 Mar 03:59
474901a

Publish model v3, multiple Ollama backend improvements, updates for HA 2024.2, and support for voice assistant aliases

v0.2.6

09 Feb 01:51
fd9dc2e

Bug fixes, options for limiting chat history, HTTPS endpoint support, and the Zephyr prompt format
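
For reference, the Zephyr prompt format wraps each turn in role tags terminated by `</s>` (this is the standard Zephyr template; the component's exact template string may differ):

```python
def zephyr_prompt(system: str, user: str) -> str:
    """Render one system + user turn in the Zephyr chat format."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        "<|assistant|>\n"
    )
```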

v0.2.5 (alpha)

29 Jan 00:22
92617f8

Fix the Ollama max tokens parameter, fix GGUF download from Hugging Face, update the included llama-cpp-python to 0.2.32, add parameters to function calling in the dataset and component, and update the model
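
"Parameters to function calling" means the emitted service calls can carry service data beyond the service name and target. An illustrative call (the field values are invented; `service` and `target_device` follow the model's JSON call format):

```python
import json

# A service call with extra parameters such as brightness and color.
call = {
    "service": "light.turn_on",
    "target_device": "light.kitchen",
    "brightness": 128,
    "rgb_color": [255, 180, 80],
}
print(json.dumps(call))
```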

v0.2.4 (alpha)

26 Jan 01:31

Fix API key auth on model load for text-generation-webui, and add support for the Ollama API backend
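
The new backend talks to Ollama's HTTP API; stripped of the integration, a minimal request looks roughly like this (endpoint and fields per the public Ollama API; host and model are placeholders):

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama2",
        "prompt": "Why is the sky blue?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```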

v0.2.3 (alpha)

22 Jan 02:49

Fix API key auth, support the chat completion endpoint, and refactor to make it easier to add more remote backends
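
The chat completion endpoint is the OpenAI-compatible `/v1/chat/completions` route that remote backends such as text-generation-webui expose. A minimal sketch of such a request (host, key, and model name are placeholders):

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:5000/v1/chat/completions",
    data=json.dumps({
        "model": "local-model",
        "messages": [{"role": "user", "content": "Turn on the kitchen light."}],
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # the API key auth this release fixes
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```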