
chore(deps): update container image docker.io/localai/localai to v2.17.0 by renovate #23480

Merged
1 commit merged into master on Jun 18, 2024

Conversation

truecharts-admin
Collaborator

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| docker.io/localai/localai | minor | v2.16.0-aio-cpu -> v2.17.0-aio-cpu |
| docker.io/localai/localai | minor | v2.16.0-cublas-cuda11-ffmpeg-core -> v2.17.0-cublas-cuda11-ffmpeg-core |
| docker.io/localai/localai | minor | v2.16.0-cublas-cuda11-core -> v2.17.0-cublas-cuda11-core |
| docker.io/localai/localai | minor | v2.16.0-cublas-cuda12-ffmpeg-core -> v2.17.0-cublas-cuda12-ffmpeg-core |
| docker.io/localai/localai | minor | v2.16.0-cublas-cuda12-core -> v2.17.0-cublas-cuda12-core |
| docker.io/localai/localai | minor | v2.16.0-ffmpeg-core -> v2.17.0-ffmpeg-core |
| docker.io/localai/localai | minor | v2.16.0 -> v2.17.0 |

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

mudler/LocalAI (docker.io/localai/localai)

v2.17.0

Compare Source

Ahoj! This new release of LocalAI comes with tons of updates and enhancements behind the scenes!

🌟 Highlights TLDR;
  • Automatic identification of GGUF models
  • New WebUI page to talk with an LLM!
  • https://models.localai.io is live! 🚀
  • Better arm64 and Apple silicon support
  • More models added to the gallery!
  • New quickstart installer script
  • Enhancements to mixed grammar support
  • Major improvements to transformers
  • Linux single binary now supports rocm, nvidia, and intel
🤖 Automatic model identification for llama.cpp-based models

Just drop your GGUF files into the model folders, and let LocalAI handle the configurations. YAML files are now reserved for those who love to tinker with advanced setups.
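As a minimal sketch of the drop-in workflow (paths and the model file name here are illustrative, not from the release notes):

```shell
# Sketch: place a GGUF model in the models folder and let LocalAI
# auto-detect it at startup. "my-model.gguf" stands in for any real
# GGUF file you have downloaded.
mkdir -p ./models
touch ./models/my-model.gguf   # in practice: mv/cp your downloaded file here
ls ./models
```

No YAML is required for this path; a config file is only needed for advanced setups.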

🔊 Talk to your LLM!

Introduced a new page that allows direct interaction with the LLM using audio transcription and TTS capabilities. This feature is a lot of fun: you can now talk with any LLM just a couple of clicks away.

🍏 Apple single-binary

Experience enhanced support for the Apple ecosystem with a comprehensive single binary that packs all necessary libraries, ensuring LocalAI runs smoothly on macOS and ARM64 architectures.

ARM64

Expanded our support for ARM64 with new Docker images and single binary options, ensuring better compatibility and performance on ARM-based systems.

Note: currently we support only arm64 core images, for instance: localai/localai:master-ffmpeg-core, localai/localai:latest-ffmpeg-core, localai/localai:v2.17.0-ffmpeg-core.

🐞 Bug Fixes and small enhancements

We’ve ironed out several issues, including image endpoint response types and other minor problems, boosting the stability and reliability of our applications. It is now also possible to enable CSRF when starting LocalAI, thanks to @​dave-gray101.

🌐 Models and Galleries

Enhanced the model gallery with new additions like Mirai Nova and Mahou, and several updates to existing models, ensuring better performance and accuracy.

You can now also browse new models at https://models.localai.io, without running LocalAI!

Installation and Setup

A new install.sh script is now available for quick and hassle-free installations, streamlining the setup process for new users.

```shell
curl https://localai.io/install.sh | sh
```

Installation can be configured with Environment variables, for example:

```shell
curl https://localai.io/install.sh | VAR=value sh
```

List of the Environment Variables:

  • DOCKER_INSTALL: Set to "true" to enable the installation of Docker images.
  • USE_AIO: Set to "true" to use the all-in-one LocalAI Docker image.
  • API_KEY: Specify an API key for accessing LocalAI, if required.
  • CORE_IMAGES: Set to "true" to download core LocalAI images.
  • PORT: Specifies the port on which LocalAI will run (default is 8080).
  • THREADS: Number of processor threads the application should use. Defaults to the number of logical cores minus one.
  • VERSION: Specifies the version of LocalAI to install. Defaults to the latest available version.
  • MODELS_PATH: Directory path where LocalAI models are stored (default is /usr/share/local-ai/models).
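A configured install can combine several of the variables above; as a sketch with illustrative values (the command is printed here rather than executed, since the real one pipes a remote script into sh):

```shell
# Illustrative values for the documented installer variables.
PORT=9090
THREADS=4
MODELS_PATH="$HOME/local-ai/models"

# The actual install would run:
#   curl https://localai.io/install.sh | PORT=$PORT THREADS=$THREADS MODELS_PATH=$MODELS_PATH sh
# We only print the command that would be executed:
echo "curl https://localai.io/install.sh | PORT=$PORT THREADS=$THREADS MODELS_PATH=$MODELS_PATH sh"
```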

We are looking into improving the installer, and as this is a first iteration, any feedback is welcome! Open an issue if something doesn't work for you!

Enhancements to mixed grammar support

Mixed grammar support continues receiving improvements behind the scenes.

🐍 Transformers backend enhancements
  • Temperature = 0 is now correctly handled as greedy search
  • Custom words are handled as stop words
  • Implemented KV cache
  • Phi 3 no longer requires the trust_remote_code: true flag

Shout-out to @​fakezeta for these enhancements!

Install models with the CLI

Now the CLI can install models directly from the gallery. For instance:

```shell
local-ai run <model_name_in_gallery>
```

This command ensures the model is installed in the model folder at startup.
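A guarded sketch of that invocation (the model name below is hypothetical, and the binary is only called if it is actually on PATH):

```shell
# Sketch: install and run a model from the gallery by name.
MODEL="hermes-2-theta-llama-3-8b"   # hypothetical gallery name, adjust to a real one

if command -v local-ai >/dev/null 2>&1; then
  # Installs the model into the model folder (if missing) and starts it.
  local-ai run "$MODEL"
else
  echo "local-ai not installed; would run: local-ai run $MODEL"
fi
```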

🐧 Linux single binary now supports rocm, nvidia, and intel

Single binaries for Linux now contain Intel, AMD GPU, and NVIDIA support. Note that you need to install the dependencies separately in the system to leverage these features. In upcoming releases, this requirement will be handled by the installer script.

📣 Let's Make Some Noise!

A gigantic THANK YOU to everyone who’s contributed—your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise, you’re the real MVPs!

Remember, LocalAI thrives on community support—not big corporate bucks. If you love what we're building, show some love! A shoutout on social (@​LocalAI_OSS and @​mudler_it on twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Thanks a ton, and.. enjoy this release!

What's Changed

Full Changelog: mudler/LocalAI@v2.16.0...v2.17.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@truecharts-admin truecharts-admin added the automerge Categorises a PR or issue that references a new App. label Jun 18, 2024
@truecharts-admin truecharts-admin enabled auto-merge (squash) June 18, 2024 00:38

📝 Linting results:

✔️ Linting [charts/stable/local-ai]: Passed - Took 0 seconds
Total Charts Linted: 1
Total Charts Passed: 1
Total Charts Failed: 0

✅ Linting: Passed - Took 0 seconds

@truecharts-admin truecharts-admin merged commit f4098ac into master Jun 18, 2024
14 checks passed
@truecharts-admin truecharts-admin deleted the renovate/docker.io-localai-localai-2.x branch June 18, 2024 00:42