v0.0.22
What's Changed
- Bump to v0.0.21 by @rhatdan in #410
- Update ggerganov/whisper.cpp digest to 0377596 by @renovate in #409
- Use subpath for OCI Models by @rhatdan in #411
- Consistency changes by @ericcurtin in #408
- Split out kube.py from model.py by @rhatdan in #412
- Fix mounting of Ollama AI Images into containers by @rhatdan in #414
- Start an Asahi version by @ericcurtin in #369
- Generate MODEL.yaml file locally rather than just to stdout by @rhatdan in #416
- Bugfix comma by @ericcurtin in #421
- Fix nocontainer mode by @rhatdan in #419
- Update ggerganov/whisper.cpp digest to 31aea56 by @renovate in #425
- Add --generate quadlet/kube to create quadlet and kube.yaml by @rhatdan in #423
- Allow default port to be specified in ramalama.conf file by @rhatdan in #424
- Made run and serve consistent with model exec path. Fixes issue #413 by @bmahabirbu in #426
- Bump to v0.0.22 by @rhatdan in #415
Full Changelog: v0.0.21...v0.0.22