This repository has been archived by the owner on Jun 24, 2024. It is now read-only.

Support the bit-shuffling changes from llama.cpp #198

Closed
philpax opened this issue May 9, 2023 · 3 comments
Labels: issue:enhancement (New feature or request)
Milestone: 0.2

Comments

philpax commented May 9, 2023

A new file version is being introduced to change how the tensors are stored on-disk: ggerganov/llama.cpp#1305

We will need to support this version, as well as the older versions.
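As a rough illustration of what supporting both versions entails, the loader has to inspect the file header and branch on the container magic and version before deciding how tensor data is laid out. The sketch below is a hypothetical, simplified version check (the magic constants match the GGML-family formats, but the enum and function names are illustrative, not this repository's actual API):

```rust
/// Hypothetical sketch of GGML-family container detection.
/// Magics are stored on disk as little-endian u32 values.
#[derive(Debug, PartialEq)]
enum ContainerType {
    /// Original unversioned format ("ggml").
    Ggml,
    /// Versioned format ("ggmf") with its version number.
    Ggmf(u32),
    /// Mmap-friendly format ("ggjt"); version 2 carries the
    /// post-bit-shuffling tensor layout from llama.cpp#1305.
    Ggjt(u32),
}

fn detect(header: &[u8]) -> Option<ContainerType> {
    // First 4 bytes: container magic.
    let magic = u32::from_le_bytes(header.get(0..4)?.try_into().ok()?);
    match magic {
        0x6767_6d6c => Some(ContainerType::Ggml), // "ggml": no version field
        0x6767_6d66 => {
            // "ggmf": a u32 version follows the magic.
            let v = u32::from_le_bytes(header.get(4..8)?.try_into().ok()?);
            Some(ContainerType::Ggmf(v))
        }
        0x6767_6a74 => {
            // "ggjt": a u32 version follows the magic.
            let v = u32::from_le_bytes(header.get(4..8)?.try_into().ok()?);
            Some(ContainerType::Ggjt(v))
        }
        _ => None,
    }
}

fn main() {
    // A ggjt v2 header: magic followed by version, both little-endian.
    let mut header = 0x6767_6a74_u32.to_le_bytes().to_vec();
    header.extend(2_u32.to_le_bytes());
    assert_eq!(detect(&header), Some(ContainerType::Ggjt(2)));

    // The legacy unversioned magic has no version field at all.
    assert_eq!(detect(&0x6767_6d6c_u32.to_le_bytes()), Some(ContainerType::Ggml));
    println!("ok");
}
```

Once the version is known, the loader would pick the matching dequantization routines, since the bit-shuffling change altered how quantized blocks are packed on disk without changing the surrounding file structure.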

@philpax philpax added the issue:enhancement New feature or request label May 9, 2023
philpax commented May 13, 2023

It's been merged: ggerganov/llama.cpp#1405

There doesn't seem to be a migration path at present, so let's wait a bit: ggerganov/llama.cpp#1408

philpax commented May 16, 2023

This is done in #226, but I'd like to set up a migration path before closing this issue.

@philpax philpax added this to the 0.2 milestone May 18, 2023
philpax commented May 22, 2023

No migration path for now; see #261.

@philpax philpax closed this as completed May 22, 2023