Llama Support #134
Conversation
Commented by mistake, sorry. The review is in the comment below.
You don't need to add this, as `LlamaCpp` is already part of `__all__` in the `llms` `__init__.py`.
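For reference, a minimal check of that claim (assuming the langchain package layout, where `langchain.llms` exports its wrappers via `__all__`):

```python
# LlamaCpp is already exported by langchain's llms package, so any code
# that iterates over llms.__all__ picks it up without extra registration.
from langchain import llms

print("LlamaCpp" in llms.__all__)  # expected: True
```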
By the way, we just pushed a PR that fixes some things in the Chain nodes. Could you test your implementation with those changes, please?
I'll try it out here, but loading llama is proving to be a bit troublesome on my end.
@ogabrielluiz
Thanks :)
Removed the unnecessary custom object initialization.
Also merged the latest changes into the PR, and it works 👍
Awesome. I haven't been able to test. Most likely I got the wrong file, or my laptop can't handle it, haha.
EDIT: It seems to be working now - I had forgotten to change
OLD: ^^ none of the side menu is visible, as you can see. Console log: [+] Running 3/1
@lolxdmainkaisemaanlu your problem is the proxy.
Hi @yoazmenda, I am trying to load the llama.cpp model in langflow (not in Docker); however, it didn't recognize the path. Where is yours located?
Hi @kaleavess, if you still can't manage, can you share more details about your setup and/or the error?
Problem solved!! It was a model issue!! Thanks a lot, @yoazmenda.
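For anyone hitting the same model/path problem, one way to rule langflow out is to load the weights file directly with llama-cpp-python first. This is just a sketch; the path below is a placeholder for wherever your ggml weights actually live:

```python
# Sanity check: if this fails, the model file or path is the problem,
# not the langflow integration. Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="/absolute/path/to/ggml-vicuna-7b-q4.bin")  # placeholder path
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```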
Initial support for llama models as a local LLM.
How to use: download the model here and point the new llama node at the local weights file (see the sketch below).
Tested on a MacBook Pro M1 using Vicuna for the llama model weights.
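A minimal sketch of the wrapper this PR builds on (langchain's `LlamaCpp`); the file name and parameters here are assumptions, so adjust them to whichever weights you downloaded:

```python
# Load a local llama/Vicuna ggml weights file through langchain's
# LlamaCpp wrapper, which is what the new node exposes.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/ggml-vicuna-7b-4bit.bin",  # placeholder path
    temperature=0.7,
)
print(llm("Explain what a llama is in one sentence."))
```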