
Add llava support in ludwig #4005

Closed

skanjila opened this issue May 16, 2024 · 0 comments
Labels
lmm Large Multimodal Model related

Comments

@skanjila
Collaborator

Is your feature request related to a problem? Please describe.
The feature request is to add multimodal capability inside Ludwig using LLaVA-NeXT.

Describe the use case
The basic use case is to add image/video inputs on top of text in the LLM workflow, for both training and inference.

Describe the solution you'd like
The accompanying PR will add the shell code to embed multimodal functionality into Ludwig, leveraging the LLM code that is already there.
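A hypothetical sketch of what such a multimodal config might look like, following Ludwig's declarative config style. Note that an `image` input feature combined with the `llm` model type is not something Ludwig supports today (this issue was closed as not planned), and the field names and base model id below are illustrative assumptions, not real Ludwig schema:

```python
# Hypothetical Ludwig config for a LLaVA-style multimodal LLM.
# ASSUMPTIONS: "image" inputs under model_type "llm" and the
# "llava-hf/llava-1.5-7b-hf" base model are illustrative only;
# Ludwig does not currently accept this configuration.
config = {
    "model_type": "llm",
    "base_model": "llava-hf/llava-1.5-7b-hf",  # assumed HF model id
    "input_features": [
        {"name": "image_path", "type": "image"},  # assumed multimodal input
        {"name": "prompt", "type": "text"},
    ],
    "output_features": [
        {"name": "response", "type": "text"},
    ],
}

# With real support, this dict would be passed to ludwig.api.LudwigModel
# for training and inference, matching Ludwig's existing LLM workflow.
print(config["model_type"])
```

The intent of the sketch is that multimodal support would slot into the same declarative `input_features`/`output_features` interface Ludwig already uses, rather than introducing a separate API.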

Describe alternatives you've considered
N/A

Additional context
None

@skanjila skanjila assigned skanjila and unassigned skanjila May 16, 2024
@alexsherstinsky alexsherstinsky added the lmm (Large Multimodal Model related) label Jul 26, 2024
@mhabedank mhabedank closed this as not planned Oct 21, 2024
Projects
None yet
Development

No branches or pull requests

3 participants