Add configurable timeout for LLM task #224
This makes a lot of sense. Can you share with me the "timeout" logs that you're getting? I want to know where exactly we're timing out to make it configurable. Is it that the background job itself times out, or is it the call to Ollama that times out?
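For illustration, a call-level timeout could be made configurable roughly like this. This is a minimal sketch, not Hoarder's actual code: the environment variable name `INFERENCE_JOB_TIMEOUT_SEC` and the model name are placeholders, and the sketch assumes Node 18+ (built-in `fetch` and `AbortSignal.timeout`).

```ts
// Minimal sketch: wrap the Ollama call in a configurable timeout.
// INFERENCE_JOB_TIMEOUT_SEC is a hypothetical variable name.
const timeoutMs = Number(process.env.INFERENCE_JOB_TIMEOUT_SEC ?? "30") * 1000;

async function inferTags(prompt: string): Promise<string> {
  const res = await fetch(`${process.env.OLLAMA_BASE_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
    // Abort only when the configured limit is exceeded, instead of
    // relying on a hard-coded default deep in the job runner.
    signal: AbortSignal.timeout(timeoutMs),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

On CPU-only hosts the same call can legitimately take several minutes, so a per-deployment setting avoids treating slow-but-correct inference as a failure.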
With GPU enabled, I've got these logs:
Perfect job, no problem.
After the first failure, a new inference job is automatically launched, with 5 minutes between each job event. I think it's a timeout somewhere...
Hi, loving Hoarder, thanks for this app! I am using Ollama on my Synology DS920+. Workers Docker environment variables: OLLAMA_BASE_URL: http://[address]
Just saying "Hi" on Open WebUI takes many minutes to get a reply, so I definitely need a longer time for Hoarder inference to do its job. Here are my logs, I hope they help:
Yeah, I'd like an adjustable timeout too. I don't really need "instant" replies for tags or images... just get there someday 😄
Edit: Actually, after looking a bit more, maybe it's the Ollama side.
I think there's an issue on my mini PC with swapping the models, or maybe in how tag and image info are requested? I'll dig in more...
This is going to be available in the next release. Sorry for how long it took me to get to this :)
I've successfully tried Ollama on GPU to generate keywords. However, when I use it on a CPU, I get no results. I've done a few tests in Python; the computation time on CPU is much longer, but the results are correct. I think there's a timeout somewhere that stops the Ollama task. Is it possible to make it configurable so that the CPU can be used (on a single-user lightweight server) for labelling?
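As a rough way to check whether CPU-only inference is simply exceeding whatever limit the job enforces, one could time a single call directly against Ollama. Same hypothetical setup as the sketch above; this is a probe for diagnosis, not part of Hoarder itself.

```ts
// Hypothetical probe: measure how long one generation takes on this host,
// then compare it against the job's timeout setting.
async function timeOneGeneration(prompt: string): Promise<number> {
  const start = Date.now();
  const res = await fetch(`${process.env.OLLAMA_BASE_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  await res.json(); // wait for the full (non-streamed) response
  const seconds = (Date.now() - start) / 1000;
  console.log(`Inference took ${seconds.toFixed(1)}s`);
  return seconds;
}
```

If the measured time on CPU exceeds the job's limit, that would confirm the task is being killed rather than failing to produce results.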