[BUG]: Error with Ollama AI provider "unmarshal: invalid character 'p' after top-level value" #1229

ppatierno opened this issue Aug 16, 2024 · 5 comments

@ppatierno

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've included steps to reproduce the behavior

Affected Components

  • K8sGPT (CLI)
  • K8sGPT Operator

K8sGPT Version

v0.3.40

Kubernetes Version

v1.26.3

Host OS and its Version

Fedora 40

Steps to reproduce

I configured an Ollama backend by following the documentation here https://docs.k8sgpt.ai/reference/providers/backend/ and running:

k8sgpt auth add --backend ollama --model llama3.1 --baseurl http://localhost:11434/v1

(I tried with llama2 as well but the model should not matter here)

Then I just followed the Getting Started guide by creating the broken pod, but analyzing with the ollama backend returns the following error:

k8sgpt analyze --explain --backend ollama
   0% |                                                                                                                                                                                                              | (0/1, 0 it/hr) [0s:0s]
Error: failed while calling AI provider ollama: unmarshal: invalid character 'p' after top-level value
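
For completeness, the broken pod is essentially the following manifest (reconstructed from the error output and the Getting Started guide; everything except the image tag is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: broken-pod
  namespace: default
spec:
  containers:
    - name: broken-pod
      image: nginx:1.a.b.c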

I tried the same with Azure OpenAI and everything works as expected:

k8sgpt analyze --explain --backend azureopenai
 100% |████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (1/1, 35 it/min)        
AI Provider: azureopenai

0: Pod default/broken-pod()
- Error: Back-off pulling image "nginx:1.a.b.c"
Error: The error indicates Kubernetes is unable to pull the image "nginx:1.a.b.c" because the specified tag is invalid.

Solution:
1. Check the image tag for typos.
2. Use a valid version tag, e.g., "nginx:1.21.6".
3. Update your deployment configuration with the correct image tag.
4. Redeploy the application.

Expected behaviour

Using ollama as the backend should not return an unmarshal error, just an explanation of the broken pod.

Actual behaviour

No response

Additional Information

No response

@matthisholleville
Contributor

Hi! Have you tried calling the backend directly at http://localhost:11434/v1 without using k8sgpt? The error might indicate that your backend isn't responding correctly.
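
For reference, a quick probe (assuming a default local ollama install) is:

curl http://localhost:11434/api/tags

which should return a JSON list of the locally available models if the server is responding.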

@ppatierno
Author

ppatierno commented Aug 20, 2024

@matthisholleville I have actually found the issue. The baseurl has to be http://localhost:11434 (without the /v1), and k8sgpt works fine with it this way. This also matches how ollama is called directly via cURL: you can do something like the following (from the ollama docs), and as you can see there is no /v1 in the URL.

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt":"Why is the sky blue?"
}'
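
My guess (not verified against k8sgpt's code) is that with /v1 in the baseurl, k8sgpt ends up requesting a path ollama doesn't serve, for example:

curl http://localhost:11434/v1/api/generate -d '{
  "model": "llama3.1",
  "prompt":"Why is the sky blue?"
}'

which answers with a plain-text 404 page not found body. A JSON decoder reads 404 as a complete top-level value and then trips over the 'p' of "page", which would explain the exact error message.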

FYI I am using ollama 0.3.0.
Maybe I can contribute by fixing the k8sgpt docs, if you want?

@matthisholleville
Contributor

Great! We could indeed validate the URL during the configuration or before calling the AI.
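
In the meantime, a user-side pre-flight check along these lines would catch it (illustrative sketch; BASEURL stands for whatever was passed to --baseurl):

BASEURL=http://localhost:11434
curl -fsS "$BASEURL/api/tags" >/dev/null \
  && echo "ollama reachable at $BASEURL" \
  || echo "ollama not reachable at $BASEURL (check for a stray /v1 suffix)"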

@ppatierno
Author

@matthisholleville FYI I opened a PR on the docs here k8sgpt-ai/docs#119

@alansenairj

k8sgpt auth remove --backends ollama
k8sgpt auth add --backend ollama --model llama3.1:latest --baseurl http://localhost:11434
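
To double-check afterwards, the existing auth list subcommand:

k8sgpt auth list

should show ollama among the configured backends.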
