
Jaeger report some error of unsupported value: +Inf #2050

Open
gyliu513 opened this issue Nov 9, 2023 · 0 comments
Labels
bug Something isn't working

gyliu513 commented Nov 9, 2023

Describe your environment
Describe any aspect of your environment relevant to the problem, including your Python version, platform, version numbers of installed dependencies, information about your cloud hosting provider, etc. If you're reporting a problem with a specific version of a library in this repo, please check whether the problem has been fixed on main.

Steps to reproduce
Describe exactly how to reproduce the error. Include a code sample if applicable.

• Create chat.py with the following contents:

import os
import openai
from dotenv import load_dotenv
from opentelemetry import trace

# If you don't want to use full autoinstrumentation, just add this:
#
# from opentelemetry.instrumentation.openai import OpenAIInstrumentor
# OpenAIInstrumentor().instrument()

tracer = trace.get_tracer("chat.demo")

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

with tracer.start_as_current_span("example") as span:
    span.set_attribute("attr1", 12)

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a very accurate calculator. You output only the result of the calculation."},
            {"role": "user", "content": "1 + 1 = "}],
        temperature=0,
    )
    print(completion)

• Run chat.py as follows:
poetry run opentelemetry-instrument \
  --traces_exporter console,otlp \
  --metrics_exporter console --service_name llm-playground2 \
  --logs_exporter none --exporter_otlp_endpoint 0.0.0.0:4317 \
  python chat.py
{
  "id": "chatcmpl-8IoC4PJcloLvoIgdNNe8SoC9xBKFj",
  "object": "chat.completion",
  "created": 1699493540,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "2"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 33,
    "completion_tokens": 1,
    "total_tokens": 34
  }
}
{
    "resource_metrics": [
        {
            "resource": {
                "attributes": {
                    "telemetry.sdk.language": "python",
                    "telemetry.sdk.name": "opentelemetry",
                    "telemetry.sdk.version": "1.18.0",
                    "service.name": "llm-playground2",
                    "telemetry.auto.version": "0.39b0"
                },
                "schema_url": ""
            },
            "scope_metrics": [
                {
                    "scope": {
                        "name": "opentelemetry.instrumentation.requests",
                        "version": "0.39b0",
                        "schema_url": ""
                    },
                    "metrics": [
                        {
                            "name": "http.client.duration",
                            "description": "measures the duration of the outbound HTTP request",
                            "unit": "ms",
                            "data": {
                                "data_points": [
                                    {
                                        "attributes": {
                                            "http.method": "POST",
                                            "http.scheme": "https",
                                            "http.host": "api.openai.com",
                                            "net.peer.name": "api.openai.com",
                                            "http.status_code": 200,
                                            "http.flavor": "1.1"
                                        },
                                        "start_time_unix_nano": 1699493547266934000,
                                        "time_unix_nano": 1699493547270321000,
                                        "count": 1,
                                        "sum": 6813,
                                        "bucket_counts": [
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            0,
                                            1,
                                            0,
                                            0
                                        ],
                                        "explicit_bounds": [
                                            0.0,
                                            5.0,
                                            10.0,
                                            25.0,
                                            50.0,
                                            75.0,
                                            100.0,
                                            250.0,
                                            500.0,
                                            750.0,
                                            1000.0,
                                            2500.0,
                                            5000.0,
                                            7500.0,
                                            10000.0
                                        ],
                                        "min": 6813,
                                        "max": 6813
                                    }
                                ],
                                "aggregation_temporality": 2
                            }
                        }
                    ],
                    "schema_url": ""
                }
            ],
            "schema_url": ""
        }
    ]
}
{
    "name": "HTTP POST",
    "context": {
        "trace_id": "0x875544a1845731a991ad5ba2639fbe1e",
        "span_id": "0x78e705a0a84c4d30",
        "trace_state": "[]"
    },
    "kind": "SpanKind.CLIENT",
    "parent_id": "0x4becf106c8e3c6a4",
    "start_time": "2023-11-09T01:32:20.453527Z",
    "end_time": "2023-11-09T01:32:27.267121Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "http.method": "POST",
        "http.url": "https://api.openai.com/v1/chat/completions",
        "http.status_code": 200
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.18.0",
            "service.name": "llm-playground2",
            "telemetry.auto.version": "0.39b0"
        },
        "schema_url": ""
    }
}
{
    "name": "openai.chat",
    "context": {
        "trace_id": "0x875544a1845731a991ad5ba2639fbe1e",
        "span_id": "0x4becf106c8e3c6a4",
        "trace_state": "[]"
    },
    "kind": "SpanKind.CLIENT",
    "parent_id": "0x76dc2e021666e5d1",
    "start_time": "2023-11-09T01:32:20.422065Z",
    "end_time": "2023-11-09T01:32:27.268682Z",
    "status": {
        "status_code": "OK"
    },
    "attributes": {
        "openai.api_base": "https://api.openai.com/v1",
        "openai.api_type": "open_ai",
        "openai.api_version": "None",
        "openai.chat.messages.0.role": "system",
        "openai.chat.messages.0.content": "You are a very accurate calculator. You output only the result of the calculation.",
        "openai.chat.messages.1.role": "user",
        "openai.chat.messages.1.content": "1 + 1 = ",
        "openai.chat.temperature": 0,
        "openai.chat.top_p": 1.0,
        "openai.chat.n": 1,
        "openai.chat.stream": false,
        "openai.chat.stop": "",
        "openai.chat.max_tokens": Infinity,
        "openai.chat.presence_penalty": 0.0,
        "openai.chat.frequency_penalty": 0.0,
        "openai.chat.logit_bias": "",
        "openai.chat.user": "",
        "openai.chat.model": "gpt-3.5-turbo",
        "openai.chat.response.choices.0.message.role": "assistant",
        "openai.chat.response.choices.0.message.content": "2",
        "openai.chat.response.choices.0.index": 0,
        "openai.chat.response.choices.0.finish_reason": "stop",
        "openai.chat.response.usage.prompt_tokens": 33,
        "openai.chat.response.usage.completion_tokens": 1,
        "openai.chat.response.usage.total_tokens": 34,
        "openai.chat.response.id": "chatcmpl-8IoC4PJcloLvoIgdNNe8SoC9xBKFj",
        "openai.chat.response.object": "chat.completion",
        "openai.chat.response.created": 1699493540,
        "openai.chat.response.model": "gpt-3.5-turbo-0613"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.18.0",
            "service.name": "llm-playground2",
            "telemetry.auto.version": "0.39b0"
        },
        "schema_url": ""
    }
}
{
    "name": "example",
    "context": {
        "trace_id": "0x875544a1845731a991ad5ba2639fbe1e",
        "span_id": "0x76dc2e021666e5d1",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": null,
    "start_time": "2023-11-09T01:32:20.421971Z",
    "end_time": "2023-11-09T01:32:27.268883Z",
    "status": {
        "status_code": "UNSET"
    },
    "attributes": {
        "attr1": 12
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.18.0",
            "service.name": "llm-playground2",
            "telemetry.auto.version": "0.39b0"
        },
        "schema_url": ""
    }
}
• If I log on to Jaeger, I see errors like the following:
(Screenshot 2023-11-08 at 8.38.39 PM: Jaeger UI reporting the unsupported value: +Inf error)
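The `"openai.chat.max_tokens": Infinity` attribute in the `openai.chat` span above looks like the culprit: `+Inf` is not a representable JSON number, so strict encoders (and, presumably, Jaeger's ingestion path) reject it. A minimal check in plain Python, independent of any OpenTelemetry code:

```python
import json
import math

# What the instrumentation appears to record when max_tokens is left unset:
v = float("inf")

print(math.isfinite(v))  # → False

# Strict JSON (which disallows NaN/Infinity per RFC 8259) cannot encode it:
try:
    json.dumps(v, allow_nan=False)
except ValueError as err:
    print(err)  # e.g. "Out of range float values are not JSON compliant"
```

Python's default `json.dumps` only succeeds here because it emits the non-standard literal `Infinity`, which standards-compliant consumers then refuse to parse.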

What is the expected behavior?
What did you expect to see?

What is the actual behavior?
What did you see instead?

Additional context
Add any other context about the problem here.
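As a possible workaround until the instrumentation handles this, attributes could be sanitized before they reach the exporter. This is only a sketch; `sanitize_attributes` is a hypothetical helper, not part of any library:

```python
import math


def sanitize_attributes(attrs: dict) -> dict:
    """Replace non-finite floats (inf, -inf, nan) with their string form
    so OTLP/Jaeger backends do not reject the span."""
    out = {}
    for key, value in attrs.items():
        if isinstance(value, float) and not math.isfinite(value):
            out[key] = str(value)  # e.g. "inf" instead of +Inf
        else:
            out[key] = value
    return out


# Usage sketch:
# span.set_attributes(sanitize_attributes({"openai.chat.max_tokens": float("inf")}))
```

A proper fix would presumably live in the instrumentation itself, skipping or stringifying `max_tokens` when the OpenAI client reports it as infinite.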

@gyliu513 gyliu513 added the bug Something isn't working label Nov 9, 2023