Commit
Fix yamllint errors
drewby committed Jan 29, 2024
1 parent 0ef1c1b commit fd57c6c
Showing 3 changed files with 33 additions and 19 deletions.
6 changes: 3 additions & 3 deletions model/metrics/llm-metrics.yaml
@@ -102,8 +102,8 @@ groups:
    stability: experimental
    attributes:
      - ref: llm.response.model
        requirement_level: required
      - ref: error.type
        requirement_level:
          conditionally_required: "if the operation ended in error"
      - ref: server.address
        requirement_level: required
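For context, the `conditionally_required` level used above means `error.type` is mandatory only when the operation actually fails, while `llm.response.model` and `server.address` are always required. A minimal stdlib-only sketch of that rule (the helper name and the `failed` flag are hypothetical, for illustration only):

```python
def validate_metric_attributes(attrs: dict, failed: bool) -> list:
    """Return a list of violations for the metric attribute set above.

    Hypothetical helper: `llm.response.model` and `server.address` are
    always required; `error.type` is required only if the operation
    ended in error, matching the conditionally_required level.
    """
    problems = []
    for required in ("llm.response.model", "server.address"):
        if required not in attrs:
            problems.append("missing required attribute: " + required)
    if failed and "error.type" not in attrs:
        problems.append("missing error.type (required when the operation ended in error)")
    return problems
```

For example, an attribute set that omits `error.type` passes validation for a successful operation but fails for an errored one.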
6 changes: 3 additions & 3 deletions model/registry/llm.yaml
@@ -56,7 +56,7 @@ groups:
        examples: ['stop']
        tag: llm-generic-response
      - id: usage.token_type
        type:
          members:
            - id: prompt
              value: 'prompt'
@@ -183,7 +183,7 @@ groups:
        tag: tech-specific-openai-events
      - id: openai.function.arguments
        type: string
        brief: If present, the arguments with which to call the function for a given OpenAI response, denoted by `<index>`. The value of `<index>` starts at 0, where 0 is the first message.
        examples: '{"type": "object", "properties": {"some":"data"}}'
        tag: tech-specific-openai-events
      - id: openai.choice.type
@@ -195,4 +195,4 @@
              value: 'message'
        brief: The type of the choice, either `delta` or `message`.
        examples: 'message'
        tag: tech-specific-openai-events
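The `openai.function.arguments` brief above describes a per-response attribute whose name is templated on `<index>`, starting at 0 for the first message. A small sketch of how an instrumentation might expand such a templated name (the helper and the example template string are assumptions, not names defined by this commit):

```python
def indexed_attribute(template: str, index: int) -> str:
    """Expand an `<index>`-templated attribute name.

    Hypothetical helper: per the brief above, `<index>` starts at 0,
    where 0 is the first message.
    """
    if index < 0:
        raise ValueError("<index> starts at 0")
    return template.replace("<index>", str(index))
```

Usage: expanding a (hypothetical) template for the third choice would yield an attribute name containing `2`.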
40 changes: 27 additions & 13 deletions model/trace/llm.yaml
@@ -11,7 +11,9 @@ groups:
      - ref: llm.request.model
        requirement_level: required
        note: >
          The name of the LLM a request is being made to. If the LLM is supplied by a vendor,
          then the value must be the exact name of the model requested. If the LLM is a fine-tuned
          custom model, the value should have a more specific name than the base model that's been fine-tuned.
      - ref: llm.request.max_tokens
        requirement_level: recommended
      - ref: llm.request.temperature
@@ -27,7 +29,9 @@ groups:
      - ref: llm.response.model
        requirement_level: required
        note: >
          The name of the LLM a response is being made to. If the LLM is supplied by a vendor,
          then the value must be the exact name of the model actually used. If the LLM is a
          fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
      - ref: llm.response.finish_reason
        requirement_level: recommended
      - ref: llm.usage.prompt_tokens
@@ -44,13 +48,16 @@
    name: llm.content.prompt
    type: event
    brief: >
      In the lifetime of an LLM span, events for prompts sent and completions received
      may be created, depending on the configuration of the instrumentation.
    attributes:
      - ref: llm.prompt
        requirement_level: recommended
        note: >
          The full prompt string sent to an LLM in a request. If the LLM accepts a more
          complex input like a JSON object, this field is blank, and the response is
          instead captured in an event determined by the specific LLM technology semantic convention.
  - id: llm.content.completion
    name: llm.content.completion
    type: event
@@ -60,7 +67,11 @@ groups:
      - ref: llm.completion
        requirement_level: recommended
        note: >
          The full response string from an LLM. If the LLM responds with a more
          complex output like a JSON object made up of several pieces (such as OpenAI's message choices),
          this field is the content of the response. If the LLM produces multiple responses, then this
          field is left blank, and each response is instead captured in an event determined by the specific
          LLM technology semantic convention.
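The note above encodes a simple rule: a single response fills `llm.completion`, while multiple responses leave it blank (each then captured in its own tech-specific event). A sketch of that decision, with a hypothetical helper name:

```python
def completion_attribute(responses: list) -> str:
    """Hypothetical sketch of the rule in the note above: with exactly
    one response, `llm.completion` carries its content; with multiple
    responses the field is left blank, and each response would instead
    be captured in a tech-specific event."""
    return responses[0] if len(responses) == 1 else ""
```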
  - id: llm.openai
    type: span
@@ -74,7 +85,10 @@ groups:
      - ref: llm.request.model
        requirement_level: required
        note: >
          The name of the LLM a request is being made to. If the LLM is supplied by a
          vendor, then the value must be the exact name of the model requested. If the
          LLM is a fine-tuned custom model, the value should have a more specific name
          than the base model that's been fine-tuned.
        tag: tech-specific-openai-request
      - ref: llm.request.max_tokens
        tag: tech-specific-openai-request
@@ -126,7 +140,7 @@ groups:
      - ref: llm.openai.content
        requirement_level: required
      - ref: llm.openai.tool_call.id
        requirement_level:
          conditionally_required: >
            Required if the prompt role is `tool`.
@@ -159,18 +173,18 @@ groups:
      - ref: llm.openai.content
        requirement_level: required
      - ref: llm.openai.tool_call.id
        requirement_level:
          conditionally_required: >
            Required if the choice is the result of a tool call.
      - ref: llm.openai.tool.type
        requirement_level:
          conditionally_required: >
            Required if the choice is the result of a tool call.
      - ref: llm.openai.function.name
        requirement_level:
          conditionally_required: >
            Required if the choice is the result of a tool call of type `function`.
      - ref: llm.openai.function.arguments
        requirement_level:
          conditionally_required: >
            Required if the choice is the result of a tool call of type `function`.
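The choice attributes above form a two-level conditional requirement: tool-call choices must carry `tool_call.id` and `tool.type`, and `function`-typed tool calls must additionally carry `function.name` and `function.arguments`. A stdlib-only sketch of that check on a plain dict (the helper and the `from_tool_call` flag are assumptions for illustration; they are not attributes defined by this commit):

```python
def validate_choice(choice: dict) -> list:
    """Hypothetical validator for the conditional requirements above.

    `choice` is a plain dict of attribute names; `from_tool_call` is an
    assumed flag marking a choice that resulted from a tool call.
    """
    problems = []
    if "llm.openai.content" not in choice:
        problems.append("llm.openai.content is required")
    if choice.get("from_tool_call"):
        # Required for any tool-call choice.
        for key in ("llm.openai.tool_call.id", "llm.openai.tool.type"):
            if key not in choice:
                problems.append(key + " is required for tool-call choices")
        # Additionally required when the tool call is of type `function`.
        if choice.get("llm.openai.tool.type") == "function":
            for key in ("llm.openai.function.name", "llm.openai.function.arguments"):
                if key not in choice:
                    problems.append(key + " is required for function tool calls")
    return problems
```

A plain text choice needs only `llm.openai.content`; a function tool-call choice must supply all five attributes to pass.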
