x-pack/metricbeat/module/openai: Add new module #41516
Conversation
This pull request does not have a backport label. To fixup this pull request, you need to add the backport labels for the needed branches.
I'm getting hit by this error: #41174 (comment), so the CI is failing. Everything else is okay.
To continue with my testing and to avoid that error, see this: https://www.elastic.co/guide/en/beats/metricbeat/current/configuration-template.html. This has unblocked me for now, but we definitely need a fix for this.
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
I've explained the complicated collection mechanism in the PR description itself. The rest is self-explanatory from the code. Please let me know if anything needs further clarification.
}
],
"ft_data": [],
"dalle_api_data": [],
It'd be good to have data for each data set
Yeah, I tried generating ft (fine-tuning) data, but it doesn't seem to work. As OpenAI provides this API undocumented, I couldn't find a single source with any samples. I'm not even sure they populate this field in the response of this particular endpoint. For dalle_api_data, I'll add it.
# - "k2: v2" | ||
## Rate Limiting Configuration | ||
# rate_limit: | ||
# limit: 60 # requests per second |
Is this to be changed to 12 as well?
Why have we changed the limit from 60 to 12?
I thought 60 was the agreed-upon limit?
I was testing everything from scratch today, quite thoroughly, and noticed requests firing at a slower rate. My understanding of `limit` and `burst` was confused, and I had put incorrect values there, which I have now corrected. This part of the doc needs to be updated with `make update`; I will run that. All other doc files are updated.
The rate limiter works as follows:
- `limit: 12` means one request every 12 seconds (60 seconds / 5 requests = 12 seconds per request)
- `burst: 1` means only 1 request can be made in a burst

This ensures we never exceed 5 requests per minute. So nothing has really changed; it just wasn't configured properly by default. The rate limit is still 5 req/min as per OpenAI.
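For reference, here is a minimal sketch of that behavior using Go's golang.org/x/time/rate package, assuming that's roughly what the module uses under the hood (this is illustrative, not the module's actual code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// One token every 12 seconds (60s / 5 requests), burst of 1:
	// at most 5 requests per minute, matching OpenAI's limit.
	limiter := rate.NewLimiter(rate.Every(12*time.Second), 1)

	for i := 1; i <= 3; i++ {
		// Wait blocks until the next token is available.
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println("wait failed:", err)
			return
		}
		fmt.Printf("request %d fired at %s\n", i, time.Now().Format("15:04:05"))
	}
}
```

With these values, the second and third requests each block for ~12 seconds, which matches the slower firing rate observed during testing.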
I hope I've addressed all the major review comments. Now I'll begin thorough testing. Thanks to all the reviewers!
So far, testing looks good. I ran it for a few hours today and collected all of my (limited) OpenAI API usage over a 4-month period. The data has matched so far. I also found a case where OpenAI's own usage dashboard doesn't show a specific data point even though it is present in the JSON returned by the usage API; our dashboard shows it perfectly, which is a good thing. Here's a basic sample dashboard with panels similar to those of OpenAI's usage dashboard.
I think we are ready to merge now unless there are more comments.
Force-pushed from e2cdc77 to 549f26e
@ishleenk17 / @devamanv Let me know if you have any comments. Also @muthu-mps, do you have any comments w.r.t. Azure OpenAI vs. this module?
Co-authored-by: Brandon Morelli <bmorelli25@gmail.com>
run docs-build
Changes look good. Once CI passes, we are GTG!
Updated the CODEOWNERS too. cc: @lalit-satapathy, can you please approve as well?
LGTM on the CODEOWNERS changes.
(cherry picked from commit 93b018a)
Proposed commit message
Implement a new module for OpenAI usage collection. The module operates on https://api.openai.com/v1/usage by default (also configurable for proxy URLs, etc.) and collects the limited set of usage metrics emitted by this undocumented endpoint.

Example of how the usage endpoint emits metrics, given timestamps `t0`, `t1`, `t2`, ... `tn` in ascending order:

- `t0` (first collection)
- `t1` (after new API usage)
- `t2` (continuous collection)

and so on.
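For illustration, a minimal sketch of fetching one day's usage from the endpoint. Since the API is undocumented, the `date` query parameter here reflects its commonly observed shape and may change without notice:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Undocumented endpoint; the date parameter selects one UTC day
	// (assumed format, based on observed behavior).
	url := "https://api.openai.com/v1/usage?date=2024-11-01"

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// JSON with "data", "ft_data", "dalle_api_data", and similar fields.
	fmt.Println(string(body))
}
```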
Example response:
As soon as the API is used, usage data is generated a short while later. So, if the module collects in real time, multiple times a day, it would collect duplicates, which is bad for both storage and analytics of the usage data.
It's better to collect at `time.Now() (in UTC) - 24h` so that we get the full usage collection for the past day (in UTC) and avoid duplication. That's why I have introduced a config option `realtime` and set it to `false`: the collection is delayed by 24h, so we now get daily data. `realtime: true` works like any other normal collection where metrics are fetched at set intervals. Our recommendation is to keep `realtime: false`.

As this is a Metricbeat module, we do not have an existing package that gives us support to store a cursor. So, in order to avoid pulling already-pulled data, timestamps are stored per API key. The logic for how the state is stored is commented in the code. We use new custom code to store the cursor and begin from the next available date. A sketch of this windowing idea follows below.
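Here is a minimal sketch of the delayed-window idea described above. The helper and the in-memory state map are hypothetical stand-ins for the module's actual persisted per-API-key state:

```go
package main

import (
	"fmt"
	"time"
)

// lastProcessed is a hypothetical in-memory stand-in for the
// per-API-key cursor the module persists on disk.
var lastProcessed = map[string]time.Time{}

// datesToCollect returns the UTC days still to fetch for an API key,
// ending at "yesterday" so that each collected day is complete.
func datesToCollect(apiKey string) []time.Time {
	end := time.Now().UTC().Truncate(24 * time.Hour).Add(-24 * time.Hour)

	start := end // first run: fetch only the most recent full day
	if cursor, ok := lastProcessed[apiKey]; ok {
		start = cursor.Add(24 * time.Hour) // resume from the next unfetched day
	}

	var days []time.Time
	for d := start; !d.After(end); d = d.Add(24 * time.Hour) {
		days = append(days, d)
	}
	return days
}

func main() {
	for _, d := range datesToCollect("sk-example") {
		fmt.Println("collect usage for", d.Format("2006-01-02"))
	}
}
```

Because the window always ends at the previous UTC day, re-running the collector never re-fetches a day that has already been stored, which is what avoids the duplicates discussed above.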
Checklist
- I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.
.Author's Checklist
How to test this PR locally