[Infra UI] Allow custom metrics calculation based on field values on node detail page charts #42687
Comments
Pinging @elastic/infra-logs-ui
It is possible that
@kaiyan-sheng Another idea, could this maybe be solved in
@skh What if we added an attribute to the metadata endpoint that indicated whether or not they have detailed metrics? If we had that, then the two approaches I could see are:
Yes, I tried yesterday to create an EC2 instance but can't make
@skh You mean calculating rate for all
Yeah, this sounds like the best solution since, as far as I can tell, we don't care about the "raw values" ever again. Then "whether the user has detailed monitoring" becomes completely irrelevant to all consumers of the data after it's been stored, if I understand correctly.
Yes. I would also favour this solution.
Another option would be (for now) to use a

As we don't have variable bucket intervals yet (that would come with elastic/beats#12616), this would be correct in the following two cases:
In the following case it would be incorrect:
We could point out in documentation that the reporting period for the

@simianhacker @kaiyan-sheng what do you think?
After further discussion, this is now implemented as:

Leave the bucket size hard-coded at

First, use a sum aggregation in every bucket. That way, if the user has detailed monitoring and a reporting period of 60s, we sum up even in the smallest possible buckets. If the user zooms out and the TSVB endpoint therefore uses larger buckets (note that the value we use is

Next, calculate the cumulative sum in each bucket, i.e. each bucket will have the sum of all buckets before it, and its own value added on top. This turns the metric into a monotonically increasing counter.

Finally, take the derivative with

Note that this will still be wrong if the user has detailed monitoring enabled in AWS but configured a
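For reference, a minimal sketch of that chain expressed as a TSVB series definition, assuming the plain TSVB series/metric shape; the ids are illustrative and the model-creator boilerplate the Infra plugin wraps around its TSVB models is omitted:

```typescript
// Sketch only: ids are made up and the Infra plugin wrapper is omitted.
// The chain is the one described above: sum per bucket -> cumulative sum
// -> per-second derivative.
const awsNetworkOutSeries = {
  id: 'tx',
  split_mode: 'everything',
  metrics: [
    // 1. Sum the raw byte counts reported within each bucket, so larger
    //    (zoomed-out) buckets still account for every reported value.
    { id: 'sum-net-out', type: 'sum', field: 'aws.ec2.network.out.bytes' },
    // 2. The cumulative sum turns the per-bucket sums into a monotonically
    //    increasing counter.
    { id: 'csum-net-out', type: 'cumulative_sum', field: 'sum-net-out' },
    // 3. The derivative of that counter, normalized to one second, yields a
    //    bytes-per-second rate regardless of the reporting period.
    { id: 'deriv-net-out', type: 'derivative', field: 'csum-net-out', unit: '1s' },
  ],
};
```

Because the derivative is normalized to one second, the summation absorbs the difference between 60-second and 300-second reporting periods, subject to the caveat noted above.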
Hi @skh, the PR for calculating rate metrics in
@skh @simianhacker With the changes that landed in elastic/beats#13203, can we remove our workaround and just use the raw numbers? Will those beats changes have any effect on our workaround?
This issue came up during the implementation of #39282
AWS CloudWatch metrics, and by extension metricbeat's `aws.ec2` metricset, contain fields that have to be interpreted differently depending on whether the user has enabled (and paid for) "detailed" CloudWatch monitoring: with basic monitoring CloudWatch reports these metrics over 5-minute periods, with detailed monitoring over 1-minute periods (see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html).
The affected metrics are `NetworkIn`, `NetworkOut`, `DiskReadBytes`, and `DiskWriteBytes`. They are mapped to the fields `aws.ec2.network.in.bytes`, `aws.ec2.network.out.bytes`, `aws.ec2.diskio.read.bytes`, and `aws.ec2.diskio.write.bytes`, respectively. Whether "detailed" monitoring is enabled is captured in the field `aws.ec2.instance.monitoring.state`.
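For illustration, the relevant parts of an `aws.ec2` metricset event might look roughly like the following sketch; the field names come from the issue text, but the values (including how the monitoring state is encoded) are assumptions, not taken from a real document:

```typescript
// Illustrative sketch of an aws.ec2 metricset event; values are made up.
const sampleEvent = {
  aws: {
    ec2: {
      network: { in: { bytes: 1_572_864 }, out: { bytes: 524_288 } },
      diskio: { read: { bytes: 65_536 }, write: { bytes: 131_072 } },
      instance: { monitoring: { state: 'enabled' } }, // assumed value
    },
  },
};
```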
Currently there is no easy way to perform custom calculations on a metric that is fetched through TSVB models.
I did experiment with this in the model definition (in `kibana/x-pack/legacy/plugins/infra/server/lib/adapters/metrics/models/aws/aws_network_bytes.ts`), but it does not work, presumably because string comparison is not possible in the `script` field of a TSVB model.

This feature request is about adding this functionality to the node detail page and the corresponding metrics GraphQL resolver, and performing the correct calculations for AWS metrics.
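To make the script limitation mentioned above concrete: a TSVB `calculation` metric (essentially a bucket_script aggregation) can only bind numeric values from other metrics as variables, so a branch on the monitoring state cannot actually be expressed in its script. The sketch below is purely hypothetical; the ids, fields, and script are illustrative and not the snippet referenced above:

```typescript
// Hypothetical illustration, not the original experiment. TSVB 'calculation'
// metrics only accept numeric metric values as variables, so there is no way
// to bind the string field aws.ec2.instance.monitoring.state for comparison.
const adjustedNetworkIn = {
  id: 'net-in-adjusted',
  type: 'calculation',
  variables: [{ id: 'var-bytes', name: 'bytes', field: 'sum-net-in' }],
  // Conceptually this is what would be needed, but params.state cannot be
  // populated from a keyword field:
  script: "params.state == 'detailed' ? params.bytes / 60 : params.bytes / 300",
};
```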
As long as this isn't implemented, we shouldn't show the affected metrics.