
[CONTINT-53] Make ECS e2e tests target the fake intake #20286

Merged: 1 commit merged into main on Oct 20, 2023

Conversation

@L3n41c L3n41c (Member) commented Oct 19, 2023

What does this PR do?

Move the testMetric function away from the Kubernetes-specific k8sSuite object so that it can be used for non-Kubernetes tests.

Motivation

Be able to use the same function to assert metrics for ECS and K8S scenarios.

Additional Notes

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@L3n41c L3n41c added the team/containers, dev/testing, changelog/no-changelog, and qa/skip-qa (deprecated) labels Oct 19, 2023
@L3n41c L3n41c added this to the 7.50.0 milestone Oct 19, 2023
@L3n41c L3n41c requested a review from a team as a code owner October 19, 2023 19:22
@L3n41c L3n41c requested a review from a team as a code owner October 19, 2023 19:56
pr-commenter bot commented Oct 19, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 7d3441f7-41f2-49e6-81cf-c81df04c84d6
Baseline: 90004cf
Comparison: 2f881d3
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configuration for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether datadog-agent performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
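
To make the rule concrete, here is a rough Go sketch of the two criteria described above (illustrative only, not the detector's actual implementation; names are made up):

```go
package detector // hypothetical package name

import "math"

// worthInvestigating applies the two criteria from the explanation above:
// the estimated |Δ mean %| is at least 5.00%, and the 90.00% confidence
// interval for Δ mean % does not contain zero.
func worthInvestigating(deltaMeanPct, ciLow, ciHigh float64) bool {
	const minMagnitudePct = 5.0
	largeEnough := math.Abs(deltaMeanPct) >= minMagnitudePct
	zeroExcluded := ciLow > 0 || ciHigh < 0
	return largeEnough && zeroExcluded
}
```

For example, tcp_syslog_to_blackhole in the table below has a confidence interval that excludes zero, but its -0.88% change is well under the 5.00% threshold, so it is not flagged.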

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| otel_to_otel_logs | ingress throughput | +0.92 | [-0.68, +2.53] | 65.78% |
| file_tree | egress throughput | +0.16 | [-2.11, +2.43] | 9.29% |
| idle | egress throughput | +0.04 | [-2.91, +2.98] | 1.60% |
| dogstatsd_string_interner_8MiB_50k | ingress throughput | +0.02 | [-0.02, +0.06] | 52.07% |
| uds_dogstatsd_to_api | ingress throughput | +0.02 | [-0.17, +0.20] | 10.47% |
| dogstatsd_string_interner_8MiB_100k | ingress throughput | +0.01 | [-0.04, +0.06] | 22.05% |
| dogstatsd_string_interner_8MiB_100 | ingress throughput | +0.00 | [-0.12, +0.13] | 4.07% |
| file_to_blackhole | egress throughput | +0.00 | [-2.93, +2.93] | 0.13% |
| dogstatsd_string_interner_64MiB_100 | ingress throughput | +0.00 | [-0.14, +0.14] | 0.31% |
| trace_agent_json | ingress throughput | +0.00 | [-0.13, +0.13] | 0.29% |
| dogstatsd_string_interner_128MiB_100 | ingress throughput | -0.00 | [-0.14, +0.14] | 0.19% |
| dogstatsd_string_interner_128MiB_1k | ingress throughput | -0.00 | [-0.14, +0.14] | 0.81% |
| dogstatsd_string_interner_8MiB_1k | ingress throughput | -0.00 | [-0.10, +0.10] | 1.56% |
| dogstatsd_string_interner_64MiB_1k | ingress throughput | -0.00 | [-0.13, +0.13] | 1.24% |
| dogstatsd_string_interner_8MiB_10k | ingress throughput | -0.01 | [-0.03, +0.02] | 40.66% |
| tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.09, +0.07] | 15.83% |
| trace_agent_msgpack | ingress throughput | -0.02 | [-0.14, +0.11] | 18.81% |
| tcp_syslog_to_blackhole | ingress throughput | -0.88 | [-1.02, -0.74] | 100.00% |

Comment on lines +34 to +37

```go
if err != nil {
	collect.Errorf("%w", err)
	return
}
```
Contributor commented:

💭 thought
Looking forward to stretchr/testify#1481

```go
func (suite *ecsSuite) TestNginx() {
	// `nginx` check is configured via docker labels
	// Test it is properly scheduled
	suite.testMetric("nginx.net.request_per_s",
```
Contributor commented:

❓ question
Have you considered having a helper function instead of using a suite as a base? Something like:

```go
func assertMetrics(t *testing.T, fakeintake *fakeintake.Client, metrics MetricsToCheck)
```

This is not a criticism; I am curious whether you considered it as an alternative, since here we use the suite to pass the testing and fakeintake context rather than to leverage the suite interface.

An alternative could also be to pass a testing.T context to fakeintake.Client, but I can see this has the limitation of not allowing the testing context to be switched when calling it from an Eventually or from a t.Run that creates a separate testing context.
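
For reference, a minimal sketch of the kind of free-standing helper being suggested. The MetricsToCheck type and the metricTagFetcher interface are hypothetical, and the real fakeintake client API may differ; this only illustrates the shape of the proposal:

```go
package containers // assumed package name, matching test/new-e2e/tests/containers

import (
	"regexp"
	"testing"

	"github.com/stretchr/testify/assert"
)

// metricTagFetcher is a hypothetical, narrow view of the fakeintake client:
// the only capability needed here is listing the tag sets received for a metric.
type metricTagFetcher interface {
	TagsForMetric(name string) ([][]string, error)
}

// MetricsToCheck maps a metric name to the tag patterns it is expected to carry.
type MetricsToCheck map[string][]*regexp.Regexp

// assertMetrics checks that every expected metric was received with at least
// one series whose tags match all of the expected patterns.
func assertMetrics(t *testing.T, intake metricTagFetcher, metrics MetricsToCheck) {
	t.Helper()
	for name, patterns := range metrics {
		tagSets, err := intake.TagsForMetric(name)
		if !assert.NoErrorf(t, err, "failed to fetch metric %q", name) {
			continue
		}
		if !assert.NotEmptyf(t, tagSets, "no series received for metric %q", name) {
			continue
		}
		matched := false
		for _, tags := range tagSets {
			if matchesAllPatterns(tags, patterns) {
				matched = true
				break
			}
		}
		assert.Truef(t, matched, "no series of %q matched all expected tag patterns", name)
	}
}

// matchesAllPatterns reports whether every pattern matches at least one tag of the series.
func matchesAllPatterns(tags []string, patterns []*regexp.Regexp) bool {
	for _, p := range patterns {
		found := false
		for _, tag := range tags {
			if p.MatchString(tag) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}
```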

Member Author replied:

To be honest, I'm not convinced by the current way the helper functions are designed, and I know I'll refactor them at some point.

For example, the testMetrics function takes a list of expected tags in the form of regular expressions. I think the code would be less painful to write if the calls to regexp.MustCompile(…) were done inside the testMetrics(…) function itself rather than on the caller side, everywhere this function is invoked.

In the current tests, testMetrics(…) only checks the tags on the metrics.
Its parameters are:

  • the name of the metric to look at
  • a list of tags to filter the series to consider
  • the exhaustive list of tags we expect to have on the series.

At some point, we'll want to check the value as well. As values are floats, this will be an expected range, which means two more parameters:

  • the expected minimum value
  • the expected maximum value.

Having all of that as positional arguments will become painful.
So, we might want to have named arguments, similar to the Pulumi API, with some arguments being optional:

```go
assertMetric(&assertMetricArgs{
	filter: {
		metricName: …,
		tags: …,
	},
	expect: {
		tags: …,
		value: {
			min: …,
			max: …,
		},
	},
})
```
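
A fuller version of that sketch, with hypothetical Go types for the named arguments (all names here are illustrative, not existing code):

```go
package containers // assumed package name

import "regexp"

// assertMetricArgs groups the named arguments of the hypothetical assertMetric helper.
type assertMetricArgs struct {
	filter metricFilter
	expect metricExpectation
}

// metricFilter selects which series to consider.
type metricFilter struct {
	metricName string
	tags       []string
}

// metricExpectation describes what the selected series must look like.
type metricExpectation struct {
	tags  []*regexp.Regexp
	value *valueRange // optional: nil means the value is not checked
}

// valueRange is the expected [min, max] interval for a float metric value.
type valueRange struct {
	min float64
	max float64
}
```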

About using a helper function instead of a baseSuite method, I don't have any strong opinion. The method approach allowed passing fewer parameters, since the fakeintake object was a field of baseSuite.
I'd like to keep the fact that assertMetric creates a sub-test with suite.Run(…), because it gives a more granular report.

So, yeah, I agree that this needs improvement.

Contributor replied:

I also like how suite.Run or t.Run improve the readability of reports. I don't use them for the separate context, but it can still be misleading to use the parent context instead of the current one.

I'm wondering whether we could have this in a generic FilterMetrics inside the fakeintake, or whether we could have some external aggregation helpers.

What ddev does in integrations-core tests is have an aggregator that marks each metric as it is checked and can eventually report all metrics that were not checked, via agg.RequireAllMetricsChecked.
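
Sketching what such a ddev-style aggregation helper could look like on the Go side (hypothetical type and method names; only RequireAllMetricsChecked is borrowed from the comment above):

```go
package containers // assumed package name

import "testing"

// metricsAggregator is a hypothetical helper that tracks which metric names
// have been asserted, so a test can fail if some collected metrics were never checked.
type metricsAggregator struct {
	collected map[string]struct{} // metric names seen in the fake intake
	checked   map[string]struct{} // metric names that were asserted by the test
}

func newMetricsAggregator(collectedNames []string) *metricsAggregator {
	agg := &metricsAggregator{
		collected: make(map[string]struct{}, len(collectedNames)),
		checked:   make(map[string]struct{}),
	}
	for _, name := range collectedNames {
		agg.collected[name] = struct{}{}
	}
	return agg
}

// MarkChecked records that an assertion ran against the given metric.
func (a *metricsAggregator) MarkChecked(name string) {
	a.checked[name] = struct{}{}
}

// RequireAllMetricsChecked fails the test if some collected metrics were never asserted.
func (a *metricsAggregator) RequireAllMetricsChecked(t *testing.T) {
	t.Helper()
	var unchecked []string
	for name := range a.collected {
		if _, ok := a.checked[name]; !ok {
			unchecked = append(unchecked, name)
		}
	}
	if len(unchecked) > 0 {
		t.Errorf("metrics collected but never checked: %v", unchecked)
	}
}
```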

test/new-e2e/tests/containers/ecs_test.go: outdated review thread (resolved)
@L3n41c L3n41c merged commit 209b705 into main Oct 20, 2023
171 checks passed
@L3n41c L3n41c deleted the lenaic/CONTINT-53 branch October 20, 2023 19:00