
enhancement(ci): combine build steps for integration test workflows #17724

Merged
merged 35 commits into master from neuronull/ci_combine_build_steps_int_tests on Jul 3, 2023

Conversation

neuronull
Contributor

@neuronull neuronull commented Jun 21, 2023

  • vdev's integration test logic gains the ability to build with an all-integration-features flag
  • The CI workflows use the new vdev flag and run the integration tests as steps within the same job, so each step leverages the cached runner image (roughly the shape sketched below)
  • Adds retries for the integration tests, both at the nextest level and between bringup/teardown of the container services
  • Reduces billable time for the Integration Test Suite workflow by 90% 🚀
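
For reference, a rough sketch of the shape this takes in a workflow. The job and step names, the make target, the retry action, and the wrapper script are illustrative assumptions, not the exact contents of this PR:

```yaml
# Illustrative sketch only: one job, one build, then each integration test as a
# step that reuses the already-built runner image from the shared workspace.
jobs:
  integration-tests:
    needs: changes                                   # assumes an upstream job that computes which integrations changed
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v3
      - name: Build test runner with all integration features
        run: make test-integration-build             # hypothetical target wrapping the new vdev flag
      - name: Integration test - aws
        if: needs.changes.outputs.aws == 'true'
        uses: nick-fields/retry@v2                   # retries around bringup/test/teardown
        with:
          timeout_minutes: 30
          max_attempts: 3
          command: bash scripts/int-e2e-test.sh aws  # hypothetical wrapper around `vdev int test aws`
      - name: Integration test - splunk
        if: needs.changes.outputs.splunk == 'true'
        uses: nick-fields/retry@v2
        with:
          timeout_minutes: 30
          max_attempts: 3
          command: bash scripts/int-e2e-test.sh splunk
```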

@neuronull neuronull added the domain: ci Anything related to Vector's CI environment label Jun 21, 2023
@neuronull neuronull self-assigned this Jun 21, 2023
@netlify

netlify bot commented Jun 21, 2023

Deploy Preview for vector-project ready!

🔨 Latest commit e456ca1
🔍 Latest deploy log https://app.netlify.com/sites/vector-project/deploys/64a346e8cf58200009a33eb8
😎 Deploy Preview https://deploy-preview-17724--vector-project.netlify.app

@netlify

netlify bot commented Jun 21, 2023

Deploy Preview for vrl-playground ready!

🔨 Latest commit e456ca1
🔍 Latest deploy log https://app.netlify.com/sites/vrl-playground/deploys/64a346e8e5f60b00086c88d3
😎 Deploy Preview https://deploy-preview-17724--vrl-playground.netlify.app

@github-actions github-actions bot added the domain: sinks Anything related to the Vector's sinks label Jun 21, 2023
@datadog-vectordotdev

datadog-vectordotdev bot commented Jun 21, 2023

Datadog Report

Branch report: neuronull/ci_combine_build_steps_int_tests
Commit report: 2905909

vector: 0 Failed, 0 New Flaky, 1908 Passed, 0 Skipped, 1m 32.28s Wall Time

@neuronull neuronull requested a review from bruceg June 21, 2023 22:14
@neuronull neuronull marked this pull request as ready for review June 21, 2023 22:26
@neuronull neuronull requested a review from StephenWakely as a code owner June 21, 2023 22:26
@neuronull neuronull requested a review from a team June 21, 2023 22:26
Contributor

@spencergilbert spencergilbert left a comment


Did you test if the upload-test-results script still worked after these changes?

@neuronull
Contributor Author

Did you test if the upload-test-results script still worked after these changes?

I did not... do you have a recommendation on how to validate that? Observe something in the DD UI I presume?

@spencergilbert
Contributor

I did not... do you have a recommendation on how to validate that? Observe something in the DD UI I presume?

Yeah, UI or we could add logging to the wrapping script. I think the way things are broken up we may just be uploading the last test run - but it's also unclear to me how that wasn't happening before this change as well.

@neuronull
Contributor Author

I did not... do you have a recommendation on how to validate that? Observe something in the DD UI I presume?

Yeah, UI or we could add logging to the wrapping script. I think the way things are broken up we may just be uploading the last test run - but it's also unclear to me how that wasn't happening before this change as well.

Looking at the script here, if this helps: the steps in the same job share the same context (that's how we're saving time on the runner image build now). So it looks like the script is taking a file from the target dir, which in theory should have persisted across each step.

@neuronull
Contributor Author

Looking at the script here, if this helps: the steps in the same job share the same context (that's how we're saving time on the runner image build now). So it looks like the script is taking a file from the target dir, which in theory should have persisted across each step.

I guess it's a question of whether that file is overwritten each time nextest is run? If it is, then it does seem like that script will have to be called during each job step.

@spencergilbert
Contributor

I guess it's a question of whether that file is overwritten each time nextest is run? If it is, then it does seem like that script will have to be called during each job step.

https://github.com/vectordotdev/vector/blob/master/.config/nextest.toml#L17

I don't think we can template-ize the file name, or couldn't when I initially implemented it. Maybe vdev could be enhanced to call the ddog cli if present and configured?

@neuronull
Contributor Author

https://github.com/vectordotdev/vector/blob/master/.config/nextest.toml#L17

I don't think we can template-ize the file name, or couldn't when I initially implemented it. Maybe vdev could be enhanced to call the ddog cli if present and configured?

It does appear that it isn't templatizable.

We could have vdev do that. 🤔 I'm just not sure it's the best approach. It depends on how much coupling with CI we are OK with for vdev. The changes I have here could benefit local users of vdev, by way of reusing the runner image that has all the feature flags. But AFAICT the dd cli stuff would only be applicable to CI.
That said, there is stuff in vdev that is pretty CI-specific, so 🤷

The alternative to that is, on each of the steps where the vdev commands are used to run the int tests, we just run the upload script there. I'm kind of leaning toward this option, but curious about your and others' thoughts.

|| needs.changes.outputs.splunk == 'true'
|| needs.changes.outputs.webhdfs == 'true'
)
timeout-minutes: 60
Contributor


I want to highlight that we are making a tradeoff here between billable compute time and run time. Before, tests were run in parallel, meaning that the total run time was equal to the time to set up the environment + the run time of the longest running integration test. Now, the tests are run synchronously, meaning that the total run time is equal to the time to set up the environment + the sum of the run times of all executed integration tests.

In the example that you linked, it took 37 minutes to run this workflow w/ all integration tests. I believe this is an acceptable tradeoff today given (1) the cost reduction, (2) these tests only run in the merge queue by default, and (3) the merge queue bottleneck is still the regression test suite at ~40-60 minutes.

However, as we add more integration tests in the future, the run time of this workflow may likely surpass that of the regression test suite. Have you considered this and ways that we can mitigate it? Two potential solutions would be (1) to break this up into multiple workflows (each workflow would run n integration tests) when the run time becomes unacceptable or (2) improve caching / environment sharing to minimize environment setup time, which could allow us to run these tests in parallel again.
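
Restating that tradeoff in symbols, with $T_{\text{setup}}$ the environment/build time and $t_1,\dots,t_n$ the run times of the individual integration tests (this is just the comment above made explicit, not new data):

$$
\begin{aligned}
\text{wall}_{\text{parallel}} &= T_{\text{setup}} + \max_i t_i, &\quad \text{billable}_{\text{parallel}} &\approx n\,T_{\text{setup}} + \sum_i t_i,\\
\text{wall}_{\text{serial}} &= T_{\text{setup}} + \sum_i t_i, &\quad \text{billable}_{\text{serial}} &\approx T_{\text{setup}} + \sum_i t_i.
\end{aligned}
$$

The serial layout removes the repeated setup cost from the bill at the price of a longer wall clock, which is why the run time grows as integrations are added.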

Contributor Author


The tradeoff is known, yes. In general, in the CI cost reduction effort we have thus far been accepting some increased runtime in exchange for reduced cost, as long as it isn't highly impacting DX.

(1) to break this up into multiple workflows (each workflow would run n integration tests) when the run time becomes unacceptable

This would be pretty easy. Actually, we could do that in the same workflow: just have two jobs and not have them depend on each other. The two would start in parallel (roughly the sketch after this comment).

(2) improve caching / environment sharing to minimize environment setup time, which could allow us to run these tests in parallel again.

It did cross my mind that we could potentially re-introduce the matrix approach if we solve the caching issue. The main caveat here is that the int tests are all run in containers, so we'd essentially have to cache a container image, which would be a bit different from just caching build artifacts.

These are good thoughts. Option 1 is a "cheap and easy" solution to that.
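
A minimal sketch of option 1, assuming the same hypothetical build target and wrapper script as above; the split of integrations across the two jobs is arbitrary:

```yaml
# Two jobs with no `needs:` edge between them start in parallel; each pays for
# its own build, but the wall clock is bounded by the slower of the two halves.
jobs:
  integration-tests-a:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make test-integration-build
      - run: bash scripts/int-e2e-test.sh aws
      - run: bash scripts/int-e2e-test.sh amqp
  integration-tests-b:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make test-integration-build
      - run: bash scripts/int-e2e-test.sh splunk
      - run: bash scripts/int-e2e-test.sh webhdfs
```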

Contributor Author


More thoughts coming to me: if we made two jobs in the same workflow, they would both take the time to build the image, so that would only benefit us once we crossed roughly 60-70 min of total runtime (essentially, when we have enough int tests that running all of them together exceeds the time to build).

As a side note, @jszwedko and I also discussed the option of moving the int tests back to running on all PR pushes (or at least on changes to src/lib), if we had good cost reduction on it. But personally I kind of like the on-file-change detection for PR pushes.

Member


Re (2) above I think the container reuses the source directory to do the cargo build, which includes target. So, if we were to do the equivalent of the cargo build --features all-integration-tests that vdev runs, then we would only need to cache the CI step, right?

Contributor Author


Re (2) above I think the container reuses the source directory to do the cargo build, which includes target.

😮 If that is the case, then yes, that would be less of a complexity/difference from the other cache needs we have than I was thinking.

would only need to cache the CI step

You lost me there... 🤔 what is the "CI step" you are referring to?

Member


To test, run cargo build --features … and then vdev int test … and see how much it rebuilds.

By "CI step" I was thinking we could do the build in one step, and then run the actual integration tests in parallel running out of that pre-built binary.

Contributor Author


👍 got it
Will remember this when spiking/undertaking the caching effort.

For the changes in this PR, I think we are good. This discussion stemmed from a (valid) theoretical / future consideration where we cross a threshold of having enough integrations that the runtime of running all of them post-build exceeds the time to do a second build, and we start feeling runtime pains.

But if we get caching going, it would definitely be worth the runtime improvements to run these in a matrix with the cached binary, in addition to reducing the length of the workflow file(s).
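
If the caching piece lands, a matrix over a cached build could look roughly like this. The cache key, feature flag wiring, and vdev invocation are assumptions for illustration only:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: target
          key: int-tests-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
      - run: cargo build --features all-integration-tests    # one build populates the cache
  integration-tests:
    needs: build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        integration: [aws, splunk, webhdfs]                   # illustrative subset
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: target
          key: int-tests-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
      - run: cargo vdev int test ${{ matrix.integration }}    # should mostly reuse the cached artifacts
```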

.github/workflows/integration-comment.yml (outdated review thread; resolved)
@dsmith3197
Contributor

The alternative to that is, on each of the steps where the vdev commands are used to run the int tests, we just run the upload script there. I'm kind of leaning toward this option, but curious about your and others' thoughts.

This alternative approach sounds straightforward and simple, especially if we define a shared workflow for each of the integration tests as I mentioned above.
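
For illustration, a hedged sketch of such a shared (reusable) workflow, with the upload run right after each test so the nextest report isn't overwritten by the next run. The input name and script path are assumptions:

```yaml
# .github/workflows/int-test.yml (hypothetical reusable workflow)
on:
  workflow_call:
    inputs:
      integration:
        required: true
        type: string
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo vdev int test ${{ inputs.integration }}
      # Upload the JUnit report before another nextest run can overwrite it.
      - if: always()
        run: bash scripts/upload-test-results.sh
```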

@dsmith3197 dsmith3197 added this pull request to the merge queue Jun 29, 2023
@github-actions

Regression Detector Results

Run ID: d59e3c28-11fb-47c7-8339-7f402c688d5a
Baseline: e6e776d
Comparison: 99d0897
Total vector CPUs: 7

Explanation

A regression test is an integrated performance test for vector in a repeatable rig, with varying configuration for vector. What follows is a statistical summary of a brief vector run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether vector performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
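
Stated compactly in the notation above, a result is flagged as a regression only when both criteria hold:

$$
\left|\Delta\ \text{mean}\ \%\right| \ge 5.00\% \quad\text{and}\quad 0 \notin \text{90.00\% CI for } \Delta\ \text{mean}\ \%.
$$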

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment goal Δ mean % Δ mean % CI confidence
file_to_blackhole egress throughput +4.57 [+0.61, +8.54] 86.07%
datadog_agent_remap_blackhole ingress throughput +1.75 [+1.64, +1.85] 100.00%
http_text_to_http_json ingress throughput +1.16 [+1.09, +1.22] 100.00%
socket_to_socket_blackhole ingress throughput +0.99 [+0.94, +1.03] 100.00%
otlp_http_to_blackhole ingress throughput +0.91 [+0.77, +1.06] 100.00%
datadog_agent_remap_datadog_logs_acks ingress throughput +0.66 [+0.55, +0.76] 100.00%
syslog_log2metric_humio_metrics ingress throughput +0.58 [+0.50, +0.66] 100.00%
splunk_hec_route_s3 ingress throughput +0.15 [+0.02, +0.29] 85.51%
otlp_grpc_to_blackhole ingress throughput +0.09 [-0.02, +0.21] 70.91%
http_to_http_acks ingress throughput +0.08 [-1.16, +1.33] 6.88%
http_to_http_noack ingress throughput +0.06 [-0.00, +0.12] 78.92%
enterprise_http_to_http ingress throughput +0.04 [-0.01, +0.09] 69.83%
splunk_hec_indexer_ack_blackhole ingress throughput +0.01 [-0.03, +0.06] 32.20%
http_to_http_json ingress throughput +0.01 [-0.03, +0.05] 20.32%
fluent_elasticsearch ingress throughput +0.00 [-0.00, +0.00] 48.05%
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.06, +0.06] 0.27%
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.00 [-0.05, +0.04] 1.38%
syslog_log2metric_splunk_hec_metrics ingress throughput -0.15 [-0.25, -0.06] 96.26%
syslog_humio_logs ingress throughput -0.65 [-0.73, -0.58] 100.00%
datadog_agent_remap_blackhole_acks ingress throughput -0.81 [-0.87, -0.75] 100.00%
datadog_agent_remap_datadog_logs ingress throughput -0.81 [-0.93, -0.69] 100.00%
syslog_splunk_hec_logs ingress throughput -1.16 [-1.24, -1.08] 100.00%
syslog_regex_logs2metric_ddmetrics ingress throughput -3.02 [-3.24, -2.79] 100.00%
syslog_loki ingress throughput -3.15 [-3.22, -3.09] 100.00%

@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jun 29, 2023
@neuronull neuronull enabled auto-merge July 3, 2023 16:45
@neuronull neuronull added this pull request to the merge queue Jul 3, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jul 3, 2023
@github-actions

github-actions bot commented Jul 3, 2023

Regression Detector Results

Run ID: 939a43e1-b05c-4555-91c8-a629deb73e75
Baseline: 205300b
Comparison: b684d07
Total vector CPUs: 7


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment goal Δ mean % Δ mean % CI confidence
file_to_blackhole egress throughput +6.21 [+2.24, +10.17] 95.52%
datadog_agent_remap_datadog_logs_acks ingress throughput +2.23 [+2.13, +2.33] 100.00%
syslog_log2metric_splunk_hec_metrics ingress throughput +2.21 [+2.11, +2.32] 100.00%
http_text_to_http_json ingress throughput +1.84 [+1.77, +1.91] 100.00%
splunk_hec_route_s3 ingress throughput +1.62 [+1.46, +1.77] 100.00%
otlp_grpc_to_blackhole ingress throughput +0.89 [+0.79, +1.00] 100.00%
socket_to_socket_blackhole ingress throughput +0.07 [+0.03, +0.12] 96.17%
enterprise_http_to_http ingress throughput +0.03 [-0.00, +0.06] 75.19%
http_to_http_noack ingress throughput +0.02 [-0.04, +0.07] 29.55%
syslog_splunk_hec_logs ingress throughput +0.02 [-0.06, +0.09] 21.58%
http_to_http_json ingress throughput +0.00 [-0.04, +0.04] 6.42%
splunk_hec_indexer_ack_blackhole ingress throughput +0.00 [-0.04, +0.04] 0.54%
fluent_elasticsearch ingress throughput -0.00 [-0.00, +0.00] 4.22%
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.06, +0.06] 0.47%
datadog_agent_remap_blackhole_acks ingress throughput -0.02 [-0.13, +0.09] 15.24%
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.02 [-0.06, +0.02] 42.45%
http_to_http_acks ingress throughput -0.18 [-1.40, +1.05] 14.76%
datadog_agent_remap_datadog_logs ingress throughput -0.83 [-0.94, -0.71] 100.00%
syslog_loki ingress throughput -1.01 [-1.08, -0.93] 100.00%
datadog_agent_remap_blackhole ingress throughput -1.12 [-1.21, -1.04] 100.00%
otlp_http_to_blackhole ingress throughput -1.67 [-1.81, -1.53] 100.00%
syslog_humio_logs ingress throughput -1.81 [-1.88, -1.73] 100.00%
syslog_log2metric_humio_metrics ingress throughput -1.88 [-1.95, -1.81] 100.00%
syslog_regex_logs2metric_ddmetrics ingress throughput -8.41 [-8.63, -8.18] 100.00%

@neuronull neuronull enabled auto-merge July 3, 2023 19:04
@datadog-vectordotdev

datadog-vectordotdev bot commented Jul 3, 2023

Datadog Report

Branch report: neuronull/ci_combine_build_steps_int_tests
Commit report: bde0556

vector: 0 Failed, 0 New Flaky, 1914 Passed, 0 Skipped, 1m 20.47s Wall Time

@neuronull neuronull added this pull request to the merge queue Jul 3, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jul 3, 2023
@neuronull neuronull enabled auto-merge July 3, 2023 20:25
@github-actions

github-actions bot commented Jul 3, 2023

Regression Detector Results

Run ID: 6df44ccb-dc88-4fa1-b72c-7fff32876eba
Baseline: 205300b
Comparison: f8be5d1
Total vector CPUs: 7


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment goal Δ mean % Δ mean % CI confidence
syslog_loki ingress throughput +1.75 [+1.68, +1.81] 100.00%
datadog_agent_remap_blackhole_acks ingress throughput +1.35 [+1.26, +1.44] 100.00%
http_to_http_acks ingress throughput +1.06 [-0.17, +2.30] 72.88%
datadog_agent_remap_blackhole ingress throughput +1.04 [+0.96, +1.12] 100.00%
http_text_to_http_json ingress throughput +0.78 [+0.72, +0.85] 100.00%
syslog_log2metric_splunk_hec_metrics ingress throughput +0.52 [+0.42, +0.63] 100.00%
splunk_hec_route_s3 ingress throughput +0.24 [+0.11, +0.36] 98.06%
otlp_http_to_blackhole ingress throughput +0.08 [-0.05, +0.22] 57.87%
socket_to_socket_blackhole ingress throughput +0.06 [+0.01, +0.11] 87.59%
http_to_http_noack ingress throughput +0.03 [-0.03, +0.09] 44.96%
enterprise_http_to_http ingress throughput +0.02 [-0.02, +0.05] 43.05%
splunk_hec_to_splunk_hec_logs_acks ingress throughput +0.00 [-0.06, +0.06] 0.29%
fluent_elasticsearch ingress throughput -0.00 [-0.00, -0.00] 80.93%
http_to_http_json ingress throughput -0.00 [-0.04, +0.04] 7.65%
splunk_hec_indexer_ack_blackhole ingress throughput -0.01 [-0.05, +0.04] 16.09%
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.01 [-0.06, +0.03] 28.11%
syslog_humio_logs ingress throughput -0.33 [-0.41, -0.24] 100.00%
otlp_grpc_to_blackhole ingress throughput -0.35 [-0.46, -0.25] 100.00%
syslog_splunk_hec_logs ingress throughput -0.59 [-0.67, -0.51] 100.00%
datadog_agent_remap_datadog_logs_acks ingress throughput -0.77 [-0.88, -0.67] 100.00%
syslog_log2metric_humio_metrics ingress throughput -0.79 [-0.87, -0.71] 100.00%
datadog_agent_remap_datadog_logs ingress throughput -0.81 [-0.92, -0.71] 100.00%
syslog_regex_logs2metric_ddmetrics ingress throughput -1.55 [-1.78, -1.32] 100.00%
file_to_blackhole egress throughput -3.21 [-6.92, +0.49] 73.43%

@neuronull neuronull added this pull request to the merge queue Jul 3, 2023
@github-actions

github-actions bot commented Jul 3, 2023

Regression Detector Results

Run ID: 57ba427c-a8f8-4a25-a9d7-9f8f49d81f09
Baseline: 0e24411
Comparison: 3842dd5
Total vector CPUs: 7


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment goal Δ mean % Δ mean % CI confidence
syslog_regex_logs2metric_ddmetrics ingress throughput +3.71 [+3.48, +3.93] 100.00%
syslog_log2metric_humio_metrics ingress throughput +2.36 [+2.28, +2.43] 100.00%
syslog_loki ingress throughput +1.80 [+1.71, +1.89] 100.00%
datadog_agent_remap_blackhole ingress throughput +1.50 [+1.39, +1.61] 100.00%
datadog_agent_remap_blackhole_acks ingress throughput +1.39 [+1.29, +1.49] 100.00%
otlp_http_to_blackhole ingress throughput +0.57 [+0.44, +0.70] 100.00%
enterprise_http_to_http ingress throughput +0.03 [-0.00, +0.06] 78.38%
syslog_splunk_hec_logs ingress throughput +0.03 [-0.05, +0.11] 38.04%
splunk_hec_to_splunk_hec_logs_noack ingress throughput +0.02 [-0.03, +0.06] 33.47%
http_to_http_noack ingress throughput +0.01 [-0.04, +0.07] 27.90%
http_to_http_json ingress throughput +0.00 [-0.04, +0.04] 8.18%
fluent_elasticsearch ingress throughput +0.00 [-0.00, +0.00] 13.56%
splunk_hec_to_splunk_hec_logs_acks ingress throughput +0.00 [-0.06, +0.06] 0.13%
splunk_hec_indexer_ack_blackhole ingress throughput -0.00 [-0.04, +0.04] 1.09%
otlp_grpc_to_blackhole ingress throughput -0.05 [-0.15, +0.05] 47.28%
splunk_hec_route_s3 ingress throughput -0.05 [-0.20, +0.09] 37.89%
socket_to_socket_blackhole ingress throughput -0.29 [-0.33, -0.25] 100.00%
datadog_agent_remap_datadog_logs_acks ingress throughput -0.65 [-0.75, -0.56] 100.00%
http_to_http_acks ingress throughput -0.91 [-2.14, +0.33] 65.46%
http_text_to_http_json ingress throughput -1.56 [-1.62, -1.49] 100.00%
syslog_humio_logs ingress throughput -2.19 [-2.26, -2.11] 100.00%
datadog_agent_remap_datadog_logs ingress throughput -2.48 [-2.59, -2.37] 100.00%
syslog_log2metric_splunk_hec_metrics ingress throughput -2.89 [-2.99, -2.80] 100.00%
file_to_blackhole egress throughput -10.76 [-14.31, -7.20] 99.99%

@neuronull neuronull removed this pull request from the merge queue due to a manual request Jul 3, 2023
@neuronull neuronull enabled auto-merge July 3, 2023 22:08
@github-actions

github-actions bot commented Jul 3, 2023

Regression Detector Results

Run ID: f4823478-659c-48c2-80d6-9bb05df72b91
Baseline: 205300b
Comparison: a179d2d
Total vector CPUs: 7


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment goal Δ mean % Δ mean % CI confidence
syslog_humio_logs ingress throughput +2.57 [+2.49, +2.65] 100.00%
syslog_log2metric_splunk_hec_metrics ingress throughput +1.91 [+1.83, +1.99] 100.00%
datadog_agent_remap_datadog_logs ingress throughput +1.45 [+1.33, +1.57] 100.00%
syslog_loki ingress throughput +1.24 [+1.15, +1.32] 100.00%
syslog_log2metric_humio_metrics ingress throughput +1.23 [+1.15, +1.30] 100.00%
socket_to_socket_blackhole ingress throughput +0.64 [+0.58, +0.70] 100.00%
otlp_grpc_to_blackhole ingress throughput +0.38 [+0.28, +0.49] 100.00%
enterprise_http_to_http ingress throughput +0.07 [+0.03, +0.11] 97.72%
http_to_http_noack ingress throughput +0.03 [-0.03, +0.09] 50.28%
splunk_hec_indexer_ack_blackhole ingress throughput +0.01 [-0.03, +0.05] 19.34%
fluent_elasticsearch ingress throughput -0.00 [-0.00, +0.00] 16.61%
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.06, +0.06] 0.44%
http_to_http_json ingress throughput -0.00 [-0.04, +0.04] 2.00%
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.01 [-0.06, +0.04] 22.83%
otlp_http_to_blackhole ingress throughput -0.14 [-0.27, -0.02] 85.28%
datadog_agent_remap_blackhole ingress throughput -0.18 [-0.27, -0.09] 98.76%
splunk_hec_route_s3 ingress throughput -0.37 [-0.52, -0.23] 99.90%
datadog_agent_remap_datadog_logs_acks ingress throughput -0.64 [-0.76, -0.53] 100.00%
file_to_blackhole egress throughput -0.78 [-4.49, +2.93] 21.20%
http_text_to_http_json ingress throughput -0.90 [-0.96, -0.83] 100.00%
http_to_http_acks ingress throughput -1.04 [-2.28, +0.19] 72.19%
datadog_agent_remap_blackhole_acks ingress throughput -1.39 [-1.50, -1.27] 100.00%
syslog_splunk_hec_logs ingress throughput -2.55 [-2.63, -2.46] 100.00%
syslog_regex_logs2metric_ddmetrics ingress throughput -3.51 [-3.73, -3.30] 100.00%

@neuronull neuronull added this pull request to the merge queue Jul 3, 2023
@github-actions

github-actions bot commented Jul 3, 2023

Regression Detector Results

Run ID: 99d0a158-63fb-46d1-bb8f-c734135784c5
Baseline: 205300b
Comparison: 911477a
Total vector CPUs: 7


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment goal Δ mean % Δ mean % CI confidence
file_to_blackhole egress throughput +2.41 [-1.41, +6.23] 58.05%
syslog_regex_logs2metric_ddmetrics ingress throughput +2.37 [+2.10, +2.64] 100.00%
syslog_humio_logs ingress throughput +1.89 [+1.80, +1.98] 100.00%
syslog_splunk_hec_logs ingress throughput +1.65 [+1.55, +1.75] 100.00%
datadog_agent_remap_blackhole_acks ingress throughput +1.51 [+1.41, +1.61] 100.00%
datadog_agent_remap_datadog_logs_acks ingress throughput +1.12 [+1.01, +1.22] 100.00%
syslog_log2metric_humio_metrics ingress throughput +1.04 [+0.95, +1.13] 100.00%
datadog_agent_remap_datadog_logs ingress throughput +0.83 [+0.72, +0.94] 100.00%
otlp_http_to_blackhole ingress throughput +0.64 [+0.51, +0.76] 100.00%
syslog_log2metric_splunk_hec_metrics ingress throughput +0.22 [+0.13, +0.30] 99.83%
datadog_agent_remap_blackhole ingress throughput +0.08 [-0.01, +0.18] 74.14%
http_to_http_noack ingress throughput +0.04 [-0.02, +0.10] 62.14%
splunk_hec_indexer_ack_blackhole ingress throughput +0.01 [-0.03, +0.05] 20.26%
enterprise_http_to_http ingress throughput +0.00 [-0.03, +0.03] 1.84%
fluent_elasticsearch ingress throughput -0.00 [-0.00, +0.00] 59.41%
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.07, +0.06] 3.69%
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.01 [-0.06, +0.03] 23.84%
http_to_http_json ingress throughput -0.01 [-0.05, +0.03] 29.61%
socket_to_socket_blackhole ingress throughput -0.27 [-0.32, -0.21] 100.00%
otlp_grpc_to_blackhole ingress throughput -0.35 [-0.46, -0.23] 99.99%
http_to_http_acks ingress throughput -0.44 [-1.67, +0.79] 35.30%
splunk_hec_route_s3 ingress throughput -0.55 [-0.68, -0.42] 100.00%
http_text_to_http_json ingress throughput -1.22 [-1.27, -1.16] 100.00%
syslog_loki ingress throughput -1.71 [-1.78, -1.63] 100.00%

Merged via the queue into master with commit 911477a Jul 3, 2023
@neuronull neuronull deleted the neuronull/ci_combine_build_steps_int_tests branch July 3, 2023 23:58
aholmberg pushed a commit to aholmberg/vector that referenced this pull request Feb 14, 2024
# [1.33.0](answerbook/vector@v1.32.1...v1.33.0) (2024-01-03)

### Bug Fixes

* allow empty message_key value in config (vectordotdev#18091) [8a2f8f6](answerbook/vector@8a2f8f6) - GitHub
* **aws provider**: Don't unwap external_id (vectordotdev#18452) [77d12ee](answerbook/vector@77d12ee) - Jesse Szwedko
* **azure_blob sink**: Base Content-Type on encoder and not compression (vectordotdev#18184) [4a049d4](answerbook/vector@4a049d4) - GitHub
* **ci**: add missing env var (vectordotdev#17872) [7e6495c](answerbook/vector@7e6495c) - GitHub
* **ci**: address issues in integration test suite workflow (vectordotdev#17928) [8b2447a](answerbook/vector@8b2447a) - GitHub
* **ci**: Drop docker-compose from bootstrap install (vectordotdev#18407) [d9db2e0](answerbook/vector@d9db2e0) - Jesse Szwedko
* **ci**: fix gardener move blocked to triage on comment (vectordotdev#18126) [93b1945](answerbook/vector@93b1945) - GitHub
* **codecs**: Move protobuf codec options under a `protobuf` key (vectordotdev#18111) [36788d1](answerbook/vector@36788d1) - GitHub
* **component validation**: make tests deterministic through absolute comparisons instead of bounds checks (vectordotdev#17956) [52a8036](answerbook/vector@52a8036) - GitHub
* **config**: Fix TOML parsing of compression levels (vectordotdev#18173) [8fc574f](answerbook/vector@8fc574f) - GitHub
* **demo gcp_pubsub internal_metrics source throttle transform**: Fix `interval` fractional second parsing (vectordotdev#17917) [b44a431](answerbook/vector@b44a431) - GitHub
* **deps**: load default and legacy openssl providers (vectordotdev#18276) [8868b07](answerbook/vector@8868b07) - Jesse Szwedko
* **dev**: fix issues when using container tools and `cargo` is not installed locally (vectordotdev#18112) [36111b5](answerbook/vector@36111b5) - GitHub
* **dev**: fix Rust toolchain check in Makefile (vectordotdev#18218) [f77fd3d](answerbook/vector@f77fd3d) - GitHub
* **docs, syslog source**: Correct docs for `syslog_ip` (vectordotdev#18003) [a1d3c3a](answerbook/vector@a1d3c3a) - GitHub
* **docs**: add the 'http_client_requests_sent_total' (vectordotdev#18299) [2dcaf30](answerbook/vector@2dcaf30) - Jesse Szwedko
* make LogEvent index operator test only (vectordotdev#18185) [0c1cf23](answerbook/vector@0c1cf23) - GitHub
* **observability**: add all events that are being encoded (vectordotdev#18289) [c9ccee0](answerbook/vector@c9ccee0) - Jesse Szwedko
* **opentelemetry source**: Remove the 4MB default for gRPC request decoding (vectordotdev#18306) [56177eb](answerbook/vector@56177eb) - Jesse Szwedko
* propagate and display invalid JSON errors in VRL web playground (vectordotdev#17826) [8519cb1](answerbook/vector@8519cb1) - GitHub
* propagate config build error instead of panicking (vectordotdev#18124) [8022464](answerbook/vector@8022464) - GitHub
* **reload**: restart api server based on topology (vectordotdev#17958) [b00727e](answerbook/vector@b00727e) - GitHub
* **spelling**: add spell check exception (vectordotdev#17906) [c4827e4](answerbook/vector@c4827e4) - GitHub
* **splunk_hec source**: insert fields as event_path so names aren't parsed as a path (vectordotdev#17943) [1acf5b4](answerbook/vector@1acf5b4) - GitHub
* **syslog source, docs**: Fix docs for `host` field for syslog source (vectordotdev#18453) [dd460a0](answerbook/vector@dd460a0) - Jesse Szwedko
* **vdev**: Add `--features` with default features for vdev test (vectordotdev#17977) [eb4383f](answerbook/vector@eb4383f) - GitHub
* **vector sink**: Add DataLoss error code as non-retryable (vectordotdev#17904) [4ef0b17](answerbook/vector@4ef0b17) - GitHub
* **vector sink**: cert verification with proxy enabled (vectordotdev#17651) [45e24c7](answerbook/vector@45e24c7) - GitHub
* **vector source**: Remove the 4MB default for requests (vectordotdev#18186) [4cc9cdf](answerbook/vector@4cc9cdf) - GitHub
* **website**: Fix installer list for MacOS (vectordotdev#18364) [3b9144c](answerbook/vector@3b9144c) - Jesse Szwedko
* **websocket sink**: send encoded message as binary frame (vectordotdev#18060) [b85f4f9](answerbook/vector@b85f4f9) - GitHub

### Chores

* Add licenses to packages (vectordotdev#18006) [db9e47f](answerbook/vector@db9e47f) - GitHub
* add more direct regression case for s3 sink (vectordotdev#18082) [c592cb1](answerbook/vector@c592cb1) - GitHub
* added sink review checklist (vectordotdev#17799) [7f45949](answerbook/vector@7f45949) - GitHub
* **api**: Refactor top and tap for library use (vectordotdev#18129) [600f819](answerbook/vector@600f819) - GitHub
* **aws provider, external_docs**: Update the AWS authentication documentation (vectordotdev#18492) [9356c56](answerbook/vector@9356c56) - Jesse Szwedko
* **azure_monitor_logs sink**: refactor to new sink style (vectordotdev#18172) [0aeb143](answerbook/vector@0aeb143) - GitHub
* **CI**: Add missing `--use-consignor` flag on `smp` call (vectordotdev#17966) [7cae000](answerbook/vector@7cae000) - GitHub
* **ci**: Bump docker/setup-buildx-action from 2.8.0 to 2.9.0 (vectordotdev#17907) [251c4c4](answerbook/vector@251c4c4) - GitHub
* **ci**: Bump docker/setup-buildx-action from 2.9.0 to 2.9.1 (vectordotdev#17955) [77ffce8](answerbook/vector@77ffce8) - GitHub
* **ci**: check for team membership on secret-requiring int tests (vectordotdev#17909) [9765809](answerbook/vector@9765809) - GitHub
* **ci**: exclude protobuf files from spell checking (vectordotdev#18152) [34eaf43](answerbook/vector@34eaf43) - GitHub
* **ci**: Feature branch should be checked against `CURRENT_BRANCH` [4742c2f](answerbook/vector@4742c2f) - Darin Spivey [LOG-18882](https://logdna.atlassian.net/browse/LOG-18882)
* **ci**: fix gardener issues comment workflow (vectordotdev#17868) [e9f21a9](answerbook/vector@e9f21a9) - GitHub
* **ci**: fix gardener issues comment workflow pt 2 (vectordotdev#17886) [57ea2b3](answerbook/vector@57ea2b3) - GitHub
* **ci**: fix gardener issues comment workflow pt 3 (vectordotdev#17903) [98ca627](answerbook/vector@98ca627) - GitHub
* **ci**: Fix integration test filter generation (vectordotdev#17914) [528fac3](answerbook/vector@528fac3) - GitHub
* **ci**: fix k8s validate comment job logic (vectordotdev#17841) [99502bb](answerbook/vector@99502bb) - GitHub
* **ci**: remove kinetic as it's no longer supported (vectordotdev#18540) [beb74c1](answerbook/vector@beb74c1) - Jesse Szwedko
* **ci**: Remove path filter that runs all integration tests (vectordotdev#17908) [70632b7](answerbook/vector@70632b7) - GitHub
* **ci**: save time int test workflow merge queue (vectordotdev#17869) [9581b35](answerbook/vector@9581b35) - GitHub
* **ci**: Set HOMEBREW_NO_INSTALL_FROM_API in CI (vectordotdev#17867) [36174e2](answerbook/vector@36174e2) - GitHub
* **CI**: Single Machine Performance: turn off consignor (vectordotdev#17967) [1dfc3e1](answerbook/vector@1dfc3e1) - GitHub
* **CI**: Switch regression detector to new API and analysis service (vectordotdev#17912) [f808ea2](answerbook/vector@f808ea2) - GitHub
* **CI**: Update `smp` to version 0.9.1 (vectordotdev#17964) [98e47c1](answerbook/vector@98e47c1) - GitHub
* **ci**: Use GitHub App token for team membership rather than user PAT (vectordotdev#17936) [7774c49](answerbook/vector@7774c49) - GitHub
* **codecs**: Update syslog_loose to properly handle escapes (vectordotdev#18114) [b009e4d](answerbook/vector@b009e4d) - GitHub
* **core**: Expose shutdown errors (vectordotdev#18153) [cd8c8b1](answerbook/vector@cd8c8b1) - GitHub
* **deps**: Bump `nkeys` to 0.3.2 (vectordotdev#18264) [a1dfd54](answerbook/vector@a1dfd54) - Jesse Szwedko
* **deps**: Bump anyhow from 1.0.71 to 1.0.72 (vectordotdev#17986) [9a6ffad](answerbook/vector@9a6ffad) - GitHub
* **deps**: Bump apache-avro from 0.14.0 to 0.15.0 (vectordotdev#17931) [d5b7fe6](answerbook/vector@d5b7fe6) - GitHub
* **deps**: Bump assert_cmd from 2.0.11 to 2.0.12 (vectordotdev#17982) [fde77bd](answerbook/vector@fde77bd) - GitHub
* **deps**: Bump async_graphql, async_graphql_warp from 5.0.10 to 6.0.0 (vectordotdev#18122) [7df6af7](answerbook/vector@7df6af7) - GitHub
* **deps**: Bump async-compression from 0.4.0 to 0.4.1 (vectordotdev#17932) [5b1219f](answerbook/vector@5b1219f) - GitHub
* **deps**: Bump async-trait from 0.1.68 to 0.1.71 (vectordotdev#17881) [53b2854](answerbook/vector@53b2854) - GitHub
* **deps**: Bump async-trait from 0.1.71 to 0.1.72 (vectordotdev#18053) [bbe2c74](answerbook/vector@bbe2c74) - GitHub
* **deps**: Bump async-trait from 0.1.72 to 0.1.73 (vectordotdev#18235) [20fa1bf](answerbook/vector@20fa1bf) - GitHub
* **deps**: Bump axum from 0.6.18 to 0.6.19 (vectordotdev#18002) [52ac10a](answerbook/vector@52ac10a) - GitHub
* **deps**: Bump axum from 0.6.19 to 0.6.20 (vectordotdev#18154) [0ddd221](answerbook/vector@0ddd221) - GitHub
* **deps**: Bump bitmask-enum from 2.1.0 to 2.2.0 (vectordotdev#17833) [fc62e9c](answerbook/vector@fc62e9c) - GitHub
* **deps**: Bump bitmask-enum from 2.2.0 to 2.2.1 (vectordotdev#17921) [6326f37](answerbook/vector@6326f37) - GitHub
* **deps**: Bump bitmask-enum from 2.2.1 to 2.2.2 (vectordotdev#18236) [851e99c](answerbook/vector@851e99c) - GitHub
* **deps**: Bump bstr from 1.5.0 to 1.6.0 (vectordotdev#17877) [17ccc56](answerbook/vector@17ccc56) - GitHub
* **deps**: Bump clap from 4.3.19 to 4.3.21 (vectordotdev#18178) [0ae3d51](answerbook/vector@0ae3d51) - GitHub
* **deps**: Bump clap_complete from 4.3.1 to 4.3.2 (vectordotdev#17878) [2126707](answerbook/vector@2126707) - GitHub
* **deps**: Bump colored from 2.0.0 to 2.0.4 (vectordotdev#17876) [93f8144](answerbook/vector@93f8144) - GitHub
* **deps**: Bump console-subscriber from 0.1.9 to 0.1.10 (vectordotdev#17844) [f74d5dd](answerbook/vector@f74d5dd) - GitHub
* **deps**: Bump darling from 0.20.1 to 0.20.3 (vectordotdev#17969) [656b1fe](answerbook/vector@656b1fe) - GitHub
* **deps**: Bump dashmap from 5.4.0 to 5.5.0 (vectordotdev#17938) [b535d18](answerbook/vector@b535d18) - GitHub
* **deps**: Bump dyn-clone from 1.0.11 to 1.0.12 (vectordotdev#17987) [81de3e5](answerbook/vector@81de3e5) - GitHub
* **deps**: Bump enum_dispatch from 0.3.11 to 0.3.12 (vectordotdev#17879) [bf1407c](answerbook/vector@bf1407c) - GitHub
* **deps**: Bump gloo-utils from 0.1.7 to 0.2.0 (vectordotdev#18227) [e61c14f](answerbook/vector@e61c14f) - GitHub
* **deps**: Bump governor from 0.5.1 to 0.6.0 (vectordotdev#17960) [467baab](answerbook/vector@467baab) - GitHub
* **deps**: Bump indicatif from 0.17.5 to 0.17.6 (vectordotdev#18146) [a7c95dd](answerbook/vector@a7c95dd) - GitHub
* **deps**: Bump indoc from 2.0.1 to 2.0.2 (vectordotdev#17843) [ed5bc3a](answerbook/vector@ed5bc3a) - GitHub
* **deps**: Bump indoc from 2.0.2 to 2.0.3 (vectordotdev#17996) [3c25758](answerbook/vector@3c25758) - GitHub
* **deps**: Bump infer from 0.14.0 to 0.15.0 (vectordotdev#17860) [97f4433](answerbook/vector@97f4433) - GitHub
* **deps**: Bump inventory from 0.3.10 to 0.3.11 (vectordotdev#18070) [d8f211e](answerbook/vector@d8f211e) - GitHub
* **deps**: Bump inventory from 0.3.6 to 0.3.8 (vectordotdev#17842) [bf2f975](answerbook/vector@bf2f975) - GitHub
* **deps**: Bump inventory from 0.3.8 to 0.3.9 (vectordotdev#17995) [9c59fea](answerbook/vector@9c59fea) - GitHub
* **deps**: Bump inventory from 0.3.9 to 0.3.10 (vectordotdev#18064) [684e43f](answerbook/vector@684e43f) - GitHub
* **deps**: Bump lapin from 2.2.1 to 2.3.1 (vectordotdev#17974) [38719a3](answerbook/vector@38719a3) - GitHub
* **deps**: Bump log from 0.4.19 to 0.4.20 (vectordotdev#18237) [cb007fe](answerbook/vector@cb007fe) - GitHub
* **deps**: Bump lru from 0.10.1 to 0.11.0 (vectordotdev#17945) [4d4b393](answerbook/vector@4d4b393) - GitHub
* **deps**: Bump metrics from 0.21.0 to 0.21.1 (vectordotdev#17836) [c8e1267](answerbook/vector@c8e1267) - GitHub
* **deps**: Bump metrics-util from 0.15.0 to 0.15.1 (vectordotdev#17835) [f91d1b2](answerbook/vector@f91d1b2) - GitHub
* **deps**: Bump nkeys from 0.3.0 to 0.3.1 (vectordotdev#18056) [087a0ac](answerbook/vector@087a0ac) - GitHub
* **deps**: Bump no-proxy from 0.3.2 to 0.3.3 (vectordotdev#18094) [9458b6c](answerbook/vector@9458b6c) - GitHub
* **deps**: Bump num-traits from 0.2.15 to 0.2.16 (vectordotdev#18039) [4de89f2](answerbook/vector@4de89f2) - GitHub
* **deps**: Bump opendal from 0.38.0 to 0.38.1 (vectordotdev#17999) [90f494c](answerbook/vector@90f494c) - GitHub
* **deps**: Bump OpenSSL base version to 3.1.* (vectordotdev#17669) [8454a6f](answerbook/vector@8454a6f) - GitHub
* **deps**: Bump openssl from 0.10.55 to 0.10.56 (vectordotdev#18170) [09610b3](answerbook/vector@09610b3) - GitHub
* **deps**: Bump paste from 1.0.12 to 1.0.13 (vectordotdev#17846) [51d8497](answerbook/vector@51d8497) - GitHub
* **deps**: Bump paste from 1.0.13 to 1.0.14 (vectordotdev#17991) [a36d36e](answerbook/vector@a36d36e) - GitHub
* **deps**: Bump pin-project from 1.1.1 to 1.1.2 (vectordotdev#17837) [17e6632](answerbook/vector@17e6632) - GitHub
* **deps**: Bump pin-project from 1.1.2 to 1.1.3 (vectordotdev#18169) [e125eee](answerbook/vector@e125eee) - GitHub
* **deps**: Bump proc-macro2 from 1.0.63 to 1.0.64 (vectordotdev#17922) [22b6c2b](answerbook/vector@22b6c2b) - GitHub
* **deps**: Bump proc-macro2 from 1.0.64 to 1.0.66 (vectordotdev#17989) [fbc0308](answerbook/vector@fbc0308) - GitHub
* **deps**: Bump quote from 1.0.29 to 1.0.31 (vectordotdev#17990) [6e552f0](answerbook/vector@6e552f0) - GitHub
* **deps**: Bump quote from 1.0.31 to 1.0.32 (vectordotdev#18069) [dc2348a](answerbook/vector@dc2348a) - GitHub
* **deps**: Bump rdkafka from 0.32.2 to 0.33.2 (vectordotdev#17891) [c8deeda](answerbook/vector@c8deeda) - GitHub
* **deps**: Bump redis from 0.23.0 to 0.23.1 (vectordotdev#18107) [48abad4](answerbook/vector@48abad4) - GitHub
* **deps**: Bump redis from 0.23.1 to 0.23.2 (vectordotdev#18234) [ec3b440](answerbook/vector@ec3b440) - GitHub
* **deps**: Bump regex from 1.8.4 to 1.9.0 (vectordotdev#17874) [cb950b0](answerbook/vector@cb950b0) - GitHub
* **deps**: Bump regex from 1.9.0 to 1.9.1 (vectordotdev#17915) [bc5822c](answerbook/vector@bc5822c) - GitHub
* **deps**: Bump regex from 1.9.1 to 1.9.3 (vectordotdev#18167) [00037b0](answerbook/vector@00037b0) - GitHub
* **deps**: Bump rmp-serde from 1.1.1 to 1.1.2 (vectordotdev#18054) [497fdce](answerbook/vector@497fdce) - GitHub
* **deps**: Bump roaring from 0.10.1 to 0.10.2 (vectordotdev#18079) [f6c53d0](answerbook/vector@f6c53d0) - GitHub
* **deps**: Bump ryu from 1.0.13 to 1.0.14 (vectordotdev#17848) [4613b36](answerbook/vector@4613b36) - GitHub
* **deps**: Bump ryu from 1.0.14 to 1.0.15 (vectordotdev#17993) [f53c687](answerbook/vector@f53c687) - GitHub
* **deps**: Bump schannel from 0.1.21 to 0.1.22 (vectordotdev#17850) [ae59be6](answerbook/vector@ae59be6) - GitHub
* **deps**: Bump security-framework from 2.9.1 to 2.9.2 (vectordotdev#18051) [b305334](answerbook/vector@b305334) - GitHub
* **deps**: Bump semver from 1.0.17 to 1.0.18 (vectordotdev#17998) [ca368d8](answerbook/vector@ca368d8) - GitHub
* **deps**: Bump semver from 5.7.1 to 5.7.2 in /website (vectordotdev#17937) [784f3fe](answerbook/vector@784f3fe) - GitHub
* **deps**: Bump serde from 1.0.167 to 1.0.168 (vectordotdev#17920) [3989791](answerbook/vector@3989791) - GitHub
* **deps**: Bump serde from 1.0.168 to 1.0.171 (vectordotdev#17976) [66f4838](answerbook/vector@66f4838) - GitHub
* **deps**: Bump serde from 1.0.171 to 1.0.173 (vectordotdev#18032) [b36c531](answerbook/vector@b36c531) - GitHub
* **deps**: Bump serde from 1.0.173 to 1.0.174 (vectordotdev#18050) [437cad6](answerbook/vector@437cad6) - GitHub
* **deps**: Bump serde from 1.0.174 to 1.0.175 (vectordotdev#18071) [16a42ed](answerbook/vector@16a42ed) - GitHub
* **deps**: Bump serde from 1.0.175 to 1.0.180 (vectordotdev#18127) [e6f2ccc](answerbook/vector@e6f2ccc) - GitHub
* **deps**: Bump serde from 1.0.180 to 1.0.181 (vectordotdev#18155) [2c51c5c](answerbook/vector@2c51c5c) - GitHub
* **deps**: Bump serde from 1.0.181 to 1.0.183 (vectordotdev#18171) [6036d5c](answerbook/vector@6036d5c) - GitHub
* **deps**: Bump serde_bytes from 0.11.11 to 0.11.12 (vectordotdev#17988) [04f9ddc](answerbook/vector@04f9ddc) - GitHub
* **deps**: Bump serde_bytes from 0.11.9 to 0.11.11 (vectordotdev#17898) [b262316](answerbook/vector@b262316) - GitHub
* **deps**: Bump serde_json from 1.0.100 to 1.0.102 (vectordotdev#17948) [4a377a7](answerbook/vector@4a377a7) - GitHub
* **deps**: Bump serde_json from 1.0.102 to 1.0.103 (vectordotdev#17992) [0ebe7a7](answerbook/vector@0ebe7a7) - GitHub
* **deps**: Bump serde_json from 1.0.103 to 1.0.104 (vectordotdev#18095) [00ed120](answerbook/vector@00ed120) - GitHub
* **deps**: Bump serde_json from 1.0.99 to 1.0.100 (vectordotdev#17859) [1a427ed](answerbook/vector@1a427ed) - GitHub
* **deps**: Bump serde_with from 3.0.0 to 3.1.0 (vectordotdev#18004) [39a2bf5](answerbook/vector@39a2bf5) - GitHub
* **deps**: Bump serde_with from 3.1.0 to 3.2.0 (vectordotdev#18162) [be551c8](answerbook/vector@be551c8) - GitHub
* **deps**: Bump serde_yaml from 0.9.22 to 0.9.24 (vectordotdev#18007) [3b91662](answerbook/vector@3b91662) - GitHub
* **deps**: Bump serde_yaml from 0.9.24 to 0.9.25 (vectordotdev#18040) [7050b7e](answerbook/vector@7050b7e) - GitHub
* **deps**: Bump smallvec from 1.10.0 to 1.11.0 (vectordotdev#17880) [46dc18a](answerbook/vector@46dc18a) - GitHub
* **deps**: Bump snafu from 0.7.4 to 0.7.5 (vectordotdev#17919) [49714cf](answerbook/vector@49714cf) - GitHub
* **deps**: Bump strip-ansi-escapes from 0.1.1 to 0.2.0 (vectordotdev#18203) [8bbe6a6](answerbook/vector@8bbe6a6) - GitHub
* **deps**: Bump syn from 2.0.23 to 2.0.25 (vectordotdev#17970) [5dfede4](answerbook/vector@5dfede4) - GitHub
* **deps**: Bump syn from 2.0.25 to 2.0.26 (vectordotdev#17994) [caf6103](answerbook/vector@caf6103) - GitHub
* **deps**: Bump syn from 2.0.26 to 2.0.27 (vectordotdev#18042) [983a92a](answerbook/vector@983a92a) - GitHub
* **deps**: Bump syn from 2.0.27 to 2.0.28 (vectordotdev#18117) [d3e5128](answerbook/vector@d3e5128) - GitHub
* **deps**: Bump thiserror from 1.0.40 to 1.0.43 (vectordotdev#17900) [ea0f5b1](answerbook/vector@ea0f5b1) - GitHub
* **deps**: Bump thiserror from 1.0.43 to 1.0.44 (vectordotdev#18052) [ee2396f](answerbook/vector@ee2396f) - GitHub
* **deps**: Bump tikv-jemallocator from 0.5.0 to 0.5.4 (vectordotdev#18102) [564104e](answerbook/vector@564104e) - GitHub
* **deps**: Bump to syn 2, serde_with 3, darling 0.20, and serde_derive_internals 0.28 (vectordotdev#17930) [3921a24](answerbook/vector@3921a24) - GitHub
* **deps**: Bump tokio from 1.29.0 to 1.29.1 (vectordotdev#17811) [0454d9d](answerbook/vector@0454d9d) - GitHub
* **deps**: Bump tokio from 1.29.1 to 1.30.0 (vectordotdev#18202) [92c2b9c](answerbook/vector@92c2b9c) - GitHub
* **deps**: Bump tokio-tungstenite from 0.19.0 to 0.20.0 (vectordotdev#18065) [3968325](answerbook/vector@3968325) - GitHub
* **deps**: Bump toml from 0.7.5 to 0.7.6 (vectordotdev#17875) [44d3a8c](answerbook/vector@44d3a8c) - GitHub
* **deps**: Bump tower-http from 0.4.1 to 0.4.2 (vectordotdev#18030) [9b4cd44](answerbook/vector@9b4cd44) - GitHub
* **deps**: Bump tower-http from 0.4.2 to 0.4.3 (vectordotdev#18055) [f1d4196](answerbook/vector@f1d4196) - GitHub
* **deps**: Bump typetag from 0.2.10 to 0.2.11 (vectordotdev#18048) [5bccafe](answerbook/vector@5bccafe) - GitHub
* **deps**: Bump typetag from 0.2.11 to 0.2.12 (vectordotdev#18066) [b70074c](answerbook/vector@b70074c) - GitHub
* **deps**: Bump typetag from 0.2.8 to 0.2.9 (vectordotdev#17882) [b10d070](answerbook/vector@b10d070) - GitHub
* **deps**: Bump typetag from 0.2.9 to 0.2.10 (vectordotdev#17968) [f4b1111](answerbook/vector@f4b1111) - GitHub
* **deps**: Bump uuid from 1.4.0 to 1.4.1 (vectordotdev#18001) [60e765d](answerbook/vector@60e765d) - GitHub
* **deps**: Bump zstd from 0.12.3+zstd.1.5.2 to 0.12.4 (vectordotdev#18031) [752056c](answerbook/vector@752056c) - GitHub
* **deps**: Remove an unneeded advisory ignore (vectordotdev#18226) [01295b0](answerbook/vector@01295b0) - GitHub
* **deps**: Swap out bloom crate for bloomy (vectordotdev#17911) [d592b0c](answerbook/vector@d592b0c) - GitHub
* **deps**: Swap tui crate for ratatui (vectordotdev#18225) [8838faf](answerbook/vector@8838faf) - GitHub
* **deps**: Update to Rust 1.71.0 (vectordotdev#18075) [1dd505f](answerbook/vector@1dd505f) - GitHub
* **deps**: Update tokio-util fork to 0.7.8 (vectordotdev#18078) [421b421](answerbook/vector@421b421) - GitHub
* **deps**: Upgrade debian usages to use bookworm (vectordotdev#18057) [fecca5e](answerbook/vector@fecca5e) - GitHub
* **deps**: Upgrade to Rust 1.71.1 (vectordotdev#18221) [eaed0a8](answerbook/vector@eaed0a8) - GitHub
* **deps**: Upgrading version of lading used (vectordotdev#18210) [91e48f6](answerbook/vector@91e48f6) - GitHub
* **dev**: Fix package install in Tiltfile (vectordotdev#18198) [f39a0e9](answerbook/vector@f39a0e9) - GitHub
* **dev**: Install dd-rust-license-tool from crates.io (vectordotdev#18025) [7d0db6b](answerbook/vector@7d0db6b) - GitHub
* **dev**: Mark loki-logproto crate as unpublished (vectordotdev#17979) [5dd2084](answerbook/vector@5dd2084) - GitHub
* **docs**: Add macOS troubleshooting section to VRL web playground (vectordotdev#17824) [0fbdb33](answerbook/vector@0fbdb33) - GitHub
* **docs**: Fix links in CONTRIBUTING.md (vectordotdev#18061) [250cc95](answerbook/vector@250cc95) - GitHub
* **docs**: Remove mentions of deprecated transforms from guides (vectordotdev#17933) [37fb02b](answerbook/vector@37fb02b) - GitHub
* **external docs**: update sink tutorials with Data Volume tag changes (vectordotdev#18148) [b2d23a8](answerbook/vector@b2d23a8) - GitHub
* Install script supports Apple ARM with Rosetta (vectordotdev#18016) [fd10e69](answerbook/vector@fd10e69) - GitHub
* **observability**: add tests to sinks for Data Volume tags (vectordotdev#17853) [4915b42](answerbook/vector@4915b42) - GitHub
* **observability**: consolidate `EventCountTags` with `TaggedEventsSent` (vectordotdev#17865) [81f5c50](answerbook/vector@81f5c50) - GitHub
* **observability**: count byte_size after transforming event (vectordotdev#17941) [0bf6abd](answerbook/vector@0bf6abd) - GitHub
* **observability**: Fix a couple typos with the registered event cache (vectordotdev#17809) [205300b](answerbook/vector@205300b) - GitHub
* **releasing**: Add 0.32.0 highlight for legacy OpenSSL provider deprecation (vectordotdev#18263) [1a32e96](answerbook/vector@1a32e96) - Jesse Szwedko
* **releasing**: Add known issues for v0.32.0 (vectordotdev#18298) [38e95b5](answerbook/vector@38e95b5) - Jesse Szwedko
* **releasing**: Add note about protobuf codec addition for 0.32.0 release (vectordotdev#18275) [91f7612](answerbook/vector@91f7612) - Jesse Szwedko
* **releasing**: Add upgrade note for 0.31.0 about S3 path changes (vectordotdev#17934) [f8461cb](answerbook/vector@f8461cb) - GitHub
* **releasing**: Bump Vector to 0.32.0 (vectordotdev#17887) [9c0d2f2](answerbook/vector@9c0d2f2) - GitHub
* **releasing**: Fix link in v0.31.0 release docs (vectordotdev#17888) [1260c83](answerbook/vector@1260c83) - GitHub
* **releasing**: Fix markdown syntax in minor release template (vectordotdev#17890) [0735ffe](answerbook/vector@0735ffe) - GitHub
* **releasing**: Prepare v0.31.0 release [aeccd26](answerbook/vector@aeccd26) - Jesse Szwedko
* **releasing**: Prepare v0.32.0 release [1b403e1](answerbook/vector@1b403e1) - Jesse Szwedko
* **releasing**: Prepare v0.32.1 release [9965884](answerbook/vector@9965884) - Jesse Szwedko
* **releasing**: Prepare v0.32.2 release [0982551](answerbook/vector@0982551) - Jesse Szwedko
* **releasing**: Regenerate k8s manifests with v0.23.0 of the chart (vectordotdev#17892) [604fea0](answerbook/vector@604fea0) - GitHub
* **releasing**: Run hadolint on distributed Dockerfiles (vectordotdev#18224) [ad08d01](answerbook/vector@ad08d01) - GitHub
* replace path tuples with actual target paths (vectordotdev#18139) [8068f1d](answerbook/vector@8068f1d) - GitHub
* replace various string paths with actual paths (vectordotdev#18109) [d8eefe3](answerbook/vector@d8eefe3) - GitHub
* **security**: Make the warning for the deprecated OpenSSL provider more verbose (vectordotdev#18278) [042fb51](answerbook/vector@042fb51) - Jesse Szwedko
* separate handwritten and generated files in web-playground (vectordotdev#17871) [9ec0443](answerbook/vector@9ec0443) - GitHub
* stop ignoring topology test (vectordotdev#17953) [a05542a](answerbook/vector@a05542a) - GitHub
* update `rustls-webpki` due to security advisory (vectordotdev#18344) [1cb51a4](answerbook/vector@1cb51a4) - Jesse Szwedko
* Update `smp` to its latest released version (vectordotdev#18204) [7603d28](answerbook/vector@7603d28) - GitHub

### Features

* **adaptive_concurrency**: support configuring the initial ARC limit (vectordotdev#18175) [3b53bcd](answerbook/vector@3b53bcd) - GitHub
* add support for `external_id` in AWS assume role (vectordotdev#17743) [689a79e](answerbook/vector@689a79e) - GitHub
* **clickhouse sink**: make `database` and `table` templateable (vectordotdev#18005) [536a7f1](answerbook/vector@536a7f1) - GitHub
* **codecs**: add support for protobuf decoding (vectordotdev#18019) [a06c711](answerbook/vector@a06c711) - GitHub
* **component validation**: validate `component_errors_total` for sources (vectordotdev#17965) [aa60520](answerbook/vector@aa60520) - GitHub
* **deps, vrl**: Update VRL to 0.6.0 (vectordotdev#18150) [adfef2e](answerbook/vector@adfef2e) - GitHub
* emit an error if the condition return type is not a boolean (vectordotdev#18196) [caf6103](answerbook/vector@caf6103) - GitHub
* LogSchema metadata key refactoring (vectordotdev#18099) [a8bb9f4](answerbook/vector@a8bb9f4) - GitHub
* Migrate `LogSchema` `source_type_key` to new lookup code (vectordotdev#17947) [d29424d](answerbook/vector@d29424d) - GitHub
* Migrate LogSchema::host_key to new lookup code (vectordotdev#17972) [32950d8](answerbook/vector@32950d8) - GitHub
* Migrate LogSchema::message_key to new lookup code (vectordotdev#18024) [0f14c0d](answerbook/vector@0f14c0d) - GitHub
* Migrate LogSchema::metadata key to new lookup code (vectordotdev#18058) [8663602](answerbook/vector@8663602) - GitHub
* migrate to `async_nats` client (vectordotdev#18165) [483e46f](answerbook/vector@483e46f) - GitHub
* **new sink**: Adding greptimedb metrics sink (vectordotdev#17198) [98f44ae](answerbook/vector@98f44ae) - GitHub
* **new sink**: Initial `datadog_events` sink (vectordotdev#7678) [53fc86a](answerbook/vector@53fc86a) - Jesse Szwedko
* Refactor 'event.get()' to use path types (vectordotdev#18160) [e476e12](answerbook/vector@e476e12) - GitHub
* Refactor dnstap to use 'OwnedValuePath's (vectordotdev#18212) [ca7fa05](answerbook/vector@ca7fa05) - GitHub
* Refactor TraceEvent insert to use TargetPath compatible types (vectordotdev#18090) [f015b29](answerbook/vector@f015b29) - GitHub
* replace LogEvent 'String's with '&OwnedTargetPath's (vectordotdev#18084) [065eecb](answerbook/vector@065eecb) - GitHub
* replace tuples with &OwnedTargetPath wherever possible (vectordotdev#18097) [28f5c23](answerbook/vector@28f5c23) - GitHub
* switch to crates.io release of Azure SDK (vectordotdev#18166) [3c535ec](answerbook/vector@3c535ec) - GitHub

### Miscellaneous

* Merge pull request vectordotdev#379 from answerbook/feature/LOG-18882 [8bd9860](answerbook/vector@8bd9860) - GitHub [LOG-18882](https://logdna.atlassian.net/browse/LOG-18882)
* Merge branch 'master' into feature/LOG-18882 [d217387](answerbook/vector@d217387) - Darin Spivey [LOG-18882](https://logdna.atlassian.net/browse/LOG-18882)
* Merge tag 'v0.32.2' into feature/LOG-18882 [c05f969](answerbook/vector@c05f969) - Darin Spivey [LOG-18882](https://logdna.atlassian.net/browse/LOG-18882)
* Managed by Terraform provider [92e320a](answerbook/vector@92e320a) - Terraform
* 0.32.0.cue typo (vectordotdev#18270) [0f7d6e6](answerbook/vector@0f7d6e6) - Jesse Szwedko
* add PGO information (vectordotdev#18369) [3040ae2](answerbook/vector@3040ae2) - Jesse Szwedko
* check VRL conditions return type at compile time (vectordotdev#17894) [fa489f8](answerbook/vector@fa489f8) - GitHub
* **ci**: combine build steps for integration test workflows (vectordotdev#17724) [911477a](answerbook/vector@911477a) - GitHub
* describe the difference between configuration fields and runtime flags (vectordotdev#17784) [01e2dfa](answerbook/vector@01e2dfa) - GitHub
* **elasticsearch sink**: Allow empty data_stream fields (vectordotdev#18193) [1dd7bb1](answerbook/vector@1dd7bb1) - GitHub
* **file source**: fix some typos (vectordotdev#18401) [1164f55](answerbook/vector@1164f55) - Jesse Szwedko
* Fix "Bring your own toolbox" in `DEVELOPING.md` (vectordotdev#18014) [115bd7b](answerbook/vector@115bd7b) - GitHub
* Fix schema.log_namespace and telemetry.tags documentation (vectordotdev#17961) [50736e2](answerbook/vector@50736e2) - GitHub
* **internal docs**: Fix basic sink tutorial issues (vectordotdev#18136) [5a6ce73](answerbook/vector@5a6ce73) - GitHub
* **lua transform**: Emit events with the `source_id` set (vectordotdev#17870) [bc1b83a](answerbook/vector@bc1b83a) - GitHub
* **observability**: add fixed tag option to `RegisteredEventCache` (vectordotdev#17814) [bc86222](answerbook/vector@bc86222) - GitHub
* **prometheus_scrape source**: run requests in parallel with timeouts (vectordotdev#18021) [a9df958](answerbook/vector@a9df958) - GitHub