
APM config from local configuration before enrollment is lost #5204

Closed
juliaElastic opened this issue May 7, 2024 · 48 comments
Labels
Team:Elastic-Agent-Control-Plane Label for the Agent Control Plane team


@juliaElastic
Contributor

juliaElastic commented May 7, 2024

We discovered an issue with the default APM config injected by cloud to internal ESS clusters: https://elasticco.atlassian.net/browse/CP-3464

It seems this APM config is not applied on newly created clusters, and fleet-server traces are not being sent to https://overview.elastic-cloud.com/app/r/s/JIEzg

It is not clear if the issue is on elastic-agent or fleet-server side.

Related doc: https://github.com/elastic/kibana/blob/main/x-pack/plugins/fleet/dev_docs/apm_tracing.md

Originally posted by @juliaElastic in elastic/fleet-server#3328 (comment)

@blakerouse
Contributor

I believe I have identified the issue. The flow below shows where it occurs:

  1. container is started with a custom elastic-agent.yml file that has all the cloud settings present.
  2. container performs enrollment
  3. container writes fleet.yml
  4. container moves/replaces the elastic-agent.yml with a file that only has fleet.enabled: true and any of these settings from the original elastic-agent.yml - https://github.com/elastic/elastic-agent/blob/main/internal/pkg/agent/cmd/enroll_cmd.go#L1056
  5. container performs run reading the contents of elastic-agent.yml
  6. ESS comes along and overwrites the elastic-agent.yml, hiding the fact that the container is actually running from a different configuration

Depending on timing, steps 5 and 6 can get crossed, which can very rarely cause APM tracing to work, as @simitt noted once.

There are a few possible solutions:

  1. Update getPersistentConfig to handle the tracing options (a rough sketch follows at the end of this comment).
  • Pro: same path we take today
  • Con: always have to keep updating this code for new options (it seems some options cloud already puts in the file are being ignored)
  2. Use the ELASTIC_AGENT_CLOUD environment variable to not overwrite the elastic-agent.yml.
  • Pro: cloud specific and can add any setting they want
  • Con: cloud specific (haha)
  3. Don't overwrite the elastic-agent.yml if it already contains fleet.enabled: true.
  • Pro: works for all users who want to provide a custom configuration on Fleet
  • Con: we might not want users to do that

Looking for input on the proper solution from @elastic/elastic-agent-control-plane.
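For option 1, a rough illustrative sketch of the idea (the helper name, the dotted-key map shape, and the list of keys are assumptions for this example, not the actual getPersistentConfig signature in enroll_cmd.go):

func persistentSettings(original map[string]interface{}) map[string]interface{} {
    // Hypothetical sketch only: copy selected keys from the pre-enrollment
    // elastic-agent.yml into the minimal config that replaces it, so local
    // APM/tracing settings survive the rewrite.
    keep := []string{
        "agent.logging.level",
        "agent.monitoring.traces",
        "agent.monitoring.apm.hosts",
        "agent.monitoring.apm.environment",
        "agent.monitoring.apm.secret_token",
    }

    out := map[string]interface{}{"fleet.enabled": true}
    for _, k := range keep {
        if v, ok := original[k]; ok {
            out[k] = v
        }
    }
    return out
}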

@blakerouse blakerouse transferred this issue from elastic/fleet-server Jul 26, 2024
@blakerouse blakerouse changed the title APM config for internal ESS clusters is not applied by agent/fleet-server APM config from local configuration before enrollment is lost Jul 26, 2024
@blakerouse
Contributor

Moved this to the Elastic Agent repository because it is an Elastic Agent issue, not a Fleet Server or APM issue.

@michel-laterman
Contributor

Additional option:
4. If I'm understanding the cloud startup process correctly, we can use the existing fleet-setup.yml file that containers use:

for _, f := range []string{"fleet-setup.yml", "credentials.yml"} {
    c, err := config.LoadFile(filepath.Join(paths.Config(), f))
    if err != nil && !os.IsNotExist(err) {
        return fmt.Errorf("parsing config file(%s): %w", f, err)
    }
    if c != nil {
        err = c.Unpack(&cfg)
        if err != nil {
            return fmt.Errorf("unpacking config file(%s): %w", f, err)
        }
        // if in elastic cloud mode, only run the agent when configured
        runAgent = true
    }
}

This uses the setupConfig struct; it would require #5199 to be implemented, and the allocator on cloud would have to write fleet-setup.yml instead of elastic-agent.yml.

@ycombinator
Contributor

ycombinator commented Jul 26, 2024

Thanks for the investigation, @blakerouse, and thanks for the additional option, @michel-laterman. Looking at other priorities and capacity, I'm adding this issue to our next sprint, which starts Monday, and assigning it to @michel-laterman. So let's pick it up then.

@blakerouse
Contributor

Additional option: 4. If I'm understanding the cloud startup process correctly; we can use the existing fleet-setup.yml file that containers use

for _, f := range []string{"fleet-setup.yml", "credentials.yml"} {
    c, err := config.LoadFile(filepath.Join(paths.Config(), f))
    if err != nil && !os.IsNotExist(err) {
        return fmt.Errorf("parsing config file(%s): %w", f, err)
    }
    if c != nil {
        err = c.Unpack(&cfg)
        if err != nil {
            return fmt.Errorf("unpacking config file(%s): %w", f, err)
        }
        // if in elastic cloud mode, only run the agent when configured
        runAgent = true
    }
}

Which uses the setupConfig struct, that would require #5199 to be implemented, and change the allocator on cloud to write fleet-setup.yml instead of elastic-agent.yml.

That could work if fleet-setup.yml options are persistent after enrollment. Looking at the code, it's not clear that that is the case; I don't believe it is.

I lean towards option 3, but I don't know if we want that behavior. It seems okay to me, but it might not be what we want for the product (i.e. allowing all options to be used locally before the policy is applied on top).

@cmacknz
Member

cmacknz commented Jul 26, 2024

Don't overwrite the elastic-agent.yml if it already contains the fleet.enabled: true

Interestingly, we already attempted to do this in #4166; see the shouldSkipReplace calls.

This feels along the lines of what users want for Fleet-managed agents: their initial configuration turns something on, and if Fleet doesn't set it (or also turns it on) it stays enabled. There is no window where a feature is briefly disabled during the transition from standalone to Fleet during enrollment.
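For illustration, a minimal sketch of the kind of check option 3 implies, reusing the config.LoadFile/Unpack helpers from the snippet quoted earlier in this thread; this is not the actual shouldSkipReplace from #4166, and the function name, struct tags, and error handling are simplified assumptions:

// skipReplace reports whether the existing elastic-agent.yml already opts in
// to Fleet management (fleet.enabled: true). If it does, enrollment would keep
// the user-provided file instead of replacing it, so local settings such as
// agent.monitoring.apm survive.
func skipReplace(path string) (bool, error) {
    c, err := config.LoadFile(path)
    if err != nil {
        if os.IsNotExist(err) {
            // No existing file; nothing to preserve.
            return false, nil
        }
        return false, fmt.Errorf("parsing config file(%s): %w", path, err)
    }

    var cfg struct {
        Fleet struct {
            Enabled bool `config:"enabled"`
        } `config:"fleet"`
    }
    if err := c.Unpack(&cfg); err != nil {
        return false, fmt.Errorf("unpacking config file(%s): %w", path, err)
    }
    return cfg.Fleet.Enabled, nil
}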

@ycombinator
Contributor

We discussed this in today's team meeting, and @cmacknz suggested an alternative approach that might solve the problem for APM Server in ESS. To be clear, this issue still points to a larger problem we have with config replacement, so it needs to be tackled per the options discussed in the last few comments; but it might not be as high priority if we can implement @cmacknz's proposed approach for APM Server in ESS.

That approach is basically to set the APM tracing configuration as part of the APM Server policy in ESS. I'm going to test it using the overrides API and verify that the config is passed all the way down to the APM Server component by looking at the diagnostic. cc: @simitt

@ycombinator
Contributor

That approach is basically to set the APM tracing configuration as part of the APM Server policy in ESS. I'm going to test it using the overrides API and verify that the config is passed all the way down to the APM Server component by looking at the diagnostic. cc: @simitt

I tested this today and it should work.

I created a new policy (because I couldn't override the preset Elastic Cloud agent policy in ESS even with the policy override API), enrolled an Agent with it, took a diagnostic and observed that agent.monitoring.traces was not set:

$ cat diag-before/pre-config.yaml | yq '.agent.monitoring'
{
  "enabled": true,
  "logs": true,
  "metrics": true,
  "namespace": "default",
  "use_output": "default"
}

Then, using the policy override API, I set agent.monitoring.traces: true in this policy, took another diagnostic, and observed that agent.monitoring.traces was indeed set:

$ cat diag-after/pre-config.yaml | yq '.agent.monitoring'
{
  "enabled": true,
  "logs": true,
  "metrics": true,
  "namespace": "default",
  "traces": true,
  "use_output": "default"
}

So I believe the solution would be to update the policy-elastic-agent-on-cloud policy in ESS to enable traces as part of monitoring. I'm not sure if the right way to do that is to add tracing to this array. @kpollich, could you advise?

cc: @simitt
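For reference, a trimmed, illustrative sketch of what that preconfigured policy entry in kibana.yml could look like with traces added to monitoring_enabled (the real ESS stackpack entry has many more fields, and traces only becomes valid once the Kibana schema change discussed below lands):

xpack.fleet.agentPolicies:
  - name: Elastic Cloud agent policy
    id: policy-elastic-agent-on-cloud
    monitoring_enabled:
      - logs
      - metrics
      - traces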

@kpollich
Member

kpollich commented Aug 5, 2024

The agent policy config schema would also need to be updated to allow traces to be set in that array here:

https://github.com/elastic/kibana/blob/7340abb888614670502bda936bf162aad80d5fa7/x-pack/plugins/fleet/server/types/models/agent_policy.ts#L57-L61

Without this change, the config including monitoring_enabled: ['traces'] would be considered invalid and Kibana would throw an error on startup.

@ycombinator
Contributor

Thanks @kpollich. Here is the PR to allow traces to be added to the monitoring_enabled array: elastic/kibana#189908

@simitt

simitt commented Aug 6, 2024

@ycombinator thanks for looking into alternatives here. We are very eager to finally get APM tracing enabled for apm-server; however, I am a bit worried about building a snowflake solution here and would prefer a general fix, to avoid conflicts with future changes that might not consider this specific solution.
We also won't be able to turn it on by default unless EA supports sampling #5211. Will this be supported in 8.16.0?

@ycombinator
Contributor

ycombinator commented Aug 6, 2024

I am a bit worried about building a snowflake solution here, and would prefer a general fix, to avoid any conflicts with future changes that might not consider this specific solution.

Hi @simitt, sorry I wasn't clearer in #5204 (comment), but the proposed approach is not a snowflake or workaround solution. @cmacknz can keep me honest but the thinking here is that since the APM Server in ESS is part of a Fleet-managed EA, it makes sense for any configuration for that APM Server to come from the Fleet-managed policy (as opposed to from the EA policy that's locally on disk).

@ycombinator
Contributor

We also won't be able to turn it on by default unless EA supports sampling #5211. Will this be supported in 8.16.0?

Yes, as of now 8.16.0 is the target for this issue. We have allocated it to a sprint that will finish before 8.16.0 feature freeze.

@simitt

simitt commented Aug 6, 2024

Thanks for the clarifications and timelines!

@ycombinator
Contributor

I tested on ESS QA with an 8.16.0-SNAPSHOT deployment and I see that the "Elastic Cloud Agent policy" now has agent.monitoring.enabled: true and agent.monitoring.traces: true.

[screenshot]

However, I'm not 100% sure if this is sufficient. @juliaElastic do we also need to inject the following section under agent.monitoring?

apm:
  hosts:
    - <apm host url>
  environment: <apm environment>
  secret_token: <secret token>

If so, where would one get the <apm host url>, <apm environment> and <secret token> from? And where/how would the above section be injected under https://github.com/elastic/cloud-assets/blob/4e9cf8979f57fd08db8a7ebb2b476b852fbd72bf/stackpack/kibana/config/kibana.yml#L250-L252?

@juliaElastic
Contributor Author

@ycombinator Yeah, I think it's needed. There is logic in the cloud repo that sets these values; the issue is that they are not applied in the agent: https://github.com/elastic/cloud/blob/master/scala-services/runner/src/main/scala/no/found/runner/allocation/stateless/ApmDockerContainer.scala#L434

@ycombinator
Contributor

ycombinator commented Aug 30, 2024

Thanks @juliaElastic.

There is a logic in the cloud repo that sets these values...

In that case, why am I not seeing them in the "Elastic Cloud Agent policy" (see the screenshot in #5204 (comment))? Do I need to do some extra configuration elsewhere to have these values show up in the policy?

... the issue is they are not applied in agent.

Yes, I see the temporary workaround in Agent code in:

// PatchAPMConfig is a temporary configuration patcher function (see ConfigPatchManager and ConfigPatch for reference) that
// will patch the configuration coming from Fleet adding the APM parameters from the elastic agent configuration file
// until Fleet supports this config directly
func PatchAPMConfig(log *logger.Logger, rawConfig *config.Config) func(change coordinator.ConfigChange) coordinator.ConfigChange {

Once we can confirm that Agent is able to receive the values from Fleet (for that I need to know the answer to the questions above), we can work on making the necessary changes in Agent to remove the temporary workaround.
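For illustration only, a simplified sketch of the patching idea behind that workaround (this is not the real PatchAPMConfig; the function name and the flat, dotted-key map representation are assumptions): copy the local agent.monitoring.apm block into the policy received from Fleet when the policy does not already provide one.

// patchAPM is an illustrative sketch, not the actual PatchAPMConfig: if the
// policy coming from Fleet does not carry an agent.monitoring.apm section,
// fill it in from the local elastic-agent configuration so the APM
// parameters injected by cloud are not lost.
func patchAPM(fleetPolicy, localCfg map[string]interface{}) map[string]interface{} {
    if _, ok := fleetPolicy["agent.monitoring.apm"]; ok {
        // Fleet already provides the APM settings; nothing to patch.
        return fleetPolicy
    }
    if apm, ok := localCfg["agent.monitoring.apm"]; ok {
        fleetPolicy["agent.monitoring.apm"] = apm
    }
    return fleetPolicy
}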

@juliaElastic
Contributor Author

juliaElastic commented Aug 30, 2024

@ycombinator AFAIK these APM configs are not added to the agent policy, but directly to elastic-agent.yml in cloud; see this issue. @AlexP-Elastic should know the details of how this works in cloud.

@ycombinator
Contributor

ycombinator commented Aug 30, 2024

@juliaElastic Right, that's what we are trying to move away from and have the APM configuration be part of the Fleet-managed Agent policy instead 🙂. See #5204 (comment).

So is there some way we can make that happen? I was able to get agent.monitoring.traces: true to show up in the Fleet-managed policy (see the screenshot in #5204 (comment)), but now I'm wondering how I can get the following section to also show up under agent.monitoring:

apm:
  hosts:
    - <apm host url>
  environment: <apm environment>
  secret_token: <secret token>

@juliaElastic
Contributor Author

I see, it seems an APM config is already there in the cloud policy based on a conditional, so maybe we just need to tweak the condition to enable it on all internal ESS clusters: https://github.com/elastic/cloud-assets/blob/4e9cf8979f57fd08db8a7ebb2b476b852fbd72bf/stackpack/kibana/config/kibana.yml#L315-L343

@ycombinator
Contributor

We need to figure out where we want those traces to be sent; it seems that's not clear. Do we want them sent to the APM Server running on the deployment (so accessible to the user), or is it for our own usage and we want to send them to a monitoring cluster?

@simitt is probably the best person to answer this question.

@simitt

simitt commented Sep 3, 2024

The APM data should be sent to the internal Elastic cluster, for support engineers and developers to leverage for troubleshooting.

@ycombinator
Contributor

Thanks @simitt.

@juliaElastic @nchaulet Given this, do you know what the policy should use for the values of <apm host url>, <apm environment>, and <secret token>? And, more importantly, from where/how it should get those values to inject them into the policy?

@ycombinator
Contributor

BTW, since I've been testing on ESS QA and feature freeze for ms-113.0 is coming up, I'm going to revert my Cloud PRs (specifically https://github.com/elastic/cloud/pull/130605, https://github.com/elastic/cloud/pull/130769, and https://github.com/elastic/cloud-assets/pull/1569) for now, so these changes don't accidentally get released to production. Once ms-113.0 has been released, I will reintroduce these (and any other changes based on the answer to the previous comment) and resume testing in ESS QA.

@juliaElastic
Contributor Author

I'm not sure how to reference the internal Elastic cluster in kibana config. @simitt Could you help with that?

@simitt

simitt commented Sep 5, 2024

I think @AlexP-Elastic might be able to provide the details here, as ES and Kibana are already shipping tracing data to cloud regional clusters.

@AlexP-Elastic

I'm not sure how to reference the internal Elastic cluster in kibana config

The Elastic cluster is injected by control plane into the templated config files in the stack pack: https://github.com/elastic/cloud/blob/master/scala-services/runner/src/main/scala/no/found/runner/allocation/stateless/KibanaDockerContainer.scala#L561-L589

for regional values, and

https://github.com/elastic/cloud/blob/ca0c6cabbf811a9fc2da3d5fe71738c56679e563/scala-services/adminconsole/src/main/scala/no/found/adminconsole/api/v1/deployments/services/controlplanesettings/providers/kibana/KibanaInternalApmSettings.scala#L49-L57

for global values

So, assuming I'm understanding correctly and you want some Kibana code to inject them into the APM policy, the easiest would be if you could reference elastic.apm.serverUrl / elastic.apm.secretToken / elastic.apm.environment from the YAML.

(there's an additional complication in getting the fields that are needed to bypass IP filtering)

@kpollich kpollich assigned juliaElastic and unassigned ycombinator Sep 6, 2024
@juliaElastic
Contributor Author

so assuming I'm understanding correctly and you want some Kibana code to inject them into the APM policy, easiest would be if you could reference elastic.apm.serverUrl / elastic.apm.secretToken / elastic.apm.environment from the YAML

Are these values already available to reference in kibana.yml, or is a code change needed to make them work?

@AlexP-Elastic

AlexP-Elastic commented Sep 6, 2024

These values are already in Kibana YAML (for versions of Kibana that support them)

You can't see them in the stackpack you linked because they are injected by the control plane infrastructure explicitly (not via the templating we use for defaults).

@juliaElastic
Contributor Author

I tested setting monitoring in a preconfigured policy with overrides locally, as Nicolas suggested here: #5204 (comment)
It seems to work, so I'm going to add this to the cloud config.

@juliaElastic
Contributor Author

juliaElastic commented Sep 10, 2024

@AlexP-Elastic I tested the change in the latest 8.16-SNAPSHOT and it seems the substitutions of elastic.apm.serverUrl / elastic.apm.secretToken / elastic.apm.environment didn't work. How can we fix this?
Instance: https://staging.found.no/deployments/edc7201fc9a5448982955f7017e02106

agent:
  monitoring:
    enabled: true
    traces: true
    apm:
      hosts:
        - ''
      environment: null
      secret_token: null

@AlexP-Elastic

AlexP-Elastic commented Sep 10, 2024

Oh no, I just realized I was guilty of totally not understanding what you were doing and giving your last PR a distracted LGTM instead of reading it :(

Sorry about that

What I thought you were proposing was to have Kibana inject the fields into the policy (and that the PR https://github.com/elastic/cloud-assets/pull/1573/files was just adding some placeholder fields).

What it actually does

      agent.monitoring:
        apm:
          hosts:
            - "{{ elastic.apm.serverUrl }}"
          environment: {{ elastic.apm.environment }}
          secret_token: {{ elastic.apm.secretToken }}

doesn't work at all because elastic.apm.serverUrl isn't an actual variable in the templating (it's part of the rendered YAML, which cannot be used to render the same YAML :) )

Hang on, I need to think about this for a moment, now that I know you are trying to do it all via templates and not via code in Kibana.

@juliaElastic
Contributor Author

@AlexP-Elastic No worries, we could inject the fields from Kibana too; I'm just not sure where to take these values from in Kibana.

@AlexP-Elastic

This is one option: https://github.com/elastic/cloud/pull/131470/files. I think this is preferable to writing code inside Kibana, if we have to do it using the existing settings.

I think my preferred architectural solution would be for the APM container to take the values injected into elastic-agent.yaml and merge them with the policy ... but presumably that is not easy to do?

@juliaElastic
Contributor Author

I think my preferred architectural solution would be for the APM container to take the values injected into elastic-agent.yaml and merge them with the policy ... but presumably that is not easy to do?

Yeah I'm not sure if we could change the preconfiguration or call the kibana Fleet API from the APM container to modify the cloud agent policy.

@kpollich
Member

kpollich commented Sep 12, 2024

@juliaElastic - Is this blocked because the root cause fix here would involve making changes to APM itself, or because we are waiting on https://github.com/elastic/cloud/pull/131470?

@juliaElastic
Contributor Author

I'm waiting to see if https://github.com/elastic/cloud/pull/131470 works; otherwise we will need to get some help from the APM team to see if we can add the config from the APM container.

@AlexP-Elastic

I'm just testing https://github.com/elastic/cloud/pull/131470 now. I think we (control plane) are happy to go forward with this as the plan, so once it's working we'll get it merged (hopefully by the end of the week), and you can follow the kibana.yml in that PR to make the changes to cloud-assets (which is what ESS runs).

@AlexP-Elastic

@juliaElastic Sorry for the delay, https://github.com/elastic/cloud/pull/131470 is now merged and in QA, so you can re-create your https://github.com/elastic/cloud/pull/131470 PR against cloud-assets, but using the example here: https://github.com/elastic/cloud/pull/131470/files#diff-e809f52103f72da78d12a320c177bef73b3b86240bc78df5e867d88f94656cb9

and then the next day we can actually test it out in QA

@juliaElastic
Contributor Author

@AlexP-Elastic Thanks, I created a PR: https://github.com/elastic/cloud-assets/pull/1588
Do you mean that customSettings will be available in cloud-assets too?

@AlexP-Elastic

@juliaElastic - yep I meant to create a PR

Actually, I just found out that we've branched master -> 9.x and 8.x to 8.x, so you'll need to issue the same PR against the 8.x branch; sorry about that.

@juliaElastic
Contributor Author

juliaElastic commented Sep 19, 2024

Tested today in cloud QA with an 8.16-SNAPSHOT deployment; I'm seeing the APM config in the cloud agent policy:
https://admin.qa.cld.elstc.co/deployments/7903c8854c424a8f9683e78c1d227dee

[screenshot]

Though when I looked up traces on the APM server the metrics were sent to, I'm not seeing any fleet-server traces from the test deployment. Checking fleet-server logs to see what's happening.

[screenshot]

I'm seeing that the monitoring server started successfully in the fleet-server logs:

{"log.level":"info","@timestamp":"2024-09-19T09:40:46.332Z","log.origin":{"function":"github.com/elastic/elastic-agent/internal/pkg/agent/application/monitoring/reload.(*ServerReloader).Start","file.name":"reload/reload.go","file.line":54},"message":"Starting monitoring server with cfg &config.MonitoringConfig{Enabled:true, MonitorLogs:false, MonitorMetrics:false, MetricsPeriod:\"\", LogMetrics:true, HTTP:(*config.MonitoringHTTPConfig)(0xc0014b5c50), Namespace:\"default\", Pprof:(*config.PprofConfig)(nil), MonitorTraces:true, APM:config.APMConfig{Environment:\"qa\", APIKey:\"\", SecretToken:\"wU5riaUwR2iW0VV04D\", Hosts:[]string{\"https://8f5a5c4ea3bf4ebb9f039a5444d843db.eu-west-1.aws.qa.cld.elstc.co:9243\"}, GlobalLabels:map[string]string(nil), TLS:config.APMTLS{SkipVerify:false, ServerCertificate:\"\", ServerCA:\"\"}}, Diagnostics:config.Diagnostics{Uploader:config.Uploader{MaxRetries:10, InitDur:1000000000, MaxDur:600000000000}, Limit:config.Limit{Interval:60000000000, Burst:1}}}","log":{"source":"elastic-agent"},"ecs.version":"1.6.0"}

Am I missing something here? I don't see any APM-related errors in the logs.
Here are the diagnostics:
elastic-agent-diagnostics-2024-09-19T09-46-03Z-00.zip

EDIT: Never mind, I found the fleet-server traces by searching on one trace ID from the logs; I didn't find them earlier because the deployment ID is not on fleet-server traces.
So I consider the test successful.

{"log.level":"info","@timestamp":"2024-09-19T09:40:48.080Z","message":"applying new components data","component":{"binary":"fleet-server","dataset":"elastic_agent.fleet_server","id":"fleet-server-es-containerhost","type":"fleet-server"},"log":{"source":"fleet-server-es-containerhost"},"@timestamp":"2024-09-19T09:40:48.08Z","trace.id":"72a11ae256ffca942937b6d766a5ca4b","service.type":"fleet-server","server.address":"","fleet.access.apikey.id":"DBinCZIBhH7XyTVuwAuj","req.Components":[{"id":"fleet-server-es-containerhost","message":"Healthy: communicating with pid '188'","status":"HEALTHY","type":"fleet-server","units":[{"id":"fleet-server-es-containerhost-fleet-server-fleet_server-elastic-cloud-fleet-server","message":"Starting","status":"STARTING","type":"input"},{"id":"fleet-server-es-containerhost","message":"Starting","status":"STARTING","type":"output"}]},{"id":"apm-es-containerhost","message":"Healthy: communicating with pid '218'","status":"HEALTHY","type":"apm","units":[{"id":"apm-es-containerhost-elastic-cloud-apm","message":"Starting: spawned pid '218'","status":"STARTING","type":"input"},{"id":"apm-es-containerhost","message":"Starting: spawned pid '218'","status":"STARTING","type":"output"}]}],"transaction.id":"72a11ae256ffca94","ecs.version":"1.6.0","service.name":"fleet-server","http.request.id":"01J84TFP8FE5CTJM6RDP6PV8G8","fleet.agent.id":"6566364d-f43c-4f9a-b6d9-c7f723b2b35f","ecs.version":"1.6.0"}

https://platform-metrics.kb.eu-west-1.aws.qa.cld.elstc.co/app/apm/services/fleet-server/overview?comparisonEnabled=true&environment=ENVIRONMENT_ALL&kuery=labels.agent_id%3A%20%226566364d-f43c-4f9a-b6d9-c7f723b2b35f%22&latencyAggregationType=avg&offset=1d&rangeFrom=now-3h&rangeTo=now&serviceGroup=&transactionType=request

[screenshots]

@juliaElastic
Contributor Author

Anything else to do before we close the issue? We can add a sample rate to the cloud config when this is done: #5211

@jlind23
Contributor

jlind23 commented Sep 19, 2024

@juliaElastic Since the sampling rate is optional, I am not sure we want to add it by default; I would rather consider this issue done and create a follow-up PR later on if we need to add a sampling rate.
@ycombinator @kpollich any thoughts?

@kpollich
Member

+1 to consider this done

@ycombinator
Contributor

ycombinator commented Sep 19, 2024

Very cool to see this done, @juliaElastic and @AlexP-Elastic. Thank you!

Agreed on adding sampling_rate in a follow-up PR once #5211 is resolved. cc: @simitt
