Current Behavior
We are using the ec_deployment_elasticsearch_keystore resource to create a secret that is used together with custom user settings for OIDC login with Auth0. When changing, removing, or renaming a secret (in our case from "auth0" to "auth0-customers"), the terraform plan looks correct and shows that the old entry would be destroyed and the new one created:
  # ec_deployment_elasticsearch_keystore.auth0_client_secret will be destroyed
  # (because resource uses count or for_each)
  - resource "ec_deployment_elasticsearch_keystore" "auth0_client_secret" {
      - as_file       = false -> null
      - deployment_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -> null
      - id            = "1234567890" -> null
      - setting_name  = "xpack.security.authc.realms.oidc.auth0.rp.client_secret" -> null
      - value         = (sensitive value)
    }

  # ec_deployment_elasticsearch_keystore.auth0_client_secret["auth0-customers"] will be created
  + resource "ec_deployment_elasticsearch_keystore" "auth0_client_secret" {
      + deployment_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      + id            = (known after apply)
      + setting_name  = "xpack.security.authc.realms.oidc.auth0-customers.rp.client_secret"
      + value         = (sensitive value)
    }
But in reality that is not what happened: we end up with both the new secret "auth0-customers" and the old secret "auth0" in the keystore! This is a big issue, because we cannot apply any further change to the deployment while the keystore contains a key that is not referenced in the user settings yaml (see elastic/elasticsearch#43722), and the cause is hard to track down.
Terraform definition
# Add the auth0_client_secret to the deployment key store
resource "ec_deployment_elasticsearch_keystore" "auth0_client_secret" {
  for_each      = var.auth0_tenants
  deployment_id = local.deployment_id
  setting_name  = "xpack.security.authc.realms.oidc.${each.key}.rp.client_secret"
  value         = each.value.client_secret
}

# var auth0_tenants definition:
variable "auth0_tenants" {
  type = map(object({
    client_secret = string
    tenant_url    = string
    client_id     = string
  }))
  description = "map of auth0 client-id, client-secret and tenant-url."
}
Steps to Reproduce
Pass a map to the above resource. The first time, pass a map keyed with "auth0"; then rename that key to "auth0-customers" and apply again (a sketch of such a map follows).
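For illustration only, a minimal sketch of what such a map could look like, matching the auth0_tenants variable above. The tenant URL, client id, and secret values are placeholders I made up, not taken from the report:

# terraform.tfvars (hypothetical values)
# First apply: the OIDC realm key is "auth0"
auth0_tenants = {
  "auth0" = {
    client_secret = "REPLACE_ME"
    tenant_url    = "https://example.eu.auth0.com/"
    client_id     = "REPLACE_ME"
  }
}

# Second apply: the same entry, but the map key renamed to "auth0-customers".
# The plan shows a destroy of the old keystore setting and a create of the
# new one, yet after apply both settings remain in the keystore.
# auth0_tenants = {
#   "auth0-customers" = {
#     client_secret = "REPLACE_ME"
#     tenant_url    = "https://example.eu.auth0.com/"
#     client_id     = "REPLACE_ME"
#   }
# }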
Context
I did not notice the change in that particular environment (the same code is used to deploy everywhere), and it left this key behind, which later blocked an upscale operation of the cluster (similar to the issue described in elastic/elasticsearch#43722).
Possible Solution
The provider should, as the plan indicates, actually remove the old entry from the keystore.
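As a possible manual cleanup (purely an assumption on my side, not verified against this bug), one could temporarily re-declare the stale setting as its own resource so Terraform tracks it again, then delete that block in a follow-up apply. The resource name auth0_stale_cleanup and the placeholder value below are made up; the attributes are the same ones the provider already uses above.

# Hypothetical cleanup step: re-adopt the leftover keystore entry so a later,
# stand-alone destroy can remove it. Untested; it depends on whether the
# delete path behaves correctly when it is the only change in the apply.
resource "ec_deployment_elasticsearch_keystore" "auth0_stale_cleanup" {
  deployment_id = local.deployment_id
  setting_name  = "xpack.security.authc.realms.oidc.auth0.rp.client_secret"
  value         = "unused-placeholder" # overwritten here, removed on the next apply
}
# After applying once, delete this block and apply again to trigger the destroy.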
Your Environment
Version used: "0.4.1"
Running against Elastic Cloud SaaS or Elastic Cloud Enterprise and version: "8.4.2"