This is just to avoid pods being constantly killed during tests.
This commit adds an initContainer to the Kibana Pod to retrieve a service account token from the Elasticsearch API.
Resources:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-service-token.html
This commit adds a post-delete Job to delete the Kibana ServiceAccount token from Elasticsearch when the chart is uninstalled. This is required to avoid an error during re-installations because the token already exists.
Current status: Kibana starts but then fails with:
The Kibana enrollment token is successfully created by the init container calling the Elasticsearch API and mounted into the Kibana pod.
Steps to reproduce:
@jbudz The generated enrollment token is valid since we can use it to query Elasticsearch from the Kibana pod. I'm not sure why. I didn't find any way to get more details about this failure (I didn't find any option for more verbose logs). The errors seem to happen in the decodeEnrollmentToken function, but I don't speak JS/TS. Do you have any idea of what is happening?
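For context, an enrollment token is (as far as I can tell from the Kibana interactive setup code) just a base64-encoded JSON payload, so one way to sanity-check the token generated in the pod is to decode it by hand. A minimal sketch, assuming base64 and jq are available in the pod and ENROLLMENT_TOKEN holds the token:

echo "$ENROLLMENT_TOKEN" | base64 -d | jq .
# expected rough shape (field names per the Kibana setup code):
# {"ver": "8.x.x", "adr": ["host:9200"], "fgr": "<ca cert fingerprint>", "key": "<api key>"}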
Note also that I tried regenerating the token using the
kibana/templates/deployment.yaml
command:
- sh
- -c
- curl --output /usr/share/kibana/config/tokens/{{ template "kibana.fullname" . }} --fail -XPOST --cacert /usr/share/kibana/config/certs/tls.crt -u "$(ELASTICSEARCH_USERNAME):$(ELASTICSEARCH_PASSWORD)" "{{ .Values.elasticsearchHosts }}/_security/service/elastic/kibana/credential/token/{{ template "kibana.fullname" . }}?pretty"
There are two different tokens and I think we may be confusing the two:
enrollment token:
- generated by bin/elasticsearch-create-enrollment-token
- when consumed via bin/kibana-setup, it configures kibana.yml:
elasticsearch.hosts: ["https://localhost:9200/"]
elasticsearch.serviceAccountToken: XXXXXXX
elasticsearch.ssl.certificateAuthorities:
[/absolute/path/to/kibana-8.3.3/data/ca_1659104310110.crt]
xpack.fleet.outputs:
[
{
id: fleet-default-output,
name: default,
is_default: true,
is_default_monitoring: true,
type: elasticsearch,
hosts: ["https://localhost:9200/"],
ca_trusted_fingerprint: XXXXXXX,
},
]
service account token (used here):
- gets the value needed to set elasticsearch.serviceAccountToken in kibana.yml
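To make the difference concrete, each token is obtained through a different mechanism (commands per the Elasticsearch docs; the token name my-kibana is just an example):

# enrollment token: generated on an Elasticsearch node,
# consumed once by bin/kibana-setup
bin/elasticsearch-create-enrollment-token -s kibana

# service account token: created via the security API, then set as
# elasticsearch.serviceAccountToken in kibana.yml
curl -u "elastic:$ELASTIC_PASSWORD" -X POST \
  "https://localhost:9200/_security/service/elastic/kibana/credential/token/my-kibana?pretty"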
These two tokens are really confusing.
This makes things a lot more complicated because the token is generated from the Kibana init container when the chart is already deployed. That means we can't use it in the Helm template itself to modify the kibana.yml config map or environment variables before the real Kibana container starts.
AFAIK, the only thing we can do is mount it as a file into the Kibana container and run a command to use it once the Kibana process has already started.
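Something like this pod spec fragment (a minimal sketch; volume and path names are illustrative, not the chart's final values) would share the token file between the init container and the Kibana container:

volumes:
  - name: kibana-tokens
    emptyDir: {}
initContainers:
  - name: configure-kibana-token
    volumeMounts:
      - name: kibana-tokens
        mountPath: /usr/share/kibana/config/tokens
containers:
  - name: kibana
    volumeMounts:
      - name: kibana-tokens
        mountPath: /usr/share/kibana/config/tokens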
I'm thinking that might work. Can we send a SIGHUP signal without interfering with the process monitor? elastic/kibana#52756 (comment). It sounds like that may not work for all configurations though; need to check.
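For reference, a sketch of what that would look like from outside the pod, assuming Kibana runs as PID 1 and a shell is available in the image:

# ask the Kibana process to reload its configuration
kubectl exec <kibana-pod> -- /bin/sh -c 'kill -HUP 1'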
Another solution would be to override the Kibana container command in the Helm template to run a script that reads the mounted file, parses the token, and loads it as an environment variable or edits the kibana.yml file, then finally starts Kibana by manually running the default image command.
I'd like to avoid this if we can, because any change in the default image command (in a different Kibana version or for people using a customized Kibana image) would break the chart. However, I feel that may be the only way to do it.
Reloading the Kibana config with kill -HUP doesn't seem to pick up the elasticsearch.serviceAccountToken value in my tests. In addition, I discovered that the kibana.yml config file is mounted read-only, so editing it to add this parameter would require too many workarounds. I ended up using a custom entrypoint script as mentioned in my previous comment => 2319e09
I guess we can still rework it if we change the Kibana docker image entrypoint in a later version.
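Roughly, the wrapper works like this (a simplified sketch of the idea, not the exact script from 2319e09; the token file path and the use of jq are illustrative, and it assumes the image's env-var translation covers elasticsearch.serviceAccountToken):

#!/usr/bin/env bash
set -e
# read the service account token written by the init container
# (the create-token API returns JSON with the value under .token.value)
ELASTICSEARCH_SERVICEACCOUNTTOKEN="$(jq -r '.token.value' /usr/share/kibana/config/tokens/kibana.json)"
export ELASTICSEARCH_SERVICEACCOUNTTOKEN
# hand over to the stock image entrypoint, which maps the env var
# to elasticsearch.serviceAccountToken in the Kibana config
exec /usr/local/bin/kibana-docker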
@jbudz @elastic/release-eng @framsouza I think this PR is ready for review. The Kibana upgrade test (upgrade from 7.17.x to 8.4.1) is failing for now. I'll investigate in a follow-up PR whether that's something we can fix or if we need to disable it and advertise that upgrading from the previous major version isn't supported.
@@ -76,11 +85,36 @@ spec:
  imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
  {{- end }}
  initContainers:
  - name: configure-kibana-token
@elastic/kibana-security can you review this approach? Short version: interactive setup isn't an option in a Helm environment, so we're manually configuring Kibana similarly to what the CLI does (the resulting config is sketched after this list):
- a kibana service account token is created
- certificates are added
- hostname is set
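The net effect is that Kibana ends up with roughly the config that kibana-setup would have written (a sketch; the host and file names follow typical chart defaults, not necessarily the exact values used here):

elasticsearch.hosts: ["https://elasticsearch-master:9200"]
elasticsearch.serviceAccountToken: "<value fetched by the init container>"
elasticsearch.ssl.certificateAuthorities: ["/usr/share/kibana/config/certs/elasticsearch-ca.pem"]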
I've tested locally and everything worked smoothly. Well done, @jmlrt! That was a really good achievement. 🎆
The approach to configure Kibana security LGTM, just left one question and one nit. Thank you!
command:
- sh
- -c
- curl --output {{ template "kibana.home_dir" . }}/config/tokens/{{ template "kibana.fullname" . }}.json --fail -XPOST --cacert {{ template "kibana.home_dir" . }}/config/certs/{{ .Values.elasticsearchCertificateAuthoritiesFile }} -u "$(ELASTICSEARCH_USERNAME):$(ELASTICSEARCH_PASSWORD)" "{{ .Values.elasticsearchHosts }}/_security/service/elastic/kibana/credential/token/{{ template "kibana.fullname" . }}?pretty"
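For reference, per the Elasticsearch docs the create-service-token API responds with JSON like the following (values are placeholders), which is what this curl writes to the mounted .json file:

{
  "created": true,
  "token": {
    "name": "kibana-release",
    "value": "AAEAAWVsYXN0aWMva2liYW5hL..."
  }
}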
question: Sorry if it's a dumb question, but I'm not a Helm chart/K8s expert: what would happen if the user installs the Kibana Helm chart, then uninstalls it and tries to install it again? Won't this request fail because the init container will try to create the token with the same name (kibana.fullname)?
Now that's a really relevant question. When you uninstall the chart, there is a K8s Job triggered via a Helm post-delete hook that calls Elasticsearch again to remove the token. This way, if we re-install the chart, it can recreate a new token with the same name.
helm-charts/kibana/templates/job.yaml
Lines 1 to 7 in 80e4ada
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "kibana.fullname" . }}-post-delete
  labels: {{ include "kibana.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": post-delete,post-upgrade
helm-charts/kibana/templates/job.yaml
Lines 23 to 31 in 80e4ada
command: ["curl"] | |
args: | |
- --fail | |
- -XDELETE | |
- --cacert | |
- {{ template "kibana.home_dir" . }}/config/certs/{{ .Values.elasticsearchCertificateAuthoritiesFile }} | |
- -u | |
- "$(ELASTICSEARCH_USERNAME):$(ELASTICSEARCH_PASSWORD)" | |
- "{{ .Values.elasticsearchHosts }}/_security/service/elastic/kibana/credential/token/{{ template "kibana.fullname" . }}" |
Awesome, thank you for clarifying that!
Forgot to add this in elastic#1679
Hi @jmlrt, we're having problems with this. I see there was a comment made above about having the initContainer create the token. It means every time the container is terminated it will try to create a token with the same name. For example, we run our Elastic environment on AWS spot instances, so the container can be terminated regularly; it is now failing to start up because it is trying to create a token which already exists. Not sure what to do about this. Thanks
Hi @pjaak, thanks for reporting this bug 👍🏻. I could reproduce it locally and need to find a way to fix it.
Hi @jmlrt, I would propose maybe a Helm pre-install hook so it only runs on the first helm install, similar to the way you delete the token in the post-delete hook. :)
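Another option (just a sketch, untested) would be to make the init container idempotent, so a restarted pod deletes any leftover token before recreating it; the variable names here are illustrative shorthand for the chart's templated values:

# ignore the failure if the token doesn't exist yet, then recreate it
curl -XDELETE --cacert "$CA" -u "$ES_USER:$ES_PASS" \
  "$ES_HOSTS/_security/service/elastic/kibana/credential/token/$TOKEN_NAME" || true
curl --fail -XPOST --cacert "$CA" -u "$ES_USER:$ES_PASS" \
  --output /usr/share/kibana/config/tokens/$TOKEN_NAME.json \
  "$ES_HOSTS/_security/service/elastic/kibana/credential/token/$TOKEN_NAME"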
This PR updates the Kibana chart to make it compatible with the 8.x version.