
Where are tls.crt and tls.key available? #3832

Closed
Jfisher77 opened this issue Feb 28, 2019 · 15 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Jfisher77

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.): Yes

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): tls certs, certificates

Context:

We have various ingresses configured for TLS using a secret containing a tls.key and tls.crt. When hitting the ingress in the browser, we are correctly presented with the certificate from this secret (self-signed).

We have a scenario to terminate TLS on the ingress, then re-encrypt the traffic for TLS-MA from the ingress -> service. To do this, we have added proxy_ssl_certificate and proxy_ssl_certificate_key as configurations in the server-snippet annotation pointing to a certificate on the machine. We've verified that this fits our need.

However in the server configuration in nginx.conf for this ingress we see only a reference to a fake certificate. How is the correct certificate from the TLS secret presented in the browser when it is not referenced in this configuration in nginx.conf?

@anjuls

anjuls commented Feb 28, 2019

@Jfisher77 can you share your nginx.conf and the steps you used to set up the ingress controller? If TLS terminates at the ingress, then it is quite simple:

```
kubectl create secret tls nginx-certs --key ./ssl.key --cert ./ssl.crt -n ingress-nginx
```

Then pass the argument `--default-ssl-certificate=ingress-nginx/nginx-certs` to the NGINX ingress controller container.

If you are looking for TLS between the ingress and the pod, then you probably need to implement some kind of service mesh (which supports mTLS) or cert-manager. You could also secure the CNI network if you don't want to go down that path.
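For reference, an Ingress can also consume such a secret directly via its `tls` section instead of the controller-wide default. A minimal sketch, assuming hypothetical host and service names (and the `extensions/v1beta1` API in use at the time of this issue):

```yaml
# Hypothetical Ingress that presents the certificate from the
# nginx-certs secret for the given host. Host, service name, and
# port are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: ingress-nginx
spec:
  tls:
  - hosts:
    - example.internal
    secretName: nginx-certs
  rules:
  - host: example.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 443
```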

@Jfisher77
Author

@anjuls the TLS between ingress and pod is the key bit for us. We have verified it's possible to terminate TLS on the ingress for a request, then re-encrypt the traffic using the certificates for TLS-MA with the backend service. To achieve it we use config like this:

```yaml
nginx.ingress.kubernetes.io/server-snippet: |
  proxy_pass_request_body on;
  proxy_pass_request_headers on;
  proxy_ssl_server_name on;
  proxy_ssl_certificate <path_to_backend_cert>;
  proxy_ssl_certificate_key <path_to_backend_cert_key>;
```

However, as we are deploying multiple services across multiple environments, we don't want to store the certificates and keys on the machine itself. I know the nginx.conf contains configuration pointing to the Kubernetes fake certificate (which is what the browser is presented with for an ingress without TLS, or with misconfigured certificates in the TLS secret). Yet when the TLS secret is configured correctly and the CN matches the host, the certificate from the secret is what the browser is presented with, while nginx.conf still points to the fake certificate.

The reason I opened this request was to see if we could use the same mechanic which presents the correct TLS cert on the ingress to re-encrypt the traffic using the same certificates to the backend.

@aledbf
Member

aledbf commented Mar 1, 2019

@Jfisher77 you don't need the SSL certificate to secure the communication with the backend. Please check https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
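The annotation linked above tells the controller which protocol to use when talking to the backend, so traffic is re-encrypted without managing client certificates in snippets. A minimal sketch (metadata names are placeholders):

```yaml
# Hypothetical Ingress metadata: proxy to the backend over HTTPS
# instead of plain HTTP.
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```

Note this secures the hop from controller to pod, but it is not mutual TLS: the controller does not present a client certificate to the backend.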

@Jfisher77
Author

Maybe my comments are straying from the title of this thread: where are the certs and keys available?
I can see this function saving the certificates to the NGINX pod for use:

```go
// syncSecret synchronizes the content of a TLS Secret (certificate(s), secret
// key) with the filesystem. The resulting files can be used by NGINX.
func (s *k8sStore) syncSecret(key string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	glog.V(3).Infof("Syncing Secret %q", key)

	// TODO: getPemCertificate should not write to disk to avoid unnecessary overhead
	cert, err := s.getPemCertificate(key)
	if err != nil {
		if !isErrSecretForAuth(err) {
			glog.Warningf("Error obtaining X.509 certificate: %v", err)
		}
		return
	}

	// create certificates and add or update the item in the store
	cur, err := s.GetLocalSSLCert(key)
	if err == nil {
		if cur.Equal(cert) {
			// no need to update
			return
		}
		glog.Infof("Updating Secret %q in the local store", key)
		s.sslStore.Update(key, cert)
		// this update must trigger an update
		// (like an update event from a change in Ingress)
		s.sendDummyEvent()
		return
	}

	glog.Infof("Adding Secret %q to the local store", key)
	s.sslStore.Add(key, cert)
	// this update must trigger an update
	// (like an update event from a change in Ingress)
	s.sendDummyEvent()
}
```

Where are these files saved on the machine?

@aledbf
Member

aledbf commented Mar 1, 2019

Where are these files saved on the machine?

The SSL certificates are located in the directory /etc/ingress-controller/ssl. That said, if you are using the dynamic ssl certificates, there are no files, just in-memory (keep in mind we are switching to this mode in 0.24)

@Jfisher77
Author

Thanks for the reply @aledbf. Can we reference the in-memory certificates through the server-snippet annotation on the ingress?

@aledbf
Member

aledbf commented Mar 1, 2019

@Jfisher77 no, because we just send the certificates to Lua:

```lua
local certificate_data = ngx.shared.certificate_data
```

@stephankfolkes

@aledbf What is Lua, and where do the certificates go?

Is there a way through custom annotations and ingress templates we can reference the certificate in some nginx directives we'd like to add?

We'd like to add directives like `proxy_ssl_certificate` and `proxy_ssl_certificate_key`.

@aledbf
Member

aledbf commented Mar 1, 2019

@stephankfolkes please check this comment and the links #2965 (comment)

@aledbf
Member

aledbf commented Mar 1, 2019

@Jfisher77 @stephankfolkes the only way I see to do this right now is to mount a volume in the ingress controller with the secret you need, and use a custom template (or the custom-configuration annotation) to add the directive you need.

https://kubernetes.io/docs/concepts/configuration/secret/#use-cases
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/
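The volume-mount approach described above could look roughly like the following sketch against the controller's Deployment (names and paths are hypothetical, not a tested configuration):

```yaml
# Sketch: mount a TLS secret into the ingress-nginx controller pod so
# the snippet directives can reference files on disk. Names and the
# mount path are placeholders.
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        volumeMounts:
        - name: backend-client-cert
          mountPath: /etc/nginx/backend-certs
          readOnly: true
      volumes:
      - name: backend-client-cert
        secret:
          secretName: backend-client-cert
```

With such a mount in place, a server-snippet's `proxy_ssl_certificate` could point at `/etc/nginx/backend-certs/tls.crt` and `proxy_ssl_certificate_key` at `/etc/nginx/backend-certs/tls.key`, without baking the certificates into the machine image.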

@stephankfolkes

Thank you for your feedback. We'll look into that solution.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 30, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 29, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
