
🐛 default service monitor configuration to use https #2065

Merged

Conversation

johanneswuerbach
Contributor

By default the ServiceMonitor scrapes port: https (https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v2/config/prometheus/monitor.yaml#L13), which is exposed by the auth proxy service (https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v2/config/rbac/auth_proxy_service.yaml#L10-L12) and requires an HTTPS connection (https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v2/config/default/manager_auth_proxy_patch.yaml#L15).

Change the ServiceMonitor configuration to work with this setup by default; otherwise scraping fails with a 400 Bad Request.

Thanks to #1253 (comment) for the required flags.
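For reference, the resulting endpoint configuration looks roughly like the sketch below (based on the scaffolded config/prometheus/monitor.yaml; the metadata name and selector labels vary per project and are illustrative here):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  # the scaffolded name depends on the project's namePrefix
  name: controller-manager-metrics-monitor
spec:
  endpoints:
    - path: /metrics
      port: https                   # named port exposed by the auth proxy service
      scheme: https                 # the proxy only serves TLS
      # send the service-account token so kube-rbac-proxy can authorize the scrape
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true    # the proxy's certificate is self-generated
  selector:
    matchLabels:
      control-plane: controller-manager
```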

@k8s-ci-robot
Contributor

Welcome @johanneswuerbach!

It looks like this is your first PR to kubernetes-sigs/kubebuilder 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/kubebuilder has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @johanneswuerbach. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Mar 5, 2021
@k8s-ci-robot k8s-ci-robot requested review from estroz and mengqiy March 5, 2021 20:15
@johanneswuerbach johanneswuerbach force-pushed the improve-service-monitor branch from 26d0997 to f81fb0d Compare March 5, 2021 20:22
@johanneswuerbach
Contributor Author

/assign @Adirio

Comment on lines +56 to +59
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
Contributor

At this point, no new features should go into go/v2

Suggested change
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true

Contributor Author

As the current configuration doesn’t work, it could also be seen as a bug fix, but happy to remove this part.

Contributor

If that's true, then it can be left in I suppose.

Contributor

If it is a bugfix (haven't checked that) it should have the 🐛 (:bug:) emoji.

Contributor Author

@johanneswuerbach Mar 6, 2021

I wasn't sure, but as the default configuration just doesn't work I think it makes sense to treat this as a bugfix.

Comment on lines +58 to +59
tlsConfig:
insecureSkipVerify: true
Contributor

The webhook server already sets up TLS. I wonder if that cert/key can be used here.

Contributor Author

This seems out of scope of this PR, but seems like a good idea otherwise.

scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
Member

I do not think that we should move forward here. See the doc: https://book.kubebuilder.io/reference/metrics.html

These metrics are protected by kube-auth-proxy by default if using kubebuilder. Kubebuilder v2.2.0+ scaffold a clusterrole which can be found at config/rbac/auth_proxy_client_clusterrole.yaml.

You will need to grant permissions to your Prometheus server so that it can scrape the protected metrics. To achieve that, you can create a clusterRoleBinding to bind the clusterRole to the service account that your Prometheus server uses.

You can run the following kubectl command to create it. If using kubebuilder, <project-prefix> is the namePrefix field in config/default/kustomization.yaml.

kubectl create clusterrolebinding metrics --clusterrole=<project-prefix>-metrics-reader --s

Did you follow that?

Member

/hold

Contributor Author

Yes, but I'm not sure how your comment is related.

The proxy is by default exposed only via HTTPS, so scraping with scheme: http results in a 400 error. Annoyingly this is implemented directly in Go, so the rbac proxy doesn't log anything and Prometheus doesn't log the response body.

The bearerTokenFile is required for Prometheus to actually include an Authorization header.

This header is required so the proxy can validate whether the request has the necessary permissions (you get a 401 without it, while a missing binding would result in a 403).

The insecureSkipVerify is required as the rbac proxy uses a self-generated certificate in the current configuration https://github.com/brancz/kube-rbac-proxy/blob/e4b31758aedb3d29d8f50d615828401a30d28c1e/main.go#L261

@estroz proposed to use the webhook certificate #2065 (comment), but this seems out of scope of this PR.
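Putting the failure modes above next to the settings that address them (a summary sketch of the scraping config, not part of the diff itself):

```yaml
endpoints:
  - port: https
    # plain "scheme: http" against the TLS-only proxy -> 400 Bad Request
    scheme: https
    # without the token, no Authorization header is sent -> 401;
    # with the token but without the clusterrolebinding -> 403
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      # required because the proxy's certificate is self-generated
      insecureSkipVerify: true
```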

Member

I may be missing something here or misunderstanding. However, see https://github.com/kubernetes-sigs/kubebuilder/pull/1317/files#diff-20a08dcfd7ad4ed3343d00cf32c7844ce41a2d8d5f5df5b6de56852a510c5517R46 from PR #1317, which via the RBAC rules allows accessing metrics behind kube-rbac-proxy.

Then, see what is checked in the e2e tests:

So, I am trying to understand why in the e2e tests we get HTTP 200 from the metrics endpoint while you are facing 400 or 403. What is missing? What has been working in our e2e tests but not for you?

Is it because we are getting the token and doing the handshake, and you are not?

Then, I understand that the default configuration has been allowing Prometheus to get the metrics. Note that the problem you point out was solved in the past with the RBAC roles in #1317.

Anyway, it shows there is also some difference between using the Prometheus Operator instead of kube-prometheus, which has been solved with the change applied here, as you point out: #1253 (comment)

So, I will

/hold cancel

However, it would be very nice to have a new issue with the steps required to reproduce the scenario, so that in the future we are able to check it: describing the cluster and Prometheus used, whether or not the RBAC roles allowing Prometheus to get the metrics were used, and the steps to check the error.

Contributor

Actually, Camila's point is pretty sound. If e2e tests are working and this is being tested, we need an issue describing the steps to get this error so that we can reproduce and check that it is an actual bug and not just an error on your side.

Contributor

Go 1.16? Not supported yet. There are some module-related changes (like some commands not tidying up go.mod anymore) that haven't been taken into account in kubebuilder or any of its dependencies.

Contributor

Ah that is unfortunate. If I have some time I can update e2e code after this is merged.

Member

Just to highlight: we might be unable to reproduce the issue in the e2e tests as they are now, if it is specific to using the Prometheus Operator instead of kube-prometheus.

Contributor

If you create an issue describing the need for an e2e test that covers this case, I'm OK with merging this.

Contributor Author

Opened #2087

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 6, 2021
@johanneswuerbach johanneswuerbach changed the title ✨ improve service monitor configuration 🐛 default service monitor configuration to use https Mar 6, 2021
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 7, 2021
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
Member

Wouldn't it remove the need to get the token in the e2e tests?

If yes, I think we need to remove that from the e2e tests to validate your solution as well.

Contributor Author

No, this only instructs Prometheus how to scrape the service; it doesn't change anything about the service itself. As Prometheus isn't used in the e2e tests, this doesn't really change anything there, since this part is not covered by tests yet.

@Adirio
Contributor

Adirio commented Mar 8, 2021

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 8, 2021
Member

@camilamacedo86 left a comment

It looks OK, since more than one person has reported this problem and solved it the same way. We might be unable to reproduce the issue in the e2e tests as they are now, if it is specific to using the Prometheus Operator instead of kube-prometheus.

So, it is fine for me.

More info: https://github.com/kubernetes-sigs/kubebuilder/pull/2065/files#r589002353

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 9, 2021
@johanneswuerbach
Contributor Author

I created the related e2e test ticket #2087, anything else missing here?

Contributor

@estroz left a comment

Since the metrics endpoint is protected by kube-rbac-proxy, I don't see a reason against merging this.

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 17, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: camilamacedo86, estroz, johanneswuerbach

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [camilamacedo86,estroz]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
