
Applying backend-config annotation on existing ingress service has no effect #1503

Closed · mikouaj opened this issue Jul 2, 2021 · 23 comments

Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mikouaj

mikouaj commented Jul 2, 2021

Issue

Applying the cloud.google.com/backend-config annotation to an existing Service that is associated with an existing Ingress makes no changes to the underlying backend service.

Use cases

  • configuring Cloud Armor on the GKE-provided default backend service, i.e. default-http-backend in the kube-system namespace
  • configuring Cloud Armor, IAP, or other features on any existing service that is part of an Ingress but has no corresponding BackendConfig object

Steps to reproduce

  1. Create a Service that matches an existing Deployment
  2. Create an Ingress associated with the Service created above
  3. Wait until the Ingress is created and in sync
  4. Create a BackendConfig with a Cloud Armor policy configuration (any other configuration applies as well)
  5. Annotate the Service with the cloud.google.com/backend-config annotation pointing to the BackendConfig created in the previous step (see the sketch below)
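
For concreteness, a minimal sketch of the resources involved (all names, ports, and the policy name are illustrative, not taken from the original report):

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
      annotations:
        # step 5: added only after the Ingress is already in sync
        cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: my-backendconfig
    spec:
      securityPolicy:
        name: my-cloud-armor-policy   # a pre-existing Cloud Armor policy
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
    spec:
      defaultBackend:
        service:
          name: my-app
          port:
            number: 80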

Expected Behavior

The Cloud Armor policy is configured on the corresponding backend service.

Actual Behavior

Nothing happens

GKE version

1.19.10-gke.1600

@boredabdel

@bowei any idea about this?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 30, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 30, 2021
@jsravn

jsravn commented Nov 26, 2021

Alternatively, we should be able to disable the default backend. Very few people want this behavior, or are even aware that ingress-gce forwards all unmatched external traffic to a pod in kube-system. This is undesirable from a security standpoint.

@bowei
Member

bowei commented Nov 30, 2021

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Nov 30, 2021
@bowei
Member

bowei commented Nov 30, 2021

The default backend can be removed once all of the products support returning a 404 response directly, instead of requiring a Pod to serve the 404. However, that seems unrelated to the issue title?

@bowei
Member

bowei commented Nov 30, 2021

We will take a look at this bug in triage.

@freehan
Contributor

freehan commented Dec 7, 2021

Does this only impact the Cloud Armor config, or other configs in BackendConfig as well?

We had a problem with Cloud Armor in the version you reported, and it has since been fixed.

@raphaelauv

@mikouaj I have the same problem.

GKE: 1.21.5-gke.1302

@edclement

edclement commented Feb 11, 2022

Count me in on this problem as well. We can't get a BackendConfig with a securityPolicy to take effect on the Ingress (docs). So instead of relying on the BackendConfig, we have to attach the policy manually (see the command sketch below).
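
A minimal sketch of that manual attachment (the backend-service and policy names below are illustrative; the real backend-service name is generated by ingress-gce):

    # Find the backend service that ingress-gce generated for the Service
    gcloud compute backend-services list

    # Attach the Cloud Armor policy by hand
    gcloud compute backend-services update k8s1-example-my-service-80 \
        --security-policy=my-policy --global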

We use GKE Autopilot if that matters.

    {
      "Major": "1",
      "Minor": "20+",
      "GitVersion": "v1.20.10-gke.1600",
      "GitCommit": "ef8e9f64449d73f9824ff5838cea80e21ec6c127",
      "GitTreeState": "clean",
      "BuildDate": "2021-09-06T09:24:20Z",
      "GoVersion": "go1.15.15b5",
      "Compiler": "gc",
      "Platform": "linux/amd64"
    }

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 12, 2022
@bowei
Member

bowei commented May 12, 2022

/assign @spencerhance

@k8s-ci-robot
Contributor

@bowei: GitHub didn't allow me to assign the following users: spencerhance.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @spencerhance

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@spencerhance
Contributor

Ack

@swetharepakula
Member

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label May 20, 2022
@msuterski

We are running into a similar issue. We're required to attach security policies to all exposed backends, including the ones created for the default HTTP backend.

We're currently considering various "creative" workarounds, but it would be a lot easier if this were fixed at the GCE Ingress level.

Thanks for looking into it.

@spencerhance
Contributor

spencerhance commented Jun 14, 2022

Hi folks, I attempted to repro this locally by adding a security policy and BackendConfig to a Service after the LB was provisioned, but I was unable to reproduce it. If you share your redacted YAMLs, or email your cluster info to the address on my profile, I can take another look.

@msuterski

@spencerhance thanks for your comment/verification. It does seem to work for default backends when they are attached to healthy Ingresses!

For future reference, here are the steps to attach a policy to the default backend:

  1. Create a security policy for the default HTTP backend in the GCP project:

    gcloud compute security-policies create default-http-backend

  2. Attach rule(s) to your security policy (in this case, a rule that denies all access by default and returns a 404; priority 2147483647 is the policy's default rule):

    gcloud compute security-policies rules update 2147483647 --security-policy default-http-backend --action "deny-404"

  3. Create a BackendConfig resource in the kube-system namespace:

    # backend.yaml
    ---
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: default-http-backend
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      timeoutSec: 40
      securityPolicy:
        name: default-http-backend

    kubectl apply -f backend.yaml -n kube-system

  4. Patch the default backend service to attach the required annotation:

    # annotation.yaml
    metadata:
      annotations:
        cloud.google.com/backend-config: |
          {"default":"default-http-backend"}

    kubectl patch service default-http-backend --patch-file annotation.yaml -n kube-system

Just FYI, this process does not work for Ingresses whose backends are in an unhealthy state. A possible way to verify the policy took effect is sketched below.
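
One way to check whether the policy was applied (the backend-service name below is illustrative; ingress-gce generates it, so look it up first with gcloud compute backend-services list):

    # Print the security policy attached to the relevant backend service
    gcloud compute backend-services describe k8s1-example-default-http-backend-80 \
        --global --format="value(securityPolicy)"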

@spencerhance
Contributor

Just FYI, this process does not work for Ingresses whose backends are in an unhealthy state.

@msuterski
Can you elaborate? Are the backends unhealthy, or does the Ingress have a configuration issue that prevents it from being fully synced?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@robdatasembly

I'm getting a similar issue, but with a new BackendConfig attached to an existing Ingress.
