
Rearrange deployment files into kustomizations #4055

Merged: 1 commit into kubernetes:master from the kustomize branch, May 25, 2019

Conversation

nicknovitski
Contributor

What this PR does / why we need it:

kustomize is pretty good, and it's been integrated into kubectl. This makes it possible for users to create a test deployment as simply as running:

kubectl create ns ingress-nginx
kubectl apply -k github.com/kubernetes/ingress-nginx/deploy/base

It also makes it possible for people to keep version-controlled files that track upstream updates and combine them with their own additions and changes, for example:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: infrastructure
bases:
- github.com/kubernetes/ingress-nginx/deploy/base?ref=nginx-0.25.0
resources:
- pod-disruption-budget.yaml
patchesStrategicMerge:
- deployment-anti-affinity.yaml
- deployment-increase-replicas.yaml
- deployment-reduce-downtime.yaml
- deployment-resources.yaml
configMapGenerator:
- name: nginx-configuration
  behavior: merge
  literals:
  - use-http2="false"
  - ssl-protocols="TLSv1 TLSv1.1 TLSv1.2"
  - worker-shutdown-timeout="13s"
  # etc

kubectl apply -k also does deployment rollouts on config-map changes.
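For context on that last point: the ConfigMap generator appends a hash of the generated ConfigMap's contents to its name and rewrites references to that name in the fields kustomize knows about (volumes, env, envFrom), so changing a literal yields a new name and the Deployment that references it rolls. A minimal, generic sketch of the mechanism (the resource names and the hash suffix are illustrative, not taken from this repo's manifests):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml            # references the "app-settings" ConfigMap via envFrom
configMapGenerator:
- name: app-settings
  literals:
  - use-http2="false"

# `kubectl apply -k .` emits the generated ConfigMap with a hashed name, e.g.
#
#   kind: ConfigMap
#   metadata:
#     name: app-settings-5t7h8f2dk9   # suffix is a hash of the ConfigMap contents
#
# and rewrites the Deployment's envFrom reference to match. Editing a literal and
# re-applying changes the hash, which changes the pod template and triggers a rollout.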

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #3994

@k8s-ci-robot added the cncf-cla: yes and size/XL labels on Apr 30, 2019
@aledbf self-assigned this on Apr 30, 2019
@aledbf
Member

aledbf commented Apr 30, 2019

@nicknovitski thank you for working on this!

@nicknovitski
Contributor Author

Ah, I didn't see test/e2e-image/manifests; I'll handle that now.

@k8s-ci-robot added the size/XXL label and removed the size/XL label on May 1, 2019
@nicknovitski force-pushed the kustomize branch 2 times, most recently from 7bdd110 to 622caf0 on May 2, 2019 at 21:32
@nicknovitski
Contributor Author

Oh, interesting.

I noticed and figured out a "fix" for a problem in the Makefile:

 e2e-test:
        if  [ "$(KUBECTL_CONTEXT)" != "minikube" ] && \
-               [ "$(KUBECTL_CONTEXT)" =~ .*kind* ] && \
+               ! echo $(KUBECTL_CONTEXT) | grep kind && \

The e2e tests run this target in an sh shell, which has no =~ operator, so the check could never detect that a context name wasn't in the whitelist; it would just complain about the unrecognized conditional and then move on to the remaining commands. As it happens, the e2e tests use a context name that isn't in the whitelist, so I've added that name to it. I hope that was the intended behavior.

@nicknovitski
Contributor Author

nicknovitski commented May 2, 2019

In general, I'm changing the e2e tests to re-use the bases I've created in deploy/. This has the benefit of actually testing the same configurations the documentation tells people to use.

It also prompted me to consider how to install multiple ingress controllers in the same cluster, which I hadn't thought about before. It won't be too crazy: I'll separate the ClusterRole into its own base, much like the rbac.yaml has been separate in the past, and change the instructions to tell people to use it.

For example, the commands to deploy a brand new default ingress would be something like this:

kubectl create namespace ingress-nginx
kubectl apply -k github.com/kubernetes/ingress-nginx/deploy/cluster-wide # contains only the ClusterRole
kubectl apply -k github.com/kubernetes/ingress-nginx/deploy/base

Or with a kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- github.com/kubernetes/ingress-nginx/deploy/cluster-wide
- github.com/kubernetes/ingress-nginx/deploy/base
resources:
- namespace.yaml # a namespace named `ingress-nginx`
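
(The namespace.yaml referenced above would just be a plain Namespace object; a sketch:)

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx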

But these steps could also be done separately: one user might create the cluster role, another a namespace, and a third an ingress controller in a particular namespace. That last part would look something like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging # or whatever
bases:
- github.com/kubernetes/ingress-nginx/deploy/base
patchesStrategicMerge:
- deployment-namespace-patch.yaml # as in test/e2e-image/overlay, adds "--watch-namespace=$(POD_NAMESPACE)" to container args
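
For illustration, a sketch of what such a namespace patch could contain; the Deployment and container names here are assumed to match the base manifest, and because a strategic merge replaces a container's whole args list, a real patch would restate the base's full argument list before adding the flag:

# deployment-namespace-patch.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        # ...the remaining args from the base deployment, restated verbatim...
        - --watch-namespace=$(POD_NAMESPACE)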

@nicknovitski force-pushed the kustomize branch 7 times, most recently from d0d5145 to 6b3e8f1 on May 6, 2019 at 23:13
@nicknovitski
Contributor Author

The test of the --default-backend-service argument continues to fail, and I don't know why.

This is the complete list of changes that the branch makes to the configuration applied for a given e2e test, including diff output.

A different cluster role is created for each test, with a unique name:
<   name: nginx-ingress-clusterrole
>   name: nginx-ingress-clusterrole-${NAMESPACE}
Each test's role binding has the same name (as it's being created in a namespace anyway), taken from the deployment manifest:
<   name: nginx-ingress-role-${NAMESPACE}
>   name: nginx-ingress-role-nisa-binding
Each test's cluster role binding takes a name from the deployment manifest:
<   name: nginx-ingress-clusterrole-${NAMESPACE}
>   name: nginx-ingress-clusterrole-nisa-binding-${NAMESPACE}
The cluster role binding's roleRef correctly points to the new unique cluster role name:
<   name: nginx-ingress-clusterrole
>   name: nginx-ingress-clusterrole-${NAMESPACE}
The ingress-nginx service gets the same shared labels as everything else:
>   labels:
>     app.kubernetes.io/name: ingress-nginx
>     app.kubernetes.io/part-of: ingress-nginx
The deployment uses the general release api group and version:
< apiVersion: extensions/v1beta1
> apiVersion: apps/v1
The deployment also uses the POD_NAMESPACE environment variable:
<         - --watch-namespace=${NAMESPACE}
>         - --watch-namespace=$(POD_NAMESPACE)

I can't think of any way these changes would have broken only that particular test. I've run out of things to try, and I don't have much more time to work on it.

I'd very much appreciate help or suggestions from anyone who can make them. ☹️

@aledbf
Member

aledbf commented May 7, 2019

@nicknovitski this is more than I expected.
Please

If you rebase after applying these two suggestions, this should pass.

Also, apologies for the delay in providing feedback.

@nicknovitski
Contributor Author

nicknovitski commented May 7, 2019

Sure. To be honest, I only made the apps/v1 changes across the e2e tests to see if they fixed the default-backend-service error, and since they did not, I don't mind removing them.

Edit: sadly that failed, and it brought back the failure of the from-to-www-redirect annotation test.

@nicknovitski
Contributor Author

I have fixed those two tests by patching the deployment object to have apiVersion: extensions/v1beta1 again. I'm disturbed that worked.

Currently, on master, the deployment object manifests in the deploy folder use apps/v1, and the one in the test/e2e-image/manifests folder uses extensions/v1beta1. I think that's a problem, but it's not one I'm introducing (or fixing) with this PR. At the very least, the explicit patch will make the difference more visible. I could also add a TODO comment.

@aledbf
Member

aledbf commented May 7, 2019

@nicknovitski please address my comment and squash the commits once this passes the e2e tests

@nicknovitski
Contributor Author

nicknovitski commented May 7, 2019 via email

@aledbf
Member

aledbf commented May 7, 2019

Is the isolation still necessary when each test gets its own cluster role?

Yes, because the PSP test requires a different ClusterRole that only makes sense in that scenario, which is why it requires a serial run. I suggest rolling back all the changes in pod_security_policy.go.

@aledbf
Member

aledbf commented May 7, 2019

/approve

@k8s-ci-robot added the approved label on May 7, 2019
@aledbf
Member

aledbf commented May 7, 2019

@ElvinEfendi this is ready for review

@nicknovitski
Contributor Author

Rebased to resolve a conflict I introduced.

@k8s-ci-robot added the needs-rebase label on May 17, 2019
@aledbf
Member

aledbf commented May 18, 2019

@nicknovitski please rebase

@k8s-ci-robot removed the needs-rebase label on May 19, 2019
@nicknovitski
Contributor Author

Rebased and added worker-processes: "1" to the e2e-test configuration.
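
(With the generator approach from the PR description, that is just one more merged literal; a sketch, though not necessarily how the branch wires it into the e2e configuration:)

configMapGenerator:
- name: nginx-configuration
  behavior: merge
  literals:
  - worker-processes="1"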

@aledbf
Member

aledbf commented May 20, 2019

/ok-to-test

@k8s-ci-robot added the ok-to-test label on May 20, 2019
@aledbf
Member

aledbf commented May 20, 2019

/retest

@codecov-io

codecov-io commented May 20, 2019

Codecov Report

Merging #4055 into master will increase coverage by 0.1%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff            @@
##           master    #4055     +/-   ##
=========================================
+ Coverage   57.65%   57.75%   +0.1%     
=========================================
  Files          87       87             
  Lines        6452     6432     -20     
=========================================
- Hits         3720     3715      -5     
+ Misses       2300     2286     -14     
+ Partials      432      431      -1
Impacted Files Coverage Δ
internal/ingress/annotations/parser/main.go 81.63% <0%> (-2.01%) ⬇️
internal/ingress/controller/config/config.go 98.54% <0%> (-0.03%) ⬇️
internal/ingress/zz_generated.deepcopy.go 0% <0%> (ø) ⬆️
internal/ingress/status/status.go 73.18% <0%> (ø) ⬆️
internal/ingress/controller/nginx.go 29% <0%> (ø) ⬆️
internal/ingress/controller/controller.go 46.82% <0%> (+0.26%) ⬆️
internal/ingress/controller/store/store.go 62.05% <0%> (+0.29%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ff80dca...51ad0bc. Read the comment docs.

@@ -40,7 +39,7 @@ var _ = framework.IngressNginxDescribe("Custom Default Backend", func() {
 	framework.UpdateDeployment(f.KubeClientSet, f.Namespace, "nginx-ingress-controller", 1,
 		func(deployment *appsv1beta1.Deployment) error {
 			args := deployment.Spec.Template.Spec.Containers[0].Args
-			args = append(args, fmt.Sprintf("--default-backend-service=%s/%s", f.Namespace, "http-svc"))
+			args = append(args, "--default-backend-service=$(POD_NAMESPACE)/http-svc")
Member

Was this a necessary change? Why?

@ElvinEfendi
Member

/lgtm

@k8s-ci-robot added the lgtm label on May 25, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aledbf, ElvinEfendi, nicknovitski

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot merged commit dfa7f10 into kubernetes:master on May 25, 2019
@AndrewX192

FYI, the change that just landed breaks the installation guide, which calls for:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

This file no longer exists.

@aledbf
Member

aledbf commented May 26, 2019

@AndrewX192 please check the docs again. (we need to run a manual task to update the docs)

Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
lgtm - "Looks good to me", indicates that a PR is ready to be merged.
ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
size/XXL - Denotes a PR that changes 1000+ lines, ignoring generated files.
Successfully merging this pull request may close: Remove namespace from ingress-nginx, let user set it (#3994).