Rearrange deployment files into kustomizations #4055
Conversation
@nicknovitski thank you for working on this!
Ah, I didn't see
Force-pushed from 7bdd110 to 622caf0.
Oh, interesting. I noticed and figured out a "fix" for a problem in the Makefile:

```diff
 e2e-test:
 	if [ "$(KUBECTL_CONTEXT)" != "minikube" ] && \
-	   [ "$(KUBECTL_CONTEXT)" =~ .*kind* ] && \
+	   ! echo $(KUBECTL_CONTEXT) | grep kind && \
```

The e2e tests run this target in a …
In general, I'm changing the e2e tests to re-use the bases I've created in …

It also prompted me to consider how to install multiple ingress controllers in the same cluster, which I hadn't thought about before. It won't be too crazy: I'll separate the ClusterRole into its own base, much like the …

For example, commands to deploy a brand new default ingress would be something like this:

```sh
kubectl create namespace ingress-nginx
kubectl apply -k github.com/kubernetes/ingress-nginx/deploy/cluster-wide # contains only the ClusterRole
kubectl apply -k github.com/kubernetes/ingress-nginx/deploy/base
```

Or with a single kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- github.com/kubernetes/ingress-nginx/deploy/cluster-wide
- github.com/kubernetes/ingress-nginx/deploy/base
resources:
- namespace.yaml # a namespace named `ingress-nginx`
```

But these could be separable. One user might create the cluster role, another a namespace, and a third an ingress controller in a particular namespace. That last part would look something like this:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging # or whatever
bases:
- github.com/kubernetes/ingress-nginx/deploy/base
patchesStrategicMerge:
- deployment-namespace-patch.yaml # as in test/e2e-image/overlay, adds "--watch-namespace=$(POD_NAMESPACE)" to container args
```
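For reference, here is a minimal sketch of what such a `deployment-namespace-patch.yaml` could look like. This is an illustration only, not the file from `test/e2e-image/overlay`: the container name and the other flags are assumed from the stock deployment manifest, and a strategic merge replaces the matched container's whole `args` list, so the existing flags have to be restated alongside the new one.

```yaml
# deployment-namespace-patch.yaml -- hypothetical sketch, not the actual overlay file.
# A strategic merge patch replaces the whole args list of the matched container,
# so the default flags are restated here together with --watch-namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --watch-namespace=$(POD_NAMESPACE)
```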
Force-pushed from d0d5145 to 6b3e8f1.
The test of the …

This is the complete list of changes that the branch makes to the configuration applied for a given e2e test, including …:

A different cluster role is created for each test, with a unique name:

```diff
< name: nginx-ingress-clusterrole
> name: nginx-ingress-clusterrole-${NAMESPACE}
```

Each test's role binding has the same name (as it's being created in a namespace anyway), taken from the deployment manifest:

```diff
< name: nginx-ingress-role-${NAMESPACE}
> name: nginx-ingress-role-nisa-binding
```

Each test's cluster role binding takes a name from the deployment manifest:

```diff
< name: nginx-ingress-clusterrole-${NAMESPACE}
> name: nginx-ingress-clusterrole-nisa-binding-${NAMESPACE}
```

The cluster role binding's roleRef correctly points to the new unique cluster role name:

```diff
< name: nginx-ingress-clusterrole
> name: nginx-ingress-clusterrole-${NAMESPACE}
```

The ingress-nginx service gets the same shared labels as everything else:

```diff
> labels:
>   app.kubernetes.io/name: ingress-nginx
>   app.kubernetes.io/part-of: ingress-nginx
```

The deployment uses the general release API group and version:

```diff
< apiVersion: extensions/v1beta1
> apiVersion: apps/v1
```

The deployment also uses the POD_NAMESPACE environment variable:

```diff
< - --watch-namespace=${NAMESPACE}
> - --watch-namespace=$(POD_NAMESPACE)
```

I can't think of any way these changes would have broken only that particular test. I've run out of things to try, and I don't have much more time to work on it. I'd very much appreciate help or suggestions from anyone who can make them.
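As an aside on that last diff: `$(POD_NAMESPACE)` is no longer substituted by the test harness but by the kubelet, from an environment variable that the deployment manifest defines through the downward API. A minimal sketch of the relevant container fields follows (the container name is assumed from the stock manifest):

```yaml
# Sketch of how --watch-namespace=$(POD_NAMESPACE) resolves at runtime.
# POD_NAMESPACE is filled in from the pod's own namespace via the downward API,
# and the kubelet expands $(POD_NAMESPACE) in args before starting the container.
containers:
- name: nginx-ingress-controller
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  args:
  - /nginx-ingress-controller
  - --watch-namespace=$(POD_NAMESPACE)
```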
@nicknovitski this is more than I expected.
If you rebase after these two suggestions, this should pass. Also, apologies for the delay in providing feedback.
Sure. To be honest, I only made the …

e: sadly that failed, and brought back the failure of the test of the …
I have fixed those two tests by patching the deployment object to have … Currently, on master, the deployment object manifests in the …
@nicknovitski please address my comment and squash the commits once this passes the e2e test.
Is the isolation still necessary when each test gets its own cluster role? I thought the test was marked serial because it was modifying a shared resource. If the resource is no longer shared, then it's safe to run in parallel.
But even if so, I can leave that change for another pull request.
--
Nick
On Tue, May 7, 2019, at 11:34 AM, Manuel Alejandro de Brito Fontes wrote:
In test/e2e/settings/pod_security_policy.go <#4055 (comment)>:

```diff
@@ -38,7 +39,7 @@ const (
 	ingressControllerPSP = "ingress-controller-psp"
 )
-var _ = framework.IngressNginxDescribe("[Serial] Pod Security Policies", func() {
```
Please don't remove [Serial]
This is used to run this particular test in isolation to avoid issues with PSP
Yes, because the PSP test requires a different cluster role that only makes sense in that scenario, and that's why it requires a serial run. I suggest rolling back all the changes in pod_security_policy.go.
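For context on why the PSP scenario needs its own cluster role: the controller's service account additionally has to be allowed to `use` the PodSecurityPolicy, which is a rule the standard cluster role doesn't carry. A rough sketch of what such a grant looks like in general (the role name here is made up; only the PSP name matches the `ingress-controller-psp` constant quoted above):

```yaml
# Illustrative only: an extra ClusterRole allowing pods to use a specific PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller-psp-user   # hypothetical name
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - ingress-controller-psp            # the PSP named in the test constants
  verbs:
  - use
```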
/approve
@ElvinEfendi this is ready for review
Rebased to resolve a conflict I introduced.
@nicknovitski please rebase
Rebased and added
/ok-to-test
/retest
Codecov Report

```
@@            Coverage Diff            @@
##           master    #4055     +/-  ##
=========================================
+ Coverage   57.65%   57.75%    +0.1%
=========================================
  Files          87       87
  Lines        6452     6432      -20
=========================================
- Hits         3720     3715       -5
+ Misses       2300     2286      -14
+ Partials      432      431       -1
```

Continue to review the full report at Codecov.
```diff
@@ -40,7 +39,7 @@ var _ = framework.IngressNginxDescribe("Custom Default Backend", func() {
 	framework.UpdateDeployment(f.KubeClientSet, f.Namespace, "nginx-ingress-controller", 1,
 		func(deployment *appsv1beta1.Deployment) error {
 			args := deployment.Spec.Template.Spec.Containers[0].Args
-			args = append(args, fmt.Sprintf("--default-backend-service=%s/%s", f.Namespace, "http-svc"))
+			args = append(args, "--default-backend-service=$(POD_NAMESPACE)/http-svc")
```
Was this a necessary change? Why?
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aledbf, ElvinEfendi, nicknovitski

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
FYI, the change that just landed breaks the installation guide, which calls for …
This file no longer exists.
@AndrewX192 please check the docs again. (We need to run a manual task to update the docs.)
What this PR does / why we need it:

`kustomize` is pretty good, and it's been integrated into `kubectl`. This makes it possible for users to create a test deployment as simply as running:

…

It also makes it possible for people to have version-controlled files tracking and combining upstream updates and their own additions and changes, for example:

…

`kubectl apply -k` also does deployment rollouts on config-map changes.
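To make that second point concrete, a hypothetical user-maintained overlay (the paths and file names here are assumptions for illustration, not files shipped by this PR) might look like:

```yaml
# kustomization.yaml -- hypothetical sketch of a user-maintained overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
bases:
# upstream manifests; pin with ?ref=<tag> so upgrades are an explicit, reviewable change
- github.com/kubernetes/ingress-nginx/deploy/base
patchesStrategicMerge:
# local additions and changes layered on top of upstream
- deployment-resources-patch.yaml   # e.g. CPU/memory requests for the controller
configMapGenerator:
# generated ConfigMaps get a content-hash suffix and references are rewritten,
# which is what makes `kubectl apply -k` roll deployments when config changes
- name: custom-headers
  files:
  - custom-headers.yaml
```

Applying the overlay is then just `kubectl apply -k .` from that directory.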
fixes #<issue number>(, fixes #<issue_number>, ...)
format, will close that issue when PR gets merged): fixes #3994