adding multi tenancy for controller using aad-pod-identity #977
Conversation
Skipping CI for Draft Pull Request.
@devigned I thought I would push this while I'm still working on it so you can take a look and let me know what you think of the direction.
Per our conversation in Slack: https://kubernetes.slack.com/archives/CEX9HENG7/p1602089427042600. I believe we are able to greatly simplify the design after talking with the AAD pod identity team. This is a great start down the multi-tenancy path.
The gist of the linked conversation:
We should run aad-pod-identity within the capz namespace in namespace-constrained mode. We should allow users to create AzureIdentities within the workload cluster namespace and have the identity owned by the AzureCluster. The capz controller would be responsible for copying the cluster-namespace identity into the controller namespace and creating an AzureIdentityBinding in the controller namespace to link the label of the capz controller and the capz-namespaced copy of the cluster AzureIdentity. The capz controller would then be responsible for deleting the identities within the capz namespace when they are no longer needed.
Probably goes without saying, but we will also need to deploy the aad-pod-identity components with the capz components.
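To make that flow concrete, here is a minimal sketch of the copy-and-bind step, assuming the aadpodv1 types from Azure/aad-pod-identity and a controller-runtime client. The capzNamespace constant, the copy naming scheme, the selector value, and the copyIdentityAndBind helper are all hypothetical, not the PR's actual code:

```go
package identity

import (
	"context"

	aadpodv1 "github.com/Azure/aad-pod-identity/pkg/apis/aadpodidentity/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// capzNamespace is where the capz controller and the aad-pod-identity NMI
// pods run (assumed name).
const capzNamespace = "capz-system"

// copyIdentityAndBind copies a workload-cluster-namespace AzureIdentity into
// the capz namespace and creates an AzureIdentityBinding there, linking the
// copy to the capz controller pods through their aadpodidbinding label.
func copyIdentityAndBind(ctx context.Context, c client.Client, src *aadpodv1.AzureIdentity) error {
	// Rename the copy so identities from different cluster namespaces cannot collide.
	copied := &aadpodv1.AzureIdentity{
		ObjectMeta: metav1.ObjectMeta{
			Name:      src.Namespace + "-" + src.Name,
			Namespace: capzNamespace,
		},
		Spec: src.Spec,
	}
	if err := c.Create(ctx, copied); err != nil {
		return err
	}

	// Bind the copied identity to the capz controller via its label selector.
	binding := &aadpodv1.AzureIdentityBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      copied.Name + "-binding",
			Namespace: capzNamespace,
		},
		Spec: aadpodv1.AzureIdentityBindingSpec{
			AzureIdentity: copied.Name,
			Selector:      "capz-controller", // assumed value of the controller pods' aadpodidbinding label
		},
	}
	return c.Create(ctx, binding)
}
```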
Yes, will add the deployment for it; I was still trying to get it to work.
/test pull-cluster-api-provider-azure-e2e-full
}

// ClusterNamespaceAllowed indicates if the cluster namespace is allowed
func (c *AzureClusterIdentity) ClusterNamespaceAllowed(namespace string) bool {
let's add a unit test for this
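A table-driven test along these lines would cover it. This sketch assumes the spec exposes an AllowedNamespaces []string field, that an empty list allows every namespace, and a v1alpha3 package name; adjust to the final API shape:

```go
package v1alpha3

import "testing"

func TestClusterNamespaceAllowed(t *testing.T) {
	tests := []struct {
		name              string
		allowedNamespaces []string // assumed field on AzureClusterIdentitySpec
		namespace         string
		want              bool
	}{
		{"empty list allows any namespace", nil, "default", true},
		{"namespace in the list is allowed", []string{"foo", "bar"}, "bar", true},
		{"namespace not in the list is denied", []string{"foo"}, "baz", false},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			c := &AzureClusterIdentity{
				Spec: AzureClusterIdentitySpec{AllowedNamespaces: tt.allowedNamespaces},
			}
			if got := c.ClusterNamespaceAllowed(tt.namespace); got != tt.want {
				t.Errorf("ClusterNamespaceAllowed(%q) = %v, want %v", tt.namespace, got, tt.want)
			}
		})
	}
}
```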
@@ -350,4 +351,60 @@ var _ = Describe("Workload cluster creation", func() {
		})
	})
})

Context("Creating a cluster using a different SP identity", func() {
	BeforeEach(func() {
Same comment I wrote in the Windows PR: we need to audit all the clusters that get created in PR e2e tests and see if any of them can be safely combined, to avoid growing the cost and duration of each PR run too much.
I think it makes sense for this test to have its own cluster since it's using a different identity, which we would want to test separately. wdyt?
type: Opaque
data:
  certificate: CERTIFICATE
  password: PASSWORD
This is unclear to me: is it one or the other?
Both kinds are supported by aad-pod-identity: https://github.com/Azure/aad-pod-identity/tree/master/deploy/demo
	return ctrl.Result{}, err
}

bindingsToDelete := []aadpodv1.AzureIdentityBinding{}
where are the bindings created?
Never mind, found it in scope. It seems a bit weird to me not to create the bindings/AzureIdentity in here too. Also, why are we expecting unused bindings to be left around needing cleanup? Why not delete them as part of AzureCluster deletion?
The thinking behind this is that this object lives in a different namespace and is only for the purpose of the controller with the NMI pod; technically the cluster/user should not care about it. Also, the reason we needed the cleanup controller is that if the user changes the reference to a different identity, the controller will generate a new AzureIdentity/Binding and the old one will be left behind.
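In other words, the cleanup controller does a periodic sweep rather than relying on AzureCluster deletion alone. A minimal sketch of that sweep, reusing the assumed capzNamespace from the earlier sketch; the liveIdentities map (names of identities still referenced by live AzureClusters) and the deleteStaleBindings helper are hypothetical:

```go
// deleteStaleBindings lists the AzureIdentityBindings in the capz namespace and
// removes any whose referenced AzureIdentity is no longer used by a live
// cluster, e.g. after a user switches an AzureCluster to a different identity.
func deleteStaleBindings(ctx context.Context, c client.Client, liveIdentities map[string]bool) error {
	var bindings aadpodv1.AzureIdentityBindingList
	if err := c.List(ctx, &bindings, client.InNamespace(capzNamespace)); err != nil {
		return err
	}

	// Collect bindings whose identity no longer backs any live cluster.
	bindingsToDelete := []aadpodv1.AzureIdentityBinding{}
	for _, b := range bindings.Items {
		if !liveIdentities[b.Spec.AzureIdentity] {
			bindingsToDelete = append(bindingsToDelete, b)
		}
	}

	for i := range bindingsToDelete {
		if err := c.Delete(ctx, &bindingsToDelete[i]); err != nil {
			return err
		}
	}
	return nil
}
```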
	APIVersion: "aadpodidentity.k8s.io/v1",
},
ObjectMeta: metav1.ObjectMeta{
	Name: identity.GetAzureIdentityName(p.AzureCluster.Name, p.AzureCluster.Namespace, p.Identity.Name),
If this has the cluster name in it, its lifecycle is owned by the life of the cluster, no? Why not create/delete it when an AzureCluster is created/deleted, similarly to how we create/delete the azure.json secret?
I just added the cluster name to make it easier to find; it wasn't there before. It's not part of the cluster lifecycle: it lives in a different namespace and is for internal use by the controller.
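For illustration, one plausible shape for that helper; the real GetAzureIdentityName may also hash or truncate to respect Kubernetes name-length limits, so treat this as a sketch only:

```go
package identity

import "fmt"

// GetAzureIdentityName builds the name of the controller-namespace copy from
// the cluster name, cluster namespace, and identity name, so copies made for
// different clusters never collide. Illustrative only.
func GetAzureIdentityName(clusterName, namespace, identityName string) string {
	return fmt.Sprintf("%s-%s-%s", clusterName, namespace, identityName)
}
```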
/test pull-cluster-api-provider-azure-e2e-full
- support SP identity only
- add flavor + topic doc
- add identity reconciler
add values to azurecluster instead of expecting an azureidentity
@nader-ziada: The following tests failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
lgtm
Are you tracking all follow-ups (clusterctl move, pre-existing aad-pod-identity deploy, etc.) in issues in their respective repos?
Solid work, @nader-ziada!
lgtm
🚀
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: devigned. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Just remembered: still waiting on the proposal to match the implementation. That's why there's a /hold.
@CecileRobertMichon I promise, promise, promise to update the proposal by eod tomorrow.
/hold cancel
@devigned thanks, GitHub is witness 😉
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #586, #778
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
TODOs:
Release note: