Revert "meta: Treat internal k8s annotations as invalid" #199
Conversation
This reverts commit ba1441b.
👏 👏 Thank you @mbarrien. This PR unblocks our transition to Terraform-defined Kubernetes load balancers in EKS. Is the gist of #50 this: Terraform is not the only actor annotating Kubernetes resources, so reality drifts away from Terraform state in an acceptable way, but Terraform sees the diffs and tries to "correct" them by resetting the annotations set by other actors? To avoid overwriting that legitimately drifted state, #50 flat-out prohibits the use of any "risky" annotations. Did I get that right @mbarrien / @radeksimko?
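A minimal sketch of the kind of configuration under discussion, assuming an internal AWS/EKS load balancer Service; the resource names and annotation value are illustrative, not taken from this PR:

```hcl
# Illustrative only: a Service annotated to request an internal AWS load
# balancer. Under #50 the provider rejects the "service.beta.kubernetes.io/..."
# annotation as an internal k8s annotation, even though cloud controllers
# require it.
resource "kubernetes_service" "internal_lb" {
  metadata {
    name = "internal-lb-example"

    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-internal" = "true"
    }
  }

  spec {
    selector = {
      app = "example"
    }

    port {
      port        = 80
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}
```
|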
Is this going to happen? |
Yes, but not on its own. |
How would you propose the user defines/manages this white/blacklisting feature? I am happy to take a shot at implementing it but it would be good to make sure that we agree on how/where the list(s) are defined. |
FYI, @dh-harald has made a POC for the whitelisting feature: #60 (comment) |
FWIW - not rolling back this restriction makes the resource unusable for non-trivial functionality in the Azure AKS product: |
@nmartin413, with PR #244, you can simply add the list of annotations you want to allow to the provider's whitelist. I believe that this allows us to move forward and use the provider without having to fight the battle of reverting the change. Both AWS EKS and GCP GKE have the same issues with blocking the kubernetes.io annotations. |
@alexsomesan, can someone please look at PR #244 and either merge it or provide feedback? |
Is there a timeline for getting this PR accepted and merged? |
I don't normally do this, but since this provider is almost unusable.... is there a timeline for getting this PR accepted and merged? |
Yes, this is the biggest pain point right now. However, it will most likely happen once the support for Terraform 0.12 is fully tested and merged, as that work has the top priority in order to deliver a functional provider with the release of 0.12. Also, merging additional functionality is mostly on hold right now, as we don't want to introduce additional uncertainty while we test. Once we merge support for 0.12, development will pick back up, as well as merging community work.
|
While I appreciate these priorities...
...deliver a functional provider with the release of 0.12
...merging additional functionality is mostly on hold right now
I beg to differ, as:
- this provider is almost not functional without this being merged
- this isn't additional functionality; it's a fix for a bug introduced by the merge of #50 in August 2017
|
The only way to use this provider is to take one of the branches that is functional, build it, and install it as a custom provider. Clearly, no one at Hashicorp cares about the impact of their lack of action on Terraform users. Maybe it's time to move on to Ansible...
|
I have created some additional noise at the main Terraform repo: hashicorp/terraform#20788 |
For anyone else who is currently blocked until this is fixed - my temporary workaround has been to write the service definition to a local file and invoke kubectl on it from a local-exec provisioner (a rough sketch of that approach follows).
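A rough sketch of that workaround, under stated assumptions: the manifest path, resource name, and trigger are illustrative, and this is not the commenter's exact code:

```hcl
# Illustrative workaround: keep the Service definition in a plain YAML file
# and apply it with kubectl from a local-exec provisioner, bypassing the
# provider's annotation validation entirely.
resource "null_resource" "service_via_kubectl" {
  # Re-apply whenever the manifest file changes.
  triggers = {
    manifest_sha = sha256(file("${path.module}/service.yaml"))
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/service.yaml"
  }
}
```
|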
This reverts #50 and closes #60 by allowing "internal" annotations that have "kubernetes.io" in the domain name.
Examples abound of how this "feature" breaks basic usage, such as with the following (a minimal sketch of one such case follows the list):
https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
https://docs.microsoft.com/en-us/azure/aks/internal-lb
https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/ingress-resources.md#annotations
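For instance, the SSL-on-AWS case from the first link relies on annotations like the ones below, all of which the current restriction rejects. This is an illustrative sketch only; the certificate ARN, names, and ports are placeholders:

```hcl
# Illustrative only: terminating TLS at an AWS ELB requires these
# "service.beta.kubernetes.io/..." annotations, which the provider currently
# treats as invalid.
resource "kubernetes_service" "https_example" {
  metadata {
    name = "https-example"

    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert"         = "arn:aws:acm:us-east-1:123456789012:certificate/example"
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol" = "http"
      "service.beta.kubernetes.io/aws-load-balancer-ssl-ports"        = "https"
    }
  }

  spec {
    selector = {
      app = "https-example"
    }

    port {
      name        = "https"
      port        = 443
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}
```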
Kubernetes has no control over the annotation names that third-party plugins use, and many tool authors have created custom annotations whose prefix (the part before the slash) ends with kubernetes.io; those annotations are absolutely essential to getting the tools working.
There was an argument made that this plugin is not meant to support alpha or beta features. The problem is that these "alpha" or "beta" features are heavily relied on in production by many users, because of the long beta cycles and the usefulness of those nominally "beta" but in practice stable features. This provider should not be in the business of saying "this annotation should not be supported because it's beta"; let the developer/ops person decide.
With this over-protectiveness, people will start making direct calls to kubectl via null_resource local-execs; that is even more of a nightmare, and one that lets people shoot themselves in the foot far more than a few broken annotations here and there would (a good developer will notice the false changes in the plan and eventually work out how to fix them).
Let the engineers make their mistakes and trust them to figure it out; don't be so over-protective as to cripple core functionality. If necessary, put a big warning up saying "you better know what you're doing", but don't stop the innovation.