provider/kubernetes: Pods resource spec is finished #3471
Conversation
~1k LOC makes me a little worried; what do you think about separating this somehow? I see a lot of schemas related to volumes, for example, which I think could be separated. Also, can you explain the thinking behind the
I don't quite understand how this would work; can you explain it and/or maybe attach an example? While changing the schema/DSL, can you also update the relevant acceptance test for this, please? It should be fairly quick to run. We can add more tests later, but at least if you can make the existing one work with the new DSL, that would be great. Otherwise I feel quite positive about the progress you've made! 😃
A large volume of code is inevitable if you trace from the Pod type down through all of its dependencies - it's pretty huge. However, a lot of the individual components I separated out are reused later when we implement the other Kubernetes resources.
The volumes are separated; search for
Yup, it means "generated". The suffix corresponds to the type name in the k8s API.
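To illustrate the kind of reuse being described, here is a minimal sketch assuming Terraform's `helper/schema` package; the `volumeSchema` helper and its fields are hypothetical, not the actual generated code:

```go
package kubernetes

import "github.com/hashicorp/terraform/helper/schema"

// volumeSchema is a hypothetical hoisted sub-resource schema that every
// resource embedding a volume spec can share instead of duplicating.
func volumeSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"name": &schema.Schema{
					Type:     schema.TypeString,
					Optional: true,
				},
			},
		},
	}
}

// The pod resource, and later any other resource that embeds volumes,
// references the same schema function rather than re-declaring the block.
func resourceKubernetesPod() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"volume": volumeSchema(),
		},
	}
}
```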
I remember during HashiConf you mentioned you wanted to keep the
Could you point me towards the relevant tests? Thanks for looking over this!
Understood, I feel that supporting both options is making it more complicated, so I'd only support the DSL, i.e. forget about
Oh, it looks like I had accidentally removed those tests when resubmitting the PR. There are no tests to update then, sorry. 😞
Sounds good to me. Then sub-resources shared between resources will go in something like
I agree.
No worries. Where do they normally go?
Yeah, we can always write a tool to convert the YAML to HCL. In fact, YAML to JSON is trivial, and that is probably already good enough.
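As a rough illustration of how trivial that first step is, here is a minimal sketch using the `github.com/ghodss/yaml` package (one common choice; any YAML library with a JSON bridge would work):

```go
package main

import (
	"fmt"
	"log"

	"github.com/ghodss/yaml"
)

func main() {
	manifest := []byte(`
apiVersion: v1
kind: Pod
metadata:
  name: example
`)
	// YAMLToJSON re-encodes the YAML document as JSON, which is already
	// close to what an HCL conversion tool would need as input.
	j, err := yaml.YAMLToJSON(manifest)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(j))
}
```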
👍 If something like volumes appears (way too many related schemas), we can separate it from
In this case
Hey @radeksimko, have you had trouble with the following error?
It only seems to happen the first time I try to create a pod in a GKE cluster; subsequent attempts always succeed.
@lwander I do remember having it, but I think it's a problem to be solved at the GKE level, i.e. perhaps we should add another wait block, similar to this one -> wait a few more minutes (?) until we get HTTP 200 from the API, so that the resource is available only when it's actually ready for use. I do remember having another similar issue with GKE & disks; I just haven't had time to fix/report it yet. I will have a look at my todo list of google-provider-related things after I get back from holiday (end of next week).
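A rough sketch of the kind of extra wait being suggested, using only the standard library; `waitForAPIServer`, the endpoint, and the polling interval are all placeholders, not anything the provider actually implements:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer is a hypothetical helper: poll the cluster API endpoint
// until it returns HTTP 200 or the deadline expires, so the resource only
// reports ready once the API is actually usable.
func waitForAPIServer(endpoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(endpoint)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("API at %s not ready after %s", endpoint, timeout)
}
```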
Closing to clean up messy history.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
@radeksimko No need to merge this yet, I just wanted to check with you about my approach to separating out the resource specs. Since you mentioned that there will be a lot of sub-resource duplication, I hoisted the schemas into `gen_kubernetes_schemas.go`. Already it's easier to reuse shared sub-resources like this. Another thing: every attribute is marked as `optional`, as a way to keep the `spec` attribute for users who want it.
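To make that convention concrete, here is a minimal sketch assuming Terraform's `helper/schema` package; `podSpecFields` and the field names are illustrative, not the actual generated schema:

```go
package kubernetes

import "github.com/hashicorp/terraform/helper/schema"

// podSpecFields sketches the convention: every generated attribute is
// Optional (never Required), so users who prefer to supply a raw spec can
// skip the fine-grained fields entirely.
func podSpecFields() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"restart_policy": &schema.Schema{
			Type:     schema.TypeString,
			Optional: true,
		},
		"node_name": &schema.Schema{
			Type:     schema.TypeString,
			Optional: true,
		},
	}
}
```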