
provider/kubernetes: Pods resource spec is finished #3471

Closed
wants to merge 10 commits into from

Conversation

@lwander (Contributor) commented Oct 9, 2015

@radeksimko

No need to merge this yet, I just wanted to check with you about my approach to separating out the resource specs. Since you mentioned that there will be a lot of sub-resource duplication, I hoisted the schemas into gen_kubernetes_schemas.go. It's already easier to reuse shared sub-resources this way.
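Roughly, the hoisting pattern looks like the sketch below. This is a hypothetical example, not code from the PR: the function and field names are my own, though the PR's *VolumeSource() helpers follow the same shape of one schema helper reused by every resource that needs it.

```go
// Hypothetical sketch of the hoisting pattern: a shared sub-resource schema
// lives in one helper in gen_kubernetes_schemas.go, and every resource that
// needs it calls the helper instead of repeating the block inline.
package kubernetes

import "github.com/hashicorp/terraform/helper/schema"

// genObjectMetaSchema returns the schema for a shared "metadata" sub-resource,
// defined once and reused by pods, replication controllers, services, etc.
// (Names here are illustrative only.)
func genObjectMetaSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"name":      {Type: schema.TypeString, Optional: true},
				"namespace": {Type: schema.TypeString, Optional: true},
				"labels":    {Type: schema.TypeMap, Optional: true},
			},
		},
	}
}

// A resource schema then embeds the helper rather than duplicating it.
func podMetadataSchemaExample() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"metadata": genObjectMetaSchema(),
	}
}
```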

Another thing: every attribute is marked as Optional, as a way to keep the spec attribute for users who want it.

@radeksimko (Member)

~1k LOC makes me a little worried; what do you think about separating this somehow? I see a lot of schemas related to volumes, for example, which I think could be separated.

Also, can you explain the thinking behind the gen* naming convention? Is that supposed to mean "generated" or something else?

Another thing: every attribute is marked as Optional, as a way to keep the spec attribute for users who want it.

I don't quite understand how this would work; can you explain it and/or maybe attach an example?
The problem I see with making everything optional is that we abandon schema checking entirely and delegate the problem to the API.

While changing the schema/DSL, can you also update the relevant acceptance test for this, please? It should be fairly quick to run. We can add more tests later, but at least if you can make the existing one work with the new DSL, that would be great.

Otherwise I feel quite positive about the progress you made! 😃

@radeksimko added the enhancement and waiting-response labels Oct 9, 2015
@lwander (Contributor, PR author) commented Oct 9, 2015

~1k LOC makes me a little worried; what do you think about separating this somehow?

A large volume of code is inevitable if you trace from the Pod type down through all its dependencies; it's pretty huge. However, a lot of the individual components I separated out are reused later when we implement the other Kubernetes resources.

I see a lot of schemas related to volumes, for example, which I think could be separated.

The volumes are separated; search for *VolumeSource(). Maybe I'm misunderstanding what you mean by "separated".

Also, can you explain the thinking behind the gen* naming convention? Is that supposed to mean "generated" or something else?

Yup, it means "generated". The suffix corresponds to the type name in the k8s API.

I don't quite understand how this would work, can you explain it and/or maybe attach an example?

I remember during HashiConf you mentioned you wanted to keep the spec attribute for the benefit of the user, i.e., users who want to keep their .yaml config files. So, if we keep spec in each resource, we can't mark either spec or any component attribute as Required, since defining one precludes defining the other. Unfortunately, validating the schemas then relies either on us or on the Kubernetes API endpoint.
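To make the trade-off concrete, here is an illustrative sketch (field names assumed, not taken from the PR): because a user may provide either the raw spec or the structured fields, neither side can be marked Required, and missing-field validation moves into the provider logic or the Kubernetes API.

```go
// Sketch only: keeping a raw "spec" escape hatch forces the structured
// fields to be Optional as well, since a user who sets "spec" leaves them empty.
package kubernetes

import "github.com/hashicorp/terraform/helper/schema"

func podSchemaWithRawSpec() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		// Raw spec, e.g. pasted from an existing .yaml/.json manifest.
		"spec": {Type: schema.TypeString, Optional: true},

		// Structured equivalent; cannot be Required, so schema-level
		// validation of mandatory fields is lost.
		"containers": {
			Type:     schema.TypeList,
			Optional: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"name":  {Type: schema.TypeString, Optional: true},
					"image": {Type: schema.TypeString, Optional: true},
				},
			},
		},
	}
}
```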

While changing the schema/DSL, can you also update the relevant acceptance test for this, please?

Could you point me towards the relevant tests?

Thanks for looking over this!

@radeksimko (Member)
The volumes are separated; search for *VolumeSource(). Maybe I'm misunderstanding what you mean by "separated".

builtin/providers/kubernetes/gen_kubernetes_schemas.go <- that single file has ~1k LOC and will likely grow over time, so I thought you could make something like builtin/providers/kubernetes/gen_kubernetes_volume_schemas.go.

So, if we keep spec in each resource, we can't mark either spec or any component attribute as Required, since defining one precludes defining the other. Unfortunately, validating the schemas then relies either on us or on the Kubernetes API endpoint.

Understood. I feel that supporting both options makes it more complicated, so I'd only support the DSL, i.e. forget about spec, unless @sparkprime has a better idea.
I think there's value in offline validation of optional/required fields.

Could you point me towards the relevant tests?

Oh, it looks like I had accidentally removed those tests when resubmitting the PR. There are no tests to update then, sorry. 😞

@lwander (Contributor, PR author) commented Oct 10, 2015

so I thought you could make something like builtin/providers/kubernetes/gen_kubernetes_volume_schemas.go

Sounds good to me. Then sub-resources shared between resources will go in something like gen_kubernetes_shared_schemas.go?

I think there's value in offline validation of optional/required fields.

I agree.

Oh, it looks like I had accidentally removed those tests when resubmitting the PR. There are no tests to update then, sorry.

No worries. Where do they normally go?

@sparkprime (Contributor)

Yeah, we can always write a tool to convert the YAML to HCL. In fact, YAML to JSON is trivial, and that is probably already good enough.
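As a minimal sketch of the "YAML to JSON is trivial" point, a converter can be a few lines of Go. The github.com/ghodss/yaml helper and the input path are assumptions; any YAML-to-JSON library would do.

```go
// Tiny YAML-to-JSON converter: reads a manifest and prints the JSON form.
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/ghodss/yaml"
)

func main() {
	in, err := ioutil.ReadFile("pod.yaml") // path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	out, err := yaml.YAMLToJSON(in)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```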

@radeksimko (Member)

Sounds good to me. Then sub-resources shared between resources will go in something like gen_kubernetes_shared_schemas.go?

👍 If something like volumes appears (way too many related schemas), we can separate it from shared similarly.

No worries. Where do they normally go?

In this case, builtin/providers/kubernetes/resource_pod_test.go. You can take e.g. https://github.com/hashicorp/terraform/blob/master/builtin/providers/google/resource_compute_address_test.go as an example. You can expect the developer running those tests to set environment variables pointing all tests at a K8s cluster, which can be GKE or anything else.
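A rough sketch of that acceptance-test shape, modeled on the google provider tests, is below. The testAcc* helpers and the config string are assumed to be defined elsewhere in the provider's test package, as in the linked example; nothing here is code from this PR.

```go
// Acceptance-test skeleton: one TestStep applies a config against a real
// cluster (selected via environment variables) and checks the pod exists.
package kubernetes

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

func TestAccKubernetesPod_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		// testAccPreCheck would verify required env vars (cluster endpoint, credentials).
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckKubernetesPodDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccKubernetesPodConfigBasic,
				Check: resource.ComposeTestCheckFunc(
					testAccCheckKubernetesPodExists("kubernetes_pod.test"),
				),
			},
		},
	})
}
```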

@radeksimko added the wip label and removed the waiting-response label Oct 15, 2015
@lwander (Contributor, PR author) commented Oct 16, 2015

Hey @radeksimko, have you had trouble with the following error?

Post http://<endpoint address>/api/v1/namespaces/default/pods: dial tcp <endpoint address>:80: i/o timeout

It only seems to happen the first time I try to create a pod in a GKE cluster; the next attempt always succeeds.

@radeksimko (Member)

@lwander I do remember having it, but I think it's a problem to be solved at the GKE level, i.e. perhaps we should add another wait block, similar to this one -> wait a few more minutes (?) until we get HTTP 200 from the API, so that the resource is considered available only when it's actually ready for use.

I do remember having another similar issue with GKE & disks; I just haven't had time to fix/report it yet. I will have a look at my todo list of google-provider-related things after I get back from holiday (end of next week).
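A hedged sketch of that "wait until the API answers HTTP 200" idea: poll the cluster endpoint before letting pod creation proceed. The URL, interval, and timeout are placeholders, and a real fix would presumably live in the GKE/google resource rather than in a standalone helper.

```go
// waitForKubernetesAPI polls the cluster endpoint until it returns HTTP 200
// or the timeout elapses, so dependent resources only run once the API is up.
package kubernetes

import (
	"fmt"
	"net/http"
	"time"
)

func waitForKubernetesAPI(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(10 * time.Second) // polling interval is a placeholder
	}
	return fmt.Errorf("kubernetes API at %s not ready after %s", url, timeout)
}
```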

@lwander (Contributor, PR author) commented Oct 21, 2015

Closing to clean up messy history.

@ghost commented Apr 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators Apr 30, 2020