Add support for kubernetes_replication_controller #9
Conversation
I don't feel qualified to review this in detail due to my limited knowledge of Kubernetes, but the implementation looks plausible and I just had some minor feedback on the docs and naming.
}
requests{
  cpu = "250m"
  memory = "50Mi"
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
We should align these by their equals signs, to match what `hclfmt` would do.
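For reference, a sketch of how the same block would look once aligned (values copied from the diff above; the enclosing context is assumed):

```hcl
requests {
  cpu    = "250m"
  memory = "50Mi"
}
```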
selector {
  Test = "MyExampleApp"
}
template {
indentation seems wrong here
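For comparison, a sketch of the same nesting with consistent two-space indentation (the enclosing `spec` block and the elided `template` body are assumed from context):

```hcl
spec {
  selector {
    Test = "MyExampleApp"
  }

  template {
    # ...
  }
}
```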
* `env` - (Optional) List of environment variables to set in the container. Cannot be updated.
* `image` - (Optional) Docker image name. More info: http://kubernetes.io/docs/user-guide/images
* `image_pull_policy` - (Optional) Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/images#updating-images
* `lifecycle` - (Optional) Actions that the management system should take in response to container lifecycle events
Of course at this nested level `lifecycle` isn't special, but since we have a special meaning for it at the resource level, should we try to pick a non-conflicting name here, in case we want to mimic this same nested structure as a top-level resource later?
Probably not a big deal unless we can already imagine a situation where a container structure like this would be a top-level resource. So feel free to ignore if you disagree. 😀
Ah 🤔 I see what you mean, but I'd rather keep it as is. The vast majority of the code + docs was generated from the K8S API structs, and I think it's good that we stick to their naming conventions where possible.

I know it may cause trouble if we ever get to implementing nested resources, as first-level meta-params might suddenly become valid in nested structures, but I'd prefer to deal with that when it becomes a real problem.
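To illustrate the clash being discussed, a hypothetical sketch: Terraform reserves `lifecycle` as a meta-parameter on every resource, while the container-level block of the same name mirrors the K8S API field. (The `post_start`/`exec` nesting shown here is an assumption based on the provider's snake_case convention; names and image are illustrative.)

```hcl
resource "kubernetes_replication_controller" "example" {
  # Terraform's reserved meta-parameter at the resource level:
  lifecycle {
    prevent_destroy = true
  }

  spec {
    template {
      container {
        name  = "example"
        image = "nginx:1.7.8"

        # The K8S container field that happens to share the name:
        lifecycle {
          post_start {
            exec {
              command = ["/bin/sh", "-c", "echo started"]
            }
          }
        }
      }
    }
  }
}
```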
Force-pushed from 371d81d to 895062e.
2(?) questions, but non-blockers I think 👍
The following arguments are supported:

* `metadata` - (Required) Standard replication controller's metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata
* `spec` - (Required) Spec defines the specification of the desired behavior of the replication controller. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status
Good catch, this is kind of a side effect of K8S having these outdated links in the swagger/API specs from which I've generated the schema and then these docs. 😢

I'll fix it here manually and then `sed` the existing docs too in a separate PR.
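For context, a minimal configuration exercising these two arguments might look like the sketch below, assembled from the snippets quoted earlier in this review (the `nginx` image and the names are illustrative, not taken from the PR):

```hcl
resource "kubernetes_replication_controller" "example" {
  metadata {
    name = "terraform-example"
    labels {
      Test = "MyExampleApp"
    }
  }

  spec {
    selector {
      Test = "MyExampleApp"
    }

    template {
      container {
        image = "nginx:1.7.8"
        name  = "example"
      }
    }
  }
}
```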
  return fmt.Errorf("Failed to update replication controller: %s", err)
}
log.Printf("[INFO] Submitted updated replication controller: %#v", out)
d.SetId(buildId(out.ObjectMeta))
Is it normal for the ID of the resource to change here? Does the old ID still work? Is it possible for the `Patch` operation above (~line 170) to encounter an error but still actually change the ID, in which case something is lost?

If this kind of ID updating is normal then ignore me. I've seen other resources re-set the ID in UPDATE methods when it wasn't necessary (or recommended).
Ah, you're absolutely right, thanks for pointing that out. Neither name nor namespace (from which we build the ID) is updatable, so the ID would never change. Fixed.
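To spell out why the ID is stable, here is a self-contained sketch of an ID helper in the spirit of the `buildId` referenced in the diff (the `namespace/name` format and the trimmed-down `ObjectMeta` are assumptions made for illustration):

```go
package main

import "fmt"

// ObjectMeta stands in for the Kubernetes API struct of the same name,
// reduced to the two fields relevant here. Neither field is updatable
// on a live object in Kubernetes.
type ObjectMeta struct {
	Namespace string
	Name      string
}

// buildId derives the Terraform resource ID solely from namespace and
// name. Because both are immutable, the ID computed at create time can
// never change during an Update, so re-calling d.SetId() there is
// unnecessary. (A sketch; the real helper's exact format is assumed.)
func buildId(meta ObjectMeta) string {
	return meta.Namespace + "/" + meta.Name
}

func main() {
	fmt.Println(buildId(ObjectMeta{Namespace: "default", Name: "terraform-example"}))
	// Output: default/terraform-example
}
```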
Force-pushed from 895062e to e86504a.
Both fixed. Thanks for the review.