Migrate existing repo to kubernetes-client (kube-rs) #2792
Comments
/assign @nikhita
@nikhita any chance you can look at this when you have time? Thanks!
Since this would be under SIG API Machinery, we'll need a +1 from one of the leads. /assign @fedebongio
👀
@brendandburns the folks doing krator + krustlet ( https://github.com/krator-rs/krator / https://github.com/deislabs/krustlet ) will join in this effort?
I am super excited for this. Made my first PR to krustlet and looking to make more :)
@dims I will certainly ask them to see if there is interest/time in contributing. Definitely welcome as many people as possible to help with it.
Any reason we are making an entirely different client than the existing one? Or am I misreading this? kube-rs is widely used among us Rustaceans and also uses the already generated k8s-openapi crate.
I'm curious about the same thing @thomastaylor312 is concerned with. Kube-rs and k8s-openapi exist already. k8s-openapi is maintained and kept up-to-date with Kubernetes. Kube-rs is idiomatic and field-tested, and the maintainers are active, friendly, and flexible. We've written operators, clients, and (of course) Krustlet using kube-rs & k8s-openapi, and it has been a great tool. Starting a new "from scratch" project is going to confuse the ecosystem, compete where no competition seems warranted, and introduce an "official" repo that is already years behind the existing tooling. So before proceeding here, I would like to suggest that someone provide a clear and compelling reason why we should go this route rather than (a) inviting kube-rs and k8s-openapi to be the Kubernetes Rust client tools, or (b) simply recommending the existing Rust implementations and not having an "official" one.
Tacking onto other comments from krustlet maintainers... @clux has been a fantastic maintainer of the kube-rs project. The project has been incredibly responsive, and he has been maintaining that crate for a number of months. There are also several sub-crates of the kube-rs project, including kube-derive. The documentation, support, and overall community is great, it passes as a silver client (watch APIs are supported), and quite a few examples are readily available. I'm also curious why create a new project instead of recommending one built by the community. Or better yet, why not provide additional support for the project?
The k8s-openapi doesn't seem to be maintained by kubernetes? @technosophos
@HongjiangHuang It isn't maintained by Kubernetes, but it is maintained and kept up to date with the latest API versions. I think @technosophos was suggesting that k8s-openapi could also be a good candidate for adoption into Kubernetes.
@thomastaylor312 @technosophos Regarding kube-rs:
This is definitely not to imply that people shouldn't use kube-rs. There are community libraries for nearly every language (e.g. the fabric8 Kubernetes library for Java, the godaddy Kubernetes library for Javascript); that's great, and we encourage that. But there is also value in a consistent approach to client generation and standardized capabilities (e.g. https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities).

Admittedly, there are also a bunch of bugs/problems in the upstream spec (lots of details here: https://github.com/Arnavion/k8s-openapi#works-around-bugs-in-the-upstream-openapi-spec), but I think that fixing them in the central generator is a better approach than forking off for every language.
It's hard to write anything non-trivial on top of a client generated by a naive openapi generator in rust because of how restrictive it is, and how much of the language type-system you end up having to bypass with code-generation. I agree it's non-standard compared to the other clients, and we have to hand-roll some of the code ourselves.

Anyway. Regardless of what you end up doing here, it would be great for us (in kube-rs) to get some alignment with sig-apimachinery on the subject of more ergonomic clients. Not saying that we need to be rubber-stamped or absorbed as some official thing (although we would not be opposed to that), but we know people are rejecting kube-rs because of the lack of official support (even with 3 active maintainers, tons of documented users, and parts of gold level features, when no official alternative exists), and if we can contribute anything towards improving this, it would be great.

(And with golang receiving generics in the future, maybe we even have some common goals to work towards. 🙏)
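For readers less familiar with the ergonomics argument above, here is a minimal, standard-library-only sketch of typed watch-event handling, the kind of thing a Rust client wants to express with enums and generics rather than generated boilerplate. All names here (WatchEvent, Pod, reflect) are illustrative, not the actual kube-rs API:

```rust
use std::collections::BTreeMap;

// Hypothetical typed watch event, loosely modeled on the shape a Rust
// client exposes; the names are illustrative, not the real kube-rs API.
#[derive(Debug, Clone)]
enum WatchEvent<T> {
    Applied(T),
    Deleted(T),
}

#[derive(Debug, Clone)]
struct Pod {
    name: String,
    phase: String,
}

// A tiny in-memory "reflector" cache: folds a stream of watch events
// into the current known state, keyed by resource name.
fn reflect(events: Vec<WatchEvent<Pod>>) -> BTreeMap<String, String> {
    let mut store = BTreeMap::new();
    for ev in events {
        match ev {
            WatchEvent::Applied(p) => {
                store.insert(p.name, p.phase);
            }
            WatchEvent::Deleted(p) => {
                store.remove(&p.name);
            }
        }
    }
    store
}

fn main() {
    let events = vec![
        WatchEvent::Applied(Pod { name: "a".into(), phase: "Pending".into() }),
        WatchEvent::Applied(Pod { name: "b".into(), phase: "Running".into() }),
        WatchEvent::Applied(Pod { name: "a".into(), phase: "Running".into() }),
        WatchEvent::Deleted(Pod { name: "b".into(), phase: "Running".into() }),
    ];
    for (name, phase) in reflect(events) {
        println!("{}={}", name, phase);
    }
}
```

The point is that the `WatchEvent<T>` enum makes illegal states unrepresentable at compile time; a naive generator typically flattens this into stringly-typed fields that every caller must re-validate.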
update: fixed wrong quote 🤦♂️

(I'm one of the kube-rs maintainers.)

I've read this issue, Kubernetes: New Client Library Procedure, and its background issue multiple times. However, I don't understand why a new Rust client is necessary, who wants it, and why they want it. According to the responses and reactions in this issue, we (Rustaceans who've been working with k8s) are not the target. Please help us understand the reasoning behind this.
This sounds like a misunderstanding, so I tried to make it clear, expanding on what @clux wrote above.
Are we missing any value? The only difference from the official clients I can think of is not using the same generator (not sure which one, https://github.com/swagger-api/swagger-codegen (from the design proposal) or https://github.com/OpenAPITools/openapi-generator (mentioned in more recent comments in official client issues)). This is necessary to make a client that can take advantage of Rust's strengths, as explained above. And I don't see how this in itself is a value, since each language's generator is different anyway.
The fixups in k8s-openapi:
cc @liggitt
It looks like all but one of the issues in https://github.com/Arnavion/k8s-openapi/blob/445e89ec444ebb1c68e61361e64eec4c4a3f4785/k8s-openapi-codegen/src/fixups/upstream_bugs.rs have been fixed in the upstream spec.

This one is still outstanding (though I don't see a kubernetes/kubernetes issue for it):
// Path operation annotated with a "x-kubernetes-group-version-kind" that references a type that doesn't exist in the schema.
// Ref: https://github.com/kubernetes/kubernetes/pull/66807

Fixed in 1.12 in kubernetes/kubernetes#63837 as noted:
// The spec says that `createAppsV1beta1NamespacedDeploymentRollback` returns `DeploymentRollback`, but it returns `Status`.
// Ref: https://github.com/kubernetes/kubernetes/pull/63837

Fixed in 1.22 in kubernetes/kubernetes#102159:
// `ContainerImage::names`
// Ref: https://github.com/kubernetes/kubernetes/issues/93606

Fixed in 1.16 in kubernetes/kubernetes#64996 as noted:
// `CustomResourceDefinitionStatus::conditions`
// Ref: https://github.com/kubernetes/kubernetes/pull/64996

Fixed in 1.12 in kubernetes/kubernetes#63757:
// `PodDisruptionBudgetStatus::disruptedPods`
// Ref: https://github.com/kubernetes/kubernetes/pull/65041

Fixed in 1.16 in kubernetes/kubernetes#80773:
// Thus `RawExtension` is really an arbitrary JSON value, and should be represented by `serde_json::Value`
// Ref: https://github.com/kubernetes/kubernetes/issues/55890

Dropped in 1.14 in kubernetes/kubernetes#74596:
// Remove `$ref`s under `io.k8s.kubernetes.pkg` since these are marked deprecated and point to corresponding definitions under `io.k8s.api`.
// They only exist for backward-compatibility with 1.7's spec.
Good to know another one was fixed in 1.22. Fixups applied for each version are listed here: https://github.com/Arnavion/k8s-openapi/blob/445e89ec444ebb1c68e61361e64eec4c4a3f4785/k8s-openapi-codegen/src/supported_version.rs#L81-L150. Looks like there's already an issue open for fixing the spec bugs mentioned there.
All of the code generation for the "official" projects is centralized under https://github.com/kubernetes-client/gen, and I think the community would likewise benefit from any fix-ups that we have missed.

I don't think that it is necessary to centralize on the generator code itself (the dotnet library, for example, uses a different generator, and there is little overlap between the various language generators anyway), but centralizing on the swagger download/patch/etc. flow is necessary in my opinion, so that we make all of the fixes in a single place.

The other thing, of course, is that the kube-rs team would need to decide to donate the code to the Kubernetes project/CNCF, abide by the Kubernetes code of conduct, use the Kubernetes bots for all management, use the Kubernetes security process, etc. There's a bunch of procedural changes and centralizations that are necessary to be "official"; ultimately it is up to the maintainers.

@kazk @clux I'm happy to chat about the process of becoming official, if that is interesting. Past experience with other client libraries (e.g. Javascript) is that the cost of making things official didn't seem worth it to the maintainers, but I'm definitely supportive of whatever path you choose.
One other wrinkle that occurred to me: it doesn't look like either project has a CLA in place. I believe that this means that donating them to the Linux Foundation/CNCF is quite complicated and involves going to each contributor (kube-rs has had 52) and getting them to agree to donate their work to the Linux Foundation/CNCF.
For what it's worth, here's the process for donating repos to the Kubernetes project - https://github.com/kubernetes/community/blob/master/github-management/kubernetes-repositories.md#rules-for-donated-repositories
@nikhita @brendandburns let's time-box this: if we don't hear back with an active proposal to donate something to seed this repo within a week, let's go ahead with it. Is that fair, everyone?
I'll discuss with the rest of the kube-rs maintainers. Personally, I think the overhead involving donation sounds reasonable on paper, but we'll have to discuss it a bit. The legal setup does pose a problem, and I am not sure what the recommended methods are for getting retroactive approval from past contributors. But we can at least try to chase down people and see how it goes. As for the offer to chat:
Yes please! Any help about practicalities of the process, legal setup, would be welcome. Any people I can reach out to on the kubernetes slack? I am @clux on there.
I don't think that sharing the generator itself is required. I think what is important is that you use the same centralized swagger download/patch flow, so that spec fixes land in a single place. I generally disconnect on the weekends, but I will try to get you on the Kubernetes slack sometime next week, or you can always reach me at bburns [at] microsoft.com.
Discussion ongoing on the pros/cons of donating to kubernetes over here: kube-rs/kube#584
Hey all! Just wanted to check in on how things were going with all of this. It looks like @clux and crew were making progress on things in the kube-rs repo. Were you all able to talk, @brendandburns?
At the very least, we did get a lot of feedback from a variety of people here and on that linked issue about our concerns. We got some legal help from CNCF, and they said that we do not need to collect copyrights and could just use a DCO, although the kubernetes org rules can be interpreted as stricter than that.

In terms of standardisation, we might have to concede that our codebase does not necessarily fit perfectly into the existing client org with the simple-client setup.

So basically, how would you prefer us to proceed here? We can make an official proposal somewhere if desired, and we can try to get buy-in from a sig (presumably sig-apimachinery), but maybe it is more desirable for us to operate elsewhere under an official banner?
@clux There are 2 choices.
@HongjiangHuang it looks like this has segued into an issue about getting an existing Rust client library to be adopted as official. If that's an outcome you'd like to see, would you be happy to update the issue description accordingly?
Ok, glad to see it all.
Is there still a plan to migrate this repo? Just wanted to check back and see if this issue should be closed.
Update from us: I didn't have a good answer a week ago. We are trying to follow option 2 under the CNCF sandbox path, provided they have a reasonable home for us. It doesn't feel like we quite fit into the existing client org. Our rust protobuf codegen might be a better future goal for donation. Not sure if you want to keep this issue here open in the meantime.
@clux we can open a fresh issue when the time comes. thanks! /close
@dims: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
New Repo, Staging Repo, or migrate existing
migrate repository
Requested name for new repository
rust
Which Organization should it reside
kubernetes-client
If not a staging repo, who should have admin access
@brendandburns @clux
If not a staging repo, who should have write access
@brendandburns @clux
If not a staging repo, who should be listed as approvers in OWNERS
@brendandburns @clux
If not a staging repo, who should be listed in SECURITY_CONTACTS
@brendandburns @clux
What should the repo description be
rust client library for kubernetes
What SIG and subproject does this fall under in sigs.yaml
sig-api-machinery kubernetes-client
Approvals
kubernetes-client/gen#192
kubernetes-client/gen#194
Additional context for request
/cc @brendandburns