
Migrate existing repo to kubernetes-client (kube-rs) #2792

Closed

alberthuang24 opened this issue Jun 17, 2021 · 34 comments

Labels: area/github-repo Creating, migrating or deleting a Kubernetes GitHub Repository

@alberthuang24 commented Jun 17, 2021

New Repo, Staging Repo, or migrate existing

migrate repository

Requested name for new repository

rust

Which Organization should it reside

kubernetes-client

If not a staging repo, who should have admin access

@brendandburns @clux

If not a staging repo, who should have write access

@brendandburns @clux

If not a staging repo, who should be listed as approvers in OWNERS

@brendandburns @clux

If not a staging repo, who should be listed in SECURITY_CONTACTS

@brendandburns @clux

What should the repo description be

rust client library for kubernetes

What SIG and subproject does this fall under in sigs.yaml

sig-api-machinery kubernetes-client

Approvals

kubernetes-client/gen#192

kubernetes-client/gen#194

Additional context for request

/cc @brendandburns

alberthuang24 added the area/github-repo (Creating, migrating or deleting a Kubernetes GitHub Repository) label on Jun 17, 2021
@brendandburns

/assign @nikhita

@brendandburns

@nikhita any chance you can look at this when you have time?

Thanks!

@nikhita (Member) commented Jun 25, 2021

Since this would be under SIG API Machinery, we'll need a +1 from one of the leads.

/assign @fedebongio
for approval

@n4j (Member) commented Jun 25, 2021

👀

@dims (Member) commented Jun 25, 2021

@brendandburns will the folks doing krator + krustlet ( https://github.com/krator-rs/krator / https://github.com/deislabs/krustlet ) join in this effort?

cc @thomastaylor312

@sladyn98

I am super excited for this. I made my first PR to krustlet and am looking to make more :)

@brendandburns

@dims I will certainly ask them to see if there is interest/time in contributing. We definitely welcome as many people as possible to help with it.

@thomastaylor312 commented Jun 25, 2021

Any reason we are making an entirely different client than the existing one? Or am I misreading this? kube-rs is widely used among us Rustaceans and also uses the already-generated k8s-openapi crate.

@technosophos

I'm curious about the same thing @thomastaylor312 is concerned with. kube-rs and k8s-openapi exist already. k8s-openapi is maintained and kept up-to-date with Kubernetes. kube-rs is idiomatic and field-tested, and the maintainers are active, friendly, and flexible. We've written operators, clients, and (of course) Krustlet using kube-rs & k8s-openapi, and it has been a great tool.

Starting a new "from scratch" project is going to confuse the ecosystem, compete where no competition seems warranted, and introduce an "official" repo that is already years behind the existing tooling.

So before proceeding here, I would like to suggest that someone provide a clear and compelling reason why we should go this route rather than (a) inviting kube-rs and k8s-openapi to be the Kubernetes Rust client tools, or (b) simply recommending the existing Rust implementations and not having an "official" one.

@bacongobbler

Tacking onto other comments from krustlet maintainers... @clux has been a fantastic maintainer of the kube-rs project. The project has been incredibly responsive and he has been maintaining that crate for a number of months. There are also several sub-crates of the kube-rs project including kube-derive (derive macros for CustomResourceDefinitions) and kube-runtime. Several projects including krustlet, kdash, and plugins such as kubectl view-allocations have been using kube-rs for a while now.
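For context, here is a minimal sketch of what kube-derive provides (derive and attribute names follow kube's documented usage at the time; exact requirements such as JsonSchema vary between versions):

```rust
// Minimal sketch only: exact derive requirements differ between kube versions.
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Deriving CustomResource generates a `Foo` resource type wrapping this spec,
// plus `Foo::crd()`, which returns the full CustomResourceDefinition object.
#[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
#[kube(group = "example.com", version = "v1", kind = "Foo", namespaced)]
pub struct FooSpec {
    pub replicas: i32,
}

fn main() {
    // Print the generated CRD as JSON so it can be applied to a cluster.
    println!("{}", serde_json::to_string_pretty(&Foo::crd()).unwrap());
}
```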

The documentation, support, and overall community are great; it passes as a silver client (watch APIs are supported), and quite a few examples are readily available.

I'm also curious why we would create a new project instead of recommending one built by the community. Or better yet, why not provide additional support for the existing project?

@alberthuang24 (Author)

> k8s-openapi is maintained and kept up-to-date with Kubernetes

k8s-openapi doesn't seem to be maintained by Kubernetes itself, though? @technosophos

@thomastaylor312

@HongjiangHuang It isn't maintained by Kubernetes, but it is maintained and kept up to date with the latest API versions. I think @technosophos was suggesting that k8s-openapi could also be a good candidate for adoption into Kubernetes.

@brendandburns

@thomastaylor312 @technosophos

Regarding kube-rs:

kube-rs is not generated using the Kubernetes swagger specification, nor in a standard way.

This is definitely not to imply that people shouldn't use kube-rs or that we want to compete with them in any way. It is just a philosophical difference.

There are community libraries for nearly every language (e.g. the fabric8 Kubernetes library for Java, the godaddy Kubernetes library for JavaScript). That's great, and we encourage it.

But there is also value in a consistent approach to client generation and standardized capabilities (e.g. https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)

Admittedly, there are also a bunch of bugs/problems in the upstream spec (lots of details here: https://github.com/Arnavion/k8s-openapi#works-around-bugs-in-the-upstream-openapi-spec) but I think that fixing them in the central generator is a better approach than forking off for every language.

@clux commented Jun 28, 2021

> kube-rs is not generated using the Kubernetes swagger specification, nor in a standard way.

It's hard to write anything non-trivial on top of a client generated by a naive OpenAPI generator in Rust, because of how restrictive it is and how much of the language's type system you end up having to bypass with code generation.

The k8s-openapi library infers generic resource impls for every resource in the spec, for instance, and that's more or less required for building something more Rust-friendly on top of it.

I agree it's non-standard compared to the other clients - and we have to hand-roll some code from apimachinery - but the goal is that it makes Rust feel like a first-class supported language for Kubernetes, while still having 99% of the code generated from openapi.
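As a rough illustration of what those generic impls enable (trait and constant names as in k8s-openapi at the time; details may differ by version):

```rust
// Works for every generated type because k8s-openapi implements the
// `Resource` trait (group/version/kind constants) for all of them.
use k8s_openapi::api::core::v1::{Node, Pod};
use k8s_openapi::Resource;

fn describe<K: Resource>() -> String {
    format!("{} ({})", K::KIND, K::API_VERSION)
}

fn main() {
    println!("{}", describe::<Pod>());  // Pod (v1)
    println!("{}", describe::<Node>()); // Node (v1)
}
```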


Anyway. Regardless of what you end up doing here, it would be great for us (in kube-rs) to get some alignment with sig-apimachinery on the subject of more ergonomic clients.

Not saying that we need to be rubber-stamped or absorbed as some official thing (although we would not be opposed to that), but we know people are rejecting kube-rs because of the lack of official support (even with 3 active maintainers, tons of documented users, and parts of the gold-level features, when no official alternative exists), and if we can contribute in any way towards improving this, it would be great.

(and with golang receiving generics in the future, maybe we even have some common goals to work towards. 🙏)

@kazk commented Jun 29, 2021

update: fixed wrong quote 🤦‍♂️

(I'm one of the kube-rs maintainers.)

I've read this issue, Kubernetes: New Client Library Procedure, and its background issue multiple times. However, I don't understand why a new Rust client is necessary, who wants it, and why. According to the responses and reactions in this issue, we (Rustaceans who've been working with k8s) are not the target. Please help us understand the reason behind this.

> kube-rs is not generated using the Kubernetes swagger specification, nor in a standard way.

This sounds like a misunderstanding, so I'll try to clear it up, expanding on what @clux wrote above.

> By generating the API Operations and Data Models, updating the client and tracking changes from main repositories becomes much more sustainable.

Background - Kubernetes: New Client Library Procedure

k8s-openapi is kept up to date with Kubernetes releases (thanks Arnavion!). k8s-openapi uses swagger.json like the official clients, works around upstream spec bugs, supports more versions than the official clients, does lots of Kubernetes-specific handling, and makes it possible to take advantage of Rust's strengths.

> There are good clients for each of these languages, but having a basic supported client would even help those client libraries to focus on their interface and delegate transport and config layer to this basic client.

Languages - Kubernetes: New Client Library Procedure

kube-rs provides an idiomatic client on top of the generated code in k8s-openapi. The client makes use of the Tokio ecosystem and is customizable with middleware (which can also be used for testing with mocks). In terms of features, it's mostly gold (supports exec and attach; portforward is not merged yet, but works). We also provide a runtime abstraction inspired by controller-runtime (kube-runtime), and a derive macro for CRDs inspired by Kubebuilder (kube-derive). All of this is implemented in a pretty small code base (you can read it all in an afternoon) thanks to Rust and k8s-openapi.
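For reference, a minimal sketch of the client side of this (API names roughly as in kube at the time; exact signatures vary by version):

```rust
// Minimal sketch of listing Pods with kube-rs; signatures may differ by version.
use k8s_openapi::api::core::v1::Pod;
use kube::{
    api::{Api, ListParams},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Infers configuration from the local kubeconfig or the in-cluster environment.
    let client = Client::try_default().await?;

    // A typed handle to Pods in the "default" namespace; the Pod type itself
    // comes from the k8s-openapi generated code.
    let pods: Api<Pod> = Api::namespaced(client, "default");

    for p in pods.list(&ListParams::default()).await? {
        println!("{}", p.metadata.name.unwrap_or_default());
    }
    Ok(())
}
```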

> But there is also value in a consistent approach to client generation and standardized capabilities

Are we missing any value? The only difference from the official clients I can think of is not using the same generator (not sure which: https://github.com/swagger-api/swagger-codegen (from the design proposal) or https://github.com/OpenAPITools/openapi-generator (mentioned in more recent comments in official client issues)). This is necessary to make a client that can take advantage of Rust's strengths, as explained above. And I don't see how this is a value, since each language's generator is different.
Also, the official clients have many issues pointing at the generator that have been open for years.

> Admittedly, there are also a bunch of bugs/problems in the upstream spec ... but I think that fixing them in the central generator is a better approach than forking off for every language.

The fixups in k8s-openapi are applied to the spec before code generation, based on the Kubernetes version. The central generator is not specific to Kubernetes, and is not aware of Kubernetes versions, right?
The upstream spec should be fixed to include the correct information.
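To illustrate what "applied to the spec before code generation" means in practice, here is a rough sketch (not the actual k8s-openapi code; the function, path, and version gate are hypothetical):

```rust
// Hypothetical sketch of a version-gated spec fixup; not the real k8s-openapi code.
use serde_json::Value;

/// Patch a known upstream bug in the parsed swagger.json before handing it
/// to the code generator, gated on the targeted Kubernetes minor version.
fn apply_fixups(spec: &mut Value, k8s_minor: u32) {
    if k8s_minor < 22 {
        // Example in the spirit of the ContainerImage::names fixup: the API
        // server can omit the field, so drop it from the "required" list.
        if let Some(Value::Array(required)) =
            spec.pointer_mut("/definitions/io.k8s.api.core.v1.ContainerImage/required")
        {
            required.retain(|f| f.as_str() != Some("names"));
        }
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Path is illustrative; in practice the spec is downloaded per Kubernetes version.
    let raw = std::fs::read_to_string("swagger.json")?;
    let mut spec: Value = serde_json::from_str(&raw)?;
    apply_fixups(&mut spec, 21);
    // ...then pass the patched spec to the code generator...
    Ok(())
}
```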

@dims (Member) commented Jun 30, 2021

> The upstream spec should be fixed to include the correct information.

cc @liggitt

@liggitt (Member) commented Jun 30, 2021

It looks like all but one of the issues in https://github.com/Arnavion/k8s-openapi/blob/445e89ec444ebb1c68e61361e64eec4c4a3f4785/k8s-openapi-codegen/src/fixups/upstream_bugs.rs have been fixed in the upstream spec. This one is still outstanding (though I don't see a kubernetes/kubernetes issue for it):

// Path operation annotated with a "x-kubernetes-group-version-kind" that references a type that doesn't exist in the schema.
//
// Ref: https://github.com/kubernetes/kubernetes/pull/66807

Fixed in 1.12 in kubernetes/kubernetes#63837 as noted:

// The spec says that `createAppsV1beta1NamespacedDeploymentRollback` returns `DeploymentRollback`, but it returns `Status`.
//
// Ref: https://github.com/kubernetes/kubernetes/pull/63837

Fixed in 1.22 in kubernetes/kubernetes#102159:

	// `ContainerImage::names`
	//
	// Ref: https://github.com/kubernetes/kubernetes/issues/93606

Fixed in 1.16 in kubernetes/kubernetes#64996 as noted:

	// `CustomResourceDefinitionStatus::conditions`
	//
	// Ref: https://github.com/kubernetes/kubernetes/pull/64996

Fixed in 1.12 in kubernetes/kubernetes#63757:

	// `PodDisruptionBudgetStatus::disruptedPods`
	//
	// Ref: https://github.com/kubernetes/kubernetes/pull/65041

Fixed in 1.16 in kubernetes/kubernetes#80773:

// Thus `RawExtension` is really an arbitrary JSON value, and should be represented by `serde_json::Value`
//
// Ref: https://github.com/kubernetes/kubernetes/issues/55890

Dropped in 1.14 in kubernetes/kubernetes#74596:

// Remove `$ref`s under `io.k8s.kubernetes.pkg` since these are marked deprecated and point to corresponding definitions under `io.k8s.api`.
// They only exist for backward-compatibility with 1.7's spec.

@kazk commented Jun 30, 2021

Good to know another one was fixed in 1.22. Fixups applied for each version are here: https://github.com/Arnavion/k8s-openapi/blob/445e89ec444ebb1c68e61361e64eec4c4a3f4785/k8s-openapi-codegen/src/supported_version.rs#L81-L150

Looks like there's already an issue open for fixing the spec bugs mentioned in k8s-openapi: kubernetes/kubernetes#95052. For the last one, connect_options_gvk, there's a comment in that issue: kubernetes/kubernetes#95052 (comment)

@brendandburns commented Jul 7, 2021

All of the code generation for the "official" projects is centralized under:

https://github.com/kubernetes-client/gen

Moving the kube-rs project to that generator would be necessary in order to be an official client (or maybe it would be "moving the kube-openapi project to that generator"?).

And I think the community would likewise benefit from any fix-ups that we have missed that kube-rs/k8s-openapi has done; moving them into the gen project would benefit all clients (and is the main reason why the centralization is worthwhile).

I don't think that it is necessary to centralize on the generator code itself (the dotnet library, for example, uses a different generator, and there is little overlap between the various language generators anyway), but centralizing on the swagger download/patch/etc. flow is necessary in my opinion so that we make all of the fixes in a single place.

The other thing, of course, is that the kube-rs team would need to decide to donate the code to the Kubernetes project/CNCF, abide by the Kubernetes code of conduct, use the Kubernetes bots for all management, use the Kubernetes security process, etc.

There are a bunch of procedural changes and centralizations that are necessary to be "official"; ultimately it is up to the kube-rs team whether that is something they want or not.

@kazk @clux I'm happy to chat about the process of becoming official, if that is interesting. Past experience with other client libraries (e.g. Javascript) is that the cost of making things official didn't seem worth it to the maintainers, but I'm definitely supportive of whatever path you choose.

@brendandburns commented Jul 7, 2021

One other wrinkle that occurred to me:

It doesn't look like either kube-rs or k8s-openapi uses the CNCF CLA (or any sort of CLA or DCO).

I believe that this means that donating them to the Linux Foundation/CNCF is quite complicated and involves going to each contributor (kube-rs has had 52) and getting them to agree to donate their work to the Linux Foundation/CNCF.

If the kube-rs and k8s-openapi folks decide that they want to donate to the CNCF, then we should definitely talk to the Linux Foundation lawyers and see what is needed here (and the kube-rs folks should consider whether they want to take on the leg-work of running down every contributor and getting them to agree).

@nikhita (Member) commented Jul 8, 2021

For what it's worth, here's the process for donating repos to the Kubernetes project - https://github.com/kubernetes/community/blob/master/github-management/kubernetes-repositories.md#rules-for-donated-repositories

@dims (Member) commented Jul 8, 2021

@nikhita @brendandburns let's time-box this: if we don't hear back with an active proposal to donate something to seed this repo within a week, let's go ahead with it.

Is that fair, everyone?

@clux commented Jul 8, 2021

I'll discuss the possibility of donating with the rest of the kube-rs maintainers over at our issue tracker this week. I'll share an issue for context here later.

Personally, I think the overhead involved in donation sounds reasonable on paper, but we'll have to discuss it a bit. The legal setup does pose a problem, and I am not sure what the recommended methods are for getting retroactive approval from past contributors. But we can at least try to chase people down and see how it goes.

As for k8s-openapi - while we contribute to it, it is not managed by the kube-rs maintainers, so the generation path might take some more discussion. Using the official generator is certainly not yet viable, given the type of code it outputs, and I'm not sure it's realistic to port the kind of changes needed into it. But if you're happy with only the "swagger download/patch/etc" flow being standardised, then maybe that is something we can define and help tackle.

> I'm happy to chat about the process of becoming official, if that is interesting. Past experience with other client libraries (e.g. Javascript) is that the cost of making things official didn't seem worth it to the maintainers, but I'm definitely supportive of whatever path you choose.

Yes, please! Any help with the practicalities of the process and the legal setup would be welcome. Are there people I can reach out to on the Kubernetes Slack? I am @clux on there.

@brendandburns

I don't think that it is required for k8s-openapi to be donated; I think you can use it as an open-source tool in the context of the gen repo, just like we use the other code generators (which aren't donated either).

I think what is important is that you use the k8s-openapi code generator rather than simply picking up the code that is generated. So if there is a way to integrate the k8s-openapi code generator into the https://github.com/kubernetes-client/gen generation process, that would be a good thing to explore. Please have a look at the existing flow for code generation (mostly https://github.com/kubernetes-client/gen/blob/master/openapi/openapi-generator/generate_client_in_container.sh and https://github.com/kubernetes-client/gen/blob/master/openapi/openapi-generator/client-generator.sh) and see if it seems feasible to integrate the k8s-openapi generation into that flow.

I generally disconnect on the weekends, but I will try to reach you on the Kubernetes Slack sometime next week, or you can always reach me at bburns [at] microsoft.com.

@spiffxp (Member) commented Jul 23, 2021

Discussion is ongoing on the pros/cons of donating to Kubernetes over here: kube-rs/kube#584

@thomastaylor312

Hey all! Just wanted to check in on how things were going with all of this. It looks like @clux and crew were making progress on things in the kube-rs repo. Were you all able to talk, @brendandburns?

@clux commented Aug 5, 2021

At the very least, we did get a lot of feedback from a variety of people on that linked issue about our concerns. We got some legal help from CNCF, and they said that we do not need to collect copyrights and could just use a DCO - although the Kubernetes org rules can be interpreted as stricter than that.

We've investigated a bit around gen and k8s-openapi, seeing how they might fit together; there's a new discussion there. So far it doesn't look very promising to try to fit those two together (for the reasons outlined therein), but some less disruptive pathways that we can see could involve 1. using gen as a starting point for our proto work, and 2. finding a better way to upstream spec bugs, feeding back as much as possible from that work. Anyway, input is welcome.

In terms of standardisation, we might have to concede that our codebase does not necessarily fit perfectly into the existing client org with the simple-client setup. The client is just one module within one of our crates, and the rest is more focused on functionality inspired by client-go, controller-runtime and kubebuilder. In the same way that there is a distinction between kubernetes-client/go and kubernetes/client-go, maybe it's just that the kubernetes-client org is not the ideal destination?

So basically, how would you prefer us to proceed here? We can make an official proposal somewhere if desired, and we can try to get buy-in from a sig (presumably sig-apimachinery), but maybe it is more desirable for us to operate elsewhere under an official banner? Maybe a sig-rust would make sense? Given our setup, would you want to have us in a particular org?

@dims (Member) commented Aug 5, 2021

@clux There are 2 choices.

@sftim (Contributor) commented Aug 15, 2021

@HongjiangHuang it looks like this has segued into an issue about getting an existing Rust client library to be adopted as official. If that's an outcome you'd like to see, would you be happy to update the issue description accordingly?

@alberthuang24 (Author)

> @HongjiangHuang it looks like this has segued into an issue about getting an existing Rust client library to be adopted as official. If that's an outcome you'd like to see, would you be happy to update the issue description accordingly?

OK, glad to see it all.

alberthuang24 changed the title from "Request to create new repository “kubernetes-client/rust”" to "Migrate existing repo to kubernetes-client (kube-rs)" on Aug 16, 2021
@mrbobbytables (Member)

Is there still a plan to migrate this repo? Just wanted to check back and see if this issue should be closed.

@clux commented Oct 30, 2021

An update from us - I didn't have a good answer a week ago.

We are trying to follow option 2 under the CNCF sandbox path - provided they have a reasonable home for us. It doesn't feel like we quite fit into the kubernetes-client org with the conflicting scopes. See kube-rs/kube#584 (comment) for details.

Our Rust protobuf codegen might be a better future goal for donation to kubernetes-client, but for now that's still a WIP with just some core processes at kube-rs/k8s-pb. If there are concerns/ideas around those approaches, we would love to cooperate and smooth out the pain points there to ensure the boundary between languages remains as small as possible.

Not sure if you want to keep this issue open here in the meantime.

@dims (Member) commented Oct 30, 2021

@clux we can open a fresh issue when the time comes. thanks!

/close

@k8s-ci-robot (Contributor)

@dims: Closing this issue.

In response to this:

> @clux we can open a fresh issue when the time comes. thanks!
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
