Extracting Kubernetes runtime out of Vector code base #2963
Comments
So, how about I start with extracting the parts into a crate? Then we can see if one of the crates out there accepts our code. @lukesteensen, @Hoverbear, what do you think?
I'd definitely love to see a
Thanks for writing this up @MOZGIII!
The big motivation is less about code review and auditing, and more about having the code used as widely as possible in a number of different contexts. This will give us much more confidence in its correct behavior than just reading it carefully and trying to find bugs ourselves.
This is one of the primary reasons for #2635. We do not want to have to use custom HTTP clients for everything, because we know they're not terribly common in libraries like this. Using one of the standard HTTP client crates should be just fine for our dependencies.
These are all reasonable concerns, but I do want to be careful about differentiating between how we think the code "should" be written and factors that actually affect the user-facing behavior. If the crate functions reliably and offers the APIs we need, that should be enough (especially for an initial implementation).
I think this is a great first step! The big benefit here will be to draw a clear line between the k8s implementation and the API that Vector depends on. Once that API is clear, we can use it to have more concrete conversations about changes we'd like to see in community crates. As a second step, I'd suggest that we write up a short, concise proposal to share with the `kube` maintainers.
Unfortunately, this issue is fairly recent, compared to when the implementation was written. It's going to make the transition easier.
Absolutely! Unfortunately, Arnavion/k8s-openapi#70 was only found by us, despite
Yep, sounds good!
I'll begin the process of extracting the implementation into a crate then.
Sounds good! Pulling out into
I support this. I'd have to unwrap the
I don't think we need to rely on emitting events internal to the k8s client. We can instrument in Vector itself around use of the API.
A lot of the discussion at the original PR was on switching from logs to emits. Some pieces are elegantly aligned to allow instrumenting around, but others - more "client-internal" things - aren't that easy to add instrumentation to. For those, I think the best course of action is to switch them back to events. It'll be easier to discuss this in more detail when I submit the PR.
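For illustration, here is a minimal sketch of what "instrumenting in Vector around use of the API" could look like: the extracted k8s client stays free of Vector-specific events, and the call site wraps it with instrumentation. The `InternalEvent` trait, `emit` function, and event structs below are simplified stand-ins for this sketch, not Vector's actual internal-events API.

```rust
/// Simplified stand-in for Vector's internal-event trait (hypothetical).
trait InternalEvent {
    fn emit_logs(&self);
}

/// Simplified stand-in for Vector's `emit!` machinery (hypothetical).
fn emit(event: impl InternalEvent) {
    event.emit_logs();
}

struct WatchRequestStarted;

struct WatchRequestFailed {
    error: String,
}

impl InternalEvent for WatchRequestStarted {
    fn emit_logs(&self) {
        println!("watch request started");
    }
}

impl InternalEvent for WatchRequestFailed {
    fn emit_logs(&self) {
        println!("watch request failed: {}", self.error);
    }
}

/// Wrap a client call with instrumentation at the call site, instead of
/// emitting events from inside the extracted k8s crate.
fn watch_with_instrumentation<T, E: std::fmt::Display>(
    call: impl FnOnce() -> Result<T, E>,
) -> Result<T, E> {
    emit(WatchRequestStarted);
    call().map_err(|error| {
        emit(WatchRequestFailed { error: error.to_string() });
        error
    })
}

fn main() {
    // Dummy call site: the closure stands in for a call into the k8s client.
    let result: Result<(), String> =
        watch_with_instrumentation(|| Err("connection reset".to_string()));
    assert!(result.is_err());
}
```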
Yes, it's likely some bits will need to change now that they're in a different context than when they were originally reviewed (internal to Vector vs part of an external crate). We should align the new crate's API with that of `kube`.
I updated the plan in the OP.
Closing since we ended up using `kube`.
A lot of code to interact with the Kubernetes API was implemented as part of #2653. We'd like to extract this code into a shared crate of some kind.
Motivation
Context
There were a few reasons why we implemented the code ourselves in the first place, and didn't use some existing crate:
- `k8s-openapi` - it provides some machine-generated Rust types and serialization/deserialization based on the OpenAPI spec for the Kubernetes API.
- The new code is very heavily inspired by the `WatchClient` (it too doesn't rely on anything higher level than `k8s-openapi`, and uses `evmap`). `WatchClient` had very good design decisions, and I attempted to use higher-level crates, but quickly got back to the same core design for various reasons - more on this later.
- We needed the flexibility to plug in our own `http` client facilities (see the sketch after this list). None of the crates had this flexibility. Halfway through the implementation, it became evident that this factor is very important, as I had to debug and fix issues at the bottom of the library stack. It would've been way more difficult to do if there were additional layers.
- `kube` - the only one actively maintained. It's problematic, though - a lot of things are hard-coded to a particular implementation, rather than being generic around a trait. It also provides a lot of very basic custom functionality that's manually implemented, rather than being based on `k8s-openapi` (which it also depends on). The test coverage is also lacking. All those factors repelled me from relying on that crate so far.
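To make the "generic around a trait" point above concrete, here is a minimal sketch of the kind of design being described: a thin watch client that stays generic over the HTTP transport and relies on `k8s-openapi` only for the machine-generated types. It assumes `http`, `serde_json`, and `k8s-openapi` (with a Kubernetes version feature enabled) as dependencies; the `HttpClient` trait, `WatchClient` struct, and `get_pod` method are hypothetical illustrations rather than the actual Vector code, and the real implementation would be async and streaming rather than a one-shot blocking call.

```rust
use http::{Request, Response};
use k8s_openapi::api::core::v1::Pod;

/// Hypothetical abstraction over "http client facilities": the crate stays
/// generic over this trait instead of hard-coding hyper, reqwest, etc.
pub trait HttpClient {
    type Error: std::error::Error + Send + Sync + 'static;

    /// Send a prepared request and return the raw response.
    fn send(&mut self, request: Request<Vec<u8>>) -> Result<Response<Vec<u8>>, Self::Error>;
}

/// Hypothetical watch client: a thin layer that only builds requests and
/// decodes responses; the transport is provided by the caller.
pub struct WatchClient<C: HttpClient> {
    client: C,
}

impl<C: HttpClient> WatchClient<C> {
    pub fn new(client: C) -> Self {
        Self { client }
    }

    /// Fetch a single `Pod` by URI path, decoding it with the
    /// machine-generated `k8s-openapi` types. A fuller implementation would
    /// build requests via `k8s-openapi`'s request helpers and stream watch
    /// events instead of doing a one-shot GET.
    pub fn get_pod(
        &mut self,
        path: &str,
    ) -> Result<Pod, Box<dyn std::error::Error + Send + Sync>> {
        let request = Request::get(path).body(Vec::new())?;
        let response = self.client.send(request)?;
        let pod: Pod = serde_json::from_slice(response.body())?;
        Ok(pod)
    }
}
```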
With that in mind, there are a few options for how to proceed:
- Our implementation only depends on `k8s-openapi`. Technically, we can just extract it into its own crate and invite users to depend on it directly. We can implement adapters for compatibility with the `kube` crate too.
- We could work with the maintainers of the `kube` crate to improve the modularity of their crate, probably port our code there, and then we could switch to using the `kube` crate. Although it's a lot more work than just extracting the code to a new crate, we'll get the community more involved in the process and, eventually, transition the maintenance to the community completely.
- We could also combine the two: extract our code now and converge on the `kube` crate later (8) - that way we'll start having the benefits of a cleaner code separation earlier, and there'd be fewer things for the Vector team to think about sooner.
Plan
- Extract the code into a crate under `lib/`.
- Reach out to the maintainers of `kube`. Discuss things with them.
- Align the new crate's API with `kube`.
- Upstream the relevant `kube` code changes.
- Switch to the `kube` variant.
I'm very much looking forward to sharing what I implemented for Vector with the community, because there's nothing quite like it available out there yet.