Device Plugin Design Proposal #695
Conversation
I will try to take a pass at this before the next resource-mgmt-wg meeting.
In order to solve this problem it is obvious that we need a plugin system in order to have vendors advertise and monitor their resources on behalf of Kubelet.

Additionally, we introduce the concept of ResourceType to be able to select
I'd recommend keeping this proposal just about how kubelet will interact with third party hardware devices. This itself is an API. Let's deal with the k8s APIs as part of resource classes to avoid a deluge of comments in this proposal.
Agreed -- resource class will be a separate proposal altogether.
I understand how it might be a better strategy, will do
* I want to use a particular device type (GPU, InfiniBand, FPGA, etc.) in my pod.
* I should be able use that device without writing custom Kubernetes code.
* I want some mechanism to give priority among pods for particular devices on my node.
This is not a goal for this proposal, which is about hardware device plugins.
## Use Cases

* I want to use a particular device type (GPU, InfiniBand, FPGA, etc.) in my pod.
* I should be able use that device without writing custom Kubernetes code.
I'd add: I want a consistent and portable solution for consuming hardware devices across k8s clusters.
## Objectives

1. Create a plugin mechanism which allows discovery and monitoring of devices
I'd add installation of appropriate drivers too, similar to storage plugins. K8s APIs currently work on top of vanilla Linux. I don't want to introduce third-party driver dependencies, which would make it hard to consume the k8s API across deployments.
This is out of scope and was defined out of scope in the meeting notes :)
Our knowledge of the containers and GPU landscape has increased since the last summit.
If you think you cannot handle driver installation, state why in this proposal.
It's hard for me to imagine every other hardware vendor in the world having the same set of restrictions with drivers like Nvidia does currently.
Coming from a hypervisor background, I also doubt that the installation of drivers should be part of this proposal, to keep it simple and avoid getting derailed.
I'm wondering if kubernetes/kubernetes#46597 could assist here
## Objectives

1. Create a plugin mechanism which allows discovery and monitoring of devices
2. Change the API to support ResourceClass and ResourceType
As mentioned above, keep this proposal about the hardware device interface.
## Non Objectives

1. Solving the runtime is out of scope
Why? k8s users care about the overall solution, not bits and pieces of the puzzle.
It's actually handled by returning a CRI spec in `Allocate`. I forgot to remove this.
I'd recommend not reusing the CRI spec. Instead, explicitly define a separate API for the hardware plugin interface.
@vishh Why not reuse the CRI spec? If we don't, doesn't that mean we would have to create a new API that handles adding mounts, device cgroups, volumes, and possibly hooks?
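For illustration, a dedicated API could carry only the container changes a device needs, without pulling in the full CRI spec. A minimal sketch; all message and field names here are assumptions, not something from this proposal:

```go
message AllocateResponse {
    // Environment variables to inject into the container.
    map<string, string> envs = 1;
    // Host paths to bind-mount into the container.
    repeated Mount mounts = 2;
    // Device nodes to expose; feeds the device cgroup.
    repeated DeviceSpec devices = 3;
}

message Mount {
    string container_path = 1;
    string host_path = 2;
    bool read_only = 3;
}

message DeviceSpec {
    string container_path = 1;
    string host_path = 2;
    // Device cgroup permissions, e.g. "rwm".
    string permissions = 3;
}
```

Something of this shape keeps the plugin interface decoupled from CRI versioning while still covering mounts, device cgroups, and environment.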
### Device Plugin

Kubelet will create a gRPC connection (as a client) to the Device Plugins whose endpoint
Flags are going away. The plugins have to register themselves with the kubelet, and then kubelet will start interacting with the plugins. This requires a two-way communication channel (kubelet is both server and client).
How kubelet auth will be tackled should also be addressed here.
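As a sketch of what such a registration-based, two-way channel could look like (service and message names are hypothetical):

```go
// Served by the kubelet; plugins dial this first.
service Registration {
    rpc Register(RegisterRequest) returns (Empty) {}
}

message RegisterRequest {
    // Device plugin API version the plugin was built against.
    string version = 1;
    // Unix socket the plugin serves on; the kubelet dials back
    // to this endpoint, making it a client of the plugin.
    string endpoint = 2;
    // Resource the plugin advertises, e.g. "vendor.com/gpu".
    string resource_name = 3;
}
```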
Kubelet will create a gRPC connection (as a client) to the Device Plugins whose endpoints were specified on the CLI.

The kubelet will call the `Discover` function once and then listen to a stream returned
I expected the hardware plugins to expose a `List` and `Watch` API similar to the k8s API server. In fact, the k8s API server is getting refactored, so the plugin can use the k8s API server library.
Actually, ignore my comment on re-using the API server machinery. It's meant for declarative, stateful servers; the hardware plugin interface is an imperative API.
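To make the List/Watch suggestion concrete for an imperative interface, one possible shape (hypothetical, not from the proposal) is a single server-streaming RPC that returns the full device list immediately and then pushes a fresh list on every change:

```go
service DeviceManager {
    // First response is the complete device list; subsequent
    // responses are sent whenever health or membership changes.
    rpc ListAndWatch(Empty) returns (stream DeviceList) {}
}

message DeviceList {
    repeated Device devices = 1;
}
```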
The kubelet will call the `Discover` function once and then listen to a stream returned by the `Monitor` function.

When receiving a pod which requests GPUs (through the ResourceClass API) kubelet will
ResourceClasses aren't necessary. For example, kubelet can decide to expose devices with its own name mapping, though that might break portability. I'd say: through resource names in the `Pod Spec`.
I'm not sure I understand you here
ResourceClasses provide better scheduling primitives than what is available today. With resource classes, you can request a GPU with specific attributes; without them, you will get any GPU.
The hardware plugin interface should ideally work with the current resource APIs, even if it were to expose additional attributes to enable ResourceClasses in the future.
Once ResourceClasses are introduced, better scheduling primitives will be available through the same hardware plugin APIs.
When receiving a pod which requests GPUs (through the ResourceClass API) kubelet will be in charge of deciding which device to assign and advertising the corresponding changes to the node's allocatable pool and pod assigned devices.
`Allocatable` is not `Free` or `Available`.
Allocatable represents the resources of a node that are available for scheduling. I don't really understand what you are saying here.
Allocatable = Capacity - (resources reserved for system daemons).
Allocatable != Capacity - (resources allocated to pods), which is what I refer to as `Available`.
requests.

The typical process will follow this pattern:
1. A user submits a pod spec requesting X devices with Y properties
Hardware plugins can in theory work without Resource Classes. So leave it out.
Hmm, you've now stated this twice -- so is that how you intend to consume devices, then? Without ResourceClasses?
@jeremyeder from an abstractions standpoint, I think the hardware plugin interface has its own versioned APIs. The devices themselves may be exposed as ResourceClasses version `foo`. But that version `foo` is independent of hardware plugin interface version `bar`.
2. The scheduler filters the nodes which do not match the requests
3. The pod lands on the node
4. Kubelet decides which device should be assigned to the pod
5. Kubelet removes the devices from the `Allocatable` pool
`Allocatable` is not `Available`.
3. The pod lands on the node
4. Kubelet decides which device should be assigned to the pod
5. Kubelet removes the devices from the `Allocatable` pool
6. Kubelet updates the pod's status with the allocated devices
Why?
Users can know which devices were assigned to the pod. It can also be useful for monitoring :)
```go
service DeviceManager {
    rpc Discover() returns (stream ResourceType);
```
There need to be separate API types for this interface. Ideally, the public k8s APIs should not be re-used here, to keep clean, modular, separate abstractions.
```go
    rpc Deallocate(stream ResourceType) returns (Error);
}

enum HealthStatus {
```
Prefer strings to enums.
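A minimal illustration of that suggestion, using a hypothetical `Device` message: an open string set lets new health states be added without a breaking proto change, whereas an enum pins the set of values at compile time:

```go
message Device {
    string name = 1;
    // e.g. "Healthy" or "Unhealthy"; unknown values can be
    // passed through by older clients instead of failing.
    string health = 2;
}
```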
When calling `Allocate` or `Deallocate`, only the `Name` field needs to be set.

```go
service DeviceManager {
```
How will these APIs be versioned?
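One plausible answer, offered here only as a sketch: version the proto package itself, the way CRI does, so the version becomes part of every gRPC method path and the kubelet can negotiate it with each plugin:

```go
// The package name is encoded in each method path,
// e.g. /deviceplugin.v1alpha1.DeviceManager/Discover,
// so the kubelet can serve several versions side by side.
package deviceplugin.v1alpha1;
```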
### Device Plugin
What is the deployment strategy for device plugins? Shipping more binaries directly is a non-starter because that will require adding the binaries to most of the popular k8s deployment solutions.
@RenaudWasTaken I did a first pass. Ping me once you have addressed the comments.
cc @urzds
It is defined as follows:

```golang
type ResourceType struct {
```
I wonder if this struct would benefit from a little more structure, i.e. using the common Class, Vendor, Device fields known from PCI devices.
Can you elaborate more on that?
Sure - I just wonder if adding more specific details about devices (i.e. bus type, vendor id, class id, device id) to the struct would help. These fields are pretty stable, and allow pretty tight control when selecting/requiring a specific device.
E.g. in KubeVirt we are working on offering device pass-through; here tight control is needed to ensure that guests will always see the very same device, to prevent e.g. driver reinstallation or reactivation of the OS.
The bottom line is that explicit numeric class, vendor, and device ids might be more reliable than free-form text for identifying a specific device.
I agree with @fabiand, but: could adding those fields limit us to PCI devices?
We could still make them optional.
Yeah - That's an issue - tying us too much to PCI/USB devices (where bus, class, vendor, device would work).
@fabiand wouldn't these fields fit in the property map?
@RenaudWasTaken valid point, they could fit in there.
But it will obviously be less formal, and thus its value will be much lower; i.e. other components could not rely on these fields/properties being present.
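To illustrate the trade-off under discussion, a sketch of the struct with an optional, structured identity alongside the free-form properties; all field names here are hypothetical:

```golang
type ResourceType struct {
	// Free-form, bus-agnostic description.
	Name       string
	Properties map[string]string

	// Optional structured identity for buses that have one;
	// nil for devices that are not PCI.
	PCI *PCIIdentity
}

type PCIIdentity struct {
	VendorID uint16 // e.g. 0x10de for NVIDIA
	DeviceID uint16
	ClassID  uint16
}
```

Keeping the structured fields optional avoids tying the API to PCI/USB while still giving components like KubeVirt something formal to match on.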
7. Kubelet calls `Allocate` on the matching Device Plugins
8. The user deletes the pod or the pod terminates
9. Kubelet calls `Deallocate` on the matching Device Plugins
10. Kubelet puts the devices back in the `Allocatable` pool
I assume it should be `Available` here as well.
## Use Cases

* I want to use a particular device type (GPU, InfiniBand, FPGA, etc.) in my pod.
* I should be able use that device without writing custom Kubernetes code.
nit: "be able to use that device"
Because the current API (Capacity, Allocatable) can not be extended to support ResourceType, we will need to create two new attributes in the NodeStatus structure: `CapacityV2` and `AllocatabeV2`:
AllocatableV2
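For concreteness, a sketch of what the extended NodeStatus could look like; the V2 field types are assumptions, chosen only to show why a flat quantity map is insufficient:

```golang
type NodeStatus struct {
	// Existing fields: flat maps of resource name to quantity.
	Capacity    ResourceList
	Allocatable ResourceList

	// V2 fields carry structured device descriptions
	// (properties, health) that a quantity map cannot express.
	CapacityV2    []ResourceType
	AllocatableV2 []ResourceType
}
```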
Automatic merge from submit-queue (batch tested with PRs 50531, 50853, 49976, 50939, 50607)

Updated gRPC vendoring to support Keep Alive

**What this PR does / why we need it**: This PR bumps the vendored version of gRPC from v1.0.4 to v1.5.1. This is needed as part of the Device Plugin API, where we expect client and server to use the keep-alive feature in order to detect an error. Unfortunately I had to also bump the versions of `golang.org/x/text` and `golang.org/x/net`.
- Design document: kubernetes/community#695
- PR tracking: kubernetes/enhancements#368
**Special notes for your reviewer**: @vishh @jiayingz
**Release note**:
```
Bumped gRPC from v1.0.4 to v1.5.1
```
The gRPC version upgrade PR was rolled back due to some broken tests on other components. We may need to think about a backup plan on how to recover from failures. Re-thinking about this, I wonder whether it would be more efficient for kubelet to write a generation number at a canonical place that different plugin components, like device plugin and CSI, can watch to detect kubelet failures. @RenaudWasTaken @dchen1107 @vishh @derekwaynecarr @thockin Any thoughts on this?
I'd like to avoid adding more features just because of a bug in one of our vendored dependencies. We might be better off helping fix our vendored dependency (etcd in this case). Meanwhile, if we have to forsake health checks, that's OK IMHO since the feature is alpha.

If kubelet dies, the device plugin will re-register since the connection from kubelet will drop. If the device plugin dies, the kubelet connection is also expected to drop and kubelet can raise an event, for example. How long is the grpc timeout?
I might be missing something here.
The only failure that I can see that is solved by Keep Alive is a Device Plugin which would still hold a stream to Kubelet but be unresponsive. Did I forget an important failure case?
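For reference, a sketch of how a gRPC client (kubelet-side here) could enable keep-alive so that a peer holding a stream but no longer responding is detected; the interval and timeout values are illustrative assumptions, not something agreed in this thread:

```go
package devicemanager

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// dialPlugin connects to a device plugin socket with keep-alive
// enabled: the client pings periodically, and if no ack arrives
// within Timeout, the connection and its streams fail, which
// surfaces exactly the "connected but unresponsive" case above.
func dialPlugin(socket string) (*grpc.ClientConn, error) {
	return grpc.Dial(socket,
		grpc.WithInsecure(),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:    10 * time.Second, // ping interval while connected
			Timeout: 5 * time.Second,  // wait this long for the ack
		}),
	)
}
```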
On Tue, Aug 22, 2017 at 1:18 PM, Renaud Gaubert wrote:

> I might be missing something here. Keep alive was needed for two things:
> - Handling of a Kubelet crash from the device plugin: from what I understand and experimented, when writing on the stream, gRPC will fail the write. At this point the Device Plugin should re-register itself.

Vish and I had an offline discussion on this. We think we may rely on this and checkpointing for the 1.8 release. I.e., Kubelet will checkpoint device state information as well as device allocation information, and assume a fail-static policy.

> - Handling of a Device Plugin crash from the Kubelet: from what I understand, gRPC will always intercept this and close the stream.

This part I am not quite sure about, but I think for the 1.8 release we can rely on pod rescheduling and have Kubelet always take the newest connected device plugin.

Jiaying
Automatic merge from submit-queue (batch tested with PRs 51193, 51154, 42689, 51189, 51200)

Bumped gRPC version to 1.3.0

**What this PR does / why we need it**: This PR bumps the vendored version of gRPC down from v1.5.1 to v1.3.0. This is needed as part of the Device Plugin API, where we expect client and server to use the keep-alive feature in order to detect an error. Unfortunately I had to also bump the versions of `golang.org/x/text` and `golang.org/x/net`.
- Design document: kubernetes/community#695
- PR tracking: kubernetes/enhancements#368
**Which issue this PR fixes**: fixes #51099, which was caused by my previous PR updating to 1.5.1
**Special notes for your reviewer**: @vishh @jiayingz @shyamjvs
**Release note**:
```
Bumped gRPC to v1.3.0
```
Automatic merge from submit-queue (batch tested with PRs 51590, 48217, 51209, 51575, 48627)

Deviceplugin jiayingz

**What this PR does / why we need it**: This PR implements the kubelet Device Plugin Manager. It includes four commits implemented by @RenaudWasTaken and a commit that supports allocation.
- Design document: kubernetes/community#695
- PR tracking: kubernetes/enhancements#368
**Release note**: Extending Kubelet to support device plugin
Automatic merge from submit-queue

Device Plugin Design Proposal

Notes for reviewers: First proposal submitted to the community repo; please advise if something's not right with the format or procedure, etc. cc @vishh @derekwaynecarr