
Device Plugin Design Proposal #695

Merged: 4 commits into kubernetes:master on Aug 18, 2017

Conversation

RenaudWasTaken

Notes for reviewers

First proposal submitted to the community repo, please advise if something's not right with the format or procedure, etc.

cc @vishh @derekwaynecarr

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 7, 2017
@derekwaynecarr
Member

I will try to take a pass at this before the next resource-mgmt-wg meeting.

In order to solve this problem it is obvious that we need a plugin system in
order to have vendors advertise and monitor their resources on behalf of Kubelet.

Additionally, we introduce the concept of ResourceType to be able to select
Contributor

I'd recommend keeping this proposal just about how kubelet will interact with third party hardware devices. This itself is an API. Let's deal with the k8s APIs as part of resource classes to avoid a deluge of comments in this proposal.

Member

agreed -- resource class will be a separate proposal altogether.

Author: @RenaudWasTaken, Jun 7, 2017

I understand how it might be a better strategy, will do


* I want to use a particular device type (GPU, InfiniBand, FPGA, etc.) in my pod.
* I should be able to use that device without writing custom Kubernetes code.
* I want some mechanism to give priority among pods for particular devices on my node.
Contributor

This is not a goal for this proposal, which is about hardware device plugins.

## Use Cases

* I want to use a particular device type (GPU, InfiniBand, FPGA, etc.) in my pod.
* I should be able to use that device without writing custom Kubernetes code.
Contributor

I'd add: I want a consistent and portable solution for consuming hardware devices across k8s clusters.


## Objectives

1. Create a plugin mechanism which allows discovery and monitoring of devices
Contributor

I'd add installation of appropriate drivers too, similar to storage plugins. K8s APIs now work on top of vanilla Linux. I don't want to introduce third-party driver dependencies which make it hard to consume the k8s API across deployments.

Author: @RenaudWasTaken, Jun 7, 2017

This is out of scope and was defined out of scope in the meeting notes :)

Contributor

Our knowledge of the containers and GPU landscape has increased since the last summit.
If you think you cannot handle driver installation, state why in this proposal.
It's hard for me to imagine every other hardware vendor in the world having the same set of restrictions with drivers like Nvidia does currently.


Coming from a hypervisor background, I also doubt that the installation of drivers should be part of this proposal, to have a chance to keep it simple and not get derailed.


I'm wondering if kubernetes/kubernetes#46597 could assist here

## Objectives

1. Create a plugin mechanism which allows discovery and monitoring of devices
2. Change the API to support ResourceClass and ResourceType
Contributor: @vishh, Jun 7, 2017

As mentioned above, keep this proposal about hardware device interface.


## Non Objectives

1. Solving the runtime is out of scope
Contributor: @vishh, Jun 7, 2017

Why? k8s users care about the overall solution, not bits and pieces of the puzzle.

Author

It's actually handled by returning a CRI spec in Allocate.
I forgot to remove this.

Contributor

I'd recommend not reusing the CRI spec. Instead explicitly define a separate API for the hardware plugin interface.


@vishh Why not reuse the CRI spec? If we don't, doesn't that mean we would have to create a new API that handles adding mounts, device cgroups, volumes, and possibly hooks?


### Device Plugin

Kubelet will create a gRPC connection (as a client) to the Device Plugins whose endpoint
Contributor

Flags are going away. The plugins have to register themselves with the kubelet and then kubelet will start interacting with the plugins. This requires a two way communication channel (kubelet is both server and client).
How kubelet auth will be tackled should also be dealt with here.
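A sketch of what such a registration channel could look like, following the proto-in-Go-fence convention used elsewhere in this proposal. The service, rpc, and field names below are hypothetical, not taken from the proposal text:

```go
// Hypothetical registration service that the kubelet would serve.
// The plugin dials the kubelet first and announces itself; the kubelet
// then dials back as a client of the plugin, giving the two-way channel
// described above.
service PluginRegistration {
    // Register is called once by a device plugin at startup.
    rpc Register(RegisterRequest) returns (Empty) {}
}

message RegisterRequest {
    // Version of the device plugin API the plugin was built against.
    string version = 1;
    // Unix socket (under a well-known kubelet directory) the plugin serves on.
    string endpoint = 2;
    // Resource the plugin advertises, e.g. "vendor-domain/gpu".
    string resource_name = 3;
}
```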

Kubelet will create a gRPC connection (as a client) to the Device Plugins whose endpoint
were specified on the CLI.

The kubelet will call the `Discover` function once and then listen to a stream returned
Contributor

I expected the hardware plugins to expose a List and Watch API similar to the k8s API server. In fact the k8s API server is getting refactored and so the plugin can use the k8s API server library.

Contributor

Actually ignore my comment on re-using the API server machinery. It's meant for declarative stateful servers. The hardware plugins interface is an imperative API.
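For concreteness, the Discover-once-then-Monitor flow quoted above might look roughly like this on the kubelet side. This is a minimal sketch: the `pb` package, the `Empty` message, and the `registerDevice`/`updateDeviceHealth` helpers are assumptions, not part of the proposal.

```go
import (
	"context"
	"fmt"
	"io"
)

// watchPlugin drains the Discover stream once, then blocks on the
// Monitor stream for health updates, as described in the proposal.
func watchPlugin(ctx context.Context, client pb.DeviceManagerClient) error {
	dstream, err := client.Discover(ctx, &pb.Empty{})
	if err != nil {
		return fmt.Errorf("discover: %v", err)
	}
	for {
		dev, err := dstream.Recv()
		if err == io.EOF {
			break // the plugin has advertised all of its devices
		}
		if err != nil {
			return err
		}
		registerDevice(dev)
	}

	mstream, err := client.Monitor(ctx, &pb.Empty{})
	if err != nil {
		return fmt.Errorf("monitor: %v", err)
	}
	for {
		update, err := mstream.Recv()
		if err != nil {
			return err // a broken stream is treated as a plugin failure
		}
		updateDeviceHealth(update)
	}
}
```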

The kubelet will call the `Discover` function once and then listen to a stream returned
by the `Monitor` function.

When receiving a pod which requests GPUs (through the ResourceClass api) kubelet will
Contributor

ResourceClasses aren't necessary. For example, kubelet can decide to expose devices with its own name mapping, which might break portability though. I'd say devices should be requested through resource names in the Pod Spec.

Author

I'm not sure I understand you here

Contributor

ResourceClasses provide better scheduling primitives than what is available today.
With resource classes, you can request a GPU with specific attributes; without them you will get any GPU.
The hardware plugin interface should ideally work with the current resource APIs even if it were to expose additional attributes to enable ResourceClasses in the future.
Once ResourceClasses are introduced, better scheduling primitives will be available through the same hardware plugin APIs.


When receiving a pod which requests GPUs (through the ResourceClass api) kubelet will
be in charge of deciding which device to assign and advertising the corresponding changes
to the node's allocatable pool and pod assigned devices.
Contributor

Allocatable is not Free or Available.


Contributor

Allocatable = Capacity - (resources reserved for system daemons)

Allocatable != Capacity - (resources allocated to pods) which is what I refer to as Available.
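In rough arithmetic form, a toy illustration of the distinction (these are not actual kubelet fields):

```go
// nodeBudget illustrates Allocatable vs Available for a single device
// type, using plain integers.
func nodeBudget(capacity, systemReserved, allocatedToPods int) (allocatable, available int) {
	allocatable = capacity - systemReserved   // fixed budget exposed to the scheduler
	available = allocatable - allocatedToPods // shrinks and grows as pods come and go
	return
}
```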


The typical process will follow this pattern:
1. A user submits a pod spec requesting X devices with Y properties
Contributor

Hardware plugins can in theory work without Resource Classes. So leave it out.

Contributor

Hmm, you've now stated this twice -- so is that how you intend to consume devices, then? Without ResourceClasses?

Contributor

@jeremyeder from an abstractions standpoint, I think the hardware plugin interface has its own versioned APIs. The devices themselves may be exposed as ResourceClasses version foo. But that version foo is independent of hardware plugin interface version bar.

2. The scheduler filters the nodes which do not match the requests
3. The pod lands on the node
4. Kubelet decides which device should be assigned to the pod
5. Kubelet removes the devices from the `Allocatable` pool
Contributor

Allocatable is not Available.

3. The pod lands on the node
4. Kubelet decides which device should be assigned to the pod
5. Kubelet removes the devices from the `Allocatable` pool
6. Kubelet updates the pod's status with the allocated devices
Contributor

Why?

Author

So the user can know which devices were assigned to the pod. It can also be useful for monitoring :)


```go
service DeviceManager {
    rpc Discover() returns (stream ResourceType);
```
Contributor

There need to be separate API types for this interface. The public k8s APIs should ideally not be re-used here, to keep clean, modular, separate abstractions.

```go
    rpc Deallocate(stream ResourceType) returns (Error)
}
```

```go
enum HealthStatus {
```
Contributor

Prefer strings to enums.

When calling Allocate or Deallocate, only the Name field needs to be set.

```go
service DeviceManager {
```
Contributor

How will these APIs be versioned?

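Given the client-streaming shape of `Allocate` above, a kubelet-side call under the only-Name-is-set rule might look like the sketch below. The `pb` package and the surrounding names are assumptions, not defined by the proposal.

```go
import "context"

// allocate sends one ResourceType per assigned device, with only Name
// populated, then closes the stream and reads the plugin's response.
func allocate(ctx context.Context, client pb.DeviceManagerClient, names []string) error {
	stream, err := client.Allocate(ctx)
	if err != nil {
		return err
	}
	for _, name := range names {
		if err := stream.Send(&pb.ResourceType{Name: name}); err != nil {
			return err
		}
	}
	_, err = stream.CloseAndRecv() // surfaces the plugin's Error, if any
	return err
}
```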


### Device Plugin
Contributor

What is the deployment strategy for device plugins? Shipping more binaries directly is a non-starter because that will require adding the binaries to most of the popular k8s deployment solutions.

@vishh (Contributor) commented Jun 7, 2017

@RenaudWasTaken I did a first pass. Ping me once you have addressed the comments.

@cmluciano

cc @urzds



It is defined as follows:

```golang
type ResourceType struct {
```

I wonder if this struct would benefit from a little more structure, i.e. using the common Class, Vendor, Device fields known from PCI devices.

Author

Can you elaborate on that?


Sure - I just wonder about adding more specific details about devices (i.e. bus type, vendor ID, class ID, device ID) to the struct. These fields are pretty stable, and they allow pretty tight control when selecting/requiring a specific device.
E.g. in KubeVirt we are working on offering device pass-through; here tight control is needed to ensure that guests will always see the very same device, to prevent e.g. driver reinstallation or reactivation of the OS.

The bottom line is that explicit numeric class, vendor, and device IDs might be more reliable than free-form text for identifying a specific device.


I agree with @fabiand, but: could adding those fields limit us to PCI devices?
We could still make them optional.


Yeah - that's an issue - tying us too much to PCI/USB devices (where bus, class, vendor, device would work).

Author

@fabiand wouldn't these fields fit in the property map?

@fabiand, Jun 13, 2017

@RenaudWasTaken valid point, they could fit in there.

But it would obviously be less formal, and thus its value would be much lower; i.e., other components could not rely on these fields/properties being present.
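One hedged way to reconcile the two positions above: keep the free-form map, but document well-known, optional keys for devices that do have stable identifiers. A sketch; the field and key names are illustrative, not from the proposal:

```go
// ResourceType with a vendor-defined property map, as in the proposal,
// plus a documented convention for optional PCI/USB identifier keys.
type ResourceType struct {
	Name       string            // e.g. "nvidia-gpu"
	Properties map[string]string // free-form, vendor-defined attributes
}

// Well-known optional keys, set only when the device sits on a bus
// that provides stable numeric identifiers:
//
//	"bus"          = "pci"
//	"pci.vendorID" = "10de" // numeric vendor ID, hex
//	"pci.deviceID" = "1db1"
//	"pci.classID"  = "0302"
```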

7. Kubelet calls `Allocate` on the matching Device Plugins
8. The user deletes the pod or the pod terminates
9. Kubelet calls `Deallocate` on the matching Device Plugins
10. Kubelet puts the devices back in the `Allocatable` pool

I assume it should be Available here as well.
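Putting steps 4 through 10 together, the kubelet-side lifecycle might look like the sketch below. It uses an `available` pool per the correction above; the manager type, `Pod` shape, and helpers are assumptions, not part of the proposal.

```go
import "context"

// admitPod covers steps 4-7: pick devices, remove them from the free
// pool, record the assignment in the pod status, and call Allocate.
func (m *deviceManager) admitPod(ctx context.Context, pod *Pod) error {
	devs := m.pickDevices(pod)          // step 4
	m.available.Remove(devs...)         // step 5 (Available, not Allocatable)
	pod.Status.AssignedDevices = devs   // step 6
	return m.plugin.Allocate(ctx, devs) // step 7
}

// reapPod covers steps 8-10: on pod deletion or termination, call
// Deallocate and return the devices to the free pool.
func (m *deviceManager) reapPod(ctx context.Context, pod *Pod) {
	devs := pod.Status.AssignedDevices
	m.plugin.Deallocate(ctx, devs) // step 9
	m.available.Add(devs...)       // step 10
}
```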

## Use Cases

* I want to use a particular device type (GPU, InfiniBand, FPGA, etc.) in my pod.
* I should be able use that device without writing custom Kubernetes code.


nit: "be able to use that device"


Because the current API (Capacity, Allocatable) can not be extended to
support ResourceType, we will need to create two new attributes in the NodeStatus
structure: `CapacityV2` and `AllocatabeV2`:


AllocatableV2
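A sketch of how those two fields might sit in NodeStatus; the surrounding layout is assumed, since the proposal only names the fields:

```go
// NodeStatus extension sketch: the v1 maps stay for compatibility,
// while the V2 fields carry full ResourceType entries with attributes.
type NodeStatus struct {
	// ... existing fields, including the v1 Capacity and Allocatable maps ...

	// CapacityV2 lists every device present on the node.
	CapacityV2 []ResourceType
	// AllocatableV2 lists the devices pods are allowed to consume.
	AllocatableV2 []ResourceType
}
```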

@RenaudWasTaken (Author) commented Jun 8, 2017

What I did:

  • Removed ResourceClass from the proposal
  • Added a client/server registration process for Device Plugins with Kubelet
  • Added a deployment "proposal" (through a DaemonSet; see the sketch after this comment)
  • Added the Device type to the protobuf API to address concerns about modularity

What I did not do:

  • Address driver installation, which I feel should be addressed in a follow-up PR (in the same way ResourceClass will be); for now it's best if we assume the machine is provisioned with the drivers
  • API versioning, coming in the next version
  • CRI: every vendor might want to change its own part of the spec; having our own protobuf type only means:
    • Every time a vendor needs new functionality they will have to submit a PR
    • Depend on the latest version of k8s which implements the PR
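The DaemonSet deployment mentioned in the first list could look roughly like the client-go sketch below. The image name, labels, and socket path are hypothetical; it assumes appsv1 "k8s.io/api/apps/v1", corev1 "k8s.io/api/core/v1", and metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".

```go
// One plugin pod per node, with the kubelet's plugin-socket directory
// mounted from the host so the plugin can register itself.
func pluginDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"name": "vendor-device-plugin"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "vendor-device-plugin"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "plugin",
						Image: "vendor/device-plugin:latest",
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "plugin-sockets",
							MountPath: "/var/lib/kubelet/device-plugins",
						}},
					}},
					Volumes: []corev1.Volume{{
						Name: "plugin-sockets",
						VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{
								Path: "/var/lib/kubelet/device-plugins",
							},
						},
					}},
				},
			},
		},
	}
}
```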

@RenaudWasTaken RenaudWasTaken force-pushed the master branch 2 times, most recently from f58499c to 83517b9 Compare June 8, 2017 23:07
@dchen1107 (Member)

I am approving this to unblock progress. We agreed upon the current design as the alpha support for the device plugin; thanks for the detailed reviews by @jiayingz and @vishh above.

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 18, 2017
@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot k8s-github-robot merged commit bf60571 into kubernetes:master Aug 18, 2017
tjfontaine pushed a commit to oracle/kubernetes that referenced this pull request Aug 21, 2017
Automatic merge from submit-queue (batch tested with PRs 50531, 50853, 49976, 50939, 50607)

Updated gRPC vendoring to support Keep Alive

**What this PR does / why we need it**:

This PR bumps the vendored version of gRPC from v1.0.4 to v1.5.1.
This is needed as part of the Device Plugin API where we expect client and server to use the Keep alive feature in order to detect an error.

Unfortunately I had to also bump the version of `golang.org/x/text` and `golang.org/x/net`.

- Design document: kubernetes/community#695
- PR tracking: kubernetes/enhancements#368

**Special notes for your reviewer**:
@vishh @jiayingz 

**Release note**:
```
Bumped gRPC from v1.0.4 to v1.5.1
```
@jiayingz (Contributor)

The gRPC version upgrade PR was rolled back due to some broken tests in other components. We may need to think about a backup plan for how to recover from failures. Re-thinking this, I wonder whether it would be more efficient for kubelet to write a generation number at a canonical place that different plugin components, like device plugins and CSI, can watch to detect kubelet failures.

@RenaudWasTaken @dchen1107 @vishh @derekwaynecarr @thockin Any thoughts on this?
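A minimal sketch of that idea from the plugin side, assuming the kubelet writes a counter to a well-known file on every restart; the path, interval, and helper are hypothetical:

```go
import (
	"io/ioutil"
	"strings"
	"time"
)

// watchKubeletGeneration polls the canonical file and triggers
// re-registration whenever the generation number changes.
func watchKubeletGeneration(path string, reRegister func()) {
	var last string
	for {
		if data, err := ioutil.ReadFile(path); err == nil {
			gen := strings.TrimSpace(string(data))
			if last != "" && gen != last {
				reRegister() // kubelet restarted: announce this plugin again
			}
			last = gen
		}
		time.Sleep(5 * time.Second)
	}
}
```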

@vishh (Contributor) commented Aug 22, 2017 via email

@RenaudWasTaken (Author) commented Aug 22, 2017

I might be missing something here.
Keep alive was needed for two things:

  • Handling of a Kubelet crash from the device plugin:
    • From what I understand and experimented with, when the device plugin writes on the health-check
      stream, gRPC will fail the write.
      At this point the device plugin should re-register itself.
  • Handling of a device plugin crash from the Kubelet:
    • From what I understand, gRPC will always intercept this and close the stream.
      At this point Kubelet removes the device plugin from its endpoint list (current implementation).

The only failure that I can see that is solved by Keep Alive is a device plugin which still holds a stream to Kubelet but is unresponsive.

Did I forget an important failure case?
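For reference, the keep-alive behavior discussed here is configured on the kubelet's client connection with gRPC's keepalive options. A sketch with illustrative values; it assumes "google.golang.org/grpc", "google.golang.org/grpc/keepalive", and "time", and `pluginEndpoint` is a placeholder:

```go
// Periodic pings let the kubelet detect exactly the case described
// above: a plugin that still holds the stream but has gone unresponsive.
conn, err := grpc.Dial(pluginEndpoint,
	grpc.WithInsecure(), // plugins are reached over a local Unix socket
	grpc.WithKeepaliveParams(keepalive.ClientParameters{
		Time:                10 * time.Second, // ping when the link is idle
		Timeout:             5 * time.Second,  // declare failure if no ack
		PermitWithoutStream: true,             // probe even with no open RPC
	}),
)
```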

@jiayingz (Contributor) commented Aug 22, 2017 via email

k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this pull request Aug 24, 2017
Automatic merge from submit-queue (batch tested with PRs 51193, 51154, 42689, 51189, 51200)

Bumped gRPC version to 1.3.0

**What this PR does / why we need it**:

This PR bumps the vendored version of gRPC down from v1.5.1 to v1.3.0.
This is needed as part of the Device Plugin API where we expect client and server to use the Keep alive feature in order to detect an error.

Unfortunately I had to also bump the version of `golang.org/x/text` and `golang.org/x/net`.

- Design document: kubernetes/community#695
- PR tracking: kubernetes/enhancements#368

**Which issue this PR fixes**: fixes #51099, which was caused by my previous PR updating to v1.5.1.

**Special notes for your reviewer**:
@vishh @jiayingz @shyamjvs

**Release note**:
```
Bumped gRPC to v1.3.0
```
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this pull request Sep 2, 2017
Automatic merge from submit-queue (batch tested with PRs 51590, 48217, 51209, 51575, 48627)

Deviceplugin jiayingz

**What this PR does / why we need it**:
This PR implements the kubelet Device Plugin Manager.
It includes four commits implemented by @RenaudWasTaken and a commit that supports allocation.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Design document: kubernetes/community#695
PR tracking: kubernetes/enhancements#368

**Special notes for your reviewer**:

**Release note**:

```release-note
Extending Kubelet to support device plugin
```