Port Windows device support from containerd #2079
Conversation
Signed-off-by: Adam Rehn <adam@adamrehn.com>
force-pushed from 94ad603 to 2eb25f1
LGTM for the implementation, but is it possible to add an integration test with a special device (e.g. …)?
So looking at the Devices in Containers on Windows page, I see the following devices can be exposed to containers:
Perhaps a COM port might be the best option? I don't think cloud VMs typically have one by default, but if we're able to modify the VM image used for the Cirrus CI workflow then we could potentially install a third-party driver that provides a virtual COM port and use that? (It's also worth noting that those docs are quite out of date compared to what's supported by newer versions of Windows, so experimentation might potentially reveal other types of devices that could be used instead. After all, that page still references the old …)
The Cirrus base image is windows-2019-core-for-containers, so we meet all the requirements to run one of these devices. I am wondering whether there is a built-in/specific device in the above list.
SGTM, if there is a way to do it from …
Oops, my mistake! I had incorrectly assumed that this was an HCS API convention due to seeing that format used within the hcsshim codebase, but looking again now, I can see that the only place it's actually used is within unit tests related to containerd (hence the use of the containerd/ctr format convention).
So looking at the Cirrus docs for the …
To clarify, I was trying to standardise to …. As it happens, Docker had not yet had a release supporting anything except …, so I'd lean towards avoiding supporting ….

It's a good call that now that Docker v23.0 is released, the MS docs can probably be updated to prefer the new syntax, and possibly demonstrate some of the other IDTypes, although the others tend to rely on system-local details that you look up with pnputil. Note that there was no matching change in the Docker CLI AFAIR, because it passes the string wholesale through the API, so all the parsing is on the moby/moby side.

So anyway, the current code seems fine to me on this basis (since I wrote an earlier version...); I'm just giving some background as to why, and giving context if you decide you want Docker-legacy-compat as well. (In which case, consider slurping code from either containerd-cri or moby, since both do the same splitting but also recognise ….)

For integration tests, you could consider the trick I used in containerd (or moby... I forget which), which is that when activating the DirectX device class, even if no dedicated GPU is present, a magic host-bind-mount is created. That's where this nonsense came from, and it proved able to run integration tests on the random Azure and GitHub Actions VMs that containerd test suites get run on. If you check …

We also had a discussion somewhere (probably in that PR's history, but it might have been in the matching moby/moby PR; whenever it was that MS's container device expert noticed what I was up to and suggested we find a not-relying-on-undocumented-internal-behaviour solution -- Edit: Found the thread.) about using pnputil or similar to detect mapped device classes or individual devices. However, pnputil isn't present in nanoserver images, from memory. So we filed the host-mount trick as "good enough" to merge, since it was already true for every released Windows version we cared about, and replacing it was still a "someday" idea for that team.

There's also this little toy I was putting together to use in integration tests of this code, before I realised the directory-mount trick would be simpler. Something like that, in a dedicated image, might be usable and extendable for all of the various things that need to build integration tests for Windows devices, perhaps. But I don't have time to work on it at the (long) moment.

If you want to look at other IDType values for tests, I believe that the containerd-cri test suite's …
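To make the dual-format splitting above concrete, here is a minimal sketch; the helper name `parseWindowsDevice`, the exact legacy-prefix handling, and the defaulting behaviour are assumptions for illustration, not the actual moby or containerd-cri code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseWindowsDevice is a hypothetical helper illustrating the splitting
// discussed above: the new "IDType://ID" syntax plus, optionally, the
// Docker-legacy "class/GUID" form.
func parseWindowsDevice(s string) (string, string) {
	if idType, id, ok := strings.Cut(s, "://"); ok {
		return idType, id // new syntax, e.g. "class://<GUID>"
	}
	if id, ok := strings.CutPrefix(s, "class/"); ok {
		return "class", id // Docker-legacy syntax, e.g. "class/<GUID>"
	}
	return "class", s // bare ID: assume the "class" IDType as a default
}

func main() {
	// Both forms resolve to the same (IDType, ID) pair.
	fmt.Println(parseWindowsDevice("class://5B45201D-F2F2-4F3B-85BB-30FF1F953599"))
	fmt.Println(parseWindowsDevice("class/5B45201D-F2F2-4F3B-85BB-30FF1F953599"))
}
```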
@TBBle thanks for the additional context and advice about testing this functionality! So I performed some tests on a GCE instance, and I was indeed able to expose the COM ports to a Windows container. The only problem was that the …

In the end, I decided that the simplest option was to follow @TBBle's example by specifying the device interface class for GPUs and testing for the presence of the …
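For the shape of that test, a rough sketch follows, assuming nerdctl's `testutil` helpers (`testutil.NewBase`, `base.Cmd`, `testutil.CommonImage`) and borrowing the GPU device interface class GUID and the `HostDriverStore` path from the containerd test referenced above; as noted in the thread, that mount is undocumented HCS behaviour and could change:

```go
package main_test // sketch; in nerdctl this would sit alongside the other Windows tests

import (
	"testing"

	"github.com/containerd/nerdctl/pkg/testutil"
)

// Sketch only: the GUID and HostDriverStore path are borrowed from the
// containerd test this thread references; both rely on undocumented
// behaviour and may break in a future Windows release.
func TestRunContainerWithGPUDeviceClass(t *testing.T) {
	testutil.DockerIncompatible(t)
	base := testutil.NewBase(t)
	// Exposing the GPU device interface class causes HCS to bind-mount the
	// host driver store into the container, even when no dedicated GPU exists.
	base.Cmd("run", "--rm", "--isolation=process",
		"--device", "class://5B45201D-F2F2-4F3B-85BB-30FF1F953599",
		testutil.CommonImage,
		"cmd", "/c", "dir", `C:\Windows\System32\HostDriverStore`,
	).AssertOK()
}
```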
force-pushed from 405d70d to d361417
Thanks
Signed-off-by: Adam Rehn <adam@adamrehn.com>
force-pushed from d361417 to 3effa44
CI is failing after 5 retries :(. I believe the cause is not related to your changes.
Yeah, the CI failures all appear to be rootless Linux tests, and the changes in this PR only affect Windows hosts, so I guess something else must be going on there.
Port Windows device support from containerd
Signed-off-by: Nashwan Azhari <nazhari@cloudbasesolutions.com>
```go
// Microsoft could decide to change in the future (breaking both this unit test and the one in containerd itself):
// https://github.com/containerd/containerd/pull/6618#discussion_r823302852
func TestRunProcessContainerWithDevice(t *testing.T) {
	testutil.DockerIncompatible(t)
```
Why incompatible?
I didn't notice that line earlier, but now that I look, every test in this file appears to be `DockerIncompatible`, and some appear to be needlessly so (like this one and `TestRunProcessIsolated`, which are both trivially "run container with a Docker-compatible command line, and check stdout" tests). Is there something in the underlying test suite that makes running against a Windows Docker build inherently fail?

`container_run_user_windows_test.go` doesn't seem to have this limitation though, so perhaps this is a long-ago copy-and-paste oversight that has kept propagating from a test which did use Docker-incompatible features?
Yeah, I wasn't 100% sure if that line was actually necessary. I included it because it was present in `TestRunProcessIsolated`, despite the absence of any Docker-incompatible flags. I figured that the same compatibility constraints apply to the new test as to the existing test, but if that line is erroneous in the existing test then I've effectively just duplicated that mistake.
(Contributes to the high-level goal of supporting Windows containers, as outlined in #28.)
This change adds support for exposing host devices to Windows containers using the `--device` flag with `nerdctl run`. The code itself is directly ported from containerd, where it was first contributed by @TBBle in #6618 (relevant diff lines here), and subsequently modified by @thaJeztah to use `strings.Cut()` in commit eaedadbe. My only unique contribution here is to update the command reference and adorn the `--device` flag with the little blue square which denotes Windows support. 😆

It's worth noting that I haven't added any unit tests to cover this functionality. Most of the existing Windows unit tests appear to run containers and check their output, but I don't know of a decent way to test this without requiring that a GPU be present on the underlying host system. I can see that Windows unit tests are currently run as part of the Cirrus CI workflow, but I suspect that the cloud VMs being used for the tests do not have GPUs, and any test requiring one would fail. If a simpler kind of unit test would be desirable (e.g. one that just tests the parsing behaviour for device `IDType://ID` values without actually running a container) then I would certainly be happy to write one.
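If that parsing-only unit test is wanted, a sketch might look like the following; it exercises `strings.Cut` directly rather than assuming the name or signature of any nerdctl-internal helper, and the non-`class` IDType in the table is purely illustrative:

```go
package main_test

import (
	"strings"
	"testing"
)

// Sketch of a parsing-only unit test for the "IDType://ID" convention,
// checking the split itself without running a container.
func TestDeviceStringSplitting(t *testing.T) {
	for _, tc := range []struct {
		input      string
		wantIDType string
		wantID     string
	}{
		// "class" is the device interface class IDType used throughout this PR.
		{"class://5B45201D-F2F2-4F3B-85BB-30FF1F953599", "class", "5B45201D-F2F2-4F3B-85BB-30FF1F953599"},
		// Any other IDType should pass through the split unchanged.
		{"sometype://some-id", "sometype", "some-id"},
	} {
		idType, id, ok := strings.Cut(tc.input, "://")
		if !ok || idType != tc.wantIDType || id != tc.wantID {
			t.Errorf("Cut(%q) = (%q, %q, %v), want (%q, %q, true)",
				tc.input, idType, id, ok, tc.wantIDType, tc.wantID)
		}
	}
}
```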