RFC: supporting third-party network stack such as TLDK #9266
Comments
Very interested in making this happen. Thinking of this as separate sub-issues:
Please let me know what you think. Also happy to discuss your specific setup in email/chat/wherever if that's easier.
@kevinGC Among those sub-issues, the core one is CGO.
If we understand it correctly, pure Go needs the decision to be made at compile time. Is there a conditional compilation mechanism in gVisor's bazel setup?
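(For illustration: the closest thing in pure Go itself is build constraints, which make the choice at compile time. The sketch below is a hypothetical example of how a TLDK-gated source file could look; it is not gVisor's actual mechanism, and the discussion below ends up favoring separate bazel targets instead.)

```go
//go:build tldk

// netstack_tldk.go: compiled only when the "tldk" build tag is set,
// e.g. `go build -tags tldk`. A sibling file guarded by
// `//go:build !tldk` would wire up the default netstack instead.
// All names here are hypothetical, for illustration only.
package boot

// networkStackName reports which stack this binary was built with.
func networkStackName() string { return "tldk" }
```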
@kevinGC Thanks for your quick response; we are happy to discuss these sub-issues further. To answer sub-issue 3: in short, our stack is portable to other environments, for the reasons detailed below:
Yes, we can support multiple pods on a node; this works by using SR-IOV, which can create multiple NICs.
Our current implementation of gVisor with TLDK+DPDK places no special requirements on the NIC. As long as the NIC can be used as a virtio backend device, our TLDK solution works on it.
We do not use any CNI to set the TLDK stack up. Instead, we invoke the cgo wrapper to initialize the TLDK stack while gVisor runs StartRoot().
Yes, non-gVisor pods can run alongside gVisor-with-TLDK pods in the same environment.
We've never discussed the CGO interface on its own, i.e. with something other than C being called into. But my first take is that the following would work:

```bzl
go_binary(
    name = "runsc-tldk",
    srcs = ["main.go"],
    pure = False,
    visibility = [
        "//visibility:public",
    ],
    deps = [
        "@dev_gvisor//runsc/cli",
        "@dev_gvisor//runsc/version",
        "//my/codebase/tldk:runsc_plugin",
    ],
)
```

This yields a few benefits:
@tanjianfeng what do you think? Since you already have a third-party network stack, we want to hear what setup would work for you. If you have specific ideas in mind, we'd love to hear them. Once we have some agreement here, we can get others on board and actually make the changes.
Yes. Generally such tradeoffs are implemented but off by default. For example, raw sockets are implemented because people need tools like tcpdump, but they must be enabled via a flag. Since CGO introduces a security issue just by being present in the binary, we shouldn't compile it in by default.

@amysaq2023 that's super impressive that you're getting the benefits of kernel bypass without many of the traditional issues (e.g. machines being single-app only). A few more questions (if you can answer):
Thank you for your insightful suggestion on how to support TLDK while maintaining the high level of security in gVisor. We have an additional proposal to consider:
The nodes in the Redis benchmark are actual physical machines.
DPDK not only functions as a driver, but also offers various performance enhancements. For instance, it uses rte_ring for efficient communication with hardware and brings its own memory management with mbuf and mempool. Moreover, DPDK operates entirely at user level, completely detached from the host kernel, unlike XDP, which still relies on hooking into the host kernel. So the performance gain achieved with TLDK+DPDK goes beyond kernel bypass alone, benefiting from the improvements introduced by both TLDK and DPDK.
Agreed! Maybe you could send a PR with the interface you use now to work with TLDK -- that would be a really good starting point. Much better than trying to come up with an arbitrary API, given that you've got this running already.
Right, if I understand correctly, the cgo build process requires building the object file first, then writing a Go layer around it that calls into it using the tools cgo provides.
Can you help me understand why we couldn't just build a static binary containing gVisor and the third-party network stack? As part of the API we talked about above, gVisor can support registering third-party netstacks. So the third-party stack would contain an implementation of the API (socket ops like in your diagram), the cgo wrapper, the third-party stack itself, and an init function that registers the stack to be used instead of netstack:

```go
import (
	"gvisor.dev/gvisor/pkg/abi/linux"
	"gvisor.dev/gvisor/pkg/sentry/socket"
)

func init() {
	// RegisterThirdPartyProvider is a hypothetical registration hook.
	socket.RegisterThirdPartyProvider(linux.AF_INET, &tldkProvider)
	// etc..
}
```

This keeps everything building statically and avoids issues introduced by go plugins as far as I can tell, but maybe I'm missing something.
Something I should've been more clear about regarding the static binary idea: I'm suggesting that the existing, cgo-free runsc target stays as it is:

```bzl
go_binary(
    name = "runsc",
    srcs = ["main.go"],
    pure = True,
    tags = ["staging"],
    visibility = [
        "//visibility:public",
    ],
    x_defs = {"gvisor.dev/gvisor/runsc/version.version": "{STABLE_VERSION}"},
    deps = [
        "//runsc/cli",
        "//runsc/version",
    ],
)
```

And building runsc with a third-party network stack requires adding another target (which could be in the same BUILD file, a different one, or even a separate bazel project):

```bzl
go_binary(
    name = "runsc_tldk",
    srcs = ["main_tldk.go"],
    pure = False,
    tags = ["staging"],
    visibility = [
        "//visibility:public",
    ],
    x_defs = {"gvisor.dev/gvisor/runsc/version.version": "{STABLE_VERSION}"},
    deps = [
        "//runsc/cli",
        "//runsc/version",
        "//othernetstacks/tldk:tldk_provider",
    ],
)
```

Both targets can then coexist: the default runsc stays pure Go, and only the TLDK build links in cgo.
@kevinGC
Just want to check on this and see if there's anything I can do to help it along.
Hi Kevin, thanks for checking in. We have finished porting our TLDKv2 support onto the current gVisor master branch and are currently refactoring some of the implementation to make it more general. I think we are on the right track; it just needs a little more time due to the amount of code. If everything goes well, we will send out the patch next week.
Hey, back to see whether there's anything I can do to help here. We're really excited to try this out, benchmark, and see the effects on gVisor networking.
This commit adds network stack and socket interfaces for supporting an external network stack.

- pkg/sentry/stack: Interfaces for initializing the external network stack. It will be used in network setup during sandbox creation.
- pkg/sentry/socket/externalstack: Glue layer between the external stack's socket and stack ops and the sentry. It also registers the external stack operations when imported.
- pkg/sentry/socket/externalstack/cgo: Interfaces defined in C for the external network stack to support.

To build target runsc-external-stack, which imports the pkg/sentry/socket/externalstack package and enables CGO:

bazel build runsc:runsc-external-stack

By using the runsc-external-stack binary and setting the network type to external stack, users can use a third-party network stack instead of the netstack embedded in gVisor. This commit only sets up the interface template; the specific implementations for the external stack operations will be provided in follow-up commits.

Updates google#9266

Signed-off-by: Anqi Shen <amy.saq@antgroup.com>
@kevinGC
Thanks a TON. Just responded over there, but I want to ask about testing here. We'll want to test third-party netstacks. I'm thinking that what you're contributing will only be testable if we have a similar environment (DPDK and such). Is that correct?
Hi Kevin, happy to hear that you are exploring third-party netstack testing too. In the current version we're working on, once we complete the implementation of all the necessary glue layers, we will compile the TLDK repository into the binary. (This will become clearer when we share the socket ops glue layer for the plugin netstack in the next commit.) With this binary, you can easily test it with 'docker run' to start a container, just as the original runsc with the native netstack does.
This commit adds network stack and socket interfaces for supporting an external network stack.

- pkg/sentry/socket/externalstack: Interfaces for initializing the external network stack. It will be used in network setup during sandbox creation.
- pkg/sentry/socket/externalstack/wrapper: Glue layer between the external stack's socket and stack ops and the sentry. It also registers the external stack operations when imported.
- pkg/sentry/socket/externalstack/cgo: Interfaces defined in C for the external network stack to support.

To build target runsc-external-stack, which imports the pkg/sentry/socket/externalstack package and enables CGO:

bazel build runsc:runsc-external-stack

By using the runsc-external-stack binary and setting the network type to external stack, users can use a third-party network stack instead of the netstack embedded in gVisor. This commit only sets up the interface template; the specific implementations for the external stack operations will be provided in follow-up commits.

Updates google#9266

Signed-off-by: Anqi Shen <amy.saq@antgroup.com>
Hi @kevinGC, we have recently pushed our implementation of plugin network stack support to gVisor. You can now compile the runsc binary with support for the plugin stack by executing the following command:

bazel build --config=cgo-enable runsc:runsc-plugin-stack

We have conducted performance testing of gVisor when utilizing the plugin stack. We chose Redis as our benchmark and tested network performance under various conditions: 1. within runc; 2. within runsc with netstack on KVM; 3. within runsc with the plugin stack on KVM. The results are quite promising: the performance of runsc with the plugin stack closely rivals that of runc, delivering double the RPS of runsc with netstack. We have documented the detailed performance metrics in our commit log for your review. The current performance test was conducted with the software-implemented virtio-net backend, which is less optimized; performance can be further improved by using VF (SR-IOV) passthrough. Thanks for your continued support and patience throughout this development process. Your feedback on our design and implementation is greatly welcomed and appreciated.
This commit supports a third-party network stack as a plugin stack for gVisor. The overall plugin package structure is the following:

- pkg/sentry/socket/plugin: Interfaces for initializing the plugin network stack. It will be used in network setup during sandbox creation.
- pkg/sentry/socket/plugin/stack: Glue layer between the plugin stack's socket and stack ops and the sentry. It also registers the plugin stack operations when imported.
- pkg/sentry/socket/plugin/cgo: Interfaces defined in C for the plugin network stack to support.

To build target runsc-plugin-stack, which imports the pkg/sentry/socket/plugin/stack package and enables CGO:

bazel build --config=cgo-enable runsc:runsc-plugin-stack

By using the runsc-plugin-stack binary and setting "--network=plugin" in runtimeArgs, users can use a third-party network stack instead of the netstack embedded in gVisor to get better network performance.

Redis benchmark with the following setup:
1. KVM platform
2. 4 physical cores for the target pod
3. target pod as Redis server

Runc:
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 115207.38 requests per second, p50=0.215 msec
GET: 92336.11 requests per second, p50=0.279 msec
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 113895.21 requests per second, p50=0.247 msec
GET: 96899.23 requests per second, p50=0.271 msec
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 126582.27 requests per second, p50=0.199 msec
GET: 95969.28 requests per second, p50=0.271 msec

Runsc with plugin stack:
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 123915.74 requests per second, p50=0.343 msec
GET: 115473.45 requests per second, p50=0.335 msec
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 120918.98 requests per second, p50=0.351 msec
GET: 117647.05 requests per second, p50=0.351 msec
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 119904.08 requests per second, p50=0.367 msec
GET: 112739.57 requests per second, p50=0.375 msec

Runsc with netstack:
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 59952.04 requests per second, p50=0.759 msec
GET: 61162.08 requests per second, p50=0.631 msec
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 52219.32 requests per second, p50=0.719 msec
GET: 58719.91 requests per second, p50=0.663 msec
$redis-benchmark -h [target ip] -n 100000 -t get,set -q
SET: 59952.04 requests per second, p50=0.751 msec
GET: 60827.25 requests per second, p50=0.751 msec

Updates #9266

Co-developed-by: Tianyu Zhou <wentong.zty@antgroup.com>
Signed-off-by: Anqi Shen <amy.saq@antgroup.com>

FUTURE_COPYBARA_INTEGRATE_REVIEW=#9551 from amysaq2023:support-external-stack 56f2530
PiperOrigin-RevId: 677140616
Its value will be known only at the configuration phase; before that, it can be a select directive.

Updates #9266

PiperOrigin-RevId: 678288252
Its value will be known only at the configuration phase; before that, it can be a select directive.

Updates #9266

PiperOrigin-RevId: 678412518
#10954 starts running a minimal set of tests on buildkite. We need to add more tests; in the ideal case, we would run image tests and network-specific tests. alipay/tldk#4 needs to be merged first, otherwise tldk fails to build in the gvisor docker build container.
Description
As an application kernel, gVisor provides developers with the opportunity to build a lightweight pod-level kernel and allows for more agile development and deployment than the host kernel. To maximize the advantage of gVisor's flexibility, we propose an enhancement to its network module: a solution to support TLDK for better performance. We would also like to discuss whether there is a more general way to support other third-party network stacks such as Smoltcp, F-Stack, etc.
Our Implementation to support TLDK
Since cloud-native applications are highly sensitive to network performance, we have expanded gVisor to support a high-performance user-level network stack called TLDK. This has resulted in significantly better network I/O performance in certain scenarios.
To support the TLDK network stack, we need to enable CGO in gVisor, as TLDK is currently implemented in C. We then initialize the TLDK stack through a cgo wrapper, based on the network type specified in the container boot config, and set up the TLDK socket ops interface in gVisor. Subsequent network syscalls use gVisor's TLDK socket ops and invoke the TLDK socket operation implementations through the cgo wrapper.
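(A rough sketch of what such a cgo wrapper layer can look like is below. The C symbols in the preamble are hypothetical placeholders standing in for TLDK's real entry points, which are not part of this issue; linking would require the actual stack library.)

```go
// Package cgowrap sketches the glue between sentry socket ops and a
// C network stack. All C symbols here are hypothetical placeholders.
package cgowrap

/*
// Hypothetical entry points a C network stack would export.
extern int plugin_socket(int domain, int type, int protocol);
extern long plugin_sendmsg(int fd, const void *buf, unsigned long len);
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// Socket asks the plugin stack for a new socket handle.
func Socket(domain, sotype, protocol int) (int, error) {
	fd := C.plugin_socket(C.int(domain), C.int(sotype), C.int(protocol))
	if fd < 0 {
		return 0, fmt.Errorf("plugin_socket failed: %d", fd)
	}
	return int(fd), nil
}

// Sendmsg hands a buffer to the plugin stack for transmission.
func Sendmsg(fd int, buf []byte) (int, error) {
	if len(buf) == 0 {
		return 0, nil
	}
	n := C.plugin_sendmsg(C.int(fd), unsafe.Pointer(&buf[0]), C.ulong(len(buf)))
	if n < 0 {
		return 0, fmt.Errorf("plugin_sendmsg failed: %d", n)
	}
	return int(n), nil
}
```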
One of the key factors in gVisor's significant performance improvement with TLDK is our support for device (SR-IOV) passthrough with TLDK. This not only enhances network I/O performance but also reduces the attack surface on the host kernel. The original gVisor netstack cannot support drivers for device passthrough, but TLDK can work with DPDK as the frontend driver for device passthrough.
Moreover, we have provided a proper threading model and enabled an interrupt mode to avoid the busy polling typical of DPDK scenarios. In this mode, the I/O thread wakes up when the host kernel raises an event upon receiving a packet from the NIC, and reads all available packets in DMA. It then wakes up the corresponding goroutine to receive the packets. This approach ensures efficient use of CPU resources while avoiding unnecessary busy polling that can negatively impact application performance.
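(Schematically, the interrupt-mode loop looks like the Go sketch below. The event channel and reader function are stand-ins for the host-kernel event and the DPDK DMA ring; the real implementation lives in the TLDK/DPDK glue and is not shown in this issue.)

```go
// Sketch of the interrupt-mode I/O thread described above; the event
// source and packet reader are illustrative stand-ins.
package main

import "fmt"

type packet []byte

// ioLoop blocks until the host kernel signals a receive event, drains
// every packet currently available in DMA, then wakes the goroutine
// waiting on the destination socket. No busy polling while idle.
func ioLoop(events <-chan struct{}, readAll func() []packet, wake chan<- []packet) {
	for range events { // sleep until an event is raised
		if pkts := readAll(); len(pkts) > 0 { // drain all available packets
			wake <- pkts // wake the receiving goroutine
		}
	}
}

func main() {
	events := make(chan struct{}, 1)
	out := make(chan []packet, 1)
	go ioLoop(events, func() []packet { return []packet{packet("payload")} }, out)
	events <- struct{}{} // simulate: the host kernel saw a packet arrive
	fmt.Printf("received %d packet(s)\n", len(<-out))
}
```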
Performance with TLDK
We compared runc and gVisor with TLDK, and the results show significant performance improvements in network I/O sensitive scenarios:
Further Discussion
While supporting TLDK, we had to modify the gVisor code to support another network stack's socket ops, which incurred significant development cost. Therefore, in addition to proposing TLDK support in gVisor, we would like to open a discussion about whether there is a more general way for users to choose a third-party network stack without modifying gVisor.
One possible solution we are considering is exposing the network interface from the API to the ABI and building third-party network stacks as plugins that fit these ABIs.
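(As a discussion aid, one hypothetical shape such a plugin interface could take is sketched below. Names and method sets are assumptions for illustration, not the pkg/sentry/socket/plugin API that was eventually merged.)

```go
// Hypothetical sketch of a pluggable network-stack interface, for
// discussion only; not the interface that was ultimately merged.
package plugin

// Stack is the lifecycle half: gVisor drives it while setting up
// sandbox networking.
type Stack interface {
	// Init brings the stack up with the sandbox's network config
	// (device, addresses, routes) before the workload starts.
	Init(config map[string]string) error
	// Shutdown releases stack resources at sandbox teardown.
	Shutdown() error
}

// SocketOps is the data-path half: one implementation per socket.
type SocketOps interface {
	Bind(addr []byte) error
	Listen(backlog int) error
	Connect(addr []byte) error
	Send(buf []byte) (int, error)
	Recv(buf []byte) (int, error)
	Close() error
}

// stacks holds registered implementations, selectable by name
// (e.g. via a flag like --network=<name>).
var stacks = map[string]Stack{}

// Register makes a third-party stack selectable by name.
func Register(name string, s Stack) { stacks[name] = s }
```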
We would appreciate any insights or feedback from the community on this proposal and the discussion above, and we are open to exploring other potential solutions. Thanks.
Is this feature related to a specific bug?
No.
Do you have a specific solution in mind?
As described in the 'Description' section.