[Question] Performance issue in multi-thread runtime #5
Hi, thank you for your interest in this project. Unfortunately I didn't do any benchmarking, and it's not surprising to me that it would be slower than the system's network stack.
But I believe that the practice in
There are some projects out there:
When I tried to upgrade to smoltcp 0.8, I found that the interface changes force the lock to cover a larger scope, so efficiency may be reduced.
It shouldn't. You will have to take the lock of
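The lock-scope concern can be illustrated with a toy model. All types here are simplified stand-ins, not smoltcp's real API: the point is only that when the interface owns the sockets, the reactor's poll loop and every stream's read path contend on a single mutex.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for an interface that owns its sockets; both the
// reactor and the stream wrappers must go through this one lock.
struct Interface {
    rx_buffer: Vec<u8>,
}

impl Interface {
    // Stand-in for `Interface::poll`: drives the stack forward.
    fn poll(&mut self) {
        self.rx_buffer.extend_from_slice(b"data");
    }
}

// Stand-in for the stream wrapper's read path: it must take the same lock
// as the reactor to reach the socket buffers.
fn poll_read(iface: &Arc<Mutex<Interface>>, out: &mut Vec<u8>) -> usize {
    let mut guard = iface.lock().unwrap();
    let n = guard.rx_buffer.len();
    out.append(&mut guard.rx_buffer); // drains the buffer
    n
}

fn main() {
    let iface = Arc::new(Mutex::new(Interface { rx_buffer: Vec::new() }));

    // Reactor task: calls `poll` repeatedly, holding the lock each time.
    let reactor_iface = Arc::clone(&iface);
    let reactor = thread::spawn(move || {
        for _ in 0..100 {
            reactor_iface.lock().unwrap().poll();
        }
    });
    reactor.join().unwrap();

    // The read path contends on the very same mutex.
    let mut out = Vec::new();
    let n = poll_read(&iface, &mut out);
    assert_eq!(n, 400);
    println!("read {} bytes under the shared lock", n);
}
```

Under load, every `poll_read`/`poll_write` serializes against the poll loop, which is one plausible source of the slowdown being discussed.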
In the previous version I could send packets (see lines 27 to 31 in ac22a78).
I was using an alternative way: https://github.com/shadowsocks/shadowsocks-rust/blob/4d30371bdf7b9c6eec2ab54ce4a042cfc82eac17/crates/shadowsocks-service/src/local/tun/tcp.rs#L225-L353 Every
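The buffering approach linked above can be sketched roughly like this. This is a space-for-time model with hypothetical names, not the shadowsocks-rust code itself: the reactor copies data out of the stack's socket buffers into a per-connection queue while it already holds the big lock, so the read path afterwards only locks the much smaller queue.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Per-connection receive queue, independent of the interface lock.
struct Connection {
    recv_queue: Arc<Mutex<VecDeque<u8>>>,
}

impl Connection {
    fn new() -> Self {
        Connection { recv_queue: Arc::new(Mutex::new(VecDeque::new())) }
    }

    // Called by the reactor right after polling the stack, with data it
    // copied out of the socket buffer (hypothetical integration point).
    fn push_from_stack(&self, data: &[u8]) {
        self.recv_queue.lock().unwrap().extend(data.iter().copied());
    }

    // Called from the stream wrapper's read path; never touches the
    // interface lock, only this connection's own queue.
    fn read(&self, buf: &mut [u8]) -> usize {
        let mut q = self.recv_queue.lock().unwrap();
        let n = buf.len().min(q.len());
        for b in buf.iter_mut().take(n) {
            *b = q.pop_front().unwrap();
        }
        n
    }
}

fn main() {
    let conn = Connection::new();
    conn.push_from_stack(b"hello");
    let mut buf = [0u8; 3];
    assert_eq!(conn.read(&mut buf), 3);
    assert_eq!(&buf, b"hel");
    println!("read via per-connection queue, no interface lock held");
}
```

The trade-off is the extra copy per packet, which is memory and CPU spent to shorten the critical section, matching the "space-for-time" characterization in the next comment.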
Looks reasonable. It is a space-for-time approach.
But still, the CPU usage is very high. I don't think this is the ultimate solution.
Just published a performance fix. Previously, after the TCP buffer was consumed, the Reactor was not notified to send a packet to the other side, so the other side would assume that smoltcp's TCP buffer had not been consumed. This has been fixed in 35245cc.
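The fix described above boils down to waking the reactor after the read path frees buffer space, so the next poll can emit a window update instead of letting the peer keep assuming the buffer is full. A minimal sketch with illustrative names (not the crate's actual types):

```rust
use std::sync::{Arc, Condvar, Mutex};

// A tiny wakeup primitive: the read path signals it after consuming data,
// and the reactor loop blocks on it between polls.
struct ReactorWaker {
    pending: Mutex<bool>,
    cv: Condvar,
}

impl ReactorWaker {
    fn new() -> Self {
        ReactorWaker { pending: Mutex::new(false), cv: Condvar::new() }
    }

    // Called from the stream's read path after it has consumed buffered
    // data, so the reactor knows another poll is worthwhile.
    fn notify(&self) {
        *self.pending.lock().unwrap() = true;
        self.cv.notify_one();
    }

    // Called by the reactor loop; returns once a poll has been requested.
    fn wait(&self) {
        let mut pending = self.pending.lock().unwrap();
        while !*pending {
            pending = self.cv.wait(pending).unwrap();
        }
        *pending = false;
    }
}

fn main() {
    let waker = Arc::new(ReactorWaker::new());
    let w = Arc::clone(&waker);
    let reactor = std::thread::spawn(move || {
        // A real reactor would run the stack's poll here and send the
        // pending window update to the peer.
        w.wait();
    });
    // Read path: consume from the TCP buffer, then wake the reactor.
    waker.notify();
    reactor.join().unwrap();
    println!("reactor woke after buffer consumption");
}
```

Without the `notify` call on the read path, the reactor only wakes on its timer or on inbound packets, which is exactly the stall the commit message describes.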
I couldn't find any other use cases with tokio on GitHub except yours.
I am working on a project that also uses `smoltcp` as a user-space network stack and provides wrappers for interoperating with existing Tokio IO structs (`TcpStream`, ...), but I found some problems:

- `Interface::poll` needs to be called frequently at a very short interval, so it has to be put in a separate task (the Reactor in this project).
- In smoltcp, `SocketSet` is now managed by `Interface`, so if you want to call `Interface::poll`, and also `AsyncRead::poll_read` and `AsyncWrite::poll_write` on `TcpStream` (a wrapper of `TcpSocket`), you will have to take a lock on the `Interface` (which is the same in this project: the SocketAllocator).

I don't know if you have any benchmark data about the current design of `tokio-smoltcp`. I made a simple test with `iperf3`, which showed that the interface is relatively slower than the system's network stack. I'm opening this issue in the hope that you can share any information about this with me.
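The Reactor pattern mentioned above, a dedicated task that drives the stack by calling its poll function at short intervals, can be sketched with a toy stand-in (names are illustrative, not smoltcp's real API):

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Toy stand-in for a user-space network stack.
struct Stack {
    polls: u32,
}

impl Stack {
    fn poll(&mut self) {
        // Real code would process ingress/egress packets here.
        self.polls += 1;
    }
}

// Dedicated reactor: repeatedly polls the shared stack on its own thread.
fn run_reactor(stack: Arc<Mutex<Stack>>, iterations: u32) {
    let handle = thread::spawn(move || {
        for _ in 0..iterations {
            stack.lock().unwrap().poll();
            // A real reactor would sleep until the stack's next deadline
            // rather than a fixed interval.
            thread::sleep(Duration::from_millis(1));
        }
    });
    handle.join().unwrap();
}

fn main() {
    let stack = Arc::new(Mutex::new(Stack { polls: 0 }));
    run_reactor(Arc::clone(&stack), 5);
    assert_eq!(stack.lock().unwrap().polls, 5);
    println!("reactor completed its poll loop");
}
```

In an async runtime this loop would be a spawned task rather than a thread, but the structure, and the fact that each iteration takes the shared stack lock, is the same.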