scheduler: implement an "opportunistic retransmission" #332
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue on Mar 12, 2024
scheduler: implement an "opportunistic retransmission"

The goal of "opportunistic retransmission" is to quickly reinject packets when we notice the window has just been closed on one path; see section 4.2 of: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the upstream kernel, a new scheduler does not have the ability to trigger a reinjection: there are only hooks to initiate one when the MPTCP RTO fires. The packet scheduler should be able to get more info: not just when the MPTCP cwnd closes or the sequence number limit has increased (max allowed MPTCP-level seq num to be sent == last ack + (...)), but also when there is an RTO at the subflow level; maybe linked to "scheduler: react when subflow-level events pop up (ACK/RTO)" multipath-tcp#343.

Note that the packet scheduler never queues significantly more than what the cwnd of a subflow would accept: currently, the in-kernel scheduler only queues up to the MPTCP-level cwnd (plus a few more bytes due to round-up).

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <geliang@kernel.org>
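To make the mechanism concrete, here is a minimal, self-contained C sketch of the idea. It is not kernel code and not part of any existing MPTCP API: `struct subflow`, `subflow_blocks_msk()` and `pick_reinject_target()` are hypothetical names, used only to illustrate how a scheduler hook could react to the window closing on one path by reinjecting the stuck bytes on another subflow that still has room to send.

```c
/*
 * Hypothetical illustration of "opportunistic retransmission" (NSDI'12,
 * section 4.2): when one subflow's window closes while it still holds
 * unacked connection-level data, reinject those bytes on another subflow.
 * All types and helpers below are made up for this sketch.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct subflow {                 /* toy model of one TCP subflow */
	const char *name;
	uint32_t snd_wnd;        /* peer-advertised window, in bytes */
	uint32_t cwnd_avail;     /* free space left in the congestion window */
	uint32_t inflight;       /* bytes sent but not yet acked */
};

/* A subflow blocks the MPTCP connection when its window just closed
 * while it still carries unacked data. */
static bool subflow_blocks_msk(const struct subflow *sf)
{
	return sf->snd_wnd == 0 && sf->inflight > 0;
}

/* Pick any other subflow that can absorb the reinjected bytes right away. */
static struct subflow *pick_reinject_target(struct subflow *sfs, size_t n,
					    const struct subflow *blocked,
					    uint32_t bytes)
{
	for (size_t i = 0; i < n; i++) {
		struct subflow *sf = &sfs[i];

		if (sf == blocked)
			continue;
		if (sf->snd_wnd >= bytes && sf->cwnd_avail >= bytes)
			return sf;
	}
	return NULL;
}

int main(void)
{
	struct subflow sfs[] = {
		{ "wifi", 0,     1400, 2800 },  /* window just closed, data stuck */
		{ "lte",  65535, 8192, 0    },  /* idle path with room to send */
	};

	for (size_t i = 0; i < 2; i++) {
		if (!subflow_blocks_msk(&sfs[i]))
			continue;

		struct subflow *target =
			pick_reinject_target(sfs, 2, &sfs[i], sfs[i].inflight);
		if (target)
			printf("reinject %u bytes from %s onto %s\n",
			       (unsigned)sfs[i].inflight, sfs[i].name,
			       target->name);
	}
	return 0;
}
```

The point of the sketch is the trigger: today only the MPTCP-level RTO would lead to such a reinjection, whereas the check above would fire as soon as the window-closed condition is observed on one path.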
scheduler: implement a "opportunistic retransmission" The goal of the "opportunistic retransmission" is to quickly reinject packets when we notice the window has just been closed on one path, see the section 4.2 of: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf This is implemented in mptcp.org, see mptcp_rcv_buf_optimization(). With the current API in the Upstream kernel, a new scheduler doesn't have the ability to trigger a reinjection. Currently there are only hooks to initiate reinjection when the MPTCP RTO fires. The packet scheduler should be able to get more info: not just when MPTCP cwnd close or the seq num has increased (max allowed MPTCP level seq num to be sent == last ack + (...)) but also when there is a RTO at subflow level: maybe linked to scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343 Note that the packet scheduler never significantly queue more than what the cwnd of a subflow would accept: currently, the in-kernel only accepts to queue up to the MPTCP level cwnd (a few more bytes due to round-up) Closes: multipath-tcp#332 Signed-off-by: Geliang Tang <geliang@kernel.org>
geliangtang
added a commit
to geliangtang/mptcp_net-next
that referenced
this issue
Dec 27, 2024
scheduler: implement a "opportunistic retransmission" The goal of the "opportunistic retransmission" is to quickly reinject packets when we notice the window has just been closed on one path, see the section 4.2 of: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf This is implemented in mptcp.org, see mptcp_rcv_buf_optimization(). With the current API in the Upstream kernel, a new scheduler doesn't have the ability to trigger a reinjection. Currently there are only hooks to initiate reinjection when the MPTCP RTO fires. The packet scheduler should be able to get more info: not just when MPTCP cwnd close or the seq num has increased (max allowed MPTCP level seq num to be sent == last ack + (...)) but also when there is a RTO at subflow level: maybe linked to scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343 Note that the packet scheduler never significantly queue more than what the cwnd of a subflow would accept: currently, the in-kernel only accepts to queue up to the MPTCP level cwnd (a few more bytes due to round-up) Closes: multipath-tcp#332 Signed-off-by: Geliang Tang <geliang@kernel.org>
geliangtang
added a commit
to geliangtang/mptcp_net-next
that referenced
this issue
Dec 27, 2024
scheduler: implement a "opportunistic retransmission" The goal of the "opportunistic retransmission" is to quickly reinject packets when we notice the window has just been closed on one path, see the section 4.2 of: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf This is implemented in mptcp.org, see mptcp_rcv_buf_optimization(). With the current API in the Upstream kernel, a new scheduler doesn't have the ability to trigger a reinjection. Currently there are only hooks to initiate reinjection when the MPTCP RTO fires. The packet scheduler should be able to get more info: not just when MPTCP cwnd close or the seq num has increased (max allowed MPTCP level seq num to be sent == last ack + (...)) but also when there is a RTO at subflow level: maybe linked to scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343 Note that the packet scheduler never significantly queue more than what the cwnd of a subflow would accept: currently, the in-kernel only accepts to queue up to the MPTCP level cwnd (a few more bytes due to round-up) Closes: multipath-tcp#332 Signed-off-by: Geliang Tang <geliang@kernel.org>
The goal of the "opportunistic retransmission" is to quickly reinject packets when we notice the window has just been closed on one path, see the section 4.2 of: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf
This is implemented in mptcp.org, see
mptcp_rcv_buf_optimization()
.With the current API in the Upstream kernel, a new scheduler doesn't have the ability to trigger a reinjection. Currently there are only hooks to initiate reinjection when the MPTCP RTO fires.
The packet scheduler should be able to get more info: not just when the MPTCP cwnd closes or the sequence number has increased (max allowed MPTCP-level seq num to be sent == last ack + (...)), but also when there is an RTO at the subflow level: maybe linked to "scheduler: react when subflow-level events pop up (ACK/RTO)" multipath-tcp#343.
Note that the packet scheduler never significantly queues more than what the cwnd of a subflow would accept: currently, the in-kernel implementation only accepts queuing up to the MPTCP-level cwnd (a few more bytes due to round-up).
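To make the idea concrete, here is a minimal userspace sketch of the decision this issue would like the scheduler to be able to make. The `struct subflow`, `subflow_has_room()` and `pick_reinject_path()` names are purely illustrative assumptions and do not correspond to the in-kernel MPTCP scheduler API: when the window closes on the path carrying the head-of-line segment, look for another subflow whose advertised window and cwnd still have room and reinject there immediately instead of waiting for the MPTCP-level RTO; if no path has room, the normal retransmission timer remains the fallback.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative per-path state; these fields do not mirror kernel structs. */
struct subflow {
	const char *name;
	unsigned int snd_wnd;		/* peer-advertised window left on this path */
	unsigned int cwnd_bytes;	/* congestion window, in bytes */
	unsigned int inflight;		/* bytes sent but not yet acked on this path */
};

/* A subflow can take the segment only if both the advertised window and
 * the congestion window still leave room for it. */
static bool subflow_has_room(const struct subflow *sf, unsigned int seg_len)
{
	return sf->snd_wnd >= seg_len &&
	       sf->inflight + seg_len <= sf->cwnd_bytes;
}

/* The window just closed on 'blocked': look for another subflow that can
 * carry the head-of-line segment right away, instead of waiting for the
 * MPTCP-level retransmission timer. */
static const struct subflow *pick_reinject_path(const struct subflow *blocked,
						const struct subflow *subflows,
						int nr, unsigned int seg_len)
{
	for (int i = 0; i < nr; i++) {
		const struct subflow *sf = &subflows[i];

		if (sf == blocked)
			continue;
		if (subflow_has_room(sf, seg_len))
			return sf;
	}
	return NULL;	/* no alternative path: fall back to the normal RTO */
}

int main(void)
{
	struct subflow paths[] = {
		{ "wifi", 0,     64000, 30000 },	/* window just closed here */
		{ "lte",  65535, 32000, 4000  },
	};
	unsigned int hol_seg_len = 1400;	/* head-of-line segment stuck on "wifi" */
	const struct subflow *target;

	target = pick_reinject_path(&paths[0], paths, 2, hol_seg_len);
	if (target)
		printf("reinject %u bytes on %s\n", hol_seg_len, target->name);
	else
		printf("no room anywhere, wait for the retransmission timer\n");
	return 0;
}
```

In an in-kernel scheduler this decision would have to be driven by a hook fired when a subflow's window closes (or by the subflow-level ACK/RTO events discussed in multipath-tcp#343), which is exactly the API gap described above.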