KERNEL: Use KCP for realtime UDP streaming #770
cool
On streaming applications at 600-1000 ms latency: according to Agora's PR, CDN latency today is generally around 5-20 seconds, and if it could drop to 600-1000 ms, many new businesses would supposedly emerge. In reality, though, many customers do not seem to care much about latency, so it is hard to say. It may simply be a viewpoint that we technical people came up with: that cutting end-to-end CDN latency to around 600-1000 ms will unlock many new businesses.
Can current streaming protocols such as HTTP-FLV or HTTP-TS reach that level? Definitely not. The best benchmark today is ChinaNet's stable 1-3 seconds. Switching to UDP can bring that down one level, to 600-1000 ms; Agora's own figure is 40-600 ms. How low must latency go to drive new businesses? I believe one level down, that is 600-1000 ms, is enough, so I set Oryx's latency target at 600-1000 ms. Of course, the client side has to cooperate as well.
Oryx currently has three conditions for starting:
On the client side, we can start by supporting Android publishing and playback, with Oryx handling transmission on the server side. SRS and current CDNs have a delay of 3-10 seconds; Oryx aims for a solution with 600-1000 ms, and RCDN (Realtime CDN) aims even lower. If new applications emerge at Oryx's target latency, CDNs can certainly optimize further.

Of course, CDNs cannot get there with TCP, but UDP can be used. Why couldn't it? It is entirely feasible technically; the problem is a lack of demand from customers (one could argue this is a chicken-and-egg situation). Customers' applications cannot outperform the CDNs they run on, and deploying their own nodes is not feasible for most of them. CDNs, in turn, rarely pursue lower latency because the current TCP solutions are already challenging enough. Wangsu, for example, has been satisfied with its 1-3 second delay for many years. Wangsu's mindset is probably: "My customers haven't asked for lower latency, and a stable 1-3 seconds is already sufficient. Why should I switch the entire network to UDP?" Even if someone inside had the idea, it would be hard to act on and hard to show results, since KPI reviews don't reward such work. Other CDNs would likely say, "If Wangsu isn't doing it, it must not be useful." I therefore conclude that current CDNs cannot reach latency within 1000 ms unless lower-latency services have already been proven elsewhere.

VoIP users are said to perceive latency at around 400 ms, but we shouldn't take such a big leap; 600 ms is already very good. CDNs haven't even reached that level as a basic service, so there is no need for the internet to suddenly drop to 300 ms; it would scare a lot of people. Besides, once we achieve 600-1000 ms we can keep reducing it, because by then the nature of the service will already have changed.
Open source is sufficient for testing and validating the market, but it cannot directly provide services.
Oryx can test whether 600-1000 ms low latency is just a fantasy or a genuine, widespread demand. At the moment it is a chicken-and-egg situation, with both sides hesitating.
KCP support is not under consideration at the moment. The following are already supported:
This post was written four years ago, when I was mainly doing live streaming and knew nothing about low-latency technologies like WebRTC or RTC. It expresses my thoughts on live streaming within one second. The writing style back then was, of course, quite youthful; by now I have been trained not to use exclamation marks or unnecessary emotional words. After four years working on RTC servers at Alibaba Cloud, I can say I have gained some insight. Looking back at this article: if live-streaming latency drops below one second, will it disrupt the entire live-streaming industry and give rise to many new scenarios? My answer is still: 90%, no. Is the remaining 10% certain, then? No, it is not. There are indeed new scenarios now, but they are definitely not the ones we imagined.
The new live-streaming scenarios mentioned earlier are really only the second type, and that type has a small volume. From the perspective of application or usage scenarios:
Compare the market for live streaming, the market for meetings (excluding traditional meeting hardware), and the market for interactive live streaming, and the relationship is clear; you can see it in the users of open-source projects, the size of the communities, and the revenue of cloud vendors. So from the standpoint of strictly sub-second live streaming, it really cannot revolutionize the industry, because consumers simply don't care about the underlying technology, and in terms of experience, ordinary live streaming has no need for interactivity. Forcing all live streaming to become interactive live streaming is like what is being done now in forcing all 4G users onto 5G: I can't see why I should be the first to eat this crab, since it brings me no benefit except higher bills, and today's 5G experience doesn't even match 4G.

Back to the technology: can KCP, or SRT, deliver live streaming within one second? The question is actually quite simple. If we only consider the server-side transmission delay, say the rough figure of a 76 ms global average RTT, then any UDP protocol will do; the ceiling on transmission delay is the fiber RTT. But if we mean the end-to-end delay of the whole link, the delay the user actually perceives, we cannot ignore the client. My main problem before was a lack of understanding of the client; I was looking at the problem purely from the server side. Narrow-minded, so narrow-minded.
KCP (https://github.com/skywind3000/kcp/wiki) is a well-designed library, and it could be used for realtime video and audio to decrease latency.
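For reference, this is roughly how KCP would be wired into a UDP media pipeline — a minimal sketch based on the public API in the linked wiki (`ikcp_create`, `ikcp_nodelay`, `ikcp_input`, `ikcp_recv`, `ikcp_update`). It assumes `ikcp.c`/`ikcp.h` from that repository are compiled in; `udp_output`, `create_media_session`, and `pump` are hypothetical names for illustration, not part of KCP or SRS.

```c
/* Sketch: KCP session tuned for low-latency media over UDP.
 * Assumes ikcp.h/ikcp.c from https://github.com/skywind3000/kcp. */
#include <string.h>
#include "ikcp.h"

/* KCP calls this whenever it has a segment ready for the wire;
 * in a real server this would wrap sendto() on the session's socket. */
static int udp_output(const char *buf, int len, ikcpcb *kcp, void *user)
{
    /* sendto(session_fd, buf, len, 0, ...); */
    (void)buf; (void)kcp; (void)user;
    return len;
}

static ikcpcb *create_media_session(IUINT32 conv, void *user)
{
    /* conv is the conversation id; it must match on both peers. */
    ikcpcb *kcp = ikcp_create(conv, user);
    kcp->output = udp_output;

    /* "Fast mode" from the KCP wiki: nodelay on, 10 ms internal tick,
     * fast retransmit after 2 duplicate ACKs, congestion control off.
     * Trades extra bandwidth for lower latency. */
    ikcp_nodelay(kcp, 1, 10, 2, 1);
    ikcp_wndsize(kcp, 128, 128);  /* send/recv windows, in packets */
    return kcp;
}

/* Per-iteration pump: feed a raw UDP datagram in, drain reassembled
 * payloads out, and drive KCP's retransmit timers. now_ms is a
 * monotonic millisecond clock. */
static void pump(ikcpcb *kcp, const char *udp_pkt, int pkt_len, IUINT32 now_ms)
{
    char frame[1500];
    if (pkt_len > 0)
        ikcp_input(kcp, udp_pkt, pkt_len);        /* datagram -> KCP */
    while (ikcp_recv(kcp, frame, sizeof(frame)) > 0) {
        /* hand the reassembled media payload to the demuxer/decoder */
    }
    ikcp_update(kcp, now_ms);                     /* run protocol timers */
}
```

The `ikcp_nodelay(kcp, 1, 10, 2, 1)` tuning is the main lever for latency: KCP's wiki reports roughly 30-40% lower average latency than TCP in lossy conditions, at the cost of higher bandwidth overhead, which is exactly the trade-off a 600-1000 ms streaming target would accept.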