Why not use srs_cond_signal to wake up the muxer coroutine in the gb28181 receiving RTP coroutine? #2874

Closed
yushimeng opened this issue Jan 11, 2022 · 1 comment

yushimeng commented Jan 11, 2022

May I ask why gb28181's on_rtp handling does not use srs_cond_signal(wait_ps_queue) to wake up rtmp_muxer, and instead has rtmp_muxer poll with a sleep? Is the wake-up approach avoided because of performance concerns, or for other reasons? Also, in accelerated download/playback scenarios, the sleep interval may need to be adjusted.

TRANS_BY_GPT3
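
For reference, a minimal sketch of the two wake-up patterns being compared, assuming SRS's ST coroutine wrappers (srs_cond_t, srs_cond_new, srs_cond_signal, srs_cond_wait, srs_usleep, SRS_UTIME_MILLISECONDS); the queue, packet type, and consume() helper below are illustrative placeholders, not the actual gb28181 code:

```cpp
// Sketch only, not SRS source. Assumes SRS's ST wrappers; SrsPsPacket,
// ps_queue_ and consume() are hypothetical placeholders.
#include <queue>

struct SrsPsPacket {};                  // placeholder for the demuxed PS packet type
void consume(SrsPsPacket* pkt);         // placeholder for the actual muxing work

static std::queue<SrsPsPacket*> ps_queue_;
static srs_cond_t wait_ps_queue_ = NULL;  // created with srs_cond_new() after ST is initialized

// Polling style (what the issue describes): the muxer coroutine sleeps a
// fixed interval and re-checks the queue, so each packet can wait up to one
// full interval before being muxed.
void polling_muxer_cycle() {
    while (true) {
        while (!ps_queue_.empty()) {
            consume(ps_queue_.front());
            ps_queue_.pop();
        }
        srs_usleep(30 * SRS_UTIME_MILLISECONDS);
    }
}

// Signal style (what the issue proposes): on_rtp signals the condition
// variable after enqueueing, so the muxer coroutine wakes immediately
// instead of waiting for its next poll.
void on_rtp_packet(SrsPsPacket* pkt) {
    ps_queue_.push(pkt);
    srs_cond_signal(wait_ps_queue_);
}

void signaled_muxer_cycle() {
    while (true) {
        if (ps_queue_.empty()) {
            srs_cond_wait(wait_ps_queue_);  // block until a producer signals
        }
        while (!ps_queue_.empty()) {
            consume(ps_queue_.front());
            ps_queue_.pop();
        }
    }
}
```

The data path is the same in both cases; the difference is only in how the muxer learns that new data has arrived, which is also why the fixed sleep interval matters for accelerated download/playback.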


winlinvip commented Jan 12, 2022

GB has been moved to a separate repository, srs-gb28181; please refer to #2845. For any issues, please submit a bug or PR to that GB repository.

Since H.265 is mainly used in GB scenarios, the 265 branch has also been migrated to srs-gb28181.

This issue will be deleted. Please read the FAQ first: #2716.

TRANS_BY_GPT3

@winlinvip winlinvip self-assigned this Jan 12, 2022
@winlinvip winlinvip changed the title from gb28181接收rtp协程唤醒muxer协程的方式为什么不用srs_cond_signal? to Why not use srs_cond_signal to wake up the muxer coroutine in the gb28181 receiving RTP coroutine? Jul 29, 2023
@winlinvip winlinvip added the TransByAI Translated by AI/GPT. label Jul 29, 2023