I really like the idea of instant transcoding, something more native, and all of the other pros you have listed. What would the complexity end up looking like? As for GPU resources, there are now options for sharing those devices across a cluster instead of reserving an entire device per pod, but that comes with yet more complexity.
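To make the GPU-sharing point concrete: the NVIDIA device plugin for Kubernetes supports time-slicing, where one physical GPU is advertised as several schedulable replicas so that multiple transcoder pods can share it. This is only a sketch of the general shape of that configuration (the `replicas` count and resource name are illustrative, not a recommendation for kube-plex):

```yaml
# Hypothetical device-plugin config: advertise each physical GPU
# as 4 time-sliced replicas, so up to 4 pods can request
# "nvidia.com/gpu: 1" and land on the same card.
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4
```

As noted above, this removes the need to reserve a whole GPU per transcode, but the processes share the card with no memory or compute isolation, which is extra operational complexity to reason about.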
Lately I've been thinking a lot about reducing transcoding startup latency. I have a few ideas on how to make that happen, but they come at the cost of increased complexity.
The main problem is that because the transcoder runs in a container, startup carries quite a bit of overhead: pulling the image, container setup, and even controller delays within Kubernetes itself. So the main idea I'm playing with is preloading the transcoder. There would always be a transcoder process running, and when Plex needs one, it simply connects to the pod to start transcoding. This leads to issues with scaling and so on. I'll try to break it down into pros and cons.
Pros:
- Near-instant transcode startup: no image pull, container setup, or Kubernetes controller delays on each request
- A more native experience, since Plex just connects to an already-running process

Cons:
- The preloaded transcoder holds resources (potentially an entire GPU) even while idle
- Scaling a pool of warm transcoders adds complexity
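The preloading idea above could be sketched roughly like this: a long-lived process in the pod listens on a socket, and each incoming connection launches a transcode immediately, so the per-request cost is just a fork/exec rather than pod scheduling and image pulls. This is a hypothetical illustration, not kube-plex's actual code; the wire format (one shell-quoted argument line per request) and the port are invented for the example.

```python
import shlex
import socket
import subprocess

def handle_request(line: str) -> list[str]:
    """Parse one transcode request into an argv list.

    Hypothetical wire format: a single newline-terminated,
    shell-quoted argument string, e.g. "ffmpeg -i in.mkv out.mp4".
    """
    return shlex.split(line)

def serve(host: str = "0.0.0.0", port: int = 32499) -> None:
    # Pre-warmed loop: the pod is already scheduled and the image
    # already pulled, so accepting a connection and spawning the
    # transcoder is the only per-request work left.
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _addr = srv.accept()
            with conn, conn.makefile() as reader:
                argv = handle_request(reader.readline())
                if argv:
                    subprocess.Popen(argv)  # fire-and-forget transcode
```

The trade-off is visible even in this toy version: the process (and whatever GPU it has claimed) sits resident between requests, and scaling now means managing a pool of these warm servers rather than letting Kubernetes create pods on demand.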
All in all, this would move kube-plex closer to how some other transcoding multiplexers work today, but at the cost of simplicity. Even now it's difficult for a newcomer to understand how kube-plex works, and after these changes it would be even harder for them to troubleshoot their deployments. I also worry this would be a bit of an overkill solution.