charts/sftd: introduce #382
Conversation
K8s and sftd both like to allocate ports, so they might start to fight over them. Would it be necessary to put some boundaries in place? Is it even possible? Or is this just an unfounded concern?
Valid concerns, but two observations:

So for people directly using our charts for configuring ingress, I think there are no issues. However, this might be surprising for people deploying […]. A workaround could be to set […]. But ideally, sftd should be configured to only allocate ports in a specific range. This has been requested and will be implemented in the future; in that case this concern goes away completely. My suggestion is to add the notes I wrote here to the README and file tickets to address these concerns in future pull requests.
Adds a new Helm chart to deploy sftd as a single replica. This is suitable for small installations with <50 users. In the future we will support running SFTD with multiple replicas to support scale-out of the conference calling infrastructure.

The chart runs sftd with `hostNetwork: true` so that the SFT can allocate WebRTC media streams on its public IP and people can connect and call. This is not ideal, but it's the best workaround I could come up with. We could solve this by getting kubernetes/kubernetes#23864 fixed, but I don't have a lot of hope, as it has been open for quite a few years. Another way to fix this would be to use IPv6 instead of IPv4 in the cluster. Then every pod has a proper public IP address and none of this is an issue to begin with. But given that IPv6 adoption isn't particularly high, and because I don't think our TURN servers can bridge the IPv4/IPv6 gap, I don't think it's a realistic option for now.
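For reference, a minimal sketch of what the `hostNetwork` part of such a Deployment could look like; the names and image reference are illustrative, not the chart's actual template:

```yaml
# Illustrative sketch only, not the rendered chart template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sftd
spec:
  replicas: 1                            # single replica for now (<50 users)
  selector:
    matchLabels:
      app: sftd
  template:
    metadata:
      labels:
        app: sftd
    spec:
      hostNetwork: true                  # bind directly on the node's network interfaces
      dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS working together with hostNetwork
      containers:
        - name: sftd
          image: quay.io/wire/sftd       # placeholder image reference
```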
This information isn't static, as the pod might move around between nodes, so we need to discover it dynamically. This won't work in environments behind 1:1 NAT.
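One common way to do this kind of discovery is the Kubernetes Downward API; a sketch below (the env var name is made up, and the chart may discover the IP differently):

```yaml
# Sketch only: with hostNetwork: true, status.hostIP resolves to an
# address of the node the pod is currently scheduled on.
containers:
  - name: sftd
    image: quay.io/wire/sftd       # placeholder image reference
    env:
      - name: SFTD_HOST_IP         # hypothetical variable name
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
```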
The code that iterates over the default route and binds to it is already implemented in SFTD proper. From discussion with the Calling team: the binding address will be the address of the default network interface, i.e. whatever interface is used for the default gateway and has UP status. Essentially it uses the route list, if the OS supports it, and chooses the interface that is used for the default gateway route; or, if that is not available, whatever getifaddrs returns as the first interface with status "UP". We can reintroduce the -A flag later to handle the scenario where sftd is deployed behind 1:1 NAT (like in GCP).
TODO: we should probably add this to the example inventory file?
This is useful to make information from Ansible available in Kubernetes. For example, if you want to annotate the node with its external IP on-prem:

```yaml
node_annotations:
  wire.com/access-ip: "{{ access_ip }}"
```
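As an illustration, a hypothetical Ansible task showing how such an annotation could end up on the node; the actual role in this repo may implement it differently:

```yaml
# Hypothetical task: assumes kubectl is available on the host running it
# and that access_ip is defined in the inventory.
- name: Annotate node with its external IP
  command: >-
    kubectl annotate node {{ inventory_hostname }}
    wire.com/access-ip={{ access_ip }} --overwrite
  when: access_ip is defined
```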
This is useful when sft is run behind 1:1 NAT
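A rough sketch of how a chart could read that annotation back at runtime (hypothetical manifest fragment; it assumes RBAC allows the pod to read its own Node object):

```yaml
# Hypothetical: an init container asks the API server for the node's
# wire.com/access-ip annotation and writes it where sftd can read it.
initContainers:
  - name: get-external-ip
    image: bitnami/kubectl           # placeholder image
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
    command:
      - sh
      - -c
      - >-
        kubectl get node "$NODE_NAME"
        -o jsonpath="{.metadata.annotations['wire\.com/access-ip']}"
        > /external-ip/ip
    volumeMounts:
      - name: external-ip
        mountPath: /external-ip
volumes:
  - name: external-ip
    emptyDir: {}
```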
Co-authored-by: Lucendio <gregor.jahn@wire.com>
ingress-nginx can handle a headless service, as it doesn't use the ClusterIP but directly inspects the Endpoints object of the service: https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#why-endpoints-and-not-services
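For illustration, a minimal headless Service of the kind ingress-nginx can still route to, since it resolves the Endpoints behind it directly (name and port are placeholders):

```yaml
# Sketch: clusterIP: None makes the Service headless; ingress-nginx
# looks up the Endpoints object and proxies to the pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: sftd
spec:
  clusterIP: None          # headless
  selector:
    app: sftd
  ports:
    - name: sft-http
      port: 8585           # placeholder port number
      targetPort: 8585
```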
We do not have it in other charts either
Co-authored-by: Lucendio <gregor.jahn@wire.com>
This will need a rebase on the latest develop, as this feature only recently landed in brig
ACCESS_IP doesn't exist
I forgot to commit this. But it's needed!
It's not needed since the latest version of brig, which should land on master soon.
CORS headers should be served; otherwise the webapp will not be able to connect.
I don't understand why tests are failing on this PR, as I am not touching anything.
* charts/sftd: introduce
* charts/sftd: Detect the public IP of the (hostNetwork) pod
* ansible/kubernetes: Support annotating nodes
* charts/sftd: Discover external IP of node we're running on
* charts/brig: Point to SFT server
Ran `helm create sftd` and edited some stuff.

fixes zinfra/backend-issues#1699
fixes zinfra/backend-issues#1704