Allow to choose between ELB and Route53 round robin for the APIServer #281
I would like to use the 0.8.3 approach for small clusters where controller node HA is not important. Instead of creating an autoscaling group and an ELB for a single controller node, just use an Elastic IP for external access and the host's internal IP for worker nodes. |
What's the point? The ELB approach is generic enough to cover both small and HA setups |
I guess the biggest difference is that you have to pay a bit more. Another problem is when you don't want to open the API server to the world (basically restricting access in the security group to specific IPs). Then you have to create an additional internal ELB for the worker nodes and attach it to the controller autoscaling group, otherwise the worker nodes cannot access the API (this setup also assumes that you have recreated the API certificate etc.). I think this is a more general problem, but with small clusters it means 2 ELBs in total. |
Use an internal ELB, or if you use an external ELB, add the workers' SG to the ELB's SG rules |
I already tried that. It doesn't work. |
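For reference, a minimal sketch of the kind of security group rule suggested above, written as a CloudFormation fragment. This is purely illustrative and not part of kube-aws itself; the group IDs are placeholders. It allows the worker nodes' security group to reach the controller ELB's security group on the secure API port.

```yaml
# Hypothetical CloudFormation fragment; group IDs are placeholders.
Resources:
  WorkerToControllerELBIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0controllerelb000000         # placeholder: the controller ELB's security group
      SourceSecurityGroupId: sg-0workernodes00 # placeholder: the worker nodes' security group
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
```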
Hi @tarvip, I've recently worked on adding generic support for switching between an internal and an external ELB serving the Kubernetes API in #284. To make the ELB internet-facing, set:

```yaml
controller:
  subnets:
  - name: private1
  - name: private2
  loadBalancer:
    private: false
```

And obviously, set |
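For contrast, a sketch of the internal variant under the same #284 schema, assuming `private` is the only flag that changes (subnet names are just the ones from the example above):

```yaml
controller:
  subnets:
  - name: private1
  - name: private2
  loadBalancer:
    private: true   # provision an internal ELB instead of an internet-facing one
```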
Hi @spacepluk @tarvip! Thanks for your interest in kube-aws. I occasionally receive feature requests like this. |
I can confirm it. I have to recreate a few clusters from scratch anyway (I think they are too old to upgrade), but v0.9.3 is still |
Hi there, I also migrated my own servers (previously Debian) to a very small kube-aws cluster. This was before the HA changes. It's great and super cheap with spot instances. But in this scenario, adding an ELB costs about the same as the whole cluster without bringing much value, whereas the Route53 alias would be free. I think this option adds a lot of flexibility and is very appealing for people like me who work under a tight budget but might want to scale up occasionally. |
@mumoshu this is similar to my concern about the 50% price increase for a dev cluster with dedicated etcd. I guess we could support this? In the case of etcd, my ideal is to support shared/external etcd somehow. |
Thanks everyone. |
Btw, how many IP addresses of controller nodes can be associated with a single A record (for the k8s API endpoint)? |
I'll try to do that this week. |
@mumoshu I would definitely test the round robin thing if you have time to implement it :) I'm not sure about the limits though. |
It seems that it doesn't cover my case. This option only allows creating either a public or a private ELB, but I need both. The idea is that we want to restrict access to the public ELB (by removing 0.0.0.0/0), but when you restrict access to the public ELB, the nodes can no longer connect to the API via the public ELB. |
@tarvip Ah, makes sense! So what you'd like to have is both:
- an internet-facing ELB for the API with access restricted to specific IPs, and
- an internal ELB so that worker nodes can still reach the API.
And you're going to manually tweak security groups for controller nodes in |
Yes. Anyway, I guess if others are happy with the current solution, then there is no need to support such a case. |
Honestly, I've not yet considered separating the controller LBs like that, but it sounds like a good thing to do for security reasons. |
Just a quick thought without any consideration of feasibility, but a generic solution for #281 (comment) would probably be something like the below in cluster.yaml:

```yaml
controller:
  # Instead of this
  # loadBalancer:
  #   private: true
  # Introduce this
  apiEndpoints:
    # A non-empty value inside the `external` key + type `dnsRoundRobin` requires `controller.subnets[].private` to be false, i.e. controller nodes are deployed to public subnets
    external:
      dnsName: k8s.external.example.com
      hostedZoneId: <id for the hosted zone external.example.com>
      securityGroups:
      - id: sg-toallowexternalaccess
      type: loadBalancer
    internal:
      dnsName: k8s.internal.example.com
      hostedZoneId: <id for the hosted zone internal.example.com>
      securityGroups:
      - id: sg-toallowinternalaccess
      type: dnsRoundRobin
```
 |
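Whichever shape the config takes, the point of the `internal` endpoint is that in-cluster components address the API by a stable DNS name rather than an ELB hostname. A minimal kubeconfig sketch, assuming the example DNS name above and typical (but here only assumed) kube-aws certificate paths:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kube-aws-cluster
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem   # assumed path; adjust to your cluster
    server: https://k8s.internal.example.com             # the internal API endpoint name
contexts:
- name: default
  context:
    cluster: kube-aws-cluster
    user: kubelet
current-context: default
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem   # assumed path
    client-key: /etc/kubernetes/ssl/worker-key.pem       # assumed path
```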
Are there any use-cases that require 2 or more external endpoints and/or 2 or more internal endpoints? |
Probably, we'd need something like the below to consistently support both this use-case and @c-knowles's use-case #343:

```yaml
controller:
  # Instead of this
  # loadBalancer:
  #   private: true
  # Introduce this
  apiEndpoints:
  - name: stableExternalEndpoint
    # `dnsName` will be added to CNs in the apiserver cert /cc @c-knowles
    dnsName: k8s.external.example.com
    # You can omit this. If omitted, it is your responsibility to add controller nodes to an ELB serving `k8s.external.example.com`
    loadBalancer:
      id: id-of-existing-internet-facing-elb
  - name: stableInternalEndpoint
    dnsName: k8s.internal.example.com
    loadBalancer:
      id: id-of-existing-internal-elb
  # Former `externalDNSName` + `hostedZoneId` + newly added SG definitions for the controller ELB
  - name: versionedExternalEndpoint
    dnsName: v2.k8s.external.example.com
    loadBalancer:
      hostedZone:
        id: <id for the hosted zone external.example.com>
      securityGroups:
      - id: sg-toallowexternalaccess
  # Former `externalDNSName` + `hostedZoneId` without an ELB + newly added SG definitions for controller nodes
  - name: versionedInternalEndpoint
    dnsName: v2.k8s.internal.example.com
    dnsRoundRobin: # will enable a bash script in cloud-config-controller to update Route 53 record sets
      hostedZone:
        id: <id for the hosted zone internal.example.com>
      securityGroups:
      - id: sg-toallowinternalaccess
```
 |
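A rough sketch of what the `dnsRoundRobin` bash script in cloud-config-controller might look like; this is purely illustrative, not kube-aws's actual implementation. It assumes an AWS CLI is available on the controller node (kube-aws would more likely run it from a container) and that the instance role is allowed to call `route53:ChangeResourceRecordSets`. The hosted zone ID and DNS name are placeholders that would be rendered from cluster.yaml. Note that a real script would have to merge this node's IP into the existing record values; a plain UPSERT as written replaces them.

```yaml
#cloud-config
write_files:
- path: /opt/bin/update-apiserver-rr
  permissions: "0755"
  content: |
    #!/bin/bash -e
    # Placeholders: these would be rendered from cluster.yaml by kube-aws.
    hosted_zone_id="<id for the hosted zone internal.example.com>"
    dns_name="k8s.internal.example.com"
    # This controller's private IP, from EC2 instance metadata.
    local_ip="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
    cat > /tmp/rr-change-batch.json <<EOF
    {
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "${dns_name}",
          "Type": "A",
          "TTL": 30,
          "ResourceRecords": [{"Value": "${local_ip}"}]
        }
      }]
    }
    EOF
    # NOTE: UPSERT replaces all values; a real script would first read the record set
    # (aws route53 list-resource-record-sets) and merge this IP into the existing values.
    aws route53 change-resource-record-sets \
      --hosted-zone-id "${hosted_zone_id}" \
      --change-batch file:///tmp/rr-change-batch.json
coreos:
  units:
  - name: apiserver-round-robin.service
    command: start
    content: |
      [Unit]
      Description=Register this controller in the API server round-robin A record
      [Service]
      Type=oneshot
      ExecStart=/opt/bin/update-apiserver-rr
```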
Including the ability to either reuse an existing ELB or create a managed one, whether or not to create a record set, and of course the DNS name, per API endpoint. Support for DNS round-robin (kubernetes-retired#281) is planned but not included in this commit
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Hi,
I think this would be a nice option to allow deployments of really tiny (and cheap) clusters.