Feature: Add config properties for private-network mode (e.g. in case of bastion or VPN setup). #277
Comments
Hi @PepijnK if you create an SG when you set up your VPC, e.g. one allowing inbound traffic from the VPC CIDR, you can automatically attach that SG to the workers via the `cluster.yaml` config.
For the second issue, I agree it would be nice to choose whether the API is public, separately from whether the worker nodes are public.
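A minimal sketch of what that could look like in `cluster.yaml`, assuming a kube-aws version that supports the `workerSecurityGroupIds` option (the SG ID below is a placeholder):

```yaml
# Sketch: attach a pre-existing security group to all worker nodes.
# sg-0123456789abcdef0 is a placeholder for an SG created alongside the
# VPC, e.g. one allowing inbound traffic from the VPC CIDR.
workerSecurityGroupIds:
  - sg-0123456789abcdef0
```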
I wonder when kube-aws turned into an AWS management tool? IMHO it should focus on kube, not on cloud infrastructure. Almost none of the latest PRs are about kube; they are about cloud management, which people try to get for free from a tool that just happens to deal a bit with subnets and security groups. IMHO kube-aws should support just 2 modes: a demo-only mode that provisions everything with sane defaults, and a normal mode that runs into existing subnets.
If there is a demand to drive cloud infrastructure from some high-level yaml file, then it can be done in a spin-off project, not in kube-aws itself. But with that amount of variance, such a file will quickly grow into a CloudFormation replacement.
@redbaron I don't agree with that in many ways.
I'm just referring to properties that are now arbitrarily chosen anyway. Making them configurable would make me happy already.
OK, let's discuss that.
Indeed kube-aws deploys k8s on AWS, but it is not an AWS cloud management tool. It is unfortunately quickly becoming one, and from my POV this is the wrong course of development. kube-aws should deliver a quality kube setup: good RBAC defaults, common base-layer bits like log collection support, prometheus monitoring, flexible overlay network backends, network and etcd multi-tenancy, PKI infrastructure integration and many more.

What happens instead is that people assume that if kube-aws provisions some subnets, then it should be capable of managing their whole AWS infrastructure. No, it is not. It is just a matter of time before people start asking for VPN gateway management, direct connect, cloudwatch alerts setup etc. The number of different setups is so vast that Amazon created CF to be able to describe them. kube-aws is not a CF replacement, and shouldn't try to accommodate every possible customization in its config. Provisioning the surrounding infrastructure itself should be a demo-only convenience. The normal mode of operation should be "run into existing subnets", where the AWS infrastructure is created by other means and kube-aws is concerned only with creating a well-behaving cluster on top of it.

Most of the recent PRs and issues raised have nothing to do with k8s operation, but with the AWS infrastructure it runs on. It clearly indicates that there is a desire for a simple tool where everything is described in one top-level yaml file. I am not against flexibility; I am against a project losing its focus.
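For reference, a minimal sketch of the "run into existing subnets" mode, assuming a kube-aws version that supports the `vpcId` and `routeTableId` options in `cluster.yaml` (all IDs and CIDRs below are placeholders):

```yaml
# Sketch: deploy into pre-existing AWS networking instead of letting
# kube-aws create it. All values below are placeholders.
vpcId: vpc-0123456789abcdef0        # existing VPC, created by other means
routeTableId: rtb-0123456789abcdef0 # existing route table (e.g. routing via a NAT)
vpcCIDR: "10.0.0.0/16"
instanceCIDR: "10.0.1.0/24"         # subnet for the cluster instances
mapPublicIPs: false                 # keep nodes on private addresses
```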
@redbaron IMHO you are a bit too strict about the scope. In fact, if no customization were allowed, kube-aws would be a bit useless for me. It renders a private ELB, which I can't make public afterwards, so I need to hack into the CF template. Following your argumentation, why bother with a tool at all? Simply committing some demo CF templates to GitHub could prove that k8s runs on AWS as well.

And why couldn't kube-aws spin up a production-grade cluster? Is kube-aws really demoted to setting up a demo cluster only? I agree this tool should not encapsulate other AWS services that have nothing or little to do with K8s, but the stuff it does create should be configurable. That is what my CR is about after all, which somehow triggered this meta discussion ;)
This is a difficult subject. The aim is to provide a user-customizable framework for deploying a production-ready platform, without overwhelming new users with thousands of knobs to fiddle with. Maintenance and flexibility suffer from having too many config options, too.
That's exactly why the steps "generate template" and "deploy cluster" are separate - to allow you to do this.
Agreed here as well.
This is at times necessary. Scope and focus are important! We don't want to burn out the one maintainer who somehow keeps finding time for this project. The aim is to make the rendered outputs as customizable as possible, and as far as I can tell, your specific use case can currently be solved by editing the rendered stack template before deploying. Discussing code makes things a lot easier though ;-)
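As an illustration, the render-then-edit workflow might look roughly like this (a sketch; the exact subcommands and file names vary between kube-aws versions):

```sh
# Sketch of a render-then-edit workflow; details vary by kube-aws version.
kube-aws render            # generate credentials and stack-template.json

# Hand-edit the generated CloudFormation template, e.g. to make the
# controller ELB public or to add extra security group ingress rules.
vim stack-template.json

kube-aws validate          # sanity-check the edited template
kube-aws up                # create the CloudFormation stack
```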
OK, clear. I don't want to put down the maintainer. The tool is doing a great job; that is why I like to use it. Good work has been done here. I need to accept that the YAML file isn't supposed to be the single source of truth and is disposable; the rendered CF template is leading. I just hate to commit generated code...
Hi @PepijnK I just commit the patches to the generated code. At any given time I may or may not need patches, due to bugs I'm trying to fix or deployment choices that I can't make in `cluster.yaml`.
Hi,

We run Kubernetes inside private subnets. The cluster YAML is configured with `mapPublicIPs: false` to completely seal off the cluster from external access (== more secure). Outbound Internet access is done through a NAT, as documented. Admin (mainly SSH) is done with the help of a VPN server sitting in the same VPC as the Kubernetes subnets. To allow a service under development, running on my local machine, to talk to services on the development cluster, I had to add a rule to the worker SG allowing inbound traffic from the VPC CIDR. This is a manual post-kube-aws step, so I thought maybe it's a good idea to put this in the `cluster.yaml` config.

Secondly, when `mapPublicIPs: false` is configured, the controller LoadBalancer is provisioned as internal. This makes it hard to run `kubectl` commands from outside the VPN, for example to do CI/CD. I wonder if there could be a config option to switch this to public.

So 2 options are requested (a sketch of what they could look like follows below):

1. A config property to add a worker SG rule allowing inbound traffic from the VPC CIDR (or to attach an extra security group to the workers).
2. A config property to make the controller LoadBalancer public even when `mapPublicIPs: false` is set.
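A hypothetical sketch of what such `cluster.yaml` properties could look like. The key names below are illustrative only; they are the feature being requested, not existing kube-aws options:

```yaml
# Hypothetical config properties for a private-network setup.
# These keys illustrate the request; they do not (yet) exist in kube-aws.
mapPublicIPs: false                   # nodes stay on private addresses
workerAllowVPCCIDRIngress: true       # open the worker SG to the VPC CIDR
controllerLoadBalancerPrivate: false  # provision a public controller ELB
                                      # even when mapPublicIPs is false
```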