This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Feature: Add config properties for private-network mode (e.g. in case of bastion or VPN setup). #277

Closed
PepijnK opened this issue Jan 23, 2017 · 9 comments

Comments

@PepijnK

PepijnK commented Jan 23, 2017

Hi,

We run Kubernetes inside private subnets. The cluster YAML is configured with mapPublicIPs: false to completely seal off the cluster from external access (== more secure). Outbound Internet access goes through a NAT, as documented. Administration (mainly SSH) is done via a VPN server sitting in the same VPC as the Kubernetes subnets.

To allow a service under development on my local machine to talk to services on the development cluster, I had to add a rule to the worker SG allowing inbound traffic from the VPC CIDR. This is a manual post-kube-aws step, so I thought it might be a good idea to put it in the cluster.yaml config.

Secondly, when mapPublicIPs: false is configured, the controller LoadBalancer is provisioned as internal. This makes it hard to run kubectl commands from outside the VPN, for example for CI/CD. I wonder if there could be a config option to switch this to public.

So two options are requested (sketched below):

  • vpcAccessToWorkerNodes=[true/false] => adds an inbound rule allowing the VPC CIDR to the worker SG.
  • controllerLoadBalancer=[public/private] => switches the controller LB between internal and internet-facing.
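
A rough sketch of how these two proposed options might look in cluster.yaml. Note these keys are the proposal from this issue, not existing kube-aws settings; the names and values are illustrative only:

# Hypothetical additions to cluster.yaml (not implemented in kube-aws)
vpcAccessToWorkerNodes: true      # add an inbound rule allowing the VPC CIDR to the worker SG
controllerLoadBalancer: public    # provision the controller (API) ELB as internet-facing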
@whereisaaron
Contributor

Hi @PepijnK if you create an SG when you set up your VPC, e.g. one that allows inbound traffic from the VPC CIDR, you can automatically attach that SG to workers via workerSecurityGroupIds in cluster.yaml. That is what I use to achieve what you describe.

# Existing "glue" security groups attached to worker nodes which are typically used to allow access from worker nodes to services running on an existing infrastructure
workerSecurityGroupIds:
  - sg-12345678
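
For illustration, a minimal CloudFormation-style sketch of such a "glue" security group, created alongside the VPC and then referenced via workerSecurityGroupIds. The resource name, VPC reference and CIDR below are placeholders, not values from this issue:

# Hypothetical "glue" SG allowing inbound traffic from the VPC CIDR
VpcInternalAccessSG:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow inbound traffic from the VPC CIDR to worker nodes
    VpcId: !Ref Vpc            # placeholder reference to the existing VPC
    SecurityGroupIngress:
      - IpProtocol: "-1"       # all protocols/ports
        CidrIp: 10.0.0.0/16    # placeholder VPC CIDR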

For the second issue, I agree it would be nice to choose whether the API is public, separate from whether the worker nodes are public.

@redbaron
Contributor

I wonder when kube-aws turned into an AWS management tool? IMHO it should focus on kube and not on cloud infrastructure. Almost none of the latest PRs are about kube; they are about cloud management, which people try to get for free from a tool that just happens to deal a bit with subnets and security groups.

IMHO kube-aws should support just 2 modes:

  1. a basic "demo-only" mode where subnets, routing tables etc. are created so people can play around;
  2. a normal mode, where ASGs are created in existing subnets.

If there is demand to drive cloud infrastructure from some high-level YAML file, then it can be done in a spin-off project, not in kube-aws itself. But with that amount of variance, the file will quickly grow into a stack-template.yaml which can be directly submitted to CF :)

@PepijnK
Author

PepijnK commented Jan 26, 2017

@redbaron I don't agree with that in many ways:

  1. Kube-aws is a bridging tool for deploying K8s on AWS, so of course it should have parameters that make it possible to customize your cluster.
  2. You should also regard the YAML file as infrastructure as code. Is it the source of truth, or is the rendered CF stack? I'd say the YAML file... This means obvious post-install tweaks to the K8s management infrastructure should be avoided and ideally made configurable in the YAML file. Rendering a CF stack and then tweaking that would make kube-aws an init-only tool, which is IMO not the idea behind it. Editing the YAML file is also far easier than editing the rendered CF-stack file.
  3. Regarding modes: a predefined set of modes would be the worst solution, as it severely restricts the ability to customize the K8s infrastructure to your needs. These modes are in fact already there: for the demo use case, simply keep the properties at their sensible defaults.

I'm just referring to properties that are currently chosen arbitrarily anyway. Making them configurable would already make me happy.

@redbaron
Contributor

OK, let's discuss that.

Kube-aws is a bridging tool for deploying K8s on AWS, so of course it should have parameters that make it possible to customize your cluster.

Indeed kube-aws is deploying k8s on AWS, but it is not an AWS cloud management tool. It is unfortunately quickly becoming one, but from my POV this is the wrong course of development. kube-aws should deliver a quality kube setup: good RBAC defaults, common base-layer bits like log collection support, Prometheus monitoring, flexible overlay network backends, network and etcd multi-tenancy, PKI infrastructure integration, and many more.

What happens instead is that people assume that if kube-aws provisions some subnets, then it should be capable of managing their whole AWS infrastructure. It is not. It is just a matter of time before people ask for VPN gateway management, Direct Connect, CloudWatch alert setup, etc. The number of different setups is so vast that Amazon created CF to be able to describe them. kube-aws is not a CF replacement and shouldn't try to accommodate every possible customization in the cluster.yaml format; trying to do so will inevitably make cluster.yaml look like a CF template in YAML format.

In demo-only mode, kube-aws should be able to spin up a cluster in a new VPC with a basic ELB + ASG setup. The purpose of this mode is for people to play around with k8s, not to run production clusters.

The normal mode of operation should be "run in existing subnets", where the AWS infrastructure is created by other means and kube-aws is concerned only with creating a well-behaving cluster on top of it.

Most of the recent PRs and issues raised have nothing to do with k8s operation, but with the AWS infrastructure it runs on. That clearly indicates a desire for a simple tool where the whole infrastructure is described in a top-level cluster.yaml-like file. That's fine, but it should be moved to a sister project whose goal is to manage AWS infrastructure in a simplified way.

I am not against flexibility, I am against losing the focus of a project.

@PepijnK
Author

PepijnK commented Jan 26, 2017 via email

@pieterlange
Contributor

This is a difficult subject. The aim is to provide a user-customizable framework for deploying a production-ready platform without overwhelming new users with thousands of knobs to fiddle with. Maintenance and flexibility also suffer from having too many config options.

So I need to hack into the CF template.

That's exactly why the steps "generate template" and "deploy cluster" are separate - to allow you to do this.

Indeed kube-aws is deploying k8s on AWS, but it is not an AWS cloud management tool.

Agreed here as well.

IMHO you are a bit too strict about the scope.

This is at times necessary. Scope and focus are important! We don't want to burn out the one maintainer who somehow keeps finding time for this project. The aim is to make the rendered outputs as customizable as possible, and as far as I can tell, your specific use case can currently be solved with workerSecurityGroupIds and a little tweak to the CF template.

Discussing code makes things a lot easier though ;-)

@PepijnK
Author

PepijnK commented Jan 26, 2017

OK, clear. I don't want to put down the maintainer; the tool is doing a great job, which is why I like to use it. Good work has been done here. I need to accept that the YAML file isn't supposed to be the single source of truth and is disposable; the rendered CF template is leading. I just hate committing generated code...

@PepijnK PepijnK closed this as completed Jan 26, 2017
@whereisaaron
Contributor

Hi @PepijnK I just commit the patches to the generated code. At any given time I may or may not need patches, due to bugs I'm trying to fix, deployment choices that can't be made in cluster.yaml, or purely local customizations.

# Render the CloudFormation stack template and userdata
kube-aws render stack
# Apply local patches (if present) to the generated artifacts
[[ -r stack-template.patch ]] && patch stack-template.json stack-template.patch
[[ -r cloud-config-controller.patch ]] && patch userdata/cloud-config-controller cloud-config-controller.patch
[[ -r cloud-config-worker.patch ]] && patch userdata/cloud-config-worker cloud-config-worker.patch
# Validate the (patched) assets before deploying
kube-aws validate --s3-uri "$BUCKET_URL"

@neoandroid
Contributor

@PepijnK Could you try using the features introduced by #169? They're already merged into master and allow you to set up a cluster with all nodes in private subnets while keeping the ELB for the controller nodes on a public subnet.
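
For reference, a hedged sketch of the kind of cluster.yaml this enables. Apart from mapPublicIPs, the key names below are assumptions about the schema around #169; check the merged PR and the commented cluster.yaml for the exact options:

# Sketch only -- verify key names against the cluster.yaml shipped with your kube-aws version
mapPublicIPs: false                   # keep nodes on private IPs (private subnets)
routeTableId: rtb-xxxxxxxx            # assumed: route table of the existing private subnets (NAT route)
controllerLoadBalancerPrivate: false  # assumed: keep the controller ELB internet-facing on a public subnet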
