How to set up a k3s cluster with disabled external interfaces #252
Replies: 8 comments 20 replies
-
To clarify, the tool installs k3s, not rke2 :)
-
Do note that the interface need not be
-
Try to access the k8s API via the private IP of the LB. This should work. Next you can disable public access of the Hetzner LB.
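A minimal sketch of that, assuming the load balancer is named test-cluster-api and has the private IP 10.0.0.10 (both placeholders), and that you can reach the private network, e.g. via a VPN or a jump host:

```sh
# Talk to the Kubernetes API over the load balancer's private IP
# (placeholder address; the API server certificate may not list the private IP
# as a SAN, in which case the TLS settings need adjusting).
kubectl --server https://10.0.0.10:6443 get nodes

# Once that works, turn off public access on the Hetzner load balancer
# (hcloud CLI; the LB name is a placeholder).
hcloud load-balancer disable-public-interface test-cluster-api
```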
-
Since the pfSense host is a potential single point of failure, I guess this setup is not ideal for clusters (applications) that need to contact a lot of external APIs, am I right? Would it be an alternative to block all incoming connections on the nodes using the Hetzner firewall that's available on the cloud servers, and let the nodes talk to external hosts directly?
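For what it's worth, a rough sketch of that firewall idea with the hcloud CLI (all names and the admin IP are placeholders; as far as I know, Hetzner Cloud Firewalls only filter traffic on the public interface, and once applied they drop inbound traffic that matches no rule while leaving outbound traffic open):

```sh
# Firewall that only allows SSH and the Kubernetes API from a single admin IP;
# all other inbound traffic is dropped, outbound traffic from the nodes stays open.
hcloud firewall create --name k3s-nodes
hcloud firewall add-rule k3s-nodes --direction in --protocol tcp --port 22   --source-ips 203.0.113.10/32 --description "ssh from admin"
hcloud firewall add-rule k3s-nodes --direction in --protocol tcp --port 6443 --source-ips 203.0.113.10/32 --description "kube api from admin"

# Attach it to a node (repeat per server or use a label selector).
hcloud firewall apply-to-resource k3s-nodes --type server --server test-cluster-master1
```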
-
Shameless plug: see also #379, which has some additional info on the topic.
-
Can anyone help with testing v2 rc1? See #385.
-
Hey, I'm currently trying to boot a cluster in the private network with version 2.0.8, but for some reason the cloud-init step takes up to 7 minutes, and the `create` command cancels before that. If I use the same config with a public IPv4 address, the cloud-init step runs through within a few seconds. Am I missing something? Is my configuration wrong? Is it possible to set the timeout via a variable?

My config:

```yaml
cluster_name: test-cluster
kubeconfig_path: "~/.kube/config"
k3s_version: v1.26.4+k3s1

networking:
  ssh:
    port: 22
    use_agent: false # set to true if your key has a passphrase
    public_key_path: "~/.ssh/id_rsa.pub"
    private_key_path: "~/.ssh/id_rsa"
  allowed_networks:
    ssh:
      - 0.0.0.0/0
    api:
      - 0.0.0.0/0
  public_network:
    ipv4: false
    ipv6: false
  private_network:
    enabled: true
    subnet: 10.12.8.0/22
    existing_network_name: "my-network"

disable_flannel: false # set to true if you want to install a different CNI
schedule_workloads_on_masters: false

masters_pool:
  instance_type: cpx31
  instance_count: 1
  location: nbg1

worker_node_pools:
  - name: jira-node
    instance_type: ccx33
    instance_count: 1
    location: nbg1

post_create_commands:
  - export IP=$(ip addr show enp7s0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
  - >
    printf "network## {config## disabled}" |
    sed 's/##/:/g' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
  - >
    printf "network##\n version## 2\n renderer## networkd\n ethernets##\n enp7s0##\n addresses##\n - $IP/32\n routes##\n - to## default\n via## xx.xx.xx.xx\n - to## xx.xx.xx.xx/32\n scope## link\n - to## xx.xx.xx.xx/32\n scope## link\n - to## xx.xx.xx.xx/8\n via## xx.xx.xx.xx\n on-link## true\n - to## xx.xx.xx.xx/12\n via## xx.xx.xx.xx\n on-link## true\n - to## xx.xx.xx.xx/16\n via## xx.xx.xx.xx\n on-link## true\n" |
    sed 's/##/:/g' > /etc/netplan/50-cloud-init.yaml
  - netplan generate
  - netplan apply
```

And the resulting log:
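For readability, the second printf | sed pipeline above produces roughly the following netplan file (indentation normalized, xx.xx placeholders kept as in the commands):

```yaml
# /etc/netplan/50-cloud-init.yaml as written by the printf | sed pipeline above
network:
  version: 2
  renderer: networkd
  ethernets:
    enp7s0:
      addresses:
        - $IP/32
      routes:
        - to: default
          via: xx.xx.xx.xx
        - to: xx.xx.xx.xx/32
          scope: link
        - to: xx.xx.xx.xx/32
          scope: link
        - to: xx.xx.xx.xx/8
          via: xx.xx.xx.xx
          on-link: true
        - to: xx.xx.xx.xx/12
          via: xx.xx.xx.xx
          on-link: true
        - to: xx.xx.xx.xx/16
          via: xx.xx.xx.xx
          on-link: true
```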
-
Sharing also my post_create_commands that worked for me (using networkd instead of netplan) and an instance as a NAT gateway in the same private network.
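A rough sketch of what a networkd-based variant of such post_create_commands could look like; the interface name enp7s0 and the gateway address 10.0.0.1 are assumptions (not the original values), and if raw ':' characters upset the config parser, the ##/sed masking from the earlier comment can be applied here as well:

```yaml
post_create_commands:
  # Keep cloud-init from regenerating its own network config on the next boot
  - |
    cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
    network: {config: disabled}
    EOF
  # Let systemd-networkd manage the private interface and send all traffic
  # through the NAT gateway in the same private network (assumed address)
  - |
    cat > /etc/systemd/network/10-enp7s0.network <<'EOF'
    [Match]
    Name=enp7s0

    [Network]
    DHCP=yes

    [Route]
    Gateway=10.0.0.1
    GatewayOnLink=yes
    EOF
  - systemctl enable --now systemd-networkd
  - networkctl reload
```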
-
Some pre-conditions: an existing_network that is already set up.

Example config for the test cluster:

Some explanations: the post_create_commands are needed because cloud-init will otherwise create its own network configuration on VM creation. If you write ":" directly, the YAML parser crashes, so I use ## and replace it at the end with sed. I didn't find anything else to mask the ":" (maybe you found something ;-)).

Deployment graphic: a small graphic to get an overview (the network differs from the example config):
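The masking trick described above, reduced to a minimal sketch (the target path is the same example file as in the earlier comment):

```sh
# Write ':' as '##' so the surrounding YAML never contains a raw colon,
# then restore the colons with sed when the file lands on the node.
printf "network## {config## disabled}" | sed 's/##/:/g' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
```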