- A complete package built using shell, Terraform and Ansible to automate the creation of a complete Kubernetes cluster on a Proxmox installation.
- Most values default to Proxmox's standard installation settings; the comments in the files should help you change any you need to.
- Has been tested with Proxmox 7.x and 8.x.
- The Terraform assumes a 2-node setup, but if you have more nodes you can easily update the Terraform and Ansible to suit your needs.
- SSH into each of your Proxmox nodes as root and run the command below to create a VM template on each node.
- Make sure to replace `<vm-id>` with a valid and recognizable number like 8888 or 9999.
- Run `wget -O template.sh https://raw.githubusercontent.com/ash0ne/proxmox-kubernetes/main/prepare-vm-template.sh && . template.sh --vmid <vm-id>`
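  Once the script finishes, you can sanity-check that the template exists on that node. `qm list` is the standard Proxmox CLI for listing VMs; the ID below is just the placeholder you chose above.

  ```sh
  # List VMs/templates on this Proxmox node and look for the template you just created
  qm list | grep <vm-id>
  ```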
- Click on Datacenter -> Permissions -> API Tokens
- Click on 'Add' and create a token for one of your admin users. Ideally this should be an admin user in the `pve` realm, but any admin user works just fine.
- Lastly, do not forget to add the permission for the API token by going to Permissions -> Add. This needs to be done even if the user associated with the token already has permissions.
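  If you prefer the CLI over the UI, something along these lines should achieve the same on a recent Proxmox version; this is only a sketch, and the user, token name, path and role below are examples — adjust them to your setup.

  ```sh
  # Create an API token for an existing admin user (the token name "automation" is an example)
  pveum user token add admin@pve automation
  # Tokens have their own permissions (unless privilege separation is disabled),
  # so grant the token access as well; adjust the path/role to what you actually need
  pveum acl modify / --tokens 'admin@pve!automation' --roles Administrator
  ```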
- Update everything to the right values in `terraform.tfvars`. A sample tfvars file is provided at `./terraform/terraform.tfvars`.
- From the `./terraform` directory, run `terraform plan` and then `terraform apply`.
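  If this is a fresh clone, the provider plugins will not be present yet; the standard Terraform workflow (an assumption here, since the steps above only mention plan and apply) looks like this:

  ```sh
  cd ./terraform
  terraform init    # downloads the Proxmox provider and initialises the working directory
  terraform plan    # review the VMs and settings that will be created
  terraform apply   # provision the VMs
  ```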
- Run `ansible -i ./ansible/inventory/hosts all -m ping -u ubuntu --key-file <private_ssh_key>` to confirm Ansible can reach all the VMs. This SSH key should be the private key matching the SSH public key added in `terraform.tfvars`.
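  If connectivity is fine, each host should answer with the ping module's usual response, roughly like the snippet below (host names will match your inventory):

  ```
  <host> | SUCCESS => {
      "changed": false,
      "ping": "pong"
  }
  ```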
- Apply the common playbook first by running `ansible-playbook -i ./ansible/inventory/hosts --key-file <private_ssh_key> ./ansible/roles/common/tasks/main.yaml`
- Apply the main-node playbook to initialise the k8s master node by running `ansible-playbook -i ./ansible/inventory/hosts --key-file <private_ssh_key> ./ansible/roles/main-node/tasks/main.yaml`
- At this point, SSH into the master node by running `ssh ubuntu@<main-node-ip> -i <private_ssh_key>` and install the cluster network by running `kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml`
- You should now have your core `kube-system` pods running, and they should all show a `Running` status when you run `kubectl get pod -A`
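  A quick way to confirm is to watch the namespace until everything is up; this uses only standard kubectl flags, nothing specific to this repo (press Ctrl-C to stop watching):

  ```sh
  # All kube-system pods (calico, coredns, and the control-plane components) should reach Running
  kubectl get pod -A -w
  ```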
- Join the agent nodes by running `ansible-playbook -i ./ansible/inventory/hosts --key-file <private_ssh_key> ./ansible/roles/join-nodes/tasks/main.yaml --extra-vars "main_node_ip=<ip_of_the_main_node>"`
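  Once the playbook completes, a quick check from the main node confirms the agents joined; again, `kubectl get nodes` is standard kubectl and not specific to this repo:

  ```sh
  # Run on the main node: every node should eventually show a Ready status
  kubectl get nodes -o wide
  ```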