Commit

No changes, just a straight copy 0.4->0.5.

Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>

Showing 34 changed files with 2,440 additions and 0 deletions.
124 changes: 124 additions & 0 deletions
website/content/docs/v0.5/Getting Started/create-workload.md

@@ -0,0 +1,124 @@
---
description: "Create a Workload Cluster"
weight: 8
title: "Create a Workload Cluster"
---

Once created and accepted, you should see the servers that make up your ServerClasses appear as "available":

```bash
$ kubectl get serverclass
NAME   AVAILABLE                                    IN USE
any    ["00000000-0000-0000-0000-d05099d33360"]     []
```

## Generate Cluster Manifests

We are now ready to generate the configuration manifest templates for our first workload cluster.

There are several configuration parameters that should be set in order for the templating to work properly:

- `CONTROL_PLANE_ENDPOINT`: The endpoint used for the Kubernetes API server (e.g. `https://1.2.3.4:6443`).
  This is the equivalent of the `endpoint` you would specify in `talosctl gen config`.
  There are a variety of ways to configure a control plane endpoint.
  Some common ways for an HA setup are to use DNS, a load balancer, or BGP.
  A simpler method is to use the IP of a single node.
  This has the disadvantage of being a single point of failure, but it can be a simple way to get running.
- `CONTROL_PLANE_SERVERCLASS`: The server class to use for control plane nodes.
- `WORKER_SERVERCLASS`: The server class to use for worker nodes.
- `KUBERNETES_VERSION`: The version of Kubernetes to deploy (e.g. `v1.21.1`).
- `CONTROL_PLANE_PORT`: The port used for the Kubernetes API server (usually port 6443).

For instance:

```bash
export CONTROL_PLANE_SERVERCLASS=any
export WORKER_SERVERCLASS=any
export TALOS_VERSION=v0.13.0
export KUBERNETES_VERSION=v1.22.2
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_ENDPOINT=1.2.3.4

clusterctl config cluster cluster-0 -i sidero > cluster-0.yaml
```

Take a look at this new `cluster-0.yaml` manifest and make any changes as you see fit.
Feel free to adjust the `replicas` field of the `TalosControlPlane` and `MachineDeployment` objects to match the number of machines you want in your controlplane and worker sets, respectively.
The `MachineDeployment` (worker) count is allowed to be 0.

Of course, these may also be scaled up or down _after_ they have been created, as in the sketch below.
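For example, here is a minimal sketch of scaling after creation; the object names `cluster-0-workers` and `cluster-0-cp` are assumptions derived from the `cluster-0` cluster name, so check `cluster-0.yaml` (or `kubectl get machinedeployments,taloscontrolplanes`) for the real names:

```bash
# Scale the worker MachineDeployment to 3 replicas.
# "cluster-0-workers" is an assumed name; verify it in your manifest first.
kubectl --context=sidero-demo scale machinedeployment cluster-0-workers --replicas=3

# Control plane nodes are managed through the TalosControlPlane object; this
# works only if the CRD exposes the scale subresource (assumed here).
kubectl --context=sidero-demo scale taloscontrolplane cluster-0-cp --replicas=3
```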
## Create the Cluster

When you are satisfied with your configuration, go ahead and apply it to Sidero:

```bash
kubectl apply -f cluster-0.yaml
```

At this point, Sidero will allocate Servers according to the requests in the cluster manifest.
Once allocated, each of those machines will be installed with Talos, given their configuration, and form a cluster.

You can watch the progress of the Servers being selected:

```bash
watch kubectl --context=sidero-demo \
  get servers,machines,clusters
```

First, you should see the Cluster created in the `Provisioning` phase.
Once the Cluster is `Provisioned`, a Machine will be created in the `Provisioning` phase.

![machine provisioning](./images/sidero-cluster-start.png)

During the `Provisioning` phase, a Server will become allocated, the hardware will be powered up, Talos will be installed onto it, and it will be rebooted into Talos.
Depending on the hardware involved, this may take several minutes.

Eventually, the Machine should reach the `Running` phase.

![machine_running](./images/sidero-cluster-up.png)

The initial controlplane Machine will always be started first.
Any additional nodes will be started after that and will join the cluster when they are ready.
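For a more detailed, condition-oriented view while you wait, recent releases of `clusterctl` can render the cluster's readiness tree; this sketch assumes your current `kubectl` context is the Sidero management cluster:

```bash
# Point kubectl at the management cluster, then render the condition tree.
kubectl config use-context sidero-demo
clusterctl describe cluster cluster-0
```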
## Retrieve the Talosconfig

In order to interact with the new machines (outside of Kubernetes), you will need to obtain the `talosctl` client configuration, or `talosconfig`.
You can do this by retrieving the resource of the same type from the Sidero management cluster:

```bash
kubectl --context=sidero-demo \
  get talosconfig \
  -l cluster.x-k8s.io/cluster-name=cluster-0 \
  -o jsonpath='{.items[0].status.talosConfig}' \
  > cluster-0-talosconfig.yaml
```
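As a quick smoke test of the retrieved `talosconfig`, you can query one of the new nodes directly; the address below is only a placeholder, so substitute the IP of one of your provisioned machines:

```bash
# 192.168.1.201 is a placeholder; substitute the IP of one of your new nodes.
talosctl --talosconfig cluster-0-talosconfig.yaml \
  -e 192.168.1.201 -n 192.168.1.201 version
```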
## Retrieve the Kubeconfig

With the talosconfig obtained, the workload cluster's kubeconfig can be retrieved in the normal Talos way:

```bash
talosctl --talosconfig cluster-0-talosconfig.yaml kubeconfig
```

## Check Access

Now, you should have two clusters available: your management cluster (`sidero-demo`) and your workload cluster (`cluster-0`).

```bash
kubectl --context=sidero-demo get nodes
kubectl --context=cluster-0 get nodes
```
36 changes: 36 additions & 0 deletions
website/content/docs/v0.5/Getting Started/expose-services.md

@@ -0,0 +1,36 @@
---
description: "A guide for bootstrapping Sidero management plane"
weight: 6
title: "Expose Sidero Services"
---

> If you built your cluster as specified in the [Prerequisite: Kubernetes] section in this tutorial, your services are already exposed and you can skip this section.

There are two external Services which Sidero serves and which must be made reachable by the servers which it will be driving.

For most servers, TFTP (port 69/udp) will be needed.
This is used for PXE booting, both BIOS and UEFI.
Because TFTP is a primitive UDP protocol, many load balancers do not support it.
Instead, solutions such as [MetalLB](https://metallb.universe.tf) may be used to expose TFTP over a known IP address.
For servers which support UEFI HTTP Network Boot, TFTP need not be used.

The kernel, initrd, and all configuration assets are served from the HTTP service (port 8081/tcp).
It is needed for all servers, but since it is HTTP-based, it can be easily proxied, load balanced, or run through an ingress controller.
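If you are not running Sidero with `hostNetwork: true`, a rough sketch of exposing both services through MetalLB might look like the following; the `sidero-system` namespace, the `control-plane: sidero-controller-manager` pod label, the shared-IP annotation, and the `192.168.1.150` address are all assumptions to adjust for your installation (the address must match `SIDERO_CONTROLLER_MANAGER_API_ENDPOINT`, as noted below):

```bash
# Hypothetical LoadBalancer Services for the Sidero TFTP and HTTP endpoints.
# Verify the namespace and pod labels first, e.g. with
# `kubectl -n sidero-system get pods --show-labels`.
cat <<'EOF' | kubectl --context=sidero-demo apply -f -
apiVersion: v1
kind: Service
metadata:
  name: sidero-tftp
  namespace: sidero-system
  annotations:
    metallb.universe.tf/allow-shared-ip: sidero
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.150
  selector:
    control-plane: sidero-controller-manager
  ports:
    - name: tftp
      protocol: UDP
      port: 69
      targetPort: 69
---
apiVersion: v1
kind: Service
metadata:
  name: sidero-http
  namespace: sidero-system
  annotations:
    metallb.universe.tf/allow-shared-ip: sidero
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.150
  selector:
    control-plane: sidero-controller-manager
  ports:
    - name: http
      protocol: TCP
      port: 8081
      targetPort: 8081
EOF
```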
The main thing to keep in mind is that the services **MUST** match the IP or hostname specified by the `SIDERO_CONTROLLER_MANAGER_API_ENDPOINT` environment variable (or configuration parameter) when you installed Sidero.

It is a good idea to verify that the services are exposed as you think they should be.

```bash
$ curl -I http://192.168.1.150:8081/tftp/ipxe.efi
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1020416
Content-Type: application/octet-stream
```
Binary file added: BIN +72.7 KB
website/content/docs/v0.5/Getting Started/images/sidero-cluster-start.png

Binary file added: BIN +71.9 KB
website/content/docs/v0.5/Getting Started/images/sidero-cluster-up.png
73 changes: 73 additions & 0 deletions
website/content/docs/v0.5/Getting Started/import-machines.md

@@ -0,0 +1,73 @@
---
description: "A guide for bootstrapping Sidero management plane"
weight: 7
title: "Import Workload Machines"
---

At this point, any servers on the same network as Sidero should network boot from Sidero.
To register a server with Sidero, simply turn it on and Sidero will do the rest.
Once the registration is complete, you should see the servers registered with `kubectl get servers`:

```bash
$ kubectl get servers -o wide
NAME                                   HOSTNAME        ACCEPTED   ALLOCATED   CLEAN
00000000-0000-0000-0000-d05099d33360   192.168.1.201   false      false       false
```

## Accept the Servers

Note in the output above that the newly registered servers are not `accepted`.
In order for a server to be eligible for consideration, it _must_ be marked as `accepted`.
Before a `Server` is accepted, no write action will be performed against it.
This default is for safety (don't accidentally delete something just because it was plugged in) and security (make sure you know the machine before it is given credentials to communicate).

> Note: if you are running in a safe environment, you can configure Sidero to
> automatically accept new machines.

For more information on server acceptance, see the [server docs](../../resource-configuration/servers/#server-acceptance).
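A minimal sketch of accepting the example server by hand; this assumes acceptance is driven by the Server resource's `spec.accepted` field, so confirm the exact mechanism in the server docs linked above:

```bash
# Mark the example server as accepted; substitute your server's UUID.
kubectl patch server 00000000-0000-0000-0000-d05099d33360 \
  --type merge -p '{"spec": {"accepted": true}}'
```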
## Create ServerClasses

By default, Sidero comes with a single ServerClass `any` which matches any (accepted) server.
This is sufficient for this demo, but you may wish to have more flexibility by defining your own ServerClasses.

ServerClasses allow you to group machines which are sufficiently similar to allow for unnamed allocation.
This is analogous to cloud providers using such classes as `m3.large` or `c2.small`, but the names are free-form and only need to make sense to you.

For more information on ServerClasses, see the [ServerClass docs](../../resource-configuration/serverclasses/).
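For illustration only, here is a hypothetical ServerClass that matches servers from a particular manufacturer; the qualifier fields are assumptions based on the `metal.sidero.dev/v1alpha1` API, so confirm the schema in the ServerClass docs linked above before using it:

```bash
# Hypothetical ServerClass grouping servers by system manufacturer.
cat <<'EOF' | kubectl --context=sidero-demo apply -f -
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: example-class
spec:
  qualifiers:
    systemInformation:
      - manufacturer: Dell Inc.
EOF
```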
## Hardware Differences

In baremetal systems, there are commonly certain small features and configurations which are unique to the hardware.
In many cases, such small variations may not require special configurations, but others do.

If hardware-specific differences do mandate configuration changes, we need a way to keep those changes local to the hardware specification so that at the higher level, a Server is just a Server (or a server in a ServerClass is just a Server like all the others in that Class).

The most common variations seem to be the installation disk and the console serial port.

Some machines have NVMe drives, which show up as something like `/dev/nvme0n1`.
Others may be SATA or SCSI, which show up as something like `/dev/sda`.
Some machines use `/dev/ttyS0` for the serial console; others `/dev/ttyS1`.

Configuration patches can be applied to either Servers or ServerClasses, and those patches will be applied to the final machine configuration for those nodes without having to know anything about those nodes at the allocation level.
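As a hedged example, a Server-level patch that overrides the install disk and serial console might look like this; the JSON-patch paths and values are illustrative, so verify them against the docs linked below:

```bash
# Write a hypothetical patch that sets the install disk and serial console,
# then merge it into the example Server resource.
cat > server-patch.yaml <<'EOF'
spec:
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/nvme0n1
    - op: add
      path: /machine/install/extraKernelArgs
      value:
        - console=ttyS1
EOF

kubectl patch server 00000000-0000-0000-0000-d05099d33360 \
  --type merge --patch-file server-patch.yaml
```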
For examples of install disk patching, see the [Installation Disk doc](../../resource-configuration/servers/#installation-disk).

For more information about patching in general, see the [Patching Guide](../../guides/patching).
@@ -0,0 +1,61 @@
---
description: "Overview"
weight: 1
title: "Overview"
---

This tutorial will walk you through a complete Sidero setup and the formation, scaling, and destruction of a workload cluster.

To complete this tutorial, you will need a few things:

- ISC DHCP server.
  While any DHCP server will do, we will be presenting the configuration syntax for ISC DHCP.
  This is the standard DHCP server available on most Linux distributions (NOT dnsmasq) as well as on the Ubiquiti EdgeRouter line of products.
- Machine or Virtual Machine on which to run Sidero itself.
  The requirements for this machine are very low, but it does need to be x86 for now, and it should have at least 4GB of RAM.
- Machines on which to run Kubernetes clusters.
  These have the same minimum specifications as the Sidero machine.
- Workstation on which `talosctl`, `kubectl`, and `clusterctl` can be run.

## Steps

1. Prerequisite: CLI tools
1. Prerequisite: DHCP server
1. Prerequisite: Kubernetes
1. Install Sidero
1. Expose services
1. Import workload machines
1. Create a workload cluster
1. Scale the workload cluster
1. Destroy the workload cluster
1. Optional: Pivot management cluster

## Useful Terms

**ClusterAPI** or **CAPI** is the common system for managing Kubernetes clusters in a declarative fashion.

**Management Cluster** is the cluster on which Sidero itself runs.
It is generally a special-purpose Kubernetes cluster whose sole responsibility is maintaining the CRD database of Sidero and providing the services necessary to manage your workload Kubernetes clusters.

**Sidero** is the ClusterAPI-powered system which manages baremetal infrastructure for Kubernetes.

**Talos** is the Kubernetes-focused Linux operating system built by the same people who bring you Sidero.
It is a very small, entirely API-driven OS which is meant to provide a reliable and self-maintaining base on which Kubernetes clusters may run.
More information about Talos can be found at [https://talos.dev](https://talos.dev).

**Workload Cluster** is a cluster, managed by Sidero, on which your Kubernetes workloads may be run.
The workload clusters are where you run your own applications and infrastructure.
Sidero creates them from your available resources, maintains them over time as your needs and resources change, and removes them whenever it is told to do so.
44 changes: 44 additions & 0 deletions
website/content/docs/v0.5/Getting Started/install-clusterapi.md

@@ -0,0 +1,44 @@
---
description: "Install Sidero"
weight: 5
title: "Install Sidero"
---

Sidero is included as a default infrastructure provider in `clusterctl`, so the installation of both Sidero and the Cluster API (CAPI) components is as simple as using the `clusterctl` tool.

> Note: Because Cluster API upgrades are _stateless_, it is important to keep all Sidero
> configuration for reuse during upgrades.

Sidero has a number of configuration options which should be supplied at install time, kept, and reused for upgrades.
These can also be specified in the `clusterctl` configuration file (`$HOME/.cluster-api/clusterctl.yaml`).
You can reference the `clusterctl` [docs](https://cluster-api.sigs.k8s.io/clusterctl/configuration.html#clusterctl-configuration-file) for more information on this.
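For reference, a minimal sketch of persisting the two options used in this guide in that file (the values mirror the environment variables set below):

```bash
# Keep the Sidero install-time options in clusterctl's config file so that
# future upgrades reuse them.
mkdir -p "$HOME/.cluster-api"
cat >> "$HOME/.cluster-api/clusterctl.yaml" <<'EOF'
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK: "true"
SIDERO_CONTROLLER_MANAGER_API_ENDPOINT: "192.168.1.150"
EOF
```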
For our purposes, we will use environment variables for our configuration options.

```bash
export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=192.168.1.150

clusterctl init -b talos -c talos -i sidero
```

First, we are telling Sidero to use `hostNetwork: true` so that it binds its ports directly to the host, rather than being available only from inside the cluster.
There are many ways of exposing the services, but this is the simplest path for the single-node management cluster.
When you scale the management cluster, you will need to use an alternative method, such as an external load balancer or something like [MetalLB](https://metallb.universe.tf).

The `192.168.1.150` IP address is the IP address or DNS hostname as seen from the workload clusters.
In our case, this should be the main IP address of your Docker workstation.
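If you are not sure which address that is, one generic (non-Sidero-specific) way to find the host's primary outbound IP on Linux:

```bash
# Print the source address this host would use to reach an external IP.
ip route get 1.1.1.1 | awk '{ for (i = 1; i <= NF; i++) if ($i == "src") print $(i + 1) }'
```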
@@ -0,0 +1,43 @@
---
description: "A guide for bootstrapping Sidero management plane"
weight: 11
title: "Optional: Pivot management cluster"
---

Having the Sidero cluster running inside a Docker container is not the most robust place for it, but it did make for an expedient start.

Conveniently, you can create a Kubernetes cluster in Sidero and then _pivot_ the management plane over to it.

Start by creating a workload cluster as you have already done.
In this example, this new cluster is called `management`.
After the new cluster is available, install Sidero onto it as we did before, making sure to set all the environment variables or configuration parameters for the _new_ management cluster first.

```bash
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=sidero.mydomain.com

clusterctl init \
  --kubeconfig-context=management \
  -i sidero -b talos -c talos
```
Now, you can move the database from `sidero-demo` to `management`:

```bash
clusterctl move \
  --kubeconfig-context=sidero-demo \
  --to-kubeconfig-context=management
```

## Delete the Old Docker Management Cluster

If you created your `sidero-demo` cluster using Docker as described in this tutorial, you can now remove it:

```bash
talosctl cluster destroy --name sidero-demo
```