Status: This repo has been replaced by https://github.com/clayshek/homelab-monorepo
This (WIP) page describes a home lab environment for evaluation and testing of various technologies. Basic provisioning & configuration of both supporting infrastructure and additional products is documented here - mostly so I remember how I did stuff.
Goals
- A stable base platform of hypervisors & container hosts on physical hardware, on which further virtualized or containerized products can be easily deployed without impacting the base platform.
- Simplicity (as much as possible)
- Raspberry Pis always on, power-hungry servers powered on as needed - so any "critical" roles (dynamic DNS updater, etc) should reside on a Raspberry Pi.
- Totally separate lab env from home (don't want tinkering to impact "home" WiFi, DNS, storage, etc in any way).
- Codified & documented config leading to trivial re/deployments.
- Learning
Core Components
- 3-node Proxmox VE cluster for KVM-based virtual machines and LXC containers.
- 4-node Raspberry Pi K3s / Ubuntu Server cluster for ARM-compatible containerized workloads
- Lots of Ansible for automation of provisioning and configuration
Hardware
- Servers
- 5x Dell R610 1U rack servers.
- Each: 96 GB RAM, 2x 73 GB HDD (RAID-1 for OS), 4x 450 GB HDD (local data storage)
- Roles: 3x Proxmox VE hypervisors, 1x cold standby, 1x spare parts
- Raspberry Pis
- 4x Model 3B, 1x Model 3B+
- Each: 1 GB RAM, 1x 32 GB MicroSD
- Roles: 4x K3s cluster members, 1x standalone running Docker and serving as an Ansible control node, all running Ubuntu Server
- Switches, Routers, APs
- 1x Ubiquiti EdgeRouter X. Provides routing, firewall, DHCP, and DNS for the lab, as well as inbound VPN
- 1x Netgear JGS524E 24-port managed switch
- 1x Netgear 8-port unmanaged switch
- 1x Ubiquiti Unifi AP AC Pro
- Storage
- 1x Buffalo 500 GB NAS (backups, image storage, etc). Old and requires SMB v1; a target for replacement.
- Otherwise locally attached storage (the R610 RAID controller does not allow JBOD passthrough, which restricts the use of Ceph and other clustered storage technologies)
- Power
- 1x APC BX1500M 1500VA UPS
Network
- LAN: 192.168.2.0/24
- Gateway: 192.168.2.1
- DHCP: Range 192.168.2.150-.199, provided by the EdgeRouter X (settings sketched after this list)
- DNS Resolver (default): EdgeRouter X to upstream ISP router to OpenDNS
- Managed switch: currently no special config, but will likely implement VLANs in the future
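For reference, the EdgeRouter X side of the above roughly corresponds to this EdgeOS CLI sketch; the switch0 interface name is an assumption, and the values mirror the summary above rather than an actual config export:

```sh
configure
# LAN addressing on the internal switch interface (interface name is an assumption)
set interfaces switch switch0 address 192.168.2.1/24
# DHCP scope handing out the router as gateway and DNS
set service dhcp-server shared-network-name LAN subnet 192.168.2.0/24 default-router 192.168.2.1
set service dhcp-server shared-network-name LAN subnet 192.168.2.0/24 dns-server 192.168.2.1
set service dhcp-server shared-network-name LAN subnet 192.168.2.0/24 start 192.168.2.150 stop 192.168.2.199
# DNS forwarding upstream (ISP router -> OpenDNS)
set service dns forwarding listen-on switch0
commit
save
```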
DNS Zones
- layer8sys.com (Root zone. Authoritative DNS servers: Google DNS)
- int.layer8sys.com (Purpose: private IP space / internal resource access by FQDN. Authoritative DNS: Primary home router)
- ad.layer8sys.com (Purpose: Windows Active Directory. Authoritative DNS: AD domain controller VMs)
- lab.layer8sys.com (TBD)
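A quick way to sanity-check the split: internal zones should resolve via the lab/home routers, while the root zone resolves from public DNS. The hostname below is purely illustrative:

```sh
# Internal name, resolved via the lab router (hostname is illustrative)
dig +short nas01.int.layer8sys.com @192.168.2.1

# Root zone, answered by the public authoritative servers (Google DNS)
dig +short NS layer8sys.com
```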
Wireless
- Lab WiFi is provided by the Ubiquiti UniFi AP AC Pro above, kept separate from the home wireless network per the goals.
Provisioning & Configuration
Raspberry Pis are each configured with an Ansible playbook, pulled at OS install from another of my GitHub repos: https://github.com/clayshek/raspi-ubuntu-ansible
This requires flashing the SD card(s) with Ubuntu and copying the customizable cloud-init user-data file (included in the repo) to the boot partition before inserting the card into each Pi and powering on (see the sketch below). After a few minutes, based on the defined inventory role, provisioning is complete and the node is ready for any further config. The K3s cluster is provisioned with Rancher's Ansible playbook.
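A minimal sketch of the flash-and-seed step, assuming a Linux workstation; the SD card device name and image file are illustrative:

```sh
# Write the Ubuntu Server ARM image to the SD card (device name is illustrative)
xzcat ubuntu-20.04-preinstalled-server-arm64+raspi.img.xz | sudo dd of=/dev/sdX bs=4M status=progress
sync

# Mount the boot partition and drop in the customized cloud-init user-data
sudo mount /dev/sdX1 /mnt
sudo cp user-data /mnt/user-data
sudo umount /mnt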
Proxmox configuration requires installation of Proxmox VE on each node, followed by running the https://github.com/clayshek/ansible-proxmox-config Ansible playbook (after customization). Once complete, manually create the cluster on one node, join the other nodes to it, and configure cluster data storage specific to the implementation (sketched below).
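The manual cluster steps are the standard Proxmox ones; roughly (cluster name and IP are illustrative):

```sh
# On the first node: create the cluster
pvecm create homelab

# On each remaining node: join via the first node's IP
pvecm add 192.168.2.21

# Verify membership and quorum
pvecm status
```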
Monitoring
- Prometheus / Grafana
- UPS power status & consumption monitoring (one approach sketched after this list)
- ELK - Logzio?
- UptimeRobot for remote network monitoring
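For the UPS item, one plausible approach (an assumption, not settled tooling) is apcupsd on an always-on Pi with the BX1500M connected over USB; its status output could then be scraped into Prometheus:

```sh
# Install apcupsd on the Pi the UPS is plugged into
sudo apt install apcupsd

# Live status: line voltage, load %, battery charge, estimated runtime
apcaccess status
```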
VM Templates
VM deployments based on a template are much faster than running through a new install. The following repos use Ansible to create Proxmox template images (and handle OS / package updates) for my most frequently used VM operating systems. These templates are used for later infrastructure provisioning; cloning from a template is sketched after the list below.
- Windows Server 2019: https://github.com/clayshek/ans-pve-win-templ
- Ubuntu Server 20.04:
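With a template in place, spinning up a VM on a Proxmox node is a quick clone; the VM IDs, name, and storage below are illustrative:

```sh
# Full-clone template 9000 into a new VM (IDs, name, and storage are illustrative)
qm clone 9000 201 --name dc01 --full --storage local-lvm

# Adjust resources and boot it
qm set 201 --memory 4096 --cores 2
qm start 201
```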
Windows Server Lab
- 2x Active Directory Domain Controllers (Proxmox VMs)
- 4-node Microsoft Hyper-V Cluster (Proxmox VMs)
- System Center Virtual Machine Manager (Proxmox VM)
- Windows Admin Center (Proxmox VM)
The base VMs for the Windows Server lab are provisioned from the Server 2019 template above using https://github.com/clayshek/ans-pve-win-provision. Once online, role assignment and final configuration are done using https://github.com/clayshek/ansible-lab-config. A typical invocation is sketched below.
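The inventory and playbook file names in this sketch are illustrative, not the repos' actual layout:

```sh
# Provision base VMs from the Server 2019 template (file names are illustrative)
ansible-playbook -i inventory.yml provision.yml

# Then assign roles / finish configuration from the lab-config repo
ansible-playbook -i inventory.yml site.yml --limit windows_lab
```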
Other Services
- GitLab (Proxmox TurnKey Linux container)
- Caddy-based Lab Dashboard / Portal (K3s container)
- Google Domains dynamic DNS updater running on the Ras Pi K3s cluster to keep my dynamic home IP mapped to a custom FQDN, deployed as documented at https://github.com/clayshek/google-ddns-updater (the underlying update call is sketched below)
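Under the hood, the update boils down to a single call to the Google Domains dynamic DNS endpoint; the credentials and hostname below are placeholders:

```sh
# Google Domains dynamic DNS update (credentials and hostname are placeholders)
curl -s "https://USERNAME:PASSWORD@domains.google.com/nic/update?hostname=home.example.com&myip=$(curl -s https://ifconfig.me)"
```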
To Do
- Identify a better NAS storage solution, potentially with iSCSI, that could also provide persistent K3s storage.
- Update Proxmox config repo to automate cluster creation/join & storage setup. Possibly change to auto playbook pull?
- Check out the Ras Pi model 4
- Maybe switch all this from Ansible to Salt