It's an automated system install and image-creation tool for situations where provisioning machines via a PXE server is not an option, or is not an option yet. It's ideal for small-scale greenfielding, proofs-of-concept, and general management of on-prem compute infrastructure in a cloud-native way without the cloud.
PXEless is based on covertsh/ubuntu-autoinstall-generator, and generates a customized Ubuntu auto-install ISO. This is accomplished by using cloud-init and Ubuntu's Ubiquity installer - specifically the server variant known as Subiquity, which itself wraps Curtin.
PXEless works by:
- Downloading the ISO of your choice - a daily build, or a release.
- Extracting the EFI, MBR, and File-System from the ISO
- Adding some kernel command line parameters
- Adding customised autoinstall and cloud-init configuration files
- Copying arbitrary files to / running scripts against the squashfs (Optional)
- Repacking the data into a new ISO (a rough sketch of this cycle is shown below).
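For intuition, the unpack/edit/repack cycle looks roughly like the following. This is a simplified sketch only, not the project's actual script - the real build also extracts and reuses the EFI/MBR boot images and modifies the squashfs, and the file names here are placeholders:

```bash
# Simplified illustration of the unpack/edit/repack cycle -- not PXEless's
# actual build script. File names are placeholders.
xorriso -osirrox on -indev ubuntu-22.04-live-server-amd64.iso -extract / iso-root/

# ...edit grub.cfg, add user-data/meta-data, customize the squashfs...

# Repack the edited tree into a new ISO (boot-image options omitted for brevity).
xorriso -as mkisofs -r -V "ubuntu-autoinstall" -o ubuntu-autoinstall.iso iso-root/
```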
The resulting product is a fully-automated Ubuntu installer. This serves as an easy stepping-off point for configuration-management tooling like Ansible, Puppet, and Chef, or personalization tools like jessebot/onboardme. Please note that while similar in schema, the autoinstall and cloud-init portions of the `user-data` file do not mix. The nested `user-data` key marks the transition from autoinstall to cloud-init syntax, as seen HERE.
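For example, in a minimal, hypothetical `user-data` file (not one of the files shipped with this repo), everything above the nested `user-data:` key is autoinstall syntax, and everything beneath it is handed to cloud-init on first boot:

```bash
# Hypothetical, minimal example written out with a heredoc -- illustrative only.
cat > user-data <<'EOF'
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: pxeless
    username: vmadmin
    password: "<SHA-512 password hash>"
  # Everything under this nested key is plain cloud-init, applied on first boot.
  user-data:
    packages:
      - qemu-guest-agent
    runcmd:
      - echo "first boot complete"
EOF
```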
It is advised to run PXEless in a Docker container due to its reliance on Linux-only packages.
Skip steps 1 and 2 if you already have a cloud-init file.
1. Clone the repo:

   ```bash
   git clone https://github.com/cloudymax/pxeless.git
   ```

2. Change directory to the root of the repo:

   ```bash
   cd pxeless
   ```

3. Run in a Docker container. Basic usage:

   ```bash
   docker run --rm --volume "$(pwd):/data" \
     --user $(id -u):$(id -g) deserializeme/pxeless \
     --all-in-one \
     --user-data user-data.basic \
     --code-name jammy \
     --use-release-iso
   ```
Adding static files to the ISO

Take note that we do not specify a user here. Adding extra files to the ISO via the `-x` or `--extra-files` flag requires root access in order to chroot the squashfs. The contents of the `extras` directory will be copied to the `/media` dir of the image's filesystem. The extra files are mounted as `/data/<directory>` when running in a Docker container because we mount `$(pwd)` as `/data/`.

```bash
docker run --rm --volume "$(pwd):/data" deserializeme/pxeless \
  --all-in-one \
  --user-data user-data.basic \
  --code-name jammy \
  --use-release-iso \
  --extra-files /data/extras
```
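For example, a hypothetical `extras` directory might be prepared like this (the file names are placeholders, not files shipped with the repo); after installation the contents would appear under `/media` on the target system:

```bash
# Hypothetical contents -- any files or folders placed here are copied
# into /media on the installed system. File names are placeholders.
mkdir -p extras
cp ~/.ssh/id_ed25519.pub extras/authorized_key.pub
cp ./post-install-notes.md extras/
```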
Offline installation

Run an offline-installer script to customize the image during the build process. Add a bash script to the `extras` directory and pass it to `image-create` using `-o` or `--offline-installer`.

```bash
docker run --rm --volume "$(pwd):/data" deserializeme/pxeless \
  --all-in-one \
  --user-data user-data.basic \
  --code-name jammy \
  --use-release-iso \
  --extra-files /data/extras \
  --offline-installer installer-sample.sh
```
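A minimal sketch of what such a script might contain, assuming it runs inside a chroot of the image's squashfs. This is illustrative only - the `installer-sample.sh` shipped with the repo may differ, and the package list is a placeholder:

```bash
#!/bin/bash
# Illustrative offline-installer script -- not necessarily the repo's
# installer-sample.sh. Assumes execution inside a chroot of the squashfs,
# so packages installed here are baked into the generated image.
set -euo pipefail

apt-get update
apt-get install -y --no-install-recommends htop jq
apt-get clean
```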
4. Writing your ISO to a USB drive:

   - On macOS I recommend using Etcher.
   - On Linux, use `dd`:

     ```bash
     # /dev/sdb is assumed for the sake of the example
     export IMAGE_FILE="ubuntu-autoinstall.iso"
     sudo fdisk -l | grep "Disk /dev/"
     export DISK_NAME="/dev/sdb"
     sudo umount "$DISK_NAME"
     sudo dd bs=4M if=$IMAGE_FILE of="$DISK_NAME" status=progress oflag=sync
     ```
5. Boot your ISO file on a physical machine or VM and log in. If you used my `user-data.basic` file the user is `vmadmin` and the password is `password`. You can create your own credentials by running `mkpasswd --method=SHA-512 --rounds=4096` as documented on THIS page at line 49.
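   For example (a quick sketch; `mkpasswd` is provided by the `whois` package listed in the test dependencies further down, and the password shown is only a placeholder):

   ```bash
   # Generate a SHA-512 password hash suitable for the user-data identity block.
   # "password" is a placeholder -- substitute your own secret.
   mkpasswd --method=SHA-512 --rounds=4096 "password"
   # Paste the printed hash into the password field of your user-data file.
   ```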
Short | Long | Description |
---|---|---|
-h | --help | Print this help and exit |
-v | --verbose | Print script debug info |
-n | --code-name | The Code Name of the Ubuntu release to download (bionic, focal, jammy etc...) |
-a | --all-in-one | Bake user-data and meta-data into the generated ISO. By default you will need to boot systems with a CIDATA volume attached containing your autoinstall user-data and meta-data files. For more information see: https://ubuntu.com/server/docs/install/autoinstall-quickstart |
-e | --use-hwe-kernel | Force the generated ISO to boot using the hardware enablement (HWE) kernel. Not supported by early Ubuntu 20.04 release ISOs. |
-u | --user-data | Path to user-data file. Required if using -a |
-m | --meta-data | Path to meta-data file. Will be an empty file if not specified and using the -a flag. You may read more about providing a meta-data file HERE |
-x | --extra-files | Specifies a folder with files and folders, which will be copied into the root of the iso image. If not set, nothing is copied. Requires use of --privileged flag when running in docker |
-k | --no-verify | Disable GPG verification of the source ISO file. By default SHA256SUMS- and SHA256SUMS-.gpg files in the script directory will be used to verify the authenticity and integrity of the source ISO file. If they are not present the latest daily SHA256SUMS will be downloaded and saved in the script directory. The Ubuntu signing key will be downloaded and saved in a new keyring in the script directory. |
-o | --offline-installer | Run a bash script to customize the image, including installing packages and configuration changes. It should be used with -x, and the bash script should be available in the same extras directory. |
-r | --use-release-iso | Use the current release ISO instead of the daily ISO. The file will be used if it already exists. |
-s | --source | Source ISO file. By default the latest daily ISO for Ubuntu 20.04 will be downloaded and saved as script directory/ubuntu-original-current date.iso That file will be used by default if it already exists. |
-t | --timeout | Set the GRUB timeout. Defaults to 30. |
-d | --destination | Destination ISO file. By default script directory/ubuntu-autoinstall-current date.iso will be created, overwriting any existing file. |
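As a sketch of how these flags combine, the following builds from a locally downloaded ISO and writes to a custom destination path. The file names are placeholders, and depending on your setup additional flags (for example `--code-name`) may still be needed:

```bash
# Build from a local source ISO and write to a custom destination.
# File names are placeholders -- adjust to your environment.
docker run --rm --volume "$(pwd):/data" deserializeme/pxeless \
  --all-in-one \
  --user-data user-data.basic \
  --source /data/ubuntu-22.04-live-server-amd64.iso \
  --destination /data/ubuntu-autoinstall-custom.iso
```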
This project is made possible through the open-source work of the following authors and many others. Thank you all for sharing your time, effort, and knowledge freely with us. You are the giants upon whose shoulders we stand. ❤️
Reference | Author | Description |
---|---|---|
ubuntu-autoinstall-generator | covertsh | The original project that PXEless is based off of. If the original author ever becomes active again, I would love to merge these changes back. |
Ubuntu Autoinstall Docs | Canonical | Official documentation for the Ubuntu Autoinstall process |
Cloud-Init Docs | Canonical | The official docs for the Cloud-Init project |
How-To: Make Ubuntu Autoinstall ISO with Cloud-init | Dr Donald Kinghorn | A great walkthrough of how to manually create an AutoInstall USB drive using Cloud-Init on Ubuntu 20.04 |
My Magical Adventure with Cloud-Init | Xe Iaso | Excellent practical example of how to manipulate cloud-init's execution order by specifying module order |
Basic user-data example | Cloudymax | A very basic user-data file that will provision a user with a password |
Advanced user-data example | Cloudymax |
PXEless currently only supports creating ISOs using Ubuntu Server (Focal and Jammy). Users whose needs are not met by PXEless may find these other FOSS projects useful:
Project Name | Description |
---|---|
Tinkerbell | A flexible bare metal provisioning engine. Open-sourced by the folks @equinixmetal; currently a sandbox project in the CNCF |
Metal³ | Bare Metal Host Provisioning for Kubernetes and preferred starting point for Cluster API |
Metal-as-a-Service | Treat physical servers like virtual machines in the cloud. MAAS turns your bare metal into an elastic cloud-like resource |
Packer | A tool for creating identical machine images for multiple platforms from a single source configuration. |
Clonezilla Live! | A partition or disk cloning tool similar to Norton Ghost®. It saves and restores only used blocks on the hard drive. Two types of Clonezilla are available: Clonezilla live and Clonezilla SE (Server Edition) |
You will need to have a VNC client (TigerVNC or Remmina etc...) installed, as well as the following packages:

```bash
sudo apt-get install -y qemu-kvm \
  bridge-utils \
  virtinst \
  ovmf \
  qemu-utils \
  cloud-image-utils \
  ubuntu-drivers-common \
  whois \
  git \
  guestfs-tools
```
- You will need to replace my host IP (192.168.50.100) with your own.
- Also change the path to the ISO file to match your system.
- I have also set this VM to forward ssh over port 1234 instead of 22, feel free to change that as well.
1. Do a fresh clone of the pxeless repo.

2. Create the ISO with:

   ```bash
   docker run --rm --volume "$(pwd):/data" --user $(id -u):$(id -g) deserializeme/pxeless -a -u user-data.basic -n jammy
   ```

3. Create a virtual disk with:

   ```bash
   qemu-img create -f qcow2 hdd.img 8G
   ```

4. Create a test VM to boot the ISO with:

   ```bash
   sudo qemu-system-x86_64 -machine accel=kvm,type=q35 \
     -cpu host,kvm=off,hv_vendor_id=null \
     -smp 2,sockets=1,cores=1,threads=2,maxcpus=2 \
     -m 2G \
     -cdrom /home/max/repos/pxeless/ubuntu-autoinstall.iso \
     -object iothread,id=io1 \
     -device virtio-blk-pci,drive=disk0,iothread=io1 \
     -drive if=none,id=disk0,cache=none,format=qcow2,aio=threads,file=hdd.img \
     -netdev user,id=network0,hostfwd=tcp::1234-:22 \
     -device virtio-net-pci,netdev=network0 \
     -serial stdio -vga virtio -parallel none \
     -bios /usr/share/ovmf/OVMF.fd \
     -usbdevice tablet \
     -vnc 192.168.50.100:0
   ```

5. Connect to the VM using VNC so we can watch the GRUB process run.

6. After the install process completes and the VM reboots, select the "Boot from next volume" GRUB option to prevent installing again.

7. I was then able to log into the machine using `vmadmin` and `password` for the credentials.

8. Finally, I tried to SSH to the machine. Since the VM I created uses SLIRP networking, I have to reach it via the forwarded port, as shown below.
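That connection looks like this, assuming the `hostfwd=tcp::1234-:22` forward from the QEMU command above:

```bash
# SLIRP user networking: the guest's port 22 is reachable through the
# host-side forward declared with hostfwd=tcp::1234-:22.
ssh vmadmin@localhost -p 1234
```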
The most common issues I run into with this process are improperly formatted YAML in the user-data file, and errors while burning the ISO to a USB drive. In those cases, the machine will perform a partial install, but instead of seeing `pxeless login:` as the machine name at the login prompt it will still say `ubuntu login:`.
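One quick, generic way to catch malformed YAML before building - not a PXEless feature, and it assumes PyYAML (the `python3-yaml` package on Ubuntu) is installed:

```bash
# Raises a parse error if the YAML is malformed; prints OK otherwise.
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("OK")' user-data.basic
```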
- Max!
- Lars Munch
- Meraj Kashi
- Koen Van De Sande
- Snyk Bot
- Markus Pöschl
- Arnold
- N0k0m3
- MrKinauJr
- Toro
- Webber Takken
- Null
MIT license.
This spin-off project adds support for El Torito + GPT images required for Ubuntu 20.10 and newer. It also keeps support for the now-deprecated isolinux + MBR image type. In addition, the process is dockerized to make it possible to run on Mac/Windows hosts in addition to Linux. Automated builds via GitHub Actions have also been created.
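If you would rather build the container image locally than pull `deserializeme/pxeless`, something like the following should work. This is a sketch under two assumptions: that a Dockerfile sits at the repo root, and that the image's entrypoint passes flags straight to the script (as the usage examples above suggest); the tag name is just a local placeholder:

```bash
# Assumes a Dockerfile at the repo root -- check the repo layout first.
git clone https://github.com/cloudymax/pxeless.git
cd pxeless
docker build -t pxeless:local .
# Flags are forwarded to the image-create script, so --help should print usage.
docker run --rm --volume "$(pwd):/data" pxeless:local --help
```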