This repository provides infrastructure-as-code to automate the creation of a virtual machine image for VMware vSphere 7 using HashiCorp Packer and the Packer Plugin for VMware vSphere (`vsphere-iso` builder). The image build is authored in the HashiCorp Configuration Language (HCL2).
Much of the code in this project was copied from packer-examples-for-vsphere.
Initially, the image created by this project is a VM template; however, the intent is to transfer the image to a vSphere Content Library as an OVF template and destroy the temporary machine image.
This project builds Ubuntu Server 22.04 LTS via cloud-init:
Operating Systems:
Operating systems and versions tested with the project:
- macOS Ventura (Intel)
Note
If your Ansible control node already uses OpenSSH >= 9.0 (e.g., macOS Ventura), you must add an additional option to enable scp. Update `ansible/ansible.cfg` to include the following:

```ini
[ssh_connection]
scp_extra_args = "-O"
```
Packer:
- HashiCorp Packer 1.9.2 or higher.
Note
I use an Ansible project to install products on my macOS machine. Most likely, you don't have access to this project, so here is the task file I use:
```yaml
---
# We need to download the Packer archive from HashiCorp releases to our remote
# Packer instance. This gives us a zip archive file.
# To extract it, we need unzip installed so we can unzip the archive and take
# out the needed binary.
# Once this is done, we need to:
# - unzip the Packer archive,
# - move the packer binary to /usr/local/bin,
# - make hashiUser:hashiGroup the owner of this binary with the needed permissions.
- name: Check if packer binary file exists
  stat:
    path: /usr/local/bin/packer
  register: packer_binary_file

- name: Get current packer version
  shell: packer --version | awk '{print $2}' | cut -d 'v' -f 2
  changed_when: false
  register: current_packerVersion
  when: packer_binary_file.stat.exists

- name: Current packer version
  debug: var=current_packerVersion

- name: Check if packer zip file exists
  stat:
    path: /tmp/packer_{{ packerVersion }}_{{ clientOS }}_{{ clientArch }}.zip
  register: packer_zip_file

- name: Download binary
  get_url:
    url: https://releases.hashicorp.com/packer/{{ packerVersion }}/packer_{{ packerVersion }}_{{ clientOS }}_{{ clientArch }}.zip
    dest: /tmp/packer_{{ packerVersion }}_{{ clientOS }}_{{ clientArch }}.zip
    owner: "{{ hashiUser }}"
    group: "{{ hashiGroup }}"
    mode: 0755
    # checksum: "{{ packer_checksum }}"
  register: packer_download
  when: (not packer_binary_file.stat.exists) or (not packer_zip_file.stat.exists and not current_packerVersion.stdout == packerVersion)

# - debug: var=packer_download

- name: Unzip packer archive
  unarchive:
    src: /tmp/packer_{{ packerVersion }}_{{ clientOS }}_{{ clientArch }}.zip
    dest: /usr/local/bin
    copy: no
    owner: "{{ hashiUser }}"
    group: "{{ hashiGroup }}"
    mode: 0755
  when: packer_download.changed or (packer_zip_file.stat.exists and not current_packerVersion.stdout == packerVersion)
```
Packer plugins:

Note

Required plugins are automatically downloaded and initialized when using `packer init .`. For dark sites, you may download the plugins and place them in the same directory as your Packer executable (`/usr/local/bin`) or in `$HOME/.packer.d/plugins`.

- HashiCorp Packer Plugin for VMware vSphere 1.2.0 or later.
- Packer Plugin for Git 0.4.2 or later - a community plugin for HashiCorp Packer.
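For reference, plugin requirements like these are declared in a `packer` block so that `packer init` knows what to fetch. A minimal sketch, with version constraints taken from the list above (the exact source address of the community Git plugin is an assumption):

```hcl
packer {
  required_version = ">= 1.9.2"
  required_plugins {
    vsphere = {
      source  = "github.com/hashicorp/vsphere"
      version = ">= 1.2.0"
    }
    // Community plugin; source address assumed from its public repository.
    git = {
      source  = "github.com/ethanmdavidson/git"
      version = ">= 0.4.2"
    }
  }
}
```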
Additional Software Packages:

The following additional software packages must be installed on the operating system running Packer.

Note

Additional software is required. As mentioned, I use Ansible to install these, but you can do it manually:

- git command-line tools.
- ansible-core 2.15.
- jq - a command-line JSON processor.
- Coreutils
- HashiCorp Terraform 1.5.0 or higher.
- gomplate 3.11.5 or higher.

```shell
pip3 install --user ansible-core==2.15
brew install git jq coreutils hashicorp/tap/terraform gomplate
```

- mkpasswd - a password generating utility (run via Docker).

```shell
brew install --cask docker
```
Platform:
- VMware vSphere 7.0 Update 3N or later.
You can choose between two options to get the source code:

Download the latest release:

```shell
TAG_NAME=$(curl -s https://api.github.com/repos/DonBower/packer-vsphere-ubuntu/releases | jq -r '.[0].tag_name')
TARBALL_URL=$(curl -s https://api.github.com/repos/DonBower/packer-vsphere-ubuntu/releases | jq -r '.[0].tarball_url')
mkdir packer-vsphere-ubuntu
cd packer-vsphere-ubuntu
curl -sL "$TARBALL_URL" | tar xvfz - --strip-components 1
git init -b main
git add .
git commit -m "Initial commit"
git switch -c $TAG_NAME HEAD
```
Note

You may also clone `main` for the latest prerelease updates.

Clone the repository:

```shell
TAG_NAME=$(curl -s https://api.github.com/repos/DonBower/packer-vsphere-ubuntu/releases | jq -r '.[0].tag_name')
git clone https://github.com/DonBower/packer-vsphere-ubuntu.git
cd packer-vsphere-ubuntu
git switch -c $TAG_NAME $TAG_NAME
```
Warning
A branch is mandatory because it is used for the build version and the virtual machine name. It does not matter if it is based on the HEAD or a release tag.
The directory structure of the repository:

```
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── MAINTAINERS.md
├── NOTICE
├── README.md
├── ubuntu.auto.pkrvars.hcl
├── ubuntu.pkr.hcl
├── variables.pkr.hcl
├── ansible
│   ├── ansible.cfg
│   ├── main.yml
│   └── roles
│       └── <role>
│           └── *.yml
├── artifacts
├── manifests
└── terraform
    ├── vsphere-role
    │   └── *.tf
    └── vsphere-virtual-machine
        ├── content-library-ovf-linux-cloud-init
        │   └── *.tf
        ├── content-library-ovf-linux-cloud-init-hcp-packer
        │   └── *.tf
        ├── content-library-ovf-linux-guest-customization
        │   └── *.tf
        ├── content-library-ovf-linux-guest-customization-hcp-packer
        │   └── *.tf
        ├── content-library-template-linux-guest-customization-hcp-packer
        │   └── *.tf
        ├── template-linux-cloud-init
        │   └── *.tf
        ├── template-linux-cloud-init-hcp-packer
        │   └── *.tf
        ├── template-linux-guest-customization
        │   └── *.tf
        └── template-linux-guest-customization-hcp-packer
            └── *.tf
```
The files are distributed in the following directories:

- `ansible` - contains the Ansible roles to prepare Linux machine image builds.
- `artifacts` - contains the OVF artifacts exported by the builds, if enabled.
- `builds` - contains the templates, variables, and configuration files for the machine image builds.
- `scripts` - contains the scripts to initialize and prepare Windows machine image builds.
- `manifests` - manifests created after the completion of the machine image builds.
- `terraform` - contains example Terraform plans to create a custom role and test machine image builds.
Warning
When forking the project for upstream contribution, please be mindful not to make changes that may expose your sensitive information, such as passwords, keys, certificates, etc.
Create a custom vSphere role with the required privileges to integrate HashiCorp Packer with VMware vSphere. A service account can be added to the role to ensure that Packer has least privilege access to the infrastructure. Clone the default Read-Only vSphere role and add the following privileges:
| Category | Privilege | Reference |
| --- | --- | --- |
| Content Library | Add library item | ContentLibrary.AddLibraryItem |
| ... | Update library item | ContentLibrary.UpdateLibraryItem |
| Cryptographic Operations | Direct Access (required for packer_cache upload) | Cryptographer.Access |
| ... | Encrypt (required for vTPM) | Cryptographer.Encrypt |
| Datastore | Allocate space | Datastore.AllocateSpace |
| ... | Browse datastore | Datastore.Browse |
| ... | Low level file operations | Datastore.FileManagement |
| Host | Configuration > System Management | Host.Config.SystemManagement |
| Network | Assign network | Network.Assign |
| Resource | Assign virtual machine to resource pool | Resource.AssignVMToPool |
| vApp | Export | vApp.Export |
| Virtual Machine | Configuration > Add new disk | VirtualMachine.Config.AddNewDisk |
| ... | Configuration > Add or remove device | VirtualMachine.Config.AddRemoveDevice |
| ... | Configuration > Advanced configuration | VirtualMachine.Config.AdvancedConfig |
| ... | Configuration > Change CPU count | VirtualMachine.Config.CPUCount |
| ... | Configuration > Change memory | VirtualMachine.Config.Memory |
| ... | Configuration > Change settings | VirtualMachine.Config.Settings |
| ... | Configuration > Change resource | VirtualMachine.Config.Resource |
| ... | Configuration > Modify device settings | VirtualMachine.Config.EditDevice |
| ... | Configuration > Set annotation | VirtualMachine.Config.Annotation |
| ... | Edit Inventory > Create from existing | VirtualMachine.Inventory.CreateFromExisting |
| ... | Edit Inventory > Create new | VirtualMachine.Inventory.Create |
| ... | Edit Inventory > Remove | VirtualMachine.Inventory.Delete |
| ... | Interaction > Configure CD media | VirtualMachine.Interact.SetCDMedia |
| ... | Interaction > Configure floppy media | VirtualMachine.Interact.SetFloppyMedia |
| ... | Interaction > Connect devices | VirtualMachine.Interact.DeviceConnection |
| ... | Interaction > Inject USB HID scan codes | VirtualMachine.Interact.PutUsbScanCodes |
| ... | Interaction > Power off | VirtualMachine.Interact.PowerOff |
| ... | Interaction > Power on | VirtualMachine.Interact.PowerOn |
| ... | Provisioning > Create template from virtual machine | VirtualMachine.Provisioning.CreateTemplateFromVM |
| ... | Provisioning > Mark as template | VirtualMachine.Provisioning.MarkAsTemplate |
| ... | Provisioning > Mark as virtual machine | VirtualMachine.Provisioning.MarkAsVM |
| ... | State > Create snapshot | VirtualMachine.State.CreateSnapshot |
If you would like to automate the creation of the custom vSphere role, a Terraform example is included in the project.
1. Navigate to the directory for the example.

   ```shell
   cd terraform/vsphere-role
   ```

2. Duplicate the `terraform.tfvars.example` file to `terraform.tfvars` in the directory.

   ```shell
   cp terraform.tfvars.example terraform.tfvars
   ```

3. Open the `terraform.tfvars` file and update the variables according to your environment.

4. Initialize the current directory and the required Terraform provider for VMware vSphere.

   ```shell
   terraform init
   ```

5. Create a Terraform plan and save the output to a file.

   ```shell
   terraform plan -out=tfplan
   ```

6. Apply the Terraform plan.

   ```shell
   terraform apply tfplan
   ```
Once the custom vSphere role is created, assign Global Permissions in vSphere for the service account that will be used for the HashiCorp Packer to VMware vSphere integration in the next step. Global permissions are required for the content library. For example:
1. Log in to the vCenter Server at `<management_vcenter_server_fqdn>/ui` as `administrator@vsphere.local`.
2. Select Menu > Administration.
3. Create the service account in vSphere SSO if it does not exist: in the left pane, select Single Sign On > Users and Groups and click Users. From the dropdown, select the domain in which you want to create the user (e.g., rainpole.io) and click ADD. Fill in the username (e.g., svc-packer-vsphere) and all required details, then click ADD to create the user.
4. In the left pane, select Access Control > Global Permissions and click the Add Permissions icon.
5. In the Add Permissions dialog box, enter the service account (e.g., svc-packer-vsphere@rainpole.io), select the custom role (e.g., Packer to vSphere Integration Role) and the Propagate to children checkbox, and click OK.
In an environment with many vCenter Server instances, such as management and workload domains, you may wish to further reduce the scope of access across the infrastructure in vSphere for the service account. For example, if you do not want Packer to have access to your management domain, but only allow access to workload domains:
1. From the Hosts and Clusters inventory, select the management domain vCenter Server to restrict the scope, and click the Permissions tab.
2. Select the service account with the custom role assigned and click Edit.
3. In the Change Role dialog box, from the Role drop-down menu, select No Access, select the Propagate to children checkbox, and click OK.
The variables are defined in `.pkrvars.hcl` files.

Run the config script (`./config.sh`) to copy the `.pkrvars.hcl.example` files to the `config` directory.

The `config` folder is the default; you may override it by passing an alternate value as the first argument.

```shell
./config.sh foo
./build.sh foo
```

For example, this is useful for running machine image builds for different environments.
San Francisco: us-west-1

```shell
./config.sh config/us-west-1
./build.sh config/us-west-1
```

Los Angeles: us-west-2

```shell
./config.sh config/us-west-2
./build.sh config/us-west-2
```
Edit the `config/build.pkrvars.hcl` file to configure the following:

- Credentials for the default account on machine images.

Example: `config/build.pkrvars.hcl`

```hcl
build_username           = "rainpole"
build_password           = "<plaintext_password>"
build_password_encrypted = "<sha512_encrypted_password>"
build_key                = "<public_key>"
```
You can also override the `build_key` value with the contents of a file, if required. For example:

```hcl
build_key = file("${path.root}/config/ssh/build_id_ecdsa.pub")
```
Generate a SHA-512 encrypted password for `build_password_encrypted` using tools like mkpasswd.
Example: mkpasswd using Docker on Photon:

```shell
rainpole@photon> sudo systemctl start docker
rainpole@photon> sudo docker run -it --rm alpine:latest
mkpasswd -m sha512
Password: ***************
[password hash]
rainpole@photon> sudo systemctl stop docker
```

Example: mkpasswd using Docker on macOS:

```shell
rainpole@macos> docker run -it --rm alpine:latest
mkpasswd -m sha512
Password: ***************
[password hash]
```

Example: mkpasswd on Ubuntu:

```shell
rainpole@ubuntu> mkpasswd -m sha-512
Password: ***************
[password hash]
```
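If neither mkpasswd nor Docker is handy, a SHA-512 crypt hash can also be generated with OpenSSL 1.1.1 or later; the salt and password below are placeholders, not values used by the project:

```shell
# Generate a sha512-crypt hash suitable for build_password_encrypted.
# -6 selects the SHA-512 based crypt scheme; omit -salt to get a random salt.
openssl passwd -6 -salt examplesalt 'MyS3cr3t!'
# Output begins with $6$examplesalt$
```
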
Generate a public key for `build_key` to use public key authentication.

Example: macOS and Linux.

```shell
rainpole@macos> ssh-keygen -t ecdsa -b 521 -C "code@rainpole.io"
Generating public/private ecdsa key pair.
Enter file in which to save the key (/Users/rainpole/.ssh/id_ecdsa):
Enter passphrase (empty for no passphrase): **************
Enter same passphrase again: **************
Your identification has been saved in /Users/rainpole/.ssh/id_ecdsa.
Your public key has been saved in /Users/rainpole/.ssh/id_ecdsa.pub.
```
The content of the public key, `build_key`, is added to the `~/.ssh/authorized_keys` file of the `build_username` account on the guest operating system.
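For context, a cloud-init user-data fragment wiring these values up might look like the following sketch; the field values are placeholders, and this is not the project's actual template:

```yaml
#cloud-config
users:
  - name: rainpole                       # build_username
    passwd: <sha512_encrypted_password>  # build_password_encrypted
    lock_passwd: false
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - <public_key>                     # build_key
```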
Warning
Replace the default public keys and passwords.
By default, both Public Key Authentication and Password Authentication are enabled for Linux distributions. If you wish to disable Password Authentication and only use Public Key Authentication, comment out or remove the relevant portion of the associated Ansible `configure` role.
Edit the `config/ansible.pkrvars.hcl` file to configure the following:

- Credentials for the Ansible account on Linux machine images.

Example: `config/ansible.pkrvars.hcl`

```hcl
ansible_username = "ansible"
ansible_key      = "<public_key>"
```
Note
A random password is generated for the Ansible user.
You can also override the `ansible_key` value with the contents of a file, if required. For example:

```hcl
ansible_key = file("${path.root}/config/ssh/ansible_id_ecdsa.pub")
```
Edit the config/common.pkrvars.hcl
file to configure the following common variables:
- Virtual Machine Settings
- Template and Content Library Settings
- OVF Export Settings
- Removable Media Settings
- Boot and Provisioning Settings
- HCP Packer Registry
Example: `config/common.pkrvars.hcl`

```hcl
// Virtual Machine Settings
common_vm_version           = 19
common_tools_upgrade_policy = true
common_remove_cdrom         = true

// Template and Content Library Settings
common_template_conversion     = false
common_content_library_name    = "sfo-w01-lib01"
common_content_library_ovf     = true
common_content_library_destroy = true

// OVF Export Settings
common_ovf_export_enabled   = false
common_ovf_export_overwrite = true

// Removable Media Settings
common_iso_datastore = "sfo-w01-cl01-ds-nfs01"

// Boot and Provisioning Settings
common_data_source      = "http"
common_http_ip          = null
common_http_port_min    = 8000
common_http_port_max    = 8099
common_ip_wait_timeout  = "20m"
common_shutdown_timeout = "15m"

// HCP Packer
common_hcp_packer_registry_enabled = false
```
`http` is the default provisioning data source for Linux machine image builds. If iptables is enabled on your Packer host, you will need to open ports `common_http_port_min` through `common_http_port_max`.

Example: Open a port range in iptables.

```shell
iptables -A INPUT -p tcp --match multiport --dports 8000:8099 -j ACCEPT
```
You can change the `common_data_source` from `http` to `disk` to build supported Linux machine images without the need to use Packer's HTTP server. This is useful for environments that may not be able to route back to the system from which Packer is running.

The `cd_content` option is used when selecting `disk`, unless the distribution does not support a secondary CD-ROM; for those distributions, the `floppy_content` option is used instead.

```hcl
common_data_source = "disk"
```
If you need to define a specific IPv4 address from your host for Packer's HTTP server, modify the `common_http_ip` variable from `null` to a string value that matches an IP address on your Packer host. For example:

```hcl
common_http_ip = "172.16.11.254"
```
Edit the `config/proxy.pkrvars.hcl` file to configure the following:

- SOCKS proxy settings used for connecting to Linux machine images.
- Credentials for the proxy server.

Example: `config/proxy.pkrvars.hcl`

```hcl
communicator_proxy_host     = "proxy.rainpole.io"
communicator_proxy_port     = 8080
communicator_proxy_username = "rainpole"
communicator_proxy_password = "<plaintext_password>"
```
Edit the `config/redhat.pkrvars.hcl` file to configure the following:

- Credentials for your Red Hat Subscription Manager account.

Example: `config/redhat.pkrvars.hcl`

```hcl
rhsm_username = "rainpole"
rhsm_password = "<plaintext_password>"
```
These variables are only used if you are performing a Red Hat Enterprise Linux Server build and are used to register the image with Red Hat Subscription Manager during the build for system updates and package installation. Before the build completes, the machine image is unregistered from Red Hat Subscription Manager.
Edit the `config/scc.pkrvars.hcl` file to configure the following:

- Credentials for your SUSE Customer Connect account.

Example: `config/scc.pkrvars.hcl`

```hcl
scc_email = "hello@rainpole.io"
scc_code  = "<plaintext_code>"
```
These variables are only used if you are performing a SUSE Linux Enterprise Server build and are used to register the image with SUSE Customer Connect during the build for system updates and package installation. Before the build completes, the machine image is unregistered from SUSE Customer Connect.
Edit the `config/vsphere.pkrvars.hcl` file to configure the following:

- vSphere Endpoint and Credentials
- vSphere Settings

Example: `config/vsphere.pkrvars.hcl`

```hcl
vsphere_endpoint            = "sfo-w01-vc01.sfo.rainpole.io"
vsphere_username            = "svc-packer-vsphere@rainpole.io"
vsphere_password            = "<plaintext_password>"
vsphere_insecure_connection = true
vsphere_datacenter          = "sfo-w01-dc01"
vsphere_cluster             = "sfo-w01-cl01"
vsphere_datastore           = "sfo-w01-cl01-ds-vsan01"
vsphere_network             = "sfo-w01-seg-dhcp"
vsphere_folder              = "sfo-w01-fd-templates"
```
If you prefer not to save potentially sensitive information in cleartext files, you can add the variables to environment variables using the included `set-envvars.sh` script:

```shell
rainpole@macos> . ./set-envvars.sh
```

Note

You need to run the script as source or with the shorthand `.`.
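As a rough sketch of what such a script does: Packer reads any environment variable named `PKR_VAR_<name>` as the input variable `<name>`. The variable names below are illustrative, not a copy of the project's actual script:

```shell
# Hypothetical excerpt of a set-envvars.sh script: export Packer input
# variables so they never have to live in a cleartext .pkrvars.hcl file.
export PKR_VAR_vsphere_username='svc-packer-vsphere@rainpole.io'
export PKR_VAR_vsphere_password='<plaintext_password>'
export PKR_VAR_build_password='<plaintext_password>'
```

Because the variables must persist in the calling shell, the script has to be sourced (`. ./set-envvars.sh`) rather than executed in a subshell.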
Edit the `*.auto.pkrvars.hcl` file in each `builds/<type>/<build>` folder to configure the following virtual machine hardware settings, as required:

- CPUs (int)
- CPU Cores (int)
- Memory in MB (int)
- Primary Disk in MB (int)
- .iso URL (string)
- .iso Path (string)
- .iso File (string)
- .iso Checksum Type (string)
- .iso Checksum Value (string)

Note

All `variables.auto.pkrvars.hcl` files default to using the VMware Paravirtual SCSI controller and the VMXNET 3 network card device types.
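As an illustration, a `*.auto.pkrvars.hcl` file covering the settings above might look like the following sketch. The `vm_*` variable names are assumptions based on the list, not verified against the project's `variables.pkr.hcl`; the `iso_*` names match the example shown later in this document:

```hcl
// Hypothetical builds/<type>/<build>/*.auto.pkrvars.hcl
vm_cpu_count       = 2
vm_cpu_cores       = 1
vm_mem_size        = 2048
vm_disk_size       = 40960
iso_path           = "iso/linux/ubuntu"
iso_file           = "ubuntu-22.04.x-live-server-amd64.iso"
iso_checksum_type  = "sha256"
iso_checksum_value = "<checksum_value>"
```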
The project supports using a datastore to store your guest operating system `.iso` files; you must download these and upload them to a datastore path.
- Download the x64 guest operating system `.iso` files.

  Linux Distributions:

  - VMware Photon OS 5 - Download the latest release of the FULL `.iso` image. (e.g., `photon-5.0-xxxxxxxxx.x86_64.iso`)
  - VMware Photon OS 4 - Download the latest release of the FULL `.iso` image. (e.g., `photon-4.0-xxxxxxxxx.iso`)
  - Debian 12 - Download the latest netinst release `.iso` image. (e.g., `debian-12.x.x-amd64-netinst.iso`)
  - Debian 11 - Download the latest netinst release `.iso` image. (e.g., `debian-11.x.x-amd64-netinst.iso`)
  - Ubuntu Server 22.04 LTS - Download the latest LIVE release `.iso` image. (e.g., `ubuntu-22.04.x-live-server-amd64.iso`)
  - Ubuntu Server 20.04 LTS - Download the latest LIVE release `.iso` image. (e.g., `ubuntu-20.04.x-live-server-amd64.iso`)
  - Red Hat Enterprise Linux 9 Server - Download the latest release of the FULL `.iso` image. (e.g., `rhel-9.x-x86_64-dvd.iso`)
  - Red Hat Enterprise Linux 8 Server - Download the latest release of the FULL `.iso` image. (e.g., `rhel-8.x-x86_64-dvd.iso`)
  - Red Hat Enterprise Linux 7 Server - Download the latest release of the FULL `.iso` image. (e.g., `rhel-server-7.x-x86_64-dvd.iso`)
  - AlmaLinux OS 9 - Download the latest release of the FULL `.iso` image. (e.g., `AlmaLinux-9.x-x86_64-dvd.iso`)
  - AlmaLinux OS 8 - Download the latest release of the FULL `.iso` image. (e.g., `AlmaLinux-8.x-x86_64-dvd.iso`)
  - Rocky Linux 9 - Download the latest release of the FULL `.iso` image. (e.g., `Rocky-9.x-x86_64-dvd.iso`)
  - Rocky Linux 8 - Download the latest release of the FULL `.iso` image. (e.g., `Rocky-8.x-x86_64-dvd.iso`)
  - CentOS Stream 9 - Download the latest release of the FULL `.iso` image. (e.g., `CentOS-Stream-9-latest-x86_64-dvd1.iso`)
  - CentOS Stream 8 - Download the latest release of the FULL `.iso` image. (e.g., `CentOS-Stream-8-x86_64-latest-dvd1.iso`)
  - CentOS Linux 7 - Download the latest release of the FULL `.iso` image. (e.g., `CentOS-7-x86_64-DVD.iso`)
  - SUSE Linux Enterprise 15 - Download the latest 15.4 release of the FULL `.iso` image. (e.g., `SLE-15-SP4-Full-x86_64-GM-Media1.iso`)

  Microsoft Windows:

  - Microsoft Windows Server 2022
  - Microsoft Windows Server 2019
  - Microsoft Windows 11 22H2
  - Microsoft Windows 10 22H2
- Obtain the checksum type (e.g., `sha512`, `sha256`, `md5`, etc.) and checksum value for each guest operating system `.iso` from the vendor. These will be used in the build input variables.

- Upload your guest operating system `.iso` files to the datastore and update the configuration variables.

  Example: `config/common.pkrvars.hcl`

  ```hcl
  common_iso_datastore = "sfo-w01-cl01-ds-nfs01"
  ```

  Example: `builds/<type>/<build>/*.auto.pkrvars.hcl`

  ```hcl
  iso_path           = "iso/linux/photon"
  iso_file           = "photon-4.0-xxxxxxxxx.iso"
  iso_checksum_type  = "md5"
  iso_checksum_value = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  ```
If required, modify the configuration files for the Linux distributions and Microsoft Windows.
Username and password variables are passed into the kickstart or cloud-init files for each Linux distribution as Packer template files (.pkrtpl.hcl
) to generate these on-demand. Ansible roles are then used to configure the Linux machine image builds.
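Conceptually, this on-demand rendering works like Packer's `templatefile()` function feeding the source's CD-ROM content. A sketch under assumed file names and mount points (not the project's actual source block):

```hcl
// Render the cloud-init user-data template with build-time credentials and
// attach the result to the VM as CD-ROM content for the installer to read.
cd_content = {
  "/user-data" = templatefile("${abspath(path.root)}/data/user-data.pkrtpl.hcl", {
    build_username           = var.build_username
    build_password_encrypted = var.build_password_encrypted
    build_key                = var.build_key
  })
  "/meta-data" = file("${abspath(path.root)}/data/meta-data")
}
```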
Variables are passed into the Microsoft Windows unattend files (autounattend.xml
) as Packer template files (autounattend.pkrtpl.hcl
) to generate these on-demand. By default, each unattended file is set to use the KMS client setup keys as the Product Key.
PowerShell scripts are used to configure the Windows machine image builds.
Need help customizing the configuration files?

- VMware Photon OS - Read the Photon OS Kickstart Documentation.
- Ubuntu Server - Install and run system-config-kickstart on an Ubuntu desktop.

  ```shell
  sudo apt-get install system-config-kickstart
  ssh -X rainpole@ubuntu
  sudo system-config-kickstart
  ```

- Red Hat Enterprise Linux (as well as CentOS Linux/Stream, AlmaLinux OS, and Rocky Linux) - Use the Red Hat Kickstart Generator.
- SUSE Linux Enterprise Server - Use the SUSE Configuration Management System.
- Microsoft Windows - Use the Microsoft Windows Answer File Generator if you need to customize the provided examples further.
If you are new to HCP Packer, review the HCP Packer documentation and video to learn more before enabling an HCP Packer registry.
Before you can use the HCP Packer registry, you need to create it by following the Create HCP Packer Registry procedure.
Edit the `config/common.pkrvars.hcl` file to enable the HCP Packer registry.

```hcl
// HCP Packer
common_hcp_packer_registry_enabled = true
```

Then, export your HCP credentials before building.

```shell
rainpole@macos> export HCP_CLIENT_ID=<client_id>
rainpole@macos> export HCP_CLIENT_SECRET=<client_secret>
```
Start a build by running the build script (`./build.sh`). The script presents a menu which calls Packer with the respective build(s).

You can also start a build based on a specific source for some of the virtual machine images. For example, if you simply want to build Microsoft Windows Server 2022 Standard Core, run the following:
Initialize the plugins:

```shell
rainpole@macos> packer init builds/windows/server/2022/.
```

Build a specific machine image:

```shell
rainpole@macos> packer build -force \
    --only vsphere-iso.windows-server-standard-core \
    -var-file="config/vsphere.pkrvars.hcl" \
    -var-file="config/build.pkrvars.hcl" \
    -var-file="config/common.pkrvars.hcl" \
    builds/windows/server/2022
```
You can set environment variables if you would prefer not to save sensitive information in cleartext files. You can add these using the included `set-envvars.sh` script.

```shell
rainpole@macos> . ./set-envvars.sh
```

Note

You need to run the script as source or with the shorthand `.`.
Initialize the plugins:

```shell
rainpole@macos> packer init builds/windows/server/2022/.
```

Build a specific machine image using environment variables:

```shell
rainpole@macos> packer build -force \
    --only vsphere-iso.windows-server-standard-core \
    builds/windows/server/2022
```
The build script (`./build.sh`) can be generated using a template (`./build.tmpl`) and a configuration file in YAML (`./build.yaml`).

Generate a custom build script:

```shell
rainpole@macos> gomplate -c build.yaml -f build.tmpl -o build.sh
```
Happy building!!!
- Read Debugging Packer Builds.
- Owen Reynolds @OVDamn - VMware Tools for Windows installation PowerShell script.