Ansible role for deploying oVirt Hosted-Engine
- Ansible version 2.7 or higher
- Python SDK version 4.2 or higher
- Python netaddr library on the ansible controller node
No dependencies on other Ansible roles.
- A fully qualified domain name prepared for your Engine and the host. Forward and reverse lookup records must both be set in the DNS.
- `/var/tmp` has at least 5 GB of free space.
- Unless you are using Gluster, you must have prepared storage for your Hosted-Engine environment (choose one): NFS, iSCSI, Fibre Channel, or Gluster.
- Install additional oVirt ansible roles:
$ ansible-galaxy install ovirt.repositories # case-sensitive
$ ansible-galaxy install ovirt.engine-setup # case-sensitive
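Alternatively, the same two roles can be listed in a requirements file and installed in one step (the `requirements.yml` file name here is an assumption; the role names come from the commands above):

```yaml
# requirements.yml — the two additional oVirt roles, installed together
- name: ovirt.repositories
- name: ovirt.engine-setup
```

Then install them with `$ ansible-galaxy install -r requirements.yml`.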
Name | Default value | Description |
---|---|---|
he_bridge_if | eth0 | The network interface the oVirt management bridge will be configured on |
he_fqdn | null | The engine FQDN as configured in the DNS |
he_mem_size_MB | max | The amount of memory (MB) used on the engine VM |
he_vcpus | max | The number of CPUs used on the engine VM |
he_disk_size_GB | 61 | Disk size of the engine VM |
he_vm_mac_addr | null | MAC address of the engine VM network interface |
he_domain_type | null | Storage domain type. Available options: nfs, iscsi, glusterfs, fc |
he_storage_domain_addr | null | Storage domain IP/DNS address |
he_ansible_host_name | localhost | hostname in use on the first HE host (not necessarily the Ansible controller one) |
he_restore_from_file | null | a backup file created with engine-backup to be restored on the fly |
he_pki_renew_on_restore | false | Renew engine PKI on restore if needed |
he_cluster | Default | name of the cluster with hosted-engine hosts |
he_data_center | Default | name of the datacenter with hosted-engine hosts |
he_host_name | $(hostname -f) | name used by the engine for the first host |
he_host_address | $(hostname -f) | address used by the engine for the first host |
he_bridge_if | null | interface used for the management bridge |
he_apply_openscap_profile | false | apply a default OpenSCAP security profile on HE VM |
Name | Default value | Description |
---|---|---|
he_mount_options | '' | NFS mount options |
he_storage_domain_path | null | shared folder path on NFS server |
he_nfs_version | auto | NFS version. Available options: auto, v4, v3 |
Name | Default value | Description |
---|---|---|
he_iscsi_username | null | iSCSI username |
he_iscsi_password | null | iSCSI password |
he_iscsi_target | null | iSCSI target |
he_lun_id | null | LUN ID |
he_iscsi_portal_port | null | iSCSI portal port |
he_iscsi_portal_addr | null | iSCSI portal address |
he_iscsi_tpgt | null | iSCSI TPGT (target portal group tag) |
he_discard | false | Discard the whole disk space when removed |
DHCP configuration is used on the engine VM by default. However, if you would like to use a static IP instead, define the following variables:
Name | Default value | Description |
---|---|---|
he_vm_ip_addr | null | engine VM IP address |
he_vm_ip_prefix | null | engine VM IP prefix |
he_dns_addr | null | engine VM DNS server |
he_default_gateway | null | engine VM default gateway |
he_vm_etc_hosts | false | Add engine VM IP and FQDN to /etc/hosts on the host |
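Before running the playbook, the static-IP values can be sanity-checked so that the address, prefix, and gateway are consistent. This is a hypothetical pre-flight check, not part of the role (the role validates with the netaddr library; this sketch uses only Python's stdlib `ipaddress`):

```python
import ipaddress

# Values as they would appear in the vars file (sample values, not defaults)
he_vm_ip_addr = "192.168.1.214"
he_vm_ip_prefix = "24"
he_default_gateway = "192.168.1.1"

# Derive the VM's subnet from the address and prefix
network = ipaddress.ip_network(f"{he_vm_ip_addr}/{he_vm_ip_prefix}", strict=False)

# The default gateway must be reachable on that subnet
gateway = ipaddress.ip_address(he_default_gateway)
assert gateway in network, "gateway is not on the engine VM's subnet"

print(network)  # 192.168.1.0/24
```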
This is a simple example for deploying Hosted-Engine with NFS storage domain.
This role can be used to deploy on localhost (the ansible controller one) or on a remote host (please correctly set he_ansible_host_name).
All the playbooks can be found inside the `examples/` folder.
---
- name: Deploy oVirt hosted engine
  hosts: localhost
  connection: local
  roles:
    - role: ovirt.hosted_engine_setup
---
- name: Deploy oVirt hosted engine
  hosts: host123.localdomain
  roles:
    - role: ovirt.hosted_engine_setup
---
# As an example this file is kept in plaintext. If you want to
# encrypt this file, please execute the following command:
#
# $ ansible-vault encrypt passwords.yml
#
# It will ask you for a password, which you must then pass to
# ansible interactively when executing the playbook.
#
# $ ansible-playbook myplaybook.yml --ask-vault-pass
#
he_appliance_password: 123456
he_admin_password: 123456
{
"he_bridge_if": "eth0",
"he_fqdn": "he-engine.example.com",
"he_vm_mac_addr": "00:a5:3f:66:ba:12",
"he_domain_type": "nfs",
"he_storage_domain_addr": "192.168.100.50",
"he_storage_domain_path": "/var/nfs_folder"
}
{
"he_bridge_if": "eth0",
"he_fqdn": "he-engine.example.com",
"he_vm_ip_addr": "192.168.1.214",
"he_vm_ip_prefix": "24",
"he_default_gateway": "192.168.1.1",
"he_dns_addr": "192.168.1.1",
"he_vm_etc_hosts": true,
"he_vm_mac_addr": "00:a5:3f:66:ba:12",
"he_domain_type": "iscsi",
"he_storage_domain_addr": "192.168.1.125",
"he_iscsi_portal_port": "3260",
"he_iscsi_tpgt": "1",
"he_iscsi_target": "iqn.2017-10.com.redhat.stirabos:he",
"he_lun_id": "36589cfc000000e8a909165bdfb47b3d9",
"he_mem_size_MB": "4096",
"he_ansible_host_name": "host123.localdomain"
}
[root@c75he20180820h1 ~]# iscsiadm -m node --targetname iqn.2017-10.com.redhat.stirabos:he -p 192.168.1.125:3260 -l
[root@c75he20180820h1 ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 6.2.0.874-7
Target: iqn.2017-10.com.redhat.stirabos:data (non-flash)
Current Portal: 192.168.1.125:3260,1
Persistent Portal: 192.168.1.125:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:6a4517b3773a
Iface IPaddress: 192.168.1.14
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 5
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 131072
FirstBurstLength: 131072
MaxBurstLength: 16776192
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 2
Attached scsi disk sdb State: running
scsi3 Channel 00 Id 0 Lun: 3
Attached scsi disk sdc State: running
Target: iqn.2017-10.com.redhat.stirabos:he (non-flash)
Current Portal: 192.168.1.125:3260,1
Persistent Portal: 192.168.1.125:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:6a4517b3773a
Iface IPaddress: 192.168.1.14
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 4
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 5
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 131072
FirstBurstLength: 131072
MaxBurstLength: 16776192
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 6 State: running
scsi6 Channel 00 Id 0 Lun: 0
Attached scsi disk sdd State: running
scsi6 Channel 00 Id 0 Lun: 1
Attached scsi disk sde State: running
[root@c75he20180820h1 ~]# lsblk /dev/sdd
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 100G 0 disk
└─36589cfc000000e8a909165bdfb47b3d9 253:10 0 100G 0 mpath
[root@c75he20180820h1 ~]# lsblk /dev/sde
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sde 8:64 0 10G 0 disk
└─36589cfc000000ab67ee1427370d68436 253:0 0 10G 0 mpath
[root@c75he20180820h1 ~]# /lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/sdd
36589cfc000000e8a909165bdfb47b3d9
[root@c75he20180820h1 ~]# iscsiadm -m node --targetname iqn.2017-10.com.redhat.stirabos:he -p 192.168.1.125:3260 -u
Logging out of session [sid: 4, target: iqn.2017-10.com.redhat.stirabos:he, portal: 192.168.1.125,3260]
Logout of [sid: 4, target: iqn.2017-10.com.redhat.stirabos:he, portal: 192.168.1.125,3260] successful.
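Before running the playbook, it can be worth checking that none of the required variables in a vars file like the JSON examples above was left unset. This is a hypothetical helper, not part of the role (the key list below is the common subset from the NFS example):

```python
# Required variables taken from the NFS example above (illustrative subset)
REQUIRED = ("he_bridge_if", "he_fqdn", "he_vm_mac_addr",
            "he_domain_type", "he_storage_domain_addr")

def missing_vars(deployment):
    """Return the required keys that are absent, null, or empty."""
    return [k for k in REQUIRED if not deployment.get(k)]

# Vars dict mirroring the NFS deployment example
example = {
    "he_bridge_if": "eth0",
    "he_fqdn": "he-engine.example.com",
    "he_vm_mac_addr": "00:a5:3f:66:ba:12",
    "he_domain_type": "nfs",
    "he_storage_domain_addr": "192.168.100.50",
}

print(missing_vars(example))  # []
```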
- Check that all the prerequisites and requirements are met.
- Encrypt passwords.yml
$ ansible-vault encrypt passwords.yml
- Execute the playbook
Local deployment:
$ ansible-playbook hosted_engine_deploy.yml --extra-vars='@he_deployment.json' --extra-vars='@passwords.yml' --ask-vault-pass
Deployment over a remote host:
$ ansible-playbook -i host123.localdomain, hosted_engine_deploy.yml --extra-vars='@he_deployment.json' --extra-vars='@passwords.yml' --ask-vault-pass
To significantly reduce the amount of time it takes to deploy a hosted engine over a remote host, add the following lines to /etc/ansible/ansible.cfg
under the [ssh_connection]
section:
ssh_args = -C -o ControlMaster=auto -o ControlPersist=30m
control_path_dir = /root/cp
control_path = %(directory)s/%%h-%%r
pipelining = True
Here is a demo showing a deployment on NFS, configuring the engine VM with a static IP.
Apache License 2.0