gluster ansible runner is a proof-of-concept tool that automates user interaction with gluster ansible through command line arguments. The goal is to use this to build a tool that will automate WA's interaction with gluster ansible.
All command line arguments must be formatted in JSON style and wrapped in quotes to be parsed properly. True and False options must be capitalized.
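For example, a boolean option is passed as a quoted, capitalized string and a list option as a quoted JSON-style list (both flags below are taken from the examples at the end of this document):
--gluster_infra_fw_permanent "True" --gluster_infra_fw_ports "["2049/tcp"]"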
Specify the playbook you want Ansible to use:
infra.yml, cluster.yml, features.yml, repo.yml
Specify the inventory you want Ansible to use. Input format examples below:
"[vdos:192.168.122.158,192.168.122.159,192.168.122.160],[other:192.168.122.158,192.168.122.159],[more:192.168.122.158]"
"[vdos:192.168.122.158,192.168.122.158,192.168.122.158],[other:192.168.122.158]"
"[vdos:192.168.122.158,192.168.122.158,192.168.122.158]"
Enable or disable a setting. For ports: should this port accept (enabled) or reject (disabled) connections. The states "present" and "absent" can only be used in zone-level operations (i.e. when no other parameters but zone and state are set).
A list of ports in the format PORT/PROTO. For example
111/tcp. This is a list value.
Whether to make the rule permanent.
The firewalld zone to add to or remove from.
Name of a service to add to or remove from firewalld. The service must be listed in the output of firewall-cmd --get-services. This is a list variable.
Optional variable, default is taken as present.
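For instance, a smaller variant of the firewall example at the end of this document, which opens a single port, might look like this (all flags appear in that example):
./test.py --inventory "[servers:192.168.122.158]" -p "infra.yml" --gluster_infra_fw_ports "["111/tcp"]" --gluster_infra_fw_state "enabled" --gluster_infra_fw_zone "public" --gluster_infra_fw_permanent "True"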
Mandatory argument if VDO has to be set up. Key/value pairs have to be given; name and device are the keys. See the example below for syntax.
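A sketch of a VDO invocation is shown below. The --gluster_infra_vdo flag does not appear in the examples at the end of this document; the name is an assumption that follows the naming pattern of the other infra flags. The name and device keys are the ones described above:
./test.py --inventory "[servers:192.168.122.158]" -p "infra.yml" --gluster_infra_vdo "[{"name": "vdo_sdb", "device": "/dev/sdb"}]"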
Backend disk type.
RAID diskcount, can be ignored if disktype is JBOD
Optional variable, if not provided glusterfs_vg is
used as vgname.
Comma-separated list of physical devices. If vdo is
used this variable can be omitted.
Stripe unit size (KiB). Do not include a trailing 'k' or 'K'.
Metadata size for the LV; the recommended value of 16G is used by default and can be overridden by setting this variable. Include the unit [G|M|K]
Optional variable. If omitted glusterfs_thinpool is
used for thinpoolname.
Thinpool size, if not set, entire disk is used.
Include the unit [G|M|K]
This is a list of hash/dictionary variables with the keys lvname and lvsize.
Optional. Needed only if a thick volume has to be created. Defaults to gluster_infra_lv_thicklvname if thicklvsize is defined.
Optional. Needed only if a thick volume has to be created. Include the unit [G|M|K]
This is a dictionary with mount values; path and lv are the keys.
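Putting the backend variables together, a condensed variant of the first example at the end of this document (all flags appear there) might look like this:
./test.py --inventory "[servers:192.168.122.158]" -p "infra.yml" --gluster_infra_pvs "/dev/vdb" --gluster_infra_lv_logicalvols "[{"lvname": "thin_lv1", "lvsize": "25G"}]" --gluster_infra_mount_devices "[{"path": "/mnt/thinv1", "lv": "thin_lv1"}]"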
SSD disk for cache setup, specific to HC setups. Should be an absolute path, e.g. /dev/sdc
Optional variable, if omitted glusterfs_ssd_cache is
used by default.
Size of the cache logical volume. Used only while
setting up cache.
Optional. Cache metadata volume name.
Optional. Cache metadata volume size.
Optional. If omitted writethrough is used.
Number of arbiter bricks to use (Only for arbiter
volume types).
Bricks that form the GlusterFS volume. The format of the bricks is hostname:mountpoint/brick_dir. Alternatively, the user can provide just mountpoint/brick_dir, in which case the gluster_hosts variable has to be set.
Disperse count for the volume. If this value is
specified, a dispersed volume will be created
The force option will be used while creating the volume; any warnings will be suppressed.
Contains the list of hosts that have to be peer
probed.
Specifies the number of redundant bricks while
creating a disperse volume. If redundancy count is
missing an optimal value is computed.
Replica count while creating a volume. Currently
replica 2 and replica 3 are supported.
If the value is present, the volume will be created. If absent, the volume will be deleted. If started, the volume will be started. If stopped, the volume will be stopped.
The transport type for the volume.
Name of the volume. Refer to the GlusterFS documentation for valid characters in a volume name.
Contains the list of bricks along with the new bricks
to be added to the GlusterFS volume. The format of the
bricks is mountpoint/brick_dir
Contains the list of bricks to be removed.
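A sketch of stopping an existing volume is shown below. The --gluster_cluster_state flag is an assumption based on the state variable described above; only the volume-creation flags appear in the examples at the end of this document:
./test.py --inventory "[servers:192.168.122.87]" -p "cluster.yml" --gluster_cluster_volume "testvol" --gluster_cluster_state "stopped"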
Name of the NFS Ganesha cluster.
An existing GlusterFS volume which will be exported
through NFS Ganesha
A comma-separated list of hostnames; these are a subset of the nodes of the Gluster Trusted Pool that form the Ganesha HA cluster.
A comma separated list of virtual IPs for each of the
nodes specified above.
One of the nodes from the Trusted Storage Pool; gluster commands will be run on this node. Setting gluster_features_ganesha_masternode: {{ groups['ganesha_nodes'][0] }} uses the first node of the ganesha_nodes inventory section.
List of the nodes in the Trusted Storage Pool. Setting gluster_features_ganesha_clusternodes: {{ groups['ganesha_nodes'] }} uses the nodes listed in the ganesha_nodes section of the inventory.
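A sketch of an NFS Ganesha run is shown below, assuming the command line flags mirror the gluster_features_ganesha_* variable names mentioned above; these flags do not appear in the examples at the end of this document, so the names are assumptions:
./test.py --inventory "[ganesha_nodes:192.168.122.158,192.168.122.159]" -p "features.yml" --gluster_features_ganesha_masternode "192.168.122.158" --gluster_features_ganesha_clusternodes "["192.168.122.158","192.168.122.159"]"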
The cluster IPs/hostnames. Can be set with gluster_hci_cluster: {{ groups['hc-nodes'] }}, where hc-nodes is a section in the inventory file.
This is a dictionary setting the volume information.
See below for further explanation and variables.
List of packages to be installed. The user need not set this; it is picked up from the defaults.
This does not need to be set by the user; defaults are picked up. Set it to override the defaults. For default values, see the Gluster HCI documentation.
Activation key for enabling the repositories
Whether to auto-attach the available repositories
Username for the subscription-manager command
Password for the subscription-manager command
If set to yes, subscription-manager registers by force even if already registered
List of pool ids to attach
Disable all the repositories before attaching to new
repositories
List of repositories to enable
Attach to HCI repositories
Attach to list of NFS Ganesha repositories
Attach to list of SMB repositories
Copy your SSH key to the root user on all nodes beforehand, or Ansible will fail to connect.
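For example, using ssh-copy-id (repeat for every node in the inventory):
ssh-copy-id root@192.168.122.158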
./test.py --inventory "[servers:192.168.122.79,192.168.122.121,192.168.122.249]" -p "infra.yml" --gluster_infra_pvs "/dev/vdb" --gluster_infra_lv_logicalvols "[{"lvname": "thin_lv1", "lvsize": "25G"}, {"lvname": "thin_lv2", "lvsize": "25G"}]" --gluster_infra_mount_devices "[{"path": "/mnt/thinv1", "lv": "thin_lv1"}, {"path": "/mnt/thinv2", "lv": "thin_lv2"}]"
./test.py --inventory "[servers:192.168.122.79,192.168.122.121,192.168.122.249]" -p "infra.yml" --gluster_infra_fw_ports "["2049/tcp", "54321/tcp", "5900/tcp", "5900-6923/tcp", "5666/tcp", "16514/tcp"]" --gluster_infra_fw_permanent "True" --gluster_infra_fw_state "enabled" --gluster_infra_fw_zone "public" --gluster_infra_fw_services "["glusterfs"]"
./test.py --inventory "[servers:192.168.122.87,192.168.122.149,192.168.122.150]" -p "cluster.yml" --gluster_cluster_hosts "["192.168.122.87","192.168.122.149","192.168.122.150"]" --gluster_cluster_volume "testvol" --gluster_cluster_replica_count "3" --gluster_cluster_force "yes" --gluster_cluster_bricks "/data/brick1,/data/brick2,/data/brick3"
./test.py --inventory "[servers:192.168.122.206]" -p "repo.yml" --gluster_repos_username "dpivonka@redhat.com" --gluster_repos_password "*******" --gluster_repos_disable_all "True" --gluster_repos_pools "8a85f98c617475400161756d571b1485" --gluster_repos_rhsmrepos "["rhel-7-server-rpms", "rhel-ha-for-rhel-7-server-rpms"]"