The goal is to automatically provision one or more web server instances behind a load balancer on Azure. These instances, while typically identical to one another, can run a mix of different Linux distributions (just for fun).
To authenticate to Azure via a service principal, you need to provide these variables: `subscription_id`, `client_id`, `secret`, and `tenant`; or set them as environment variables: `AZURE_SUBSCRIPTION_ID`, `AZURE_CLIENT_ID`, `AZURE_SECRET`, and `AZURE_TENANT`. More details in the following links:

- `AZURE_SUBSCRIPTION_ID`: Find your Azure subscription
- `AZURE_CLIENT_ID` and `AZURE_TENANT`: Register an application with Azure AD and create a service principal
- `AZURE_SECRET`: Create a new application secret
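For example, the environment variables can be exported before running the playbooks (the values below are placeholders):

```shell
# Placeholder values: replace with your own service principal details.
export AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_SECRET="your-client-secret"
export AZURE_TENANT="00000000-0000-0000-0000-000000000000"
```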
You need to provide an SSH key pair so Azure can add the public key to `~/.ssh/authorized_keys` on the instances it creates. Ansible then uses the private key to configure these instances after they are created.
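If you don't have a key pair yet, you could generate one like this (the key path here is just an example, adjust it to your setup):

```shell
# Generate an SSH key pair for the VMs (example path; pick your own).
rm -f /tmp/azure_webserver_key /tmp/azure_webserver_key.pub
ssh-keygen -t ed25519 -f /tmp/azure_webserver_key -N "" -q
```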
Follow these steps to provision the Web Server(s).
- Create a Project for this repo (https://github.com/nleiva/ansible-webserver-azure). I called the Project `Azure WebServer` in the example below.
- Create a Microsoft Azure Resource Manager credential with your Azure service principal parameters.
- The number and operating system of the backend servers are defined via the variable `vms`. Its default value is defined in the vms file, which lists 2 instances: one running `centos` and the other one `ubuntu` (these are the two distributions supported at the moment). You can override this with a new `vms` definition as an Extra Variable.
```yaml
vms:
  1: centos
  2: ubuntu
```
- Put all these pieces together in a Job Template pointing to create_resources.yml.
- Run the Job Template.
It should look like this when it finishes:
We distribute the traffic among the instances using an Azure Load Balancer, so the service keeps working even if one of the virtual machines fails. By default, the web server is at http://testbed.eastus.cloudapp.azure.com/. You can modify this with the variable `prefix`; its default value is `testbed`.
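As a sketch, the Extra Variables of the Job Template could override both defaults at once (the values here are examples; only `centos` and `ubuntu` are supported):

```yaml
---
# Example Extra Variables: three backend VMs and a custom DNS prefix.
prefix: mylab        # web server becomes http://mylab.eastus.cloudapp.azure.com/
vms:
  1: centos
  2: ubuntu
  3: centos
```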
This URL will take you to one of the backend VMs. For example:
You can create a similar Job Template pointing to delete_resources.yml instead, and run it.
You can alternatively run this with ansible-navigator:

```shell
ansible-navigator run create_resources.yml
```
Note: I use podman as my container engine (`container-engine`). You can change it to another alternative in the ansible-navigator config file.
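For reference, a minimal config sketch that pins the engine might look like this (based on ansible-navigator's settings file schema; verify against your installed version):

```yaml
---
# ansible-navigator.yml
ansible-navigator:
  execution-environment:
    container-engine: podman
```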
From the ansible-navigator FAQ:
A: The simplest way to use SSH keys with an execution environment is to use ssh-agent and use default key names....
... ansible-navigator will automatically volume mount the user’s SSH keys into the execution environment in 2 different locations to assist users not running ssh-agent.
...the keys are mounted into the home directory of the default user within the execution environment as specified by the user’s entry in the execution environment’s /etc/passwd file. When using OpenSSH without ssh-agent, only keys using the default names (id_rsa, id_dsa, id_ecdsa, id_ed25519, and id_xmss) will be used. ...
```shell
-v /home/current_user/.ssh/:/root/.ssh/
```
Note: When using ansible_ssh_private_key_file with execution environments, the path to the key needs to reference its location after being volume-mounted into the execution environment (e.g. /home/runner/.ssh/key_name or /root/.ssh/key_name). It may be convenient to specify the path to the key as ~/.ssh/key_name, which will resolve to the user's home directory with or without the use of an execution environment.
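For instance, an inventory or group_vars entry following that convention could look like this (`key_name` is a placeholder for your actual key file):

```yaml
# Hypothetical group_vars entry: the path resolves inside the
# execution environment, where the user's keys are volume-mounted.
ansible_ssh_private_key_file: ~/.ssh/key_name
```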
Check `ansible-core`.