It currently fully supports the following workloads:
- docker - A basic docker installation
- docker-swarm - A swarm installation (using the latest docker version)
- docker-registry - A docker registry installation (using the latest docker version)
- stable-docker - An installation of the current stable version of docker
- ha-docker-swarm - A high-availability docker swarm installation
- keepalived - A high-availability basic keepalived setup
- rancher - A rancher mode installation
- maas - A basic Ubuntu MaaS installation for inventory management
You can see the support details by clicking the workload name.
It really depends on the workload you want to deploy, but by default you get:
- a full dynamic inventory from OpenStack, allowing you to grow or shrink your deployment as needed
- unless specified otherwise, a deployment-specific SSH key, ensuring that only people holding this key can access the deployed nodes
- a separation between 'public nodes', which are internet-exposed and hardened to avoid any unneeded exposure, and 'private nodes', which only communicate through the local customer-specific network
- a bridge (technically a bastion host) so you can access the private nodes securely (see the SSH example below)
- a proxy configuration so the private nodes can fetch updates and install new packages
Note: These are currently configured to be used with an OVH OpenStack account.
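For instance, once a deployment is up, you would typically reach a private node by jumping through the bastion. A minimal sketch using OpenSSH's ProxyJump option, with hypothetical addresses you would replace with the ones from your own deployment:
# hypothetical addresses; assumes the Ubuntu image's default 'ubuntu' user and that the
# deployment key has been loaded into your ssh-agent (ssh-add ~/.ssh/MY_DEPLOYMENT_KEY)
ssh -J ubuntu@BASTION_PUBLIC_IP ubuntu@PRIVATE_NODE_IP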
- First, install Ansible using either pip or your distribution's package manager
- using pip:
- Install Ansible and the libraries used to connect to OpenStack (shade and python-openstackclient)
sudo pip install ansible shade python-openstackclient --ignore-installed six
- using default install:
- On Ubuntu
sudo apt-get install ansible
- On CentOS
sudo yum install ansible
- On macOS (you need to have Homebrew installed)
brew install ansible
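Whichever route you choose, a quick sanity check (illustrative only, not part of the playbooks) confirms the tooling is in place:
ansible --version
openstack --version   # only available if you installed python-openstackclient via the pip route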
- Clone this repo:
git clone https://github.com/b-yond-infinite-network/openstack-ansible-workloads
- Make sure your clouds.yaml file is configured properly:
cat ~/.config/openstack/clouds.yaml
it should look something like:
clouds:
  ovh:
    profile: ovh
    auth:
      project_name: 1234567890123456
      username: YOUR_OPENSTACK_USERNAME
      password: YOUR_PASSWORD
    regions:
      - BHS3
      - DE1
      - GRA3
      - SBG3
      - UK1
      - WAW1
- To find your project name, username and password:
  - Go to your OpenStack account (in general a URL like http://horizon.your-cloud-provider),
  - Click 'Identity > Project'
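Before going further, you can confirm that the credentials in clouds.yaml actually work by querying your tenant with the OpenStack client (illustrative command, assuming you installed python-openstackclient via the pip route):
openstack --os-cloud ovh server list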
- Adapt your OpenStack config file to your account and your needs:
cat config/openstack-config.yaml
it should look something like:
openstack_config:
  image_name: Ubuntu 17.10        #this is the OS image we'll be using
  flavor_name: s1-2               #this is the default flavor we'll be using
  controller_flavor: s1-2         #this is the flavor we'll be using for the 'controller' node (see specific role for details)
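If you are not sure which image or flavor names are valid in your region, you can list what your provider exposes (illustrative commands, again assuming python-openstackclient):
openstack --os-cloud ovh image list
openstack --os-cloud ovh flavor list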
- You can now launch the Ansible playbook using:
./openstack-ansible -e os_cloud=<MY_CLOUDS_YAML_PROFILE> -e role=<THE_WORKLOAD_NAME>
You can also pass the following optional parameters:
- node_count= the total number of nodes you want to create/maintain
- public_node_count= the number of nodes you want to be public-facing
- action= a non-default action to trigger, which can be:
  - delete: the script will delete all existing instances (see the teardown example below)
  - delete_all: the script will delete the instances, the local config files and the keys in OpenStack
  - delete_all_includinguserkey: the script will wipe keys and instances both in OpenStack and locally
  - skip_setup: the script will execute only the docker role and its dependencies, skipping all creation and setup of instances
- key_filename= the explicit SSH key file name to use
- For example, a full invocation with optional parameters would look like:
./openstack-ansible -e os_cloud=<MY_CLOUDS_YAML_PROFILE> -e role=<THE_WORKLOAD_NAME> -e node_count=4 -e key_filename=/tmp/blabla
- In the case of our example OVH configuration using a key automatically generated for that role, that would be:
./openstack-ansible -e os_cloud=ovh -e role=<THE_WORKLOAD_NAME> -e node_count=4
- And if we want to run the docker-swarm role:
./openstack-ansible -e os_cloud=ovh -e role=docker-swarm -e node_count=4
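When you are done with a deployment, the action parameter described above can tear it down again; for example, reusing the docker-swarm role (illustrative invocation):
./openstack-ansible -e os_cloud=ovh -e role=docker-swarm -e action=delete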
Feel free to raise issues and send pull requests, we'll be happy to look at them! We would also love to have other providers add their own workloads and configurations to make this a repository of generic, hardened IaaS recipes.