Installation
When all the prerequisites are met and conf.json is prepared:
- Install from RPM package (from controller):
```
# Switch to cland
su - cland
# Install via yum
yum -y install cloudland-<ver>-<rel>.<arch>.rpm
# Copy conf.json to /opt/cloudland/deploy
cp <path-to>/conf.json /opt/cloudland/deploy
# Deploy controller and compute nodes
cd /opt/cloudland/deploy
./deploy.sh
# Verify: access https://[controller-ip] from a web browser and use the default "admin/passw0rd" to log in. Please change the admin password immediately.
```
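  Before opening a browser, a quick reachability check can be done from any machine that can reach the controller. This is a minimal sketch; `<controller-ip>` is a placeholder for your real controller address:

```
# Fetch only the response headers; -k skips certificate verification,
# which is assumed to be needed while the certificate is self-signed.
curl -kI https://<controller-ip>/
```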
- Install from source code (from controller, which is also the build server):
```
# Build from source
# Refer to [Build]
# Switch to cland
su - cland
# Copy conf.json to /opt/cloudland/deploy
cp <path-to>/conf.json /opt/cloudland/deploy
# Deploy controller and compute nodes
cd /opt/cloudland/deploy
./deploy.sh
# Verify: access https://[controller-ip] from a web browser and use the default "admin/passw0rd" to log in. Please change the admin password immediately.
```
- Deploy new compute node(s) after installation
```
# Prepare new compute node
# Refer to [Prequisites : For each compute nodes]
# Add new compute node(s) configuration to conf.json
# Important: all ids must be continuous in conf.json
# Refer to [conf.json] (below)
# Deploy new compute node(s), inclusive [begin_id, end_id]
cd /opt/cloudland/deploy
./deploy_compute.sh begin_id end_id
```
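  For example, assuming two new hosts were appended to conf.json with ids 3 and 4 (the ids are illustrative), deploying just those nodes would look like this sketch:

```
# Deploy only the newly added compute nodes; the id range is inclusive.
# The ids 3 and 4 are hypothetical -- use the ids you added to conf.json.
cd /opt/cloudland/deploy
./deploy_compute.sh 3 4
```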
- Update compute node(s) after installation
```
# Prepare new compute node if the configurations are changed
# Refer to [Prequisites : For each compute nodes]
# Update the configurations in conf.json if they are changed
# Refer to [conf.json] (below)
# Build and install new binaries if source code is changed
# Refer to [Build]
# Update compute node, inclusive [begin_id, end_id]
cd /opt/cloudland/deploy
./deploy_compute.sh begin_id end_id
```
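  As an illustration, pushing an updated configuration or new binaries to a single compute node uses the same id as both bounds; the id 2 below is hypothetical:

```
# Re-deploy a single compute node (inclusive range [2, 2]).
cd /opt/cloudland/deploy
./deploy_compute.sh 2 2
```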
- Note:
- 'admin password' and 'database password' will be asked for during the first-time installation.
- The 'admin password' is used to log in to the controller (as admin:admin_password). It can be changed later.
- The 'database password' is used by CloudLand to access PostgreSQL. In the current release, it is saved in /opt/cloudland/web/clui/conf/config.toml after deployment, so if you change the password, config.toml must be updated as well so that CloudLand can still access PostgreSQL.
- These two passwords will NOT be asked for again when upgrading CloudLand.
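A minimal sketch for locating the stored database password after deployment; the only assumption is that the relevant key in config.toml contains the word "password", so inspect the file to confirm the exact key name:

```
# Show any password-related entries written during deployment.
# The key name is an assumption; open the file to confirm before editing.
grep -in password /opt/cloudland/web/clui/conf/config.toml
```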
Logically, there are three types of roles in CloudLand:
- Build server:
- Refer to Build for more information. It's used to build the binaries (SCI, CloudLand, CLUI, etc.).
- Controller:
- The node the user uses to manage all resources, such as compute nodes (hypervisors), networks, VMs, images, etc.
- The user accesses the controller via https://[controller-ip]
- Compute nodes (hypervisors):
- The nodes which create VMs.
Note:
- In the current release, all nodes must have the same architecture, e.g. s390x or x86_64.
- For development, the build server and controller can be the same machine. After building, CloudLand can be installed directly once the compute nodes and conf.json are prepared.
- The controller can be a compute node too, in which case deploy.sh applies the compute node role to the controller as well.
- CPU: >=4
- Memory: >=8G
- Disk: disk1 >= 500G, disk2 >= 500G; disk1 is for compute and disk2 is for storage. For development or a quick trial, a single small disk (e.g. 10G) is fine.
- Red Hat Enterprise Linux 8.3 or above
- yum is used to install the following packages, and you may need to install the EPEL repo first:
- ansible jq gnutls-utils iptables iptables-services postgresql postgresql-server postgresql-contrib
- the user 'cland' is added and granted passwordless sudo (for Ansible deployment); a verification sketch follows this list:
```
useradd cland
echo 'cland ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/cland
```
- cland.key and cland.key.pub are generated under /home/cland/.ssh, and cland.key.pub has been added to /home/cland/.ssh/authorized_keys for Ansible deployment
```
su - cland
# create .ssh and authorized_keys if they don't exist, and change their mode
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# generate cland.key and cland.key.pub
yes y | ssh-keygen -t rsa -N "" -f /home/cland/.ssh/cland.key
# add cland.key.pub to authorized_keys
cat /home/cland/.ssh/cland.key.pub >> /home/cland/.ssh/authorized_keys
```
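The sketch below is one way to spot-check the controller prerequisites before running deploy.sh; the package names mirror the yum line above, and the key paths are the ones described in this section:

```
# Verify the required packages are installed (names from the yum line above).
rpm -q ansible jq gnutls-utils iptables iptables-services \
    postgresql postgresql-server postgresql-contrib

# Verify passwordless sudo for cland (should print "ok" without a prompt).
su - cland -c 'sudo -n true && echo ok'

# Verify the deployment key pair exists and is self-authorized.
ls -l /home/cland/.ssh/cland.key /home/cland/.ssh/cland.key.pub
grep -qFf /home/cland/.ssh/cland.key.pub /home/cland/.ssh/authorized_keys && echo "key authorized"
```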
- yum is used to install the following packages, and you may need to install the EPEL repo first:
- sqlite jq mkisofs NetworkManager net-tools iptables iptables-services
- For KVM on x86_64 and KVM on s390x:
- Compute node uses KVM to manage virtual machines
- qemu-img libvirt libvirt-client dnsmasq keepalived dnsmasq-utils conntrack-tools
- For z/VM:
- (Current release) It is assumed that Feilong has been installed and is providing its service at http://127.0.0.1:8080; refer to its repository and documentation for more information.
- the user 'cland' is added and granted passwordless sudo (for Ansible deployment)
```
useradd cland
echo 'cland ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/cland
```
- The cland.key.pub from the controller is added to /home/cland/.ssh/authorized_keys on each compute node
- use ssh-copy-id from the controller (see the sketch after this list)
- or copy cland.key.pub from the controller and paste its content into /home/cland/.ssh/authorized_keys on each compute node directly
- verify that the following command, run as cland on the controller, works without a password prompt:
```
ssh -i ~/.ssh/cland.key cland@compute-node-X
```
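Here is a minimal sketch of pushing the controller's deployment key to one compute node with ssh-copy-id and then confirming passwordless access; compute-node-X is a placeholder hostname:

```
# Run as cland on the controller; compute-node-X is a placeholder.
# Repeat for every compute node listed in conf.json.
ssh-copy-id -i /home/cland/.ssh/cland.key.pub cland@compute-node-X

# Confirm the key works without a password prompt.
ssh -i /home/cland/.ssh/cland.key cland@compute-node-X true
```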
- (Current release) Network requirement for KVM and KVM on Z:
- conf.json is the configuration file that describes the controller and the compute nodes. There is an example at /opt/cloudland/deploy/conf.json.example. Copy the JSON part to conf.json and update it according to the real configuration of the controller and each compute node (a hypothetical skeleton is sketched after this list).
- conf.json is used to generate the cloudrc.local file, which is used by the compute node when performing the actual work. After deployment, check /opt/cloudland/scripts/cloudrc.local for more information.
- The controller IP is the entry point to access CloudLand after installation.
- The sequence of compute node ids is mandatory: they start from 0 and increase by 1.
- There are three virt_types: zvm, kvm-s390x and kvm-x86_64:
- zvm is for the z/VM hypervisor. In the current release, this kind of hypervisor relies on Feilong, which must be installed and providing its service at http://127.0.0.1:8080 (the default service point) on the compute node. The default guest names are ZCCXXXXX; they can be found in cloudrc.local and are not included in conf.json.
- kvm-s390x, KVM on Z, is the KVM hypervisor running on IBM Z. The settings are the same as for KVM, but it has one more entry, 'zlayer2_iface', which is used to configure the FDB entries.
- kvm-x86_64, the KVM on x86_64.
- Note: In the current release, all nodes must have the same architecture, which means CloudLand can support zvm and kvm-s390x at the same time (s390x for the build server, controller, and compute nodes), or kvm-x86_64 only (x86_64 for the build server, controller, and compute nodes).
- The zone_name should be pre-set according to the whole topology.
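Purely as an illustration of the rules above, the heredoc below prints a hypothetical conf.json skeleton. Every field name and the overall structure are assumptions made for readability only; /opt/cloudland/deploy/conf.json.example remains the authoritative format:

```
# Hypothetical skeleton only -- copy the real format from conf.json.example.
# It reflects the rules above: ids start at 0 and increase by 1 with no gaps,
# virt_type is one of zvm / kvm-s390x / kvm-x86_64, all nodes share one
# architecture, and zone_name matches the planned topology.
cat <<'EOF'
{
  "controller": { "ip": "192.168.1.10", "zone_name": "zone0" },
  "compute_nodes": [
    { "id": 0, "ip": "192.168.1.11", "virt_type": "kvm-s390x", "zlayer2_iface": "encb00" },
    { "id": 1, "ip": "192.168.1.12", "virt_type": "zvm" }
  ]
}
EOF
```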