
User guide


Intro

Admiral is a highly scalable and very lightweight container management platform for deploying and managing container-based applications.

Admiral's capabilities include modeling, provisioning and managing containerized applications via the UI, YAML-based templates or Docker Compose files, as well as configuring and managing container hosts.

The main feature of the Admiral project is enabling users to provision applications that are built from containers. Admiral uses the Docker Remote API to provision and manage containers, including retrieving stats and info about container instances. From a deployment perspective, developers use Docker Compose, Admiral templates or the Admiral UI to compose their application and deploy it using the Admiral provisioning and orchestration engine. Cloud administrators can manage the container host infrastructure and apply governance to its usage, including grouping of resources, policy-based placements, quotas and reservations, and elastic placement zones.

Supported Docker Versions

For the current Admiral version:

  • Minimum supported version: Docker 1.12+, Docker Remote API 1.24. This is the API version used by Admiral for all calls to the docker daemon.

For earlier Admiral versions:

  • Minimum supported version: Docker 1.9, Docker Remote API 1.21. This is the API version used by Admiral for all calls to the docker daemon.
  • Minimum version for deployments with docker networking: Docker 1.10, Docker Remote API 1.22. Network-scoped aliases (introduced in Docker 1.10) are needed in this case.
  • Recommended version for deployments with docker networking: Docker 1.11+, Docker Remote API 1.23+. Basic load balancing for services with matching network aliases was introduced in Docker 1.11.
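To check which Docker Remote API version the daemon on a given host actually speaks, you can ask the Docker CLI directly. The format template below is standard docker client syntax; this is an optional sanity check, not something Admiral requires:

# Prints just the Remote API version supported by the daemon, e.g. 1.24
docker version --format '{{.Server.APIVersion}}'

# The full (unfiltered) output also shows the client and server engine versions
docker version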

Configure Container Hosts

The initial configuration step is configuring an already existing Docker host (provisioning a new one is on the roadmap).

Configure Existing Container Docker Host

A step-by-step tutorial with screenshots: Getting started with VMware Admiral Container Service on Photon OS

To configure an already existing Docker container host, navigate to "Hosts" and click "Add Host". Admiral communicates with the Docker container host via the Docker Remote API (see the Supported Docker Versions section).

Hence, the Docker host must have the Docker Remote API enabled and all certificates configured properly. There are two options for enabling the Docker Remote API: manually or automatically over SSH.

Manual configuration

# Enable the Docker Remote API on TCP port 4243 while keeping the local unix socket
DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock'

Note: the Admiral agent uses a unix socket file to communicate with the docker daemon. The path to the unix socket file should be /var/run/docker.sock.

The example above enables the Docker Remote API over http (not https) on a specific port and does NOT enable Docker Remote API authentication (not recommended for production but ideal for initial evaluations and demos). In such a case, the configuration in Admiral should include the whole URL: the scheme (http, since the default one is https) and the port (since the default port is 443 for https and 80 for http). The container host address or IP should then be entered like:

http://{docker-host-ip}:4243

Since no authentication is provided, the "Credentials" field in the form should remain empty and only a Placement Zone should be configured besides the Docker host URL.
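Before adding the host in Admiral, it can be useful to confirm that the Remote API is actually reachable. A minimal check, assuming the example port 4243 used above:

# Should return the daemon's version information as JSON
curl http://{docker-host-ip}:4243/version

# The same check over the unix socket, run on the host itself (requires curl 7.40+)
curl --unix-socket /var/run/docker.sock http://localhost/version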

The configuration overall includes:

  • Container host IP or hostname in the form (http|https)://(IP or hostname)(:port). The default URL scheme is "https" if not provided.
  • Credentials - for the Docker Remote API, the credentials are in the form of a public/private certificate pair.
  • Select or create a Placement Zone - a "Placement Zone" is a logical grouping of resources that can later be mapped to a specific "Placement" in order to apply deployment-related policies.
  • Tags - key:value tags

Automatic Configuration Over SSH

Automatic configuration of the Docker Remote API over SSH can be selected from the "Add Host" page by choosing "Auto configure". This option will configure the Docker Remote API on the address and port filled in under "Address", with generated self-signed certificates. Currently supported operating systems are:

  • CoreOS

  • PhotonOS

  • Ubuntu Server 16.04 or greater

Note: Docker ships with CoreOS and PhotonOS as part of the operating system. On Ubuntu, Docker will be installed automatically as part of the configuration using the Docker install script; for that, your Ubuntu host must be connected to the Internet during the configuration process.

Warning: Automatic configuration will not take any previous configuration of the docker daemon into account and will try to override it. The resulting configuration may be inconsistent or invalid in that case. It is recommended to use automatic configuration only on clean deployments of a supported OS. If your docker daemon is already properly configured, add the host by following the manual configuration guide above.

Prerequisite: Access to the root user of the OS or another sudoer user. For PhotonOS the default user is root. For CoreOS it is core, which is a sudoer by default. On Ubuntu, run "sudo adduser <username> sudo" to add your user to the sudo group.

The configuration overall includes:

  • Container host IP or hostname in the form https://(IP or hostname)(:port). "http" is not supported for automatic configuration. The Docker API will be exposed on the port of your choice; popular choices are 443 and 2376, but any available port will work.
  • Credentials - SSH credentials for the machine (username/password or username/private key). The user must be "root" or another sudoer.
  • Select or create a Placement Zone - a "Placement Zone" is a logical grouping of resources that can later be mapped to a specific "Placement" in order to apply deployment-related policies.
  • Tags - key:value tags

Placements and Host Selection

First, let's start with a simple use case to illustrate the placement and allocation logic and concepts. When we have typical Dev/Stage/Production environments, or different projects, we need to group or isolate the available resources (in this case container hosts) accordingly.

In the first use case (Dev/Stage/Production):

  1. Projects - we can create three groups/projects "Dev/Stage/Production"
  2. Placement Zones - group the container hosts into dev/stage/production placement zones
  3. Placements - link Projects to Placement Zones in Placements.

Once we have this configuration in place then requesting a container or container application with selected project "Dev" will place the containers only on hosts that are grouped by the placement zone "Dev".

Placements

A placement is a way to limit and reserve resources used by a resource group. A placement has the following fields:

  • Name - the name of the placement
  • Project - the group to which the placement applies
  • Placement Zone - the placement zone is a logical grouping of container hosts.
  • Priority (Optional) - There may be more than one placement per group. The priority specifies in what order the placements should be selected.
  • Instances (Optional) - (Integer > 0) Maximum number of containers provisioned
  • Memory Limit (Optional) - A number between 0 and the memory available in the placement zone. This is the total memory available for resources in this placement. 0 means no limit; if not zero, the minimum is 4 MB.
  • CPU Shares (Optional) - the provisioned resources will be given this amount of CPU shares (relative weight). All containers in this placement zone will be configured with that number of CPU shares. By default, all containers get the same proportion of CPU cycles. This proportion can be modified by changing the container's CPU share weighting relative to the weighting of all other running containers. This property modifies the proportion from the default of 1024; the weighting can be set to 2 or higher. If 0 is set, the system will ignore the value and use the default of 1024 (see the CLI illustration below). More info: https://docs.docker.com/engine/reference/run/ or http://stackoverflow.com/questions/26841846/how-to-allocate-50-cpu-resource-to-docker-container
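The CPU Shares value maps to Docker's standard relative-weight mechanism. As a rough, hedged illustration of how that weight behaves on a plain Docker host (the container names and the nginx image are arbitrary examples):

# Under CPU contention, the first container gets roughly twice the CPU time
# of the second (weight 2048 vs the default 1024)
docker run -d --cpu-shares=2048 --name high-priority nginx
docker run -d --name normal-priority nginx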

[placement diagram]

Each placement zone is related to a placement. A placement zone is a collection of hosts. Additionally, the placement zone might be configured to be dynamic based on matching tags on the placement zones and the container hosts. All container hosts with given tags will be dynamically allocated to a placement zone with those matching tags.

The available resources in a placement zone are the sum of all the resources of the hosts inside it. Thus, transitively, a placement manages the resources of a collection of hosts. Multiple placements can be created for the same placement zone but one placement zone per placement is the recommended configuration.

When a container is provisioned, the placements are filtered based on the resource group, available resources (instances, memory) and priority. This is the first step of the host selection process. After a placement is picked, we continue with the host selection procedure.

Host Selection

Hosts are filtered based on their power state and their available memory. Note: in order to filter hosts based on available memory, a memory limit must be set in the container description of the container to be provisioned. Then the affinity filters are applied. The affinity filters are active only when provisioning an application, not when provisioning a single container.

Affinity Filters

Affinity filters are used to filter hosts based on relationships between containers in the same application. The user may set two containers to be provisioned on the same host or on different hosts. In addition, the rules may be hard or soft. In the "Provision Container" form under "Policy" the user may explicitly set affinity/anti-affinity constraints. An affinity constraint consists of the following:

  • Affinity type - (affinity or anti-affinity) In case of anti-affinity the containers will be placed on different hosts, otherwise they will be placed on the same host
  • Service name - the name of the other container
  • Constraint type - (hard or soft) A hard rule means that if there is no way to satisfy the constraint the provisioning should fail. If the rule is soft the provisioning should continue.

Affinity constraints may be set implicitly based on other settings:

  • Cluster Size - If cluster size is bigger than 1, the placement engine will try to spread the multiple containers among as many hosts as it can
  • Volumes From - If "Volumes From" is set, the container will be placed on the same host as the container it gets its volumes from
  • If multiple containers expose the same host port, they will be placed on different hosts
  • Containers that have the same pod will be placed on the same host

How the Actual Host Selection Is Done

When a provisioning request arrives and the placement is selected, a dependency graph is built. The dependencies are resolved based on the affinity filters, "dependsOn" properties in the application and others. This graph is sorted topologically in order to get the order in which containers and networks should be provisioned in the application. The containers in a connected component are provisioned sequentially, but in parallel with the containers in other connected components. For each container, the hosts in the placement are run through the affinity filters, where each filter decreases the size of the set of hosts available for provisioning. If no hosts are available after all filters have run, the provisioning fails. If multiple hosts are available, a host is selected randomly. Note that the host selection procedure doesn't "look ahead" and doesn't go through all the possible configurations, i.e. it places a container on a host without knowing whether the next container can be placed at all. This may lead to cases where the provisioning fails even though there is a way to place the containers. For example:

  • 2 available hosts
  • Wordpress application
    • 2 wordpress nodes
    • 1 DB node with anti-affinity to wordpress

Provisioning will fail, as the two wordpress nodes will first be provisioned on the two available hosts, and because of the anti-affinity rule the DB node cannot be placed on either host. If the affinity rule is instead set on the wordpress nodes, the provisioning will succeed: first the DB node will be placed on one of the hosts, and then the two wordpress nodes on the other.

Templates, Images & Registries

Registries

A Docker Registry is a stateless, server-side application that stores and lets you distribute Docker images. You can configure multiple registries to gain access to both public and private images. This can be done under Templates > Manage Registry. To configure a registry, you need to provide its address, a custom registry name, and optionally credentials. The address must start with an http(s) scheme to designate whether the registry is secured or unsecured. If no scheme is provided, https is assumed by default. The registry port defaults to 443 for https and 80 for http, so if your registry runs on another port (for example 5000), you should specify it. You can also choose to enable or disable registries at any time to include or exclude their results from your image search.

The container service can interact with both Docker Registry HTTP API V1 and V2 with the following specifics:

  • V1 over HTTP (unsecured, plain http registry). Using HTTP registries is not recommended as they are very insecure and require manual configuration for every docker host in Admiral. If you still decide to use one for isolated testing, you must update DOCKER_OPTS on each host like this: DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (the port is mandatory) and restart Docker after that; see the sketch after this list. More info on how to configure the flag: https://docs.docker.com/registry/insecure/
  • V1 over HTTPS. Normally behind a reverse proxy like NGINX. The standard implementation is open sourced at https://github.com/docker/docker-registry
  • V2 over HTTPS. The standard implementation is open sourced at https://github.com/docker/distribution
  • V2 over HTTPS with basic authentication.
  • V2 over HTTPS with authentication via central service. More info: https://docs.docker.com/v1.7/registry/spec/auth/token/
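A hedged sketch of the host-side change from the first bullet above. The registry address is the placeholder from the Docker documentation, the file path is typical but distribution-dependent, and the restart command depends on the host's init system:

# /etc/default/docker (or wherever DOCKER_OPTS is set on your distribution)
DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"

# Restart the daemon so the flag takes effect
sudo systemctl restart docker    # or: sudo service docker restart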

Known supported third-party registries: JFrog Artifactory, Harbor. Docker Hub is enabled by default for all tenants and is not present in the registry list. However, it can be disabled with a system property.

Docker does not normally interact with secure registries configured with certificates signed by an unknown authority. The container service handles this case by automatically uploading untrusted certificates to all docker hosts, thus enabling the hosts to connect to these registries. In case a certificate cannot be uploaded to a given host, the host is automatically disabled. More info at https://docs.docker.com/registry/insecure/#/using-self-signed-certificates

Images

Creating a single container from an image is no different than doing it from the Docker CLI. You have the ability to search images through the registries that you have defined (see the Registries section above). Once you have found the image you want to create a container from, you can do single-click provisioning, which creates a container based on the latest tag of the image, attached to the bridge network and with all exposed ports published. You can also provision a container by providing additional info. This will take you to a form where you can provide most of the known Docker API properties as well as additional properties like affinity rules, health config and links.

Additionally you can pick desired images from different registries and add them as favorites. Favorite images appear whenever you open the 'Repositories' tab anew or conduct an empty search (i.e. press 'Enter' on an empty search field). Container images can be added to favorites using the 'Add Image to Favorites' button, which is visible after expanding the additional image actions with the arrow next to 'Provision'. Similarly, while viewing your favorite images you can remove any image via the 'Remove Image from Favorites' button located in the same place. A specific container image can be added to favorites only once. If a registry gets disabled or removed, all favorite images belonging to it stop being shown; however, they are not removed, so if the registry gets re-added or enabled, its favorite images will still be there.

Templates

In addition to provisioning a container from an image, you can also create a template from that image. A template is a reusable configuration for provisioning a container or a suite of containers. This is where you can define a multi-tier application consisting of different linked services (a service is one or more containers of the same type/image).

Creating templates

Templates can be created by:

  • Starting from a base container: select an image, save this container definition as a template, and add additional containers/services along the way.
  • Importing a YAML file: you can click the import button, which allows you to either provide the contents of the YAML as text or browse the filesystem to upload a YAML file. The YAML represents the template, i.e. the configuration for the different containers and their connections. Supported format types are Docker Compose and the Container Service's YAML format.

Provisioning templates

Templates can be provisioned like single container images, with a catalog-like experience. Based on a variety of properties of the template and the whole environment, the containers of the template will be provisioned on one or more hosts. For more info see Policies and Placement. Once a template is provisioned, it is shown as an application in which you can drill down to the individual containers.

Exporting templates

Templates can be exported to a file in the same two formats that are supported for importing: Docker Compose and the Container Service's YAML format. You can import a template from one format, modify it in the UI and export it in another format. However, keep in mind that some of the configurations that are specific to the Container Service, like health config, affinity constraints, etc., will not be included if you export in Docker Compose format. More info on Docker Compose support...

Containers & Applications

Container configurations

When defining a single container or a multi-container application, in addition to all known Docker container properties you can also configure the following:

Health Config

To have the container status updated based on a custom criteria, you can configure a health check method. This can be done under the Health Config tab in the Container Definition form. You can choose between the following methods of health check: HTTP, TCP and executing a command on the container. No health checks are configured by default.

  • HTTP - when the HTTP option is set, you will have to provide an API endpoint to access and the HTTP method and version to use. The endpoint is relative, i.e. you don't need to enter the container's address. You can also specify a timeout for the operation and thresholds for the healthy/unhealthy status. A healthy threshold of 2 means that 2 successive successful calls are needed for the container to be considered healthy (status RUNNING). An unhealthy threshold of 2 means that 2 unsuccessful calls are needed for the container to be considered unhealthy (status ERROR). For all the states in between the healthy and unhealthy thresholds, the container's status will be DEGRADED.
  • TCP - when the TCP option is set, only a port is required. The health check will try to establish a TCP connection with the container on the provided port. The Timeout, Healthy Threshold and Unhealthy Threshold options are the same as in HTTP mode.
  • Command - when the Command option is set, you will be requested to enter a command to be executed on the container. The success of the health check is determined by the exit status of the command (a minimal example follows below).
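For the command-based method, any command whose exit status reflects application health will do. A minimal hypothetical sketch, assuming the containerized application ships curl and exposes a local health endpoint on port 8080 (both are assumptions, not something Admiral prescribes):

# Exits 0 when the app answers on its local health endpoint, non-zero otherwise
curl -f http://localhost:8080/health || exit 1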

Health configuration also supports options for ignoring the health check during provisioning and for automatic redeployment of containers.

  • Ignore health check on provision - uncheck this option to force health check on provision. By forcing it, a container will not be considered provisioned until one successful health check passes.
  • Autoredeploy - check this option if you want containers in ERROR state to be automatically redeployed.

Networking

Admiral supports native container networking which covers cross host communication, service discovery and load balancing.

Supported Network Types

Below is a brief introduction to the different network types supported by Docker and partially by vSphere Integrated Containers (VIC).

Overlay

Overlays use networking tunnels to deliver communication across hosts. This allows containers to behave as if they are on the same machine by tunneling network subnets from one host to the next; in essence, spanning one network across multiple hosts.

Bridge

A Linux bridge provides a host internal network in which containers on the same host may communicate, but the IP addresses assigned to each container are not accessible from outside the host. Bridge networking leverages iptables for NAT and port-mapping, which provide single-host networking. Bridge networking is the default Docker network type, where one end of a virtual network interface pair is connected between the bridge and the container.

Host

In this approach, a newly created container shares its network namespace with the host, providing higher performance (near bare-metal speed) and eliminating the need for NAT; however, it does suffer from port conflicts. While the container has access to all of the host's network interfaces, unless deployed in privileged mode the container may not reconfigure the host's network stack.

None

None is straightforward in that the container receives a network stack but lacks an external network interface. It does, however, receive a loopback interface.

3rd party plugins

Docker supports installing 3rd party plugins that provide additional network drivers that can be used. For more info on those plugins and how to configure them see here

Deploying Applications with Connected Containers

Generally there are two ways to connect containers on a single host or across hosts:

User defined networks

User defined networks expose the networking of the application as a separate component that has its own properties and lifecycle.

New container networks can be defined as part of application templates designed in the template editor, or using YAML or Docker Compose application definitions which can be imported through the user interface or REST API. The new network can be configured to use the "bridge" or "overlay" driver. Depending on the configuration of the hosts in the environment, the product will decide what type of network to create. Typically most hosts support the "bridge" driver. To support the "overlay" driver across hosts, those hosts need to be configured in a cluster registered to a common key-value store. For more info on how to set up overlay networking, see overlay-network-with-an-external-key-value-store

When deploying the application, if there are two or more added hosts which are also registered to a common key-value store and the configured affinity rules mandate that the containers should be deployed on different hosts, then an overlay network will be created. If one or more of these preconditions are not met, the containers comprising the application will be deployed on a single host and will communicate over a bridge network.

The product also supports existing networks which have already been created on one or more hosts added to the container management solution. These networks appear in the network management tab and can also be used in templates through the template editor. The type of such networks is defined at creation time and cannot be modified later. The type of an existing network also influences the distribution of multi-container applications, i.e. if an application uses a bridge network, this indicates to the container management solution that the containers comprising this application must be deployed on a single container host.
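For reference, networks created directly on a host with the plain Docker CLI, like the hedged examples below, will also appear as existing networks. The names are arbitrary, and the overlay example additionally requires the hosts to be joined to a common key-value store:

# Single-host bridge network
docker network create --driver bridge my-bridge-net

# Multi-host overlay network (hosts must share a common key-value store)
docker network create --driver overlay my-overlay-net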

Service Discovery

When containers are connected through a user defined network, they can reach each other using their names as defined in the template. You can also provide additional aliases by which containers can find each other. Aliases can be defined in two ways: by assigning names on the target container that define how the container will be reachable in the network by others, or by specifying an alias that the source container will use to reach the target container. This all happens with the help of an implicit DNS server on the host.

Example: take two services represented by two containers, one named serviceA and the other serviceB.

  • If they are connected to one user defined network, then serviceA can reach serviceB, using its name i.e. serviceB.
  • In addition, you can add more aliases for serviceB so that it can be reached by multiple hostnames. In this case you can provide a list of aliases that serviceB will use when connected to the network. serviceA and all other services in the network will reach it by any of those aliases.
  • If however serviceA is designed to look up serviceB using some other hostname, e.g. my-service, then in the template definition for serviceA you can add a link to serviceB providing the alias my-service. Then only serviceA will be able to reach serviceB by using my-service, in addition to the other aliases of serviceB. This is also covered in Links with Networking (see the CLI sketch below).
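A hedged illustration of both kinds of aliases using the plain Docker CLI. The network name, the nginx image and my-app-image are placeholders; in Admiral the same relationships are expressed in the template rather than with docker run:

docker network create app-net

# Network-scoped alias: every container in app-net resolves serviceB
# both as "serviceB" and as "service-b-alt"
docker run -d --name serviceB --network app-net --network-alias service-b-alt nginx

# Link-scoped alias: only serviceA additionally resolves serviceB as "my-service"
docker run -d --name serviceA --network app-net --link serviceB:my-service my-app-image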

For more information on user defined networks, please refer to Docker's documentation

Links

Links provide another way to connect containers. There are two ways to use Links:

Links with networking

These links provide another way of service discovery; they work for containers that are added to the same user defined network. They provide a way to specify multiple hostnames/aliases that one service can use to connect to another service. Links are assigned on the source container/service that will connect to the target service using its name and a preferred alias.

For more information on links in user defined networks, please refer to Docker's documentation

Links without networking (Legacy Links)

Note: These links refer to the legacy Docker feature and so we discourage their use - use User defined networks and Links with networking instead.


They provide a way for one container to connect to another using a preferred alias. If you decide to use container links, you should be aware of a few limitations when linking containers in an application:

  • They work on a single host only. The product is aware of this limitation and will place the linked containers on a single host.
  • You cannot have bi-directional linking; this is caused by the limitation that when linking a container, the target container must already be up and running.
  • When linking a cluster, there will be a link created to each container of the cluster from the dependent container. Notice though, that Docker does not support multiple links with the same alias. For this reason, do not set an alias for the link to a cluster - we will generate it for you.
  • You cannot update the links of a container at runtime. This means that when scaling up or down a linked cluster, the dependent container’s links will not be updated.

For more information on the legacy links, please refer to Docker's documentation

Network Management Interface

The container management feature provides a user interface for viewing existing networks as well as creating new networks and subsequently deleting them.

To view all existing networks:
  1. Click on the Resources tab
  2. Click on the Networks tab

All networks that have been created externally on the managed container hosts as well as the networks created as part of provisioned applications are listed, with information on the network type as well as their IPAM configuration.

To create a new network:
  1. Navigate to the Networks tab
  2. Click on Create Network
  3. Add one or more hosts from the Hosts dropdown. If the selected hosts are independent, a new "bridge" network will be created for each selected host. If any of the selected hosts belong to a key-value store cluster, a new "overlay" network will be created for each of those hosts instead, visible to all the hosts in the key-value store cluster.
  4. Toggling the Advanced checkbox will expand the IPAM configuration where Subnet, IP Range, Gateway as well as custom properties can be set (see the CLI example after this list).
  5. Click Create Network to save the new network and provision it on the managed container host(s).
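The Advanced IPAM fields correspond roughly to the standard options of docker network create. A hedged example with illustrative values:

docker network create \
  --driver bridge \
  --subnet 172.28.0.0/16 \
  --ip-range 172.28.5.0/24 \
  --gateway 172.28.5.254 \
  my-custom-net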
Deleting a network:
  1. Navigate to the Networks tab
  2. Move the cursor to a network in the list and click the Remove button
  3. Click Delete
Deleting multiple networks at once:
  1. Navigate to the Networks tab
  2. Click on the Select Items button and select one or more networks from the list
  3. Click the Delete button and click Delete again to confirm the operation

Cluster size and Scale in/out

Users have the ability to create clusters of containers by setting the cluster size field in the container provisioning form under "Policy". This means that Admiral will provision as many containers of that type as specified, and requests will be load balanced among all containers in the cluster. Users are also allowed to modify the cluster size of an already provisioned container/application by clicking the + and - icons in the container's grid tile. This will respectively increase or decrease the size of the cluster by 1. When modifying the cluster size at runtime, all affinity filters and placement rules are taken into account.

Load balancing

The network traffic between containers in the same network is automatically load balanced. It is done using Docker's Round Robin DNS. When scaling containers in and out, the DNS server will automatically list the new IPs of the containers, so they will be reachable as well. However, Round Robin DNS has limitations that may not fit your needs; in that case you can introduce your own load balancing solution and resolve the IPs of the containers from the DNS by using tools like nslookup and dig.


Note: This applies to User defined networks and Links with networking. It is not supported for Legacy links
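As a quick, hedged check of the round-robin behavior described above, run a lookup from inside any container attached to the same network. The service name serviceB, the network name app-net and the busybox image are just examples:

# Repeated lookups return the cluster members' IPs, typically in rotating order
docker run --rm --network app-net busybox nslookup serviceB

# Or, from a shell inside an existing container on that network, if dig is installed
dig +short serviceB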


Volumes

Admiral supports native container volumes backed by the "local" volume plugin.

Supported Volume Drivers

Below is a brief introduction to the different volume drivers supported by Docker and partially by vSphere Integrated Containers (VIC).

  • local - The default built-in local driver. Volumes created by this driver have a local scope, which means that they can be accessed only by containers on the same host.
  • vsphere - The default driver for VIC. On VIC, local is an alias for vsphere.
  • 3rd party plugins - Admiral does not support third-party volume plugins officially but it is possible to create and use volumes based on such plugins.

Volume Management Interface

The container management feature provides a user interface for viewing existing volumes as well as creating new volumes and subsequently deleting them.

To view all existing volumes:
  1. Click on the Resources tab
  2. Click on the Volumes tab

All volumes that have been created externally on the managed container hosts as well as the volumes created as part of provisioned applications are listed, with information on the volume type as well as driver options.

To create a new volume:
  1. Navigate to the Volumes tab
  2. Click on Create Volume
  3. Enter a volume name.
  4. (Optional) Specify the driver. Defaults to "local".
  5. Add a host from the Hosts dropdown.
  6. Toggling the Advanced checkbox will expand the driver options.
  7. Click Create Volume to save the new volume and provision it on the managed container host(s).
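For reference, the equivalent operations with the plain Docker CLI on a host look roughly like this; the volume names and driver options are illustrative:

# Create a volume with the default "local" driver
docker volume create my-data

# Create a "local" volume with explicit driver options (a tmpfs-backed volume)
docker volume create --driver local \
  --opt type=tmpfs --opt device=tmpfs --opt o=size=100m \
  my-tmpfs-volume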
Deleting a volume:
  1. Navigate to the Volumes tab
  2. Move the cursor to a volume in the list and click the Remove button
  3. Click Delete
Deleting multiple volumes at once:
  1. Navigate to the Volumes tab
  2. Click on the Select Items button and select one or more volumes from the list
  3. Click the Delete button and click Delete again to confirm the operation