- Revision
- Scope
- Definitions/Abbreviations
- Overview
- Requirements
- Architecture Design
- High-Level Design
- SONiC Package
- Built-In SONiC Packages
- SONiC Package Management
- SONiC Package Database
- SONiC Base Image and Packages Versioning
- SONiC Application Extension Security Considerations
- Configuration and management
- SONiC Package Upgrade Flow
- Manifest
- SONiC Package Installation
- SONiC Package Changelog
- SONiC Docker Container Resource restrictions
- SONiC Package Docker Container Lifetime
- Initial Extension Configuration
- CLI extension
- SONiC Processes and Docker Statistics Telemetry Support
- Monit Configuration
- Feature Concept Integration
- Multi-DB support
- Configuration Reload
- System Dump
- Multi-ASIC
- Warmboot and Fastboot Design Impact
- SONiC-2-SONiC upgrade
- Kubernetes & SONiC Application Extension
- 3rd party Docker images
- Installing 3rd party image as is.
- Prepare 3rd party image to be SONiC compliant
- SONiC Build System
- SAI API
- Restrictions/Limitations
- Testing Requirements/Design
- Open/Action items
- Figure 1. Basic Concepts
- Figure 2. High Level Overview of SONiC Package integration
- Figure 3. SONiC Package Installation Flow
- Figure 4. SONiC Package Uninstallation Flow
- Figure 5. SONiC Package Upgrade Flow
- Figure 6. Feature Start Sequence Diagram
- Figure 7. Feature Stop Sequence Diagram
Rev | Date | Author | Change Description |
---|---|---|---|
0.1 | 09/2020 | Stepan Blyshchak | Phase 1 Design |
This document describes the high level design of SONiC Application Extension Infrastructure.
Abbreviation | Definition |
---|---|
SONiC | Software for Open Networking in Cloud |
DB | Database |
API | Application Programming Interface |
SAI | Switch Abstraction Interface |
YANG | Yet Another Next Generation |
JSON | JavaScript Object Notation |
XML | eXtensible Markup Language |
gNMI | gRPC Network Management Interface |
SONiC Application Extension Infrastructure is a framework for managing SONiC Application Packages, which in this scope are SONiC compatible Docker images distributed independently of one another and of the base SONiC image.
SONiC NOS was built with extensibility in mind. A key enabler is the fact that the main building block of SONiC is Docker. Every piece of SONiC functionality is packaged inside a Docker image which is then run on a SONiC box. As of today, SONiC comes with a set of Docker containers that are built into the SONiC image, limiting users to the predefined functionality that SONiC supports. The end goal of this proposal is to build a system which makes it possible to extend the SONiC base set of features at runtime without the need to upgrade the whole SONiC OS.
We are going to leverage the existing Docker and Docker registry infrastructure and build the SONiC Application Extension framework on top of it. While Docker provides a tool for packaging an application and a Docker registry for hosting it, it is not enough to just execute "docker pull" to make an application "look and feel" like a native SONiC application.
The SONiC Application Extension framework aims at making it possible to develop 3rd-party applications that integrate natively into SONiC. For that we need to provide the SONiC Application Extension Infrastructure with the API to connect every 3rd-party application to the SONiC native infrastructure, like access to the database, the SAI ASIC programming interface, the sonic-utilities CLI, the Klish based CLI, REST API, gNMI, logging infrastructure, warm and fast restarts, etc.
Once the SONiC Application Extension infrastructure becomes a part of SONiC, application developers will not have to integrate every application into the SONiC codebase but can maintain them separately. This follows all the popular Linux distributions that allow installation of external applications.
This section describes the list of requirements for the SONiC Application Extension Infrastructure.
The following list of requirements has to be met for SONiC Application Extension Infrastructure:
- SONiC OS must provide a CLI to manage SONiC repositories and packages. This includes package installation, un-installation, both cold and warm upgrades as well as adding and removing repositories.
- A definition of a SONiC compatible Docker image and the metadata this Docker image must provide.
- Versioning schema and a mechanism to control package dependencies and conflicts.
- All SONiC packages are registered as an optional feature in SONiC, thus "feature" CLI commands are applicable to SONiC Packages.
- Application upgrades: container level upgrade and system level upgrade - SONiC-2-SONiC upgrade.
- SONiC utilities CLI extension mechanism.
- Resource sharing with SONiC Package container: redis database, syslog, Linux resources etc.
- Build infrastructure and tools for easier package development.
This section covers the changes that are required in the SONiC architecture. In general, it is expected that the current architecture is not changed. This section should explain how the new feature/enhancement (module/sub-module) fits in the existing architecture.
Basic definitions:
SONiC Package - SONiC compatible Docker image providing its functionality as a service
SONiC Package Repository - store of SONiC compatible Docker images that can be referenced by a tag
Docker Registry - a storage and content delivery system, holding named Docker images, available in different tagged versions
There are three notions: package, repository and registry. A repository is a Docker registry (private or open like Docker Hub) repository with tagged images for a specific package.
In the above figure Azure/sonic-dhcp-relay and Azure/sonic-snmp are repositories with a set of images.
SONiC Packages must meet a few requirements in order to be SONiC compatible Docker images.
- A package must provide a manifest as part of the Docker image.
- A requirement on container state recording (as described in the Kubernetes HLD).
- The DockerHub or a private registry containing SONiC images is always accessible from the SONiC switch.
The idea is to auto-generate most of the components on the host OS based on the manifest provided by the SONiC Package.
Every SONiC Docker image can be converted to a SONiC Package, although only a subset of SONiC Dockers will be converted to SONiC Packages in the first phase. The rest are Dockers for which it might be complicated to separate the OS part from the Docker image itself; those Docker images are considered to be built-in. Built-in packages cannot be removed or upgraded like SONiC Packages, and the infrastructure will mark them with a built-in flag.
This will allow for a smooth transition of SONiC Docker images into SONiC packages by marking all of the existing Docker images as built-in and then removing this flag for images that become SONiC packages.
The following list enumerates built-in Docker containers that cannot be converted to SONiC Packages as part of phase 1:
- database
- syncd
- swss
- pmon
For those packages it might be a challenge to support installation, un-installation and upgrade using the SONiC Application Extension framework. E.g., syncd contains the vendor SDK, which usually means there is a kernel driver installed on the host OS. Upgrading just syncd may become challenging because of the need to upgrade the kernel driver on the host. The same is true for the pmon, swss and database Dockers - they are tightly integrated into the base OS.
As any mature OS distribution, SONiC will use its own package management solution and provide a utility to manage packages. The SONiC Package Manager will use persistent storage for its purposes at /var/lib/sonic-packages/ on the host OS. There is a packages.json file representing the local database of packages.
/
var/
lib/
sonic-packages/
packages.json
A locking mechanism will be used in order to make a package operation (installation, de-installation, upgrade) atomic. For this, a lock file /var/lib/sonic-packages/lock will be created on every operation and released once the operation is completed, to guarantee that the database won't become broken if two write operations are performed at the same time.
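A minimal sketch of how this locking could look, assuming a Python based package manager and an advisory fcntl lock (the helper name and the install_package call are illustrative):

import fcntl
from contextlib import contextmanager

LOCK_FILE = "/var/lib/sonic-packages/lock"

@contextmanager
def package_db_lock():
    # Hold an exclusive advisory lock for the duration of a package operation.
    with open(LOCK_FILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until a concurrent operation finishes
        try:
            yield
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

# Usage: wrap any read-modify-write of packages.json, e.g.
# with package_db_lock():
#     install_package("cpu-report")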
The /var/lib/sonic-packages/packages.json file is used as a persistent database of available SONiC packages. The schema definition for the packages.json file is as follows:
Path | Type | Description |
---|---|---|
/name | string | Name of the package. |
/name/repository | string | Repository in Docker registry or a local image reference. |
/name/description | string | Application description field. |
/name/default-reference | string | A tag or digest of Docker image that will be a default installation candidate. |
/name/built-in | boolean | A flag to indicate that a Docker is a built-in package. |
/name/status | string | Status indicates the installation status of the package. It is either "installed" or "not-installed". |
/name/installed-version | string | Installed version string. |
A sample of the content in JSON format:
{
"database": {
"repository": "docker-database",
"description": "SONiC database service",
"built-in": true,
"default-reference": "1.0.0",
"status": "installed",
"installed-version": "1.0.0"
},
"swss": {
"repository": "docker-orchagent",
"description": "SONiC switch state service",
"built-in": true,
"default-reference": "1.0.0",
"status": "installed",
"installed-version": "2.0.1"
},
"cpu-report": {
"repository": "Azure/sonic-dhcp-relay",
"description": "DHCP relay feature",
"default-reference": "sha256:5d41c289942008211c2964bca72800f5c9d5ea5aa4057528da617fb36463d4ab",
"status": "not-installed"
},
"featureXXX": {
"repository": "Azure/sonic-snmp",
"description": "Simple Network Monitoring Protocol",
"default-reference": "1.0.0",
"status": "installed",
"installed-version": "1.0.0"
}
}
The initial packages.json that comes with the SONiC image is auto-generated at build time. The packages.json will include every SONiC package installed at build time and have the installation status set. Besides the packages that were built into the SONiC OS at build time, packages.json also includes the repositories that are available for users to install. E.g., a DHCP relay feature may not come with SONiC by default, but the user will have a corresponding entry for the DHCP relay package in packages.json which the user can install.
The community can extend packages.json with their own packages. The recommended way of defining a 'default-reference' is by specifying a digest rather than a tag, so that a package entry points strictly to a specific image.
Once a Docker becomes a SONiC package, the user will have two options:
- SONiC build system will be extended with a build parameter "INCLUDE_$PACKAGE=y|n". If this parameter is set to "y", a package will be installed in SONiC image filesystem during build time.
- If the "INCLUDE_$PACKAGE" is set to "n", the target is not installed, but compiled into Docker Image and published to Docker Hub by CI for users to install the package on a running switch. For that, the reference to the package will be added into packages.json.
Most SONiC Packages will depend on the SONiC base OS API and on other SONiC packages' APIs. Every mature package management solution provides a set of agreements on how package versioning should be done to avoid potential incompatibilities.
This document proposes to use semantic versioning, which is used by many package management solutions.
The version schema is in the format ${MAJOR}.${MINOR}.${PATCH}. Semantic versioning can also include a suffix for pre-release identifiers and a build id, like 1.0.0-dev+153, which can be used for master branch builds. Such a schema allows for simple comparison logic, e.g.: 1.0.0 < 1.1.0 and 1.5.1 > 1.4.20. For more details on the comparison rules follow the reference above.
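For illustration, the core comparison of plain ${MAJOR}.${MINOR}.${PATCH} strings can be reduced to a tuple comparison (pre-release and build-id handling is omitted in this sketch):

def parse_version(version):
    # "1.4.20-dev+153" -> (1, 4, 20); pre-release/build suffixes are dropped here
    core = version.split("-")[0].split("+")[0]
    return tuple(int(part) for part in core.split("."))

assert parse_version("1.1.0") > parse_version("1.0.0")
assert parse_version("1.5.1") > parse_version("1.4.20")  # numeric, not lexicographic, comparison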
The version number is defined as part of the SONiC package manifest. Package maintainers must follow the above versioning approach and are encouraged to follow a commonly used convention in Docker by tagging images with a version number.
The base OS also has a version number that follows the same semantic versioning rules so that a package can define a dependency on the base OS version. A new variable is introduced in /etc/sonic/sonic_version.yml called "base_os_compatibility_version" that follows the semantic versioning schema. This version is in addition to the SONiC version we have today.
This version number does not replace the current SONiC version string that is generated during the build so both SONiC version string and base OS compatibility version coexist. Base OS compatibility version can be updated independently from SONiC version.
The updated output of "show version" command is given below:
admin@sonic:~$ show version
SONiC Software Version: SONiC.master.0-7580c846
Base OS Compatibility Version: 1.0.0
Distribution: Debian 9.13
Kernel: 4.9.0-11-2-amd64
Build commit: 7580c846
Build date: Sat Sep 26 04:17:56 UTC 2020
Built by: johnar@jenkins-worker-8
...
For SONiC containers available in sonic-buildimage repository the corresponding makefile is modified to include a version string:
rules/docker-dhcp-relay.mk
$(DOCKER_DHCP_RELAY)_VERSION = 1.0.0
This version is used to tag a Docker image when installing the SONiC package into the SONiC OS at build time or when publishing the SONiC package.
The versioning of the package is the responsibility of the package maintainer. The exact rules of how versioning is done when branching or publishing new images are out of the scope of this document.
There are a lot of security aspects when it comes to running arbitrary SONiC packages on SONiC. Making sure the container is restricted to do only what it is supposed to do is a complex piece of functionality here. Security considerations of Phase 1 of the Application Extension feature stand on the assumption that 3rd party Dockers are secured and trusted.
- A 3rd party package developer must add an entry in the sonic-buildimage repository in the packages.json.j2 template file. The community tests and verifies the package, and only packages which are trusted are added to the template.
- If a user manually adds an entry to the packages.json file, it is the user's responsibility to verify and check the package they are trying to install.
The SONiC Application Extension Framework may leverage the Docker Content Trust feature which allows pulling only signed images. A set of trusted public keys may come by default with the SONiC image, and users may add their own public keys. Using those public keys and the signature in a Docker image, Docker validates the signature of the image. This way the user can ensure the integrity and the publisher of the image they are trying to install.
The SONiC Package Manager is another executable utility available in the base SONiC OS called sonic-package-manager, or abbreviated spm. The command line interfaces are given below:
admin@sonic:~$ sonic-package-manager
Usage: sonic-package-manager [OPTIONS] COMMAND [ARGS]...
CLI to manage SONiC application packages
Options:
--help Show this message and exit
Commands:
add Add a new package to package database.
remove Remove a package from package database.
list List packages available in SONiC.
show Show SONiC package Info.
install Install SONiC package from repository.
upgrade Upgrade SONiC package.
uninstall Uninstall SONiC package.
admin@sonic:~$ sonic-package-manager show
Usage: sonic-package-manager [OPTIONS] COMMAND [ARGS]...
Show SONiC package Info.
Options:
--help Show this message and exit
Commands:
manifest Print package manifest.
changelog Print package changelog.
admin@sonic:~$ sonic-package-manager list
Name Repository Description Version Status
----------- --------------------- ------------------------ ------------ --------------
database docker-database SONiC database 1.0.0 Built-In
swss docker-orchagent Switch state service 1.0.0 Built-In
syncd docker-syncd-vs SONiC ASIC sync service 1.0.0 Built-In
cpu-report Azure/cpu-report CPU time report feature 1.0.5 Installed
dhcp-relay Azure/dhcp-relay DHCP relay service N/A Not installed
admin@sonic:~$ sudo sonic-package-manager add [NAME] [REPOSITORY] --description=[STRING] --default-reference=[STRING]
admin@sonic:~$ sudo sonic-package-manager remove [NAME]
admin@sonic:~$ sudo sonic-package-manager install --help
Usage: sonic-package-manager install [OPTIONS] [REFERENCE]
Install SONiC package.
Options:
-y, --yes Answer yes for any prompt.
-f, --force Force installation.
--help Show this message and exit
admin@sonic:~$ sudo sonic-package-manager install cpu-report
Install a specific tag:
admin@sonic:~$ sudo sonic-package-manager install cpu-report==1.0.0
Optionally specifying a version after the package name, separated by '==' in the CLI, allows the user to install any version of the extension.
Installing using a tag/version is a convenient method for users to install packages. The downside of using a tag as an image reference is the fact that a tag is a mutable reference; thus, an image tag might not point to the same image at any given time. Docker provides a digest (content-addressable identifier) as an immutable reference. In case users download from Docker Hub rather than from a trusted repository, they might want to use a digest instead for their installations.
Install using a digest:
admin@sonic:~$ sudo sonic-package-manager install cpu-report@sha256:8273733f491f362bb36710fd8a99f78c3fbaecd8d09333985c76f1064b80760f
For developer convenience or for unpublished SONiC packages, it is possible to install the extension from a Docker image tarball.
admin@sonic:~$ ls featureA.gz
featureA.gz
admin@sonic:~$ sudo sonic-package-manager install featureA.gz
This option should mainly be used for debugging and development purposes, while the preferred way will be to pull the image from a repository. The Package Database is updated with the "repository" field set to the local image name. The image is tagged as 1.0.0. In the above example the following entry is added to packages.json:
{
"featureA": {
"repository": "featureA",
"default-reference": "1.0.0",
"installed-version": "1.0.0",
"status": "installed"
}
}
An option to skip all dependency checks and force the installation:
admin@sonic:~$ sudo sonic-package-manager install --force feature
WARN: feature depends on syncd^1.1.1 while installed version is 1.0.5. Ignoring.
The "show version" command can be used to display the feature Docker image version.
admin@sonic:~$ sudo sonic-package-manager upgrade --help
Usage: sonic-package-manager upgrade [OPTIONS] [REFERENCE]
Upgrade SONiC package.
Options:
-y, --yes Answer yes for any prompt.
-f, --force Force upgrade.
--help Show this message and exit
The command line example for package upgrade:
admin@sonic:~$ sudo sonic-package-manager upgrade <package>==1.5.1
Then the new package is downloaded and installed, the package service is stopped and restarted, and the old Docker image is removed.
For a feature that supports warm upgrade:
admin@sonic:~$ sudo config warm-restart enable <package>
admin@sonic:~$ sudo sonic-package-manager upgrade <package>==1.5.1
NOTE: SONiC already supports warm upgrade of Docker containers to some extent via the sonic-installer utility's "upgrade-docker" sub-command. This command will be deprecated and replaced by "sonic-package-manager" functionality.
No CONFIG DB changes required for this feature.
The upgrade scenario is different from sequential uninstall and install operations. In order to minimize service downtime, the image pulling and template rendering have to be done while the old container is running. Also, it might be that the old container's auto-generated container management scripts are incompatible with the new ones and vice versa. So we need to stop the old container using the old auto-generated service file and container management scripts, then replace the old scripts with the new ones and start the service. The sequence is shown in the figure below:
Every SONiC Package that is not a built-in package must provide a manifest. The manifest is a set of Docker image labels which describe the package and instruct SONiC how this package integrates into the system.
Image labeling is a standard Docker way to add metadata information to the image. Besides, a label can be queried using Docker Registry API without the need to download the whole image.
com.azure.sonic.manifest
The value should contain JSON serialized as a string.
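For illustration, the manifest of an image that is already pulled locally could be read as follows, assuming the Python Docker SDK is available; the image reference is hypothetical:

import json
import docker

client = docker.from_env()
image = client.images.get("Azure/sonic-dhcp-relay:1.0.0")        # hypothetical reference
manifest = json.loads(image.labels["com.azure.sonic.manifest"])  # the label value is a JSON string
print(manifest["package"]["version"])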
The following table shows the top-level objects in the manifest. The next sections describe all the relevant fields.
Path | Type | Mandatory | Description |
---|---|---|---|
/version | string | no | Version of manifest schema definition. Defaults to 1.0.0. |
/package | object | no | Package related metadata information. |
/service/ | object | yes | Service management related properties. |
/container/ | object | no | Container related properties. |
/processes/ | list | no | A list defining processes running inside the container. |
/cli | object | no | CLI plugin information. NOTE: Later this will be deprecated and replaced with a YANG module file path. |
A required "version" field can be used in case the format of manifest.json is changed in the future. In this case a migration script can be applied to convert format to the recent version. This is similar to approach SONiC uses for CONFIG DB version.
An installation process has to verify that all requirements are met. The first check to perform is a SONiC base image version match. The package manager has to build the dependency tree and verify that all dependent packages are installed, that version requirements are met, and that the package that is about to be installed does not break any other package, nor that any installed package breaks the package that is going to be installed.
The package manager currently won't try to install missing packages or resolve dependency conflicts but will give the user an appropriate error message.
NOTE: The SONiC package manager does not maintain two different versions of a package at the same time, so only a single version of the package can be installed at any given time.
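A simplified sketch of such a check, where split_constraint and constraint_matches stand in for the real constraint parser and matcher, and the manifest/installed structures are illustrative:

def check_constraints(manifest, installed):
    # 'installed' maps package name to its installed version string
    errors = []
    for constraint in manifest["package"].get("depends", []):
        name, spec = split_constraint(constraint)          # e.g. "swss>=1.0.0" -> ("swss", ">=1.0.0")
        if name not in installed:
            errors.append("missing dependency {}".format(name))
        elif not constraint_matches(installed[name], spec):
            errors.append("{} {} does not satisfy {}".format(name, installed[name], spec))
    for constraint in manifest["package"].get("breaks", []):
        name, spec = split_constraint(constraint)
        if name in installed and constraint_matches(installed[name], spec):
            errors.append("breaks installed {} {}".format(name, installed[name]))
    return errors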
Path | Type | Mandatory | Description |
---|---|---|---|
/package/version | string | yes | Version of the package. |
/package/depends | list of strings | no | List of SONiC packages the service depends on. Defaults to []. |
/package/breaks | list of strings | no | List of SONiC package the service breaks. Defaults to []. |
/package/base-os-constraint | string | no | Base image version dependency constraint. Defaults to '*': allows any version. |
base-os-constraint should have the following format:
[>|>=|==|<|<=|^|!|!=]<version>
Example:
{
"package": {
"base-os-constraint": ">1.0.0"
}
}
depends, breaks fields are defined to be in the following format:
<package-name>[>|>=|==|<|<=|^|!|!=]<version>,[>|>=|==|<|<=|^|!|!=]<version>,...
Examples:
{
"package": {
"depends": "swss>=1.0.0,!=1.2.2,<=3.0.0"
}
}
or
{
"package": {
"conflicts": "syncd^1.0.0"
}
}
Path | Type | Mandatory | Description |
---|---|---|---|
/package/changelog | dict | no | Changelog dictionary. |
/package/changelog/<version> | dict | yes | Package version. |
/package/changelog/<version>/changes | list of strings | yes | Changelog messages for a given version. |
/package/changelog/<version>/author | string | yes | Author name. |
/package/changelog/<version>/email | string | yes | Author's email address. |
/package/changelog/<version>/date | string | yes | Date and time in RFC 2822 format. |
Example:
{
"package": {
"changelog": {
"1.0.0": {
"changes": ["Initial release"],
"author": "Stepan Blyshchak",
"email": "stepanb@nvidia.com",
"date": "Mon, 25 May 2020 12:24:30 +0300"
},
"1.1.0": {
"changes": [
"Added functionality",
"Bug fixes"
],
"author": "Stepan Blyshchak",
"email": "stepanb@nvidia.com",
"date": "Fri, 23 Oct 2020 12:26:08 +0300"
}
}
}
}
This information will be useful for users, so a command to show the changelog for a package is added:
admin@sonic:~$ sonic-package-manager show package changelog <some-package>
1.0.0:
* Initial release
-- Stepan Blyshchak <stepanb@nvidia.com> Mon, 25 May 2020 12:24:30 +0300
1.1.0
* Added functionality
* Bug fixes
-- Stepan Blyshchak <stepanb@nvidia.com> Fri, 23 Oct 2020 12:26:08 +0300
This feature will allow the user to specify resource restrictions for a container via the FEATURE table in CONFIG DB. This feature is not tied to the SONiC Application Extension design, as it can be applied to any existing SONiC container with the existing infrastructure. Every SONiC Package will automatically support this feature.
TODO: Put a reference to the design doc of this feature when it becomes available.
Container lifetime in SONiC is currently controlled by systemd. The current SONiC design for container management consists of a service unit, a container management script /usr/bin/<feature>.sh and optionally /usr/local/bin/<feature>.sh. Those two scripts and a service unit file will be auto-generated during SONiC Package installation. The information needed for them to be auto-generated is defined in the manifest of a package.
The relation between those scripts is shown at a high level in the two figures below:
The service unit file defines dependency relations between different units and start/stop ordering between them. The template for creating service files will be placed at /usr/share/sonic/templates/service.j2. The manifest fed through this template outputs a systemd service file with all the unit properties set according to the package's manifest (a simplified rendering sketch is given after the table below).
Path | Type | Mandatory | Description |
---|---|---|---|
/service/name | string | yes | Name of the service. There could be two packages, e.g.: fpm-quagga, fpm-frr, but the service name is the same - "bgp". For such cases each one has to declare the other service in "breaks". |
/service/requires | list of strings | no | List of SONiC services the application requires. The option maps to systemd's unit "Requires=". |
/service/requisite | list of strings | no | List of SONiC services that are requisite for this package. The option maps to systemd's unit "Requisite=". |
/service/after | list of strings | no | Boot order dependency. List of SONiC services the application is set to start after on system boot. |
/service/before | list of strings | no | Boot order dependency. List of SONiC services the application is set to start before on system boot. |
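A reduced sketch of how the service unit could be rendered from the manifest; the inline template below is an illustrative stand-in for the real service.j2:

import jinja2

SERVICE_TEMPLATE = """\
[Unit]
Description={{ manifest.service.name }} container
Requires={{ manifest.service.requires | default([]) | join(' ') }}
After={{ manifest.service.after | default([]) | join(' ') }}

[Service]
ExecStart=/usr/bin/{{ manifest.service.name }}.sh start

[Install]
WantedBy=multi-user.target
"""

def render_service_unit(manifest):
    # In the real flow the template is read from /usr/share/sonic/templates/service.j2
    return jinja2.Template(SERVICE_TEMPLATE).render(manifest=manifest)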
The script under /usr/local/bin/ has two feature specific use cases.
- In case the feature requires executing specific container lifecycle actions, the code to be executed after the container has started and before the container goes down is executed within this script. The SONiC package manifest includes two data nodes - /service/post-start-action and /service/pre-shutdown-action. Each node is of type string and the value is the path to an executable to execute within the Docker container. Note, post-start-action does not guarantee that the action will be executed before or after the container ENTRYPOINT is started.
An example of a container lifecycle hook applies to the database package. The database systemd service should not reach started state before the redis process is ready, otherwise other services will start but fail to connect to the database. Since there is no control over when the redis process starts, a post-start-action script may execute "sonic-db-cli ping" till the ping is successful. This ensures that the service start is blocked till the redis service is up and running.
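An illustrative sketch of such a post-start action, shipped as an executable inside the database container and assuming sonic-db-cli returns a non-zero exit code while redis is unreachable:

#!/usr/bin/env python3
# Block the service from reaching started state until redis answers.
import subprocess
import time

while subprocess.call(["sonic-db-cli", "PING"]) != 0:
    time.sleep(1)  # keep retrying until redis is up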
The pre-shutdown-action might be useful to execute a specific action to prepare for warm reboot. An example is the teamd script that sends SIGUSR1 to teamd to initiate warm shutdown. Note, the usage of the pre-shutdown action is not limited to warm restart; it is invoked every time the container is about to be stopped or killed.
- Another use case is to manage coldboot-only dependent services by conditionally starting and stopping dependent services based on the warm reboot configuration flag. For example, the DHCP relay service is a coldboot-only dependent service of the swss service; thus a warm restart of swss should not restart DHCP relay, while a cold restart of swss must restart the DHCP relay service. This is controlled by the /service/dependent-of node in the manifest. The DHCP relay package will have "swss" listed in the dependent-of list, which will instruct the auto-generation process to include the DHCP relay service in the list of dependent services in swss. This means the swss.sh script has to be auto-generated as well. To avoid re-generating the swss.sh script we will put dependent services in a separate file that swss.sh can read. The file path is chosen to be /etc/sonic/<service_name>_dependent for single instance services and /etc/sonic/<service_name>_multi_inst_dependent for multi instance services.
An example of the required code change in swss.sh is given below; the hard-coded lists are replaced with reads from the generated files:
DEPENDENT="radv dhcp_relay"
MULTI_INST_DEPENDENT="teamd"
DEPENDENT=$(cat /etc/sonic/${SERVICE}_dependent)
MULTI_INST_DEPENDENT=$(cat /etc/sonic/${SERVICE}_multi_inst_dependent)
The infrastructure does not decide whether this script is needed for a particular package based on warm-reboot requirements or container lifetime hooks provided by a feature; instead, this script is always generated, and if none of the specific actions described above are needed it becomes a simple wrapper around the script under /usr/bin/.
Examples are swss.sh, syncd.sh, bgp.sh. These scripts share a good amount of code, thus making it possible to templatize them into a single script that can be parametrized during generation according to feature needs - placing lifecycle action hooks and dependent service management.
Every service that the starting service requires should be started as well and stopped when the service is stopped, but only if the service is doing a cold start. This means that when a new package is installed it might affect the other scripts, so after installation, once all dependencies are known, all the service scripts under /usr/local/bin/ are re-generated.
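A sketch of how the dependent list files could be updated when an installed package declares a service in /service/dependent-of; the paths follow the convention above, the helper name is illustrative:

def register_dependent(service_name, parent, multi_instance=False):
    # Append this package's service to the parent's dependent list file.
    suffix = "_multi_inst_dependent" if multi_instance else "_dependent"
    path = "/etc/sonic/{}{}".format(parent, suffix)
    try:
        current = open(path).read().split()
    except FileNotFoundError:
        current = []
    if service_name not in current:
        current.append(service_name)
        with open(path, "w") as f:
            f.write(" ".join(current))

# e.g. the dhcp-relay package declaring "dependent-of": ["swss"]
# register_dependent("dhcp_relay", "swss")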
Path | Type | Mandatory | Description |
---|---|---|---|
/service/dependent-of | list of strings | no | List of SONiC services this application is dependent of. Specifying a service X in this option will regenerate the /usr/local/bin/X.sh script and update the "DEPENDENT" list with this package's service. This option is warm-restart related: a warm restart of service X will not trigger this package's service restart. On the other hand, this package's service will be started, stopped and restarted together with service X. Example: for "dhcp-relay", "radv", "teamd" this field will have the "swss" service in the list. |
/service/post-start-action | string | no | Path to an executable inside the Docker image filesystem to be executed after container start. A package may use this field in case a systemd service should not reach started state before some condition is met. E.g.: the database service should not reach started state before the redis process is ready. Since there is no control over when the redis process starts, a "post-start-action" script may execute "redis-cli ping" till the ping is successful. |
/service/pre-shutdown-action | string | no | Path to an executable inside the Docker image filesystem to be executed before the container stops. A use case is to execute a warm-shutdown preparation script. A script that sends SIGUSR1 to teamd to initiate warm shutdown is one such example. |
The script under /usr/bin/ starts, stops or waits for container exit. This script is auto-generated during build time from docker_image_ctl.j2. To allow runtime package installation, it is required to have this file as part of the SONiC image, placed at /usr/share/sonic/templates/docker_image_ctl.j2. The Jinja2 template accepts three arguments - docker_container_name, docker_image_name and docker_run_options - which derive from the /container/ node of the manifest. Besides the options defined in the manifest, the following defaults are used to start the container, allowing it to access base SONiC resources like the database and syslog:
docker create {{ docker_run_options }} \
--net=$NET \
--uts=host \
--log-opt max-size=2M --log-opt max-file=5 \
-v /var/run/redis$DEV:/var/run/redis:rw \
$REDIS_MNT \
-v /usr/share/sonic/device/$PLATFORM:/usr/share/sonic/platform:ro \
-v /usr/share/sonic/device/$PLATFORM/$HWSKU/$DEV:/usr/share/sonic/hwsku:ro \
--env "NAMESPACE_ID"="$DEV" \
--env "NAMESPACE_PREFIX"="$NAMESPACE_PREFIX" \
--env "NAMESPACE_COUNT"=$NUM_ASIC \
--name={{docker_container_name}}$DEV {{docker_image_name}}
Path | Type | Mandatory | Description |
---|---|---|---|
/container/privileged | string | no | Start the container in privileged mode. Later versions of manifest might extend container properties to include docker capabilities instead of privileged mode. Defaults to False. |
/container/volumes | list of strings | no | List of mounts for a container. The same syntax used for '-v' parameter for "docker run". Example: "<src>:<dest>:<options>". Defaults to []. |
A SONiC Package can provide the initial configuration it would like to start with after installation. The JSON file will be loaded into the running CONFIG DB and the boot CONFIG DB file during installation.
Path | Type | Mandatory | Description |
---|---|---|---|
/package/init-cfg | dict | no | Default package configuration in CONFIG DB format. Defaults to {} |
Example:
{
"package": {
"init-cfg": {
"CPU_REPORT": {
"global": {
"report-interval": "5"
}
}
}
}
}
SONiC utilities support show, config, sonic-clear operations. A plugin approach is taken when extending those utilities. A common way to introduce plugin support for a Python application is to structure a plugin as a Python module that can be discovered by the application in a well-known location in the system.
The proposed location is a package directory named plugins under each of the show, config, and sonic-clear Python packages, so that by iterating over modules inside those packages the utilities can load them. This is implemented as described in the Python Packaging Guide: Creating and discovering plugins.
A code snippet describing the approach is given below:
import importlib
import pkgutil

import show.plugins

def iter_namespace(ns_pkg):
    # Iterate over all modules found in the given namespace package.
    return pkgutil.iter_modules(ns_pkg.__path__, ns_pkg.__name__ + ".")

discovered_plugins = {
    name: importlib.import_module(name)
    for finder, name, ispkg
    in iter_namespace(show.plugins)
}
A plugin will register its sub-commands so that any utility will have a new sub-command group. The SONiC package can provide a CLI plugin that will be installed into the right location during package installation and then discovered and loaded by the CLI. Later, once the YANG CLI auto-generation tool is ready, the plugin will be auto-generated and all command conflicts will be checked in advance during installation.
With this approach it is easy to extend the CLI with new commands, but in order to extend a command which is already implemented in sonic-utilities, the code in the sonic-utilities base has to be implemented in an extendable manner.
For the dhcp-relay feature, the CLI needs to be extended with a new sub-command for vlan, which is easily implemented by declaring a new sub-command:
show/plugins/dhcp_relay.py:
from show.vlan import vlan
@vlan.command()
def dhcp_relay():
pass
Extending an existing command like "show vlan brief" will require to rewrite it in an extandable way:
show/vlan.py:
class VlanBrief:
COLUMNS = [
("VLAN ID", get_vlan_id),
("IP address", get_vlan_ip_address),
("Ports", get_vlan_ports),
("Port Tagging", get_vlan_ports_tagging)
]
show/plugins/dhcp_relay.py
def get_vlan_dhcp_relays(vlan):
pass
VlanBrief.COLUMNS.append(("DHCP Helper Address", get_vlan_dhcp_relays))
NOTE: With this approach, or with the auto-generated CLI approach, the output of a command may change when a package is installed, e.g. "DHCP Helper Address" may or may not be present in the CLI output depending on the installed package. Thus all automation and testing tools also have to be auto-generated from YANG in the future.
Path | Type | Mandatory | Description |
---|---|---|---|
/cli/show-cli-plugin | string | no | A path to a plugin for sonic-utilities show CLI command. |
/cli/config-cli-plugin | string | no | A path to a plugin for sonic-utilities config CLI command. |
/cli/clear-cli-plugin | string | no | A path to a plugin for sonic-utilities sonic-clear CLI command. |
Processes And Docker Stats Telemetry HLD
This feature should be supported by SONiC Application Extension without any changes to existing feature implementation.
Processes information is also used to generate the monit configuration file based on the critical flag. The installation triggers a monit configuration reload by issuing systemctl reload monit.service.
Path | Type | Mandatory | Description |
---|---|---|---|
/processes/ | list | no | A list defining processes running inside the container. |
/processes/name | string | yes | Process name. |
/processes/name/critical | boolean | no | Whether the process is a critical process. Defaults to False. |
/processes/name/command | string | yes | Command to run the process. |
Given the following processes list:
{
"processes": {
"cpu-reportd": {
"critical": true,
"command": "/usr/bin/cpu-reportd"
}
}
}
Will generate the following monit configuration:
###############################################################################
## Monit configuration for cpu-report container
## process list:
## cpu-reportd
###############################################################################
check program cpu-report|cpu-reportd with path "/usr/bin/process_checker cpu-report /usr/bin/cpu-reportd"
if status != 0 for 5 times within 5 cycles then alert
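The configuration above could be produced from the processes list with a small template; the inline template here is a reduced, illustrative stand-in for the real monit configuration template:

import jinja2

MONIT_TEMPLATE = """\
## Monit configuration for {{ feature }} container
{% for name, proc in processes.items() if proc.get('critical') %}
check program {{ feature }}|{{ name }} with path "/usr/bin/process_checker {{ feature }} {{ proc['command'] }}"
    if status != 0 for 5 times within 5 cycles then alert
{% endfor %}
"""

def render_monit_conf(feature, manifest):
    return jinja2.Template(MONIT_TEMPLATE).render(feature=feature,
                                                  processes=manifest.get("processes", {}))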
SONiC controls optional features (aka services) via the FEATURE table in CONFIG DB. Once a SONiC Package is installed in the system, it must be treated in the same way as any optional SONiC feature. The SONiC package installation process will register the new feature in CONFIG DB.
Optional Feature HLD Reference
Features are configured in the FEATURE table in CONFIG DB, and the backend - the hostcfgd daemon - enables and disables features according to the configuration. The default desired state for a SONiC Application Extension is "disabled". After installation, the user can enable the feature:
admin@sonic:~$ sudo config feature featureA enabled
Applications should use the swss-common library or swsssdk, which take care of discovering database instances.
config reload and config load_minigraph are used to clear the current configuration and import new configuration from the input file or from /etc/sonic/config_db.json. These commands shall stop all services before clearing the configuration and then restart those services. Thus, any service that consumes CONFIG DB data has to be restarted on reload commands.
As of today, the config reload command implementation has a hard-coded list of services it needs to restart on reload. A service is restarted in this case if its dependency is restarted (like swss) or it is restarted explicitly by config reload. A problem arises when the switch runs a service which sonic-utilities is not aware of. Thus, a solution is proposed in which the reload command implementation is not aware of the exact list of services and the order in which they should be restarted:
- There will be a new target unit in systemd called sonic.target that is wanted by 'multi-user.target'.
- Every existing or newly installed extension service that requires a restart on config reload would have the following configuration in its service file:
[Unit]
BindsTo=sonic.target
After=sonic.target
[Install]
WantedBy=sonic.target
- "WantedBy" tells systemd to start services when target starts.
- "BindsTo" and "After" guaranty that services bound to sonic.target will be stopped and the operation will be blocked till all services stop. Otherwise, service stop can overlap with subsequent "restart" action.
- Config reload would be simplified to:
systemctl stop sonic.target
systemctl reset-failed `systemctl list-dependencies --plain sonic.target`
systemctl restart sonic.target
A different approach was considered to make config reloads easier. Every SONiC service that has to be restarted on config reload can be defined as PartOf sonic.target, so that systemctl restart sonic.target will restart those services in an ordering managed by systemd, without a need to update the list of services in the CLI.
A SONiC Package can specify a command to execute inside the container to get the debug dump that should be included in the system dump file. This command should be specified in the manifest. The command should write its debug dump to stdout, which will be gzip-ed into a file during show techsupport execution. This file will be included in the techsupport archive under dump/<package-name>/dump.gz.
Path | Value | Mandatory | Description |
---|---|---|---|
/package/debug-dump | string | No | A command to be executed during system dump |
Based on the current Multi-ASIC design, a service might be a host namespace service, like telemetry, SNMP, etc., or replicated per ASIC namespace, like teamd, bgp, etc., or running in the host and all ASIC namespaces, like lldp. Based on the /service/host-namespace and /service/asic-namespace fields in the manifest, corresponding service files are created per namespace. systemd-sonic-generator is invoked to create and install service units per namespace.
Path | Value | Mandatory | Description |
---|---|---|---|
/service/host-namespace | boolean | no | Multi-ASIC field. Whether a service should run in the host namespace. Default is True. |
/service/asic-namespace | boolean | no | Multi-ASIC field. Whether a service should run per ASIC namespace. Default is False. |
A SONiC package can specify an order of shutdown on warm-reboot for a service. For example, "bgp" may specify "radv" in this field in order to avoid radv announcing departure and causing hosts to lose their default gateway, while the "teamd" service has to stop before "syncd", but after "swss", to be able to send the last LACP PDU through the CPU port right before the CPU port becomes unavailable.
The warm-reboot and fast-reboot service shutdown scripts have to be auto-generated from the templates /usr/share/sonic/templates/fast-shutdown.sh.j2 and /usr/share/sonic/templates/warm-shutdown.sh.j2, which are symbolic links to the same template. The templates are derived from the fast-reboot script from sonic-utilities.
A services shutdown is an ordered execution of systemctl stop {{ service }} commands, with an exception for the "swss" service, after which a syncd pre-shutdown is requested and a database backup is prepared for the next boot. Service-specific actions that are executed on warm-shutdown are hidden inside the service stop script action.
The *-shutdown.sh scripts are imported and executed in the corresponding *-reboot scripts.
...
{% for service in shutdown_order %}
systemctl stop {{ service }}
{% endfor %}
...
The warmboot-finalizer.sh script must also be templatized and updated based on the processes' reconciles flag.
Path | Value | Mandatory | Description |
---|---|---|---|
/service/warm-shutdown/ | object | no | Warm reboot related properties. Used to generate the warm-reboot script. |
/service/warm-shutdown/after | list of strings | no | Warm shutdown order dependency. List of SONiC services the application is set to stop after on warm shutdown. Example: "bgp" may specify "radv" in this field in order to avoid radv announcing departure and causing hosts to lose their default gateway. NOTE: Putting "radv" here does not mean that "radv" has to be installed, as there is no such dependency for the "bgp" package. |
/service/warm-shutdown/before | list of strings | no | Warm shutdown order dependency. List of SONiC services the application is set to stop before on warm shutdown. Example: the "teamd" service has to stop before "syncd", but after "swss", to be able to send the last LACP PDU through the CPU port right before the CPU port becomes unavailable. |
/service/fast-shutdown/ | object | no | Fast reboot related properties. Used to generate the fast-reboot script. |
/service/fast-shutdown/after | list of strings | no | Same as for warm-shutdown. |
/service/fast-shutdown/before | list of strings | no | Same as for warm-shutdown. |
/processes/<name>/reconciles | boolean | no | Whether the process performs warm-boot reconciliation that the warmboot-finalizer service has to wait for. Defaults to False. |
A SONiC-2-SONiC upgrade shall work for SONiC packages as well. An upgrade will take the new system's packages.json and version requirements and do a comparison between the currently running and the new SONiC image.
Package | Version | Action |
---|---|---|
Non built-in package | Default version defined in packages.json in the new SONiC image is greater than in the currently running SONiC image | Perform a package installation/upgrade in the new SONiC image |
Non built-in package | Default version defined in packages.json in the new SONiC image is less than in the currently running SONiC image | Perform a package installation/upgrade in the new SONiC image of the currently running package version |
The old packages.json and the new packages.json are merged together and the result is stored in the new SONiC image.
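An illustrative sketch of the per-package decision from the table above, reusing the parse_version helper sketched earlier and assuming tag-style (version) references; the packages.json structures follow the schema described in this document:

def select_migration_version(name, old_db, new_db):
    # Pick which version of a non built-in package to install in the new image.
    old_entry = old_db.get(name, {})
    new_default = new_db[name]["default-reference"]
    if old_entry.get("status") != "installed":
        return new_default                        # nothing to migrate, keep the new default
    old_version = old_entry["installed-version"]
    if parse_version(new_default) > parse_version(old_version):
        return new_default                        # new image ships a newer default
    return old_version                            # otherwise keep the currently running version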
Since the installation or upgrade of packages is required to be done at SONiC image installation time, the new SONiC image filesystem needs to be mounted and dockerd has to be started in the chroot environment of the new image, as it is a requisite of sonic-package-manager.
CONFIG DB shall not be updated with the initial config of the package or a new entry in the FEATURE table in this scenario. A package should keep its configuration backward compatible with the old version. After the installation succeeds and a reboot into the new image is performed, all previously installed extensions should be available.
An option to skip migrating packages will be added for users that want to have a clean SONiC installation:
admin@sonic:~$ sudo sonic-installer install -y sonic.bin --no-package-migration
This section is WIP; it describes the approach at a very high level and needs deeper investigation.
The first thing to note here is that a Kubernetes manifest file can be auto-generated from the SONiC Package manifest, as it can provide all the info about how to run the container. This manifest auto-generation is related to Kubernetes master changes while we are focusing on the SONiC OS side, so it is out of the scope here.
Besides the manifest on the master, the SONiC switch has to have the service files, configs and scripts installed for the feature.
- Package installed through SONiC package manager.
During package installation these necessary components will be auto-generated and installed on the switch. The auto-generation must honor the new requirements to support "local" and "kube" modes: in "local" mode the container gets started on systemctl start, while in "kube" mode the appropriate feature label is set.
- Docker container upgrades through Kubernetes.
During that process a new Docker container gets deployed and run on the switch. Kubernetes managed features introduce a state machine for a running container that is reflected in the STATE DB. When a container starts it sets its state to "pending" and waits till the "ready" flag is set.
A state-db-watcherd is listening to those changes. If there is a need to regenerate the service file and scripts, it initiates the sonic-package-manager upgrade re-generation process. Since the image is already downloaded and running but pending, the package manifest can be read and, based on the manifest, the required files can be generated. Then the state-db-watcherd can proceed to systemctl stop the old container and systemctl start the new container, where the container action script sets the container state to "ready" and the container resumes running the applications.
- Docker container deploy through Kubernetes.
This case describes the scenario when the user sets a label via "config kube label add <key>=<value>". If the labels match the manifest on the master, the Docker image gets deployed. Using the same approach, between the "pending" and "ready" states an auto-generation process can happen and register the Docker as a new package. With that, a deployed image will become an installed SONiC Package that can be managed in "local" mode as well.
It is possible to install 3rd party Docker images if the installer uses a default manifest. This default manifest will have only mandatory fields, and the auto-generated service will require only the "docker" service to be started. The name of the service is derived from the name of the package in packages.json.
Default manifest:
{
"service": {
"name": "<package-name>",
"requires": "docker",
"after": "docker"
}
}
The default container run configuration has to be skipped for 3rd party Docker images. E.g. "--net=host", "-e NAMESPACE=$DEV" are not required for 3rd party Docker images, and the 'asic-service' field is ignored for them in this case. These settings have to be present in the container properties in the manifest.
A 3rd party Docker image, as it has no knowledge of SONiC, will not meet the requirements described in the SONiC Package section. Thus, for such Docker images the limitations are:
- can be locally managed only, as the requirement for kube managed features is not met.
Another possibility is to allow users to provide a manifest file URL.
An example of the flow for a collectd Docker image:
admin@sonic:~$ sudo sonic-package-manager add collectd puckel/docker-collectd
admin@sonic:~$ sudo sonic-package-manager install collectd --manifest https://some-server/manifests/collectd/manifest.json
The manifest is saved under /var/lib/sonic-package-manager/collectd/manifest.json for future access.
This will require building a new image based on the existing 3rd party Docker image and making the changes according to the requirements described in the SONiC Package section.
SONiC build system will provide three docker images to be used as a base to build SONiC application extensions - sonic-sdk-buildenv, sonic-sdk and sonic-sdk-dbg.
sonic-sdk-buildenv will have the common SONiC packages required to build a SONiC application extension and will be a minimal version of the sonic-slave container:
- build-essential
- libhiredis-dev
- libnl*-dev
- libswsscommon-dev
- libsairedis-dev
- libsaimeta-dev
- libsaimetadata-dev
- tools, etc.
sonic-sdk will be based on docker-config-engine with the addition of packages needed at run time:
- libhiredis
- libnl*
- libswsscommon
- libsairedis
- libsaimeta
- libsaimetadata
Corresponding -dbg packages will be added to the sonic-sdk-dbg image. The list of packages will be extended on demand when a common package is required by the community. If a package is required but is specific to a SONiC application extension, it should not be added to this list.
No SAI API changes required for this feature.
The current design puts the following restrictions on SONiC packages that can be managed by Application Extension Framework:
- No support for ASIC programming from an extension.
- No support for REST/Klish CLI/gNMI extensions
(TODO)
(TODO)
(TODO)