Since this is targeted as a developer testing tool, the storage model is kept simple by using in-memory storage for the recorded data. This should be kept in mind when recording or importing a recording on systems with limited resources.
Control of this service is accomplished via the following REST API.
A sample Postman collection can be found here.
Device service for BACnet protocol written in C. This service may be built to support BACnet devices connected via ethernet (/IP) or serial (/MSTP).
See README for more details
Device service for CoAP-based REST protocol
See README for more details
Device service for connecting GPIO devices to EdgeX
See README for more details
Device service for connecting Modbus devices to EdgeX.
See README for more details
Device service for connecting MQTT enabled devices to EdgeX.
See README for more details
Also see Adding MQTT Device Tutorial for more details on using Device MQTT.
Use this RESTful API documentation to learn more about the capabilities of the device service.
Device service for REST protocol
See README for more details
Device service for communicating with LLRP-based RFID readers.
See README for more details
Device service for SNMP protocol
See README for more details
Device service to connect serial UART devices to EdgeX
See README for more details
+EdgeX Foundry is a vendor-neutral open source project hosted by The Linux Foundation. EdgeX Foundry builds a common open framework for IoT edge computing. At the heart of the project is an interoperability framework hosted within a full hardware- and OS-agnostic reference software platform to enable an ecosystem of plug-and-play components that unifies the marketplace and accelerates the deployment of IoT solutions.
Docker Quick Start: Jump in to EdgeX Foundry by running locally with Docker containers.
Snap Quick Start: Jump in to EdgeX Foundry by running Snaps.
Build & Run Natively: Build EdgeX and run it natively on your OS.
Build a Device Service: Build a custom device service to connect to your sensor or device.
Build an Application Service: Build or configure a new application service to get data to the cloud, database, enterprise application or other external system.
Running in Hybrid Mode: How to run a service you are working on natively and then run the rest of EdgeX with Docker containers.
"},{"location":"V3TopLevelMigration/","title":"V3 Migration Guide","text":"EdgeX 3.0
Many backward breaking changes occurred in the EdgeX 3.0 (Minnesota) release which may require some migration depending on your use case.
This section describes how to migrate from V2 to V3 at a high level and refers the reader to the appropriate detail documents. The areas to consider for migrating are:
Service configuration is one of the big changes for EdgeX V3.
"},{"location":"V3TopLevelMigration/#configuration-provider","title":"Configuration Provider","text":"If you have customized any EdgeX service's configuration (core, support, device, etc.) via the Configuration Provider (Consul), those customizations will need to be re-applied to those services' configuration or the common configuration in the Configuration Provider once the V3 versions have started and pushed their configuration into the Configuration Provider. The V3 services now use v3
in the Configuration Provider path rather than 2.0
. The folder structure in the Configuration Provider has been flattened so all services are at the same level. See the Configuration File section below for details on migrating configuration.
Example Configuration Provider paths for V3
.../kv/edgex/v3/core-common-config-bootstrapper
.../kv/edgex/v3/core-data/
.../kv/edgex/v3/device-virtual/
.../kv/edgex/v3/app-rules-engine/
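To confirm what the V3 services have pushed, Consul's KV HTTP API can be used to list everything under the new path. A minimal sketch, assuming Consul is reachable on the default localhost:8500 in non-secure mode:
curl "http://localhost:8500/v1/kv/edgex/v3/?keys"
The keys parameter returns only the key names (it implies a recursive listing), which makes it easy to spot services that have not yet pushed their configuration.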
The same applies for custom device and application services once they have been migrated following the guides referenced in the Custom Device Service and Custom Applications Service sections below.
Warning
If the Configuration Provider data is not cleared prior to running the V3 services, the V2 configuration will remain and continue to take up memory. The configuration data in the Configuration Provider can be cleared by deleting the .../edgex/
node with the curl command below prior to starting EdgeX 3.0.
curl --request DELETE http://localhost:8500/v1/kv/edgex?recurse=true
"},{"location":"V3TopLevelMigration/#configuration-file","title":"Configuration File","text":"If you have customized the service configuration files for any EdgeX service (core, support, device, etc.) that configuration will need to be migrated.
The two biggest changes to the service configuration files are the removal of the settings that have moved into the common configuration and the change in file format from TOML to YAML.
See V3 Migration of Common Configuration for the details on migrating configuration common to all EdgeX services.
The tool here can be used to convert your customized service configuration file from TOML to YAML. This should be done once all the common configuration has been removed.
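If the referenced tool is not convenient, any TOML-to-YAML converter will do the job. A rough sketch using Python, assuming Python 3.11+ (for tomllib) and the PyYAML package, neither of which is mentioned by the EdgeX docs:
python3 -c "import sys, tomllib, yaml; print(yaml.safe_dump(tomllib.load(open(sys.argv[1], 'rb'))))" configuration.toml > configuration.yaml
Review the result afterwards, since comments in the original TOML file are not carried over.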
The following are where you can find the configuration migration specifics for individual EdgeX services
If you have custom environment overrides for configuration impacted by the V3 changes, you will also need to migrate your overrides to use the new name or value depending on what has changed. Refer to the links above and/or below for details on migrating the common and/or service-specific configuration to determine if your overrides require migrating.
Note
When using the Configuration Provider, the environment overrides for common configuration are applied to the core-common-config-bootstrapper service. They no longer work when applied to the individual services as the common configuration settings no longer exist in the private configuration.
"},{"location":"V3TopLevelMigration/#custom-compose-file","title":"Custom Compose File","text":"The compose files for V3 have many changes from their V2 counterparts. If you have customized a V2 compose file to add additional services and/or add or modify configuration overrides, it is highly recommended that you start with the appropriate V3 compose file and re-add your customizations. It is very likely that the sections for your additional services will need to be migrated to have the proper environment overrides. The best approach is to use the V3 service section that most closely matches your service as a template.
The latest V3 compose files can be found here: Compose Files
"},{"location":"V3TopLevelMigration/#compose-builder","title":"Compose Builder","text":"If the additional service(s) in your custom compose file are EdgeX released device or app services, it is highly recommended that you use the Compose Builder to regenerate your custom compose file.
The latest V3 Compose Builder can be found here: Compose Builder Readme
"},{"location":"V3TopLevelMigration/#command-line-options","title":"Command Line Options","text":"The following command-line options and corresponding environment variables have been renamed for consistency:
-c/--confdir is replaced by -cd/--configDir
EDGEX_CONF_DIR environment variable is replaced by EDGEX_CONFIG_DIR
-f/--file is replaced by -cf/--configFile
EDGEX_CONFIG_FILE has not changed
-cp/--configProvider has not changed
EDGEX_CONFIGURATION_PROVIDER environment variable is replaced by EDGEX_CONFIG_PROVIDER
If your solution uses any of the renamed options or environment variables you will need to make the appropriate changes to use the new names.
See the Command Line Options page for more details on the above options and the Command Line Overrides section for more details on the above environment variables.
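As an illustration of the renames, a V2-style service launch and its V3 equivalent might look like the following; the service binary name and paths are placeholders, not taken from the EdgeX docs:
# V2 style (no longer recognized)
EDGEX_CONF_DIR=/res ./core-data -c /res -f configuration.toml
# V3 style
EDGEX_CONFIG_DIR=/res ./core-data -cd /res -cf configuration.yaml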
"},{"location":"V3TopLevelMigration/#database","title":"Database","text":"There currently is no migration path for the data stored in the database. If possible, the database should be cleared prior to starting V3 EdgeX. This will allow the database to be V3 compliant from the start. See Clearing Redis Database section below for details on how to clear the Redis database.
The following sections describe what you need to be aware of for the different services that create data in the database.
"},{"location":"V3TopLevelMigration/#core-data","title":"Core Data","text":"The Event/Reading data stored by Core Data is considered transient and of little value once it has become old. The V3 versions of these data collections have minimal changes from their V2 counterparts.
"},{"location":"V3TopLevelMigration/#api-change","title":"API Change","text":"The event endpoint path changed from /event/{profileName}/{deviceName}/{sourceName} to /event/{serviceName}/{profileName}/{deviceName}/{sourceName}.
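A quick way to verify the V3 Core Data API after migration is to query it directly; the port below is the usual Core Data default and the limit value is arbitrary:
curl "http://localhost:59880/api/v3/event/all?limit=5"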
See Core Data API Reference for complete details.
"},{"location":"V3TopLevelMigration/#reading","title":"Reading","text":"Added the tags field in reading. The field that has changed in V3 is the apiVersion, which is now set to v3.
Most of the data stored by Core Metadata will be recreated when the V3 versions of the Device Services start-up. The statically declared devices will automatically be created and device discovery will find and add existing devices. Any device profiles, devices, provision watchers created manually via the V2 REST APIs will have to be recreated using the V3 REST API. Any manually-applied AdministrativeState
settings will also need to be re-applied.
Add/ Update/ Get device
- Removed LastConnected, LastReported and UpdateLastConnected from the device model.
Add/ Update/ Get deviceprofile
- Added the optional field in ResourceProperties.
- Changed mask, shift, scale, base, offset, maximum and minimum from string to number in ResourceProperties.
Get UOM
- New API to retrieve the units of measure.
Add/ Get/ Update ProvisionWatcher
- Added the new DiscoveredDevice object, containing fields such as profileName, Device adminState, and autoEvents.
- Added a new field in the DiscoveredDevice object to allow any additional or customized data.
- ProvisionWatcher has its own adminState now. The Device adminState is moved into the DiscoveredDevice object.
Add/ Update Device
- Removed notify, which is never used.
- Added tags and properties.
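For example, the new UOM endpoint can be exercised directly against Core Metadata; the port below is the usual Core Metadata default and is an assumption here:
curl "http://localhost:59881/api/v3/uom"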
See Core Metadata API Reference for complete details.
"},{"location":"V3TopLevelMigration/#core-command","title":"Core Command","text":""},{"location":"V3TopLevelMigration/#api-change_2","title":"API Change","text":"Changed ds-pushevent and ds-returnevent to use bool values, true or false, instead of yes or no.
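For example, a GET command issued with the new boolean query parameters; the device and command names below come from the virtual device example used later in this guide:
curl "http://localhost:59882/api/v3/device/name/Random-Integer-Device/Int64?ds-pushevent=true&ds-returnevent=true"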
See Core Command API Reference for complete details.
"},{"location":"V3TopLevelMigration/#support-notifications","title":"Support Notifications","text":"Any Subscriptions created via the V2 REST API will have to be recreated using the V3 REST API. The Notification and Transmission collections will be empty until new notifications are sent using EdgeX V3.
Added authmethod to the support-scheduler actions DTO, which indicates how to authenticate the outbound URL. Use NONE when running in non-secure mode and JWT when running in secure mode.
See Support Scheduler API Reference for complete details.
The statically declared Interval and IntervalAction will be created automatically. Any Interval and/or IntervalAction created via the V2 REST API will have to be recreated using the V3 REST API. If you have created a custom configuration with additional statically declared Intervals and IntervalActions, see the Configuration File section under Customized Configuration below.
Application services use the database only when the Store and Forward capability is enabled. If you do not use this capability you can skip this section. This data collection only has data when that data could not be exported. It is recommended not to upgrade to V3 until the Store and Forward data collection is empty or you are certain the data is no longer needed. You can determine if the Store and Forward data collection is empty by setting the Application Service's log level to DEBUG and looking for the following message, which is logged every RetryInterval:
Example
msg=" 0 stored data items found for retrying"
Note
The RetryInterval
is in the app-services
section of common configuration. Changing it there will apply to all Application Services that have the Store and Forward capability enabled.
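One way to check for that message when running in Docker, assuming an application service container named edgex-app-http-export (the container name is illustrative):
docker logs edgex-app-http-export 2>&1 | grep "stored data items found for retrying"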
When running EdgeX in Docker the simplest way to clear the database is to remove the db-data
volume after stopping the V2 EdgeX services.
docker compose -f <compose-file> down
docker volume rm $(docker volume ls -q | grep db-data)
Now when the V3 EdgeX services are started the database will be cleared of the old v2 data.
"},{"location":"V3TopLevelMigration/#snaps","title":"Snaps","text":"Because there are no tools to migrate EdgeX configuration and database, it's not possible to update the edgexfoundry snap from a V2 version to a V3 version. You must remove the V2 snap first, and then install a V3 version of the snap (available from the 3.0 track in the Snap Store). This will result in starting fresh with EdgeX V3 and all V2 data removed.
"},{"location":"V3TopLevelMigration/#local","title":"Local","text":"If you are running EdgeX locally, i.e. not in Docker or snaps and in non-secure mode you can use the Redis CLI to clear the database. The CLI would have been installed when you installed Redis locally. Run the following command to clear the database:
redis-cli FLUSHDB
This will not work if EdgeX is running in secure mode since you will not have the randomly generated Redis password unless you created an Admin password when you installed Redis.
"},{"location":"V3TopLevelMigration/#custom-device-service","title":"Custom Device Service","text":"If you have custom Device Services they will need to be migrated to the V3 version of the Device SDK. See Device Service V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#custom-device-profile","title":"Custom Device Profile","text":"If you have custom V2 Device Profile(s) for one of the EdgeX Device Services they will need to be migrated to the V3 version of Device Profiles. See Device Service V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#custom-pre-defined-device","title":"Custom Pre-Defined Device","text":"If you have custom V2 Pre-Defined Device(s) for one of the EdgeX Device Services they will need to be migrated to the V3 version of Pre-Defined Devices. See Device Service V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#custom-applications-service","title":"Custom Applications Service","text":"If you have custom Application Services they will need to be migrated to the V3 version of the App Functions SDK. See Application Services V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#security","title":"Security","text":"If you have an add-on services running in secure mode you will need to use the new names of the environment variables in EdgeX V3. See Security Services V3 Migration Guide for more details.
"},{"location":"V3TopLevelMigration/#api-gateway-configuration","title":"API Gateway configuration","text":"The API gateway has changed in EdgeX V3. See Security Services V3 Migration Guide for more details.
"},{"location":"V3TopLevelMigration/#authenticated-rest-apis","title":"Authenticated REST APIs","text":"When security is enable, all V3 EdgeX services REST APIs require a JWT authorization token. See Security Services V3 Migration Guide for more details.
"},{"location":"V3TopLevelMigration/#ekuiper","title":"eKuiper","text":""},{"location":"V3TopLevelMigration/#rules","title":"Rules","text":""},{"location":"V3TopLevelMigration/#rest-action","title":"Rest Action","text":""},{"location":"V3TopLevelMigration/#none-secure-mode","title":"None Secure Mode","text":"If running EdgeX in none secure mode and you have rules with rest
action that reference an EdgeX service the endpoint API version will need to be changed from v2 to V3
Example migration of rest
action with EdgeX endpoint
V2:
"actions": [
  {
    "rest": {
      "url": "http://edgex-core-command:59882/api/v2/device/name/Random-Integer-Device/Int64", ...
    }
  }
]
V3:
"actions": [
  {
    "rest": {
      "url": "http://edgex-core-command:59882/api/v3/device/name/Random-Integer-Device/Int64", ...
    }
  }
]
"},{"location":"V3TopLevelMigration/#secure-mode","title":"Secure Mode","text":"If running EdgeX in secure mode and you have rules with rest
action that reference an EdgeX Core Command you will need to convert the rule to use Command via External MQTT. See eKuiper documentation here for more details. This is due to the new microservice authorization on all EdgeX services' endpoints requiring a JWT token which eKuiper doesn't have.
Note
This approach requires an external MQTT broker to send the command requests. The default EdgeX compose files do not include an MQTT Broker. This broker is expected to be external to EdgeX.
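For testing, a broker such as Eclipse Mosquitto can be run alongside EdgeX; the image, container name and port mapping below are a common choice rather than anything prescribed by EdgeX:
docker run -d --name external-mqtt-broker -p 1883:1883 eclipse-mosquitto:2
# note: mosquitto 2.x starts in local-only mode unless you supply a config with an open listener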
"},{"location":"about/","title":"About","text":"EdgeX Foundry is an open source, vendor neutral, flexible, interoperable, software platform at the edge of the network, that interacts with the physical world of devices, sensors, actuators, and other IoT objects. In simple terms, EdgeX is edge middleware - serving between physical sensing and actuating \"things\" and our information technology (IT) systems.
The EdgeX platform enables and encourages the rapidly growing community of IoT solution providers to work together in an ecosystem of interoperable components to reduce uncertainty, accelerate time to market, and facilitate scale.
By bringing this much-needed interoperability, EdgeX makes it easier to monitor physical world items, send instructions to them, collect data from them, move the data across the fog up to the cloud where it may be stored, aggregated, analyzed, and turned into information, actuated, and acted upon. So EdgeX enables data to travel northwards towards the cloud or enterprise and back to devices, sensors, and actuators.
The initiative is aligned around a common goal: the simplification and standardization of the foundation for tiered edge computing architectures in the IoT market while still enabling the ecosystem to provide significant value-added differentiation.
If you don't need further description and want to immediately use EdgeX Foundry use this link: Getting Started Guide
"},{"location":"about/#edgex-foundry-use-cases","title":"EdgeX Foundry Use Cases","text":"Originally built to support industrial IoT needs, EdgeX today is used in a variety of use cases to include:
EdgeX Foundry was conceived with the following tenets guiding the overall architecture:
EdgeX Foundry must be platform agnostic with regard to
EdgeX Foundry must be extremely flexible
EdgeX Foundry should provide "reference implementation" services but encourages best of breed solutions
EdgeX Foundry must provide for store and forward capability
EdgeX Foundry must support and facilitate "intelligence" moving closer to the edge in order to address
EdgeX Foundry must support brown and green device/sensor field deployments
EdgeX Foundry must be secure and easily managed
EdgeX was originally built by Dell to run on its IoT gateways. While EdgeX can and does run on gateways, its platform agnostic nature and micro service architecture enables tiered distributed deployments. In other words, a single instance of EdgeX\u2019s micro services can be distributed across several host platforms. The host platform for one or many EdgeX micro services is called a node. This allows EdgeX to leverage compute, storage, and network resources wherever they live on the edge.
Its loosely-coupled architecture enables distribution across nodes to enable tiered edge computing. For example, thing communicating services could run on a programmable logic controller (PLC), a gateway, or be embedded in smarter sensors while other EdgeX services are deployed on networked servers. The scope of a deployment could therefore include embedded sensors, controllers, edge gateways, servers and cloud systems.
EdgeX micro services can be deployed across an array of compute nodes to maximize resources while at the same time position more processing intelligence closer to the physical edge. The number and the function of particular micro services deployed on a given node depends on the use case and capability of the hardware and infrastructure.
"},{"location":"about/#apache-2-license","title":"Apache 2 License","text":"EdgeX is distributed under Apache 2 License backed by the Apache Foundation. Apache 2 licensing is very friendly (\u201cpermissive\u201d) to open and commercial interests. It allows users to use the software for any purpose. It allows users to distribute, modify or even fork the code base without seeking permission from the founding project. It allows users to change or extend the code base without having to contribute back to the founding project. It even allows users to build commercial products without concerns for profit sharing or royalties to go back to the Linux Foundation or open source project organization.
"},{"location":"about/#edgex-foundry-service-layers","title":"EdgeX Foundry Service Layers","text":"EdgeX Foundry is a collection of open source micro services. These micro services are organized into 4 service layers, and 2 underlying augmenting system services. The Service Layers traverse from the edge of the physical realm (from the Device Services Layer), to the edge of the information realm (that of the Application Services Layer), with the Core and Supporting Services Layers at the center.
The 4 Service Layers of EdgeX Foundry are as follows:
The 2 underlying System Services of EdgeX Foundry are as follows:
Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where most of the innate knowledge of what \u201cthings\u201d are connected, what data is flowing through, and how EdgeX is configured resides in an EdgeX instance. Core consists of the following micro services:
Core services provide intermediary communications between the things and the IT systems.
"},{"location":"about/#supporting-services-layer","title":"Supporting Services Layer","text":"The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Normal software application duties such as scheduler, and data clean up (also known as scrubbing in EdgeX) are performed by micro services in the supporting services layer.
These services often require some amount of core services in order to function. In all cases, supporting services can be considered optional - that is, they can be left out of an EdgeX deployment depending on use case needs and system resources.
Supporting services include:
Application services are the means to extract, process/transform and send sensed data from EdgeX to an endpoint or process of your choice. EdgeX today offers application service examples to send data to many of the major cloud providers (Amazon IoT Hub, Google IoT Core, Azure IoT Hub, IBM Watson IoT\u2026), to MQTT(s) topics, and HTTP(s) REST endpoints.
Application services are based on the idea of a \"functions pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event messages) in the order specified. The first function in a pipeline is a trigger. A trigger begins the functions pipeline execution. A trigger, for example, is something like a message landing in a message queue. Each function then acts on the message. Common functions include filtering, transformation (i.e. to XML or JSON), compression, and encryption functions. The function pipeline ends when the message has gone through all the functions and is set to a sink. Putting the resulting message into an MQTT topic to be sent to Azure or AWS is an example of a sink completing an application service.
"},{"location":"about/#device-services-layer","title":"Device Services Layer","text":"Device services connect \u201cthings\u201d \u2013 that is sensors and devices \u2013 into the rest of EdgeX.
Device services are the edge connectors interacting with the \"things\" that include, but are not limited to: alarm systems, heating and air conditioning systems in homes and office buildings, lights, machines in any industry, irrigation systems, drones, currently automated transit such as some rail systems, currently automated factories, and appliances in your home. In the future, this may include driverless cars and trucks, traffic signals, fully automated fast food facilities, fully automated self-serve grocery stores, devices taking medical readings from patients, etc.
Device services may service one or a number of things or devices (sensor, actuator, etc.) at one time. A device that a device service manages, could be something other than a simple, single, physical device. The device could be another gateway (and all of that gateway's devices), a device manager, a device aggregator that acts as a device, or collection of devices, to EdgeX Foundry.
The device service communicates with the devices, sensors, actuators, and other IoT objects through protocols native to each device object. The device service converts the data produced and communicated by the IoT object into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry.
EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, MQTT, etc.
"},{"location":"about/#system-services-layer","title":"System Services Layer","text":"Security Infrastructure
Security elements of EdgeX Foundry protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules.
There are two major EdgeX security components.
System Management
System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored.
"},{"location":"about/#software-development-kits-sdks","title":"Software Development Kits (SDKs)","text":"Two types of SDKs are provided by EdgeX to assist in creating north and south side services \u2013 specifically to create application services and device services. SDKs for both the north and south side services make connecting new things or new cloud/enterprise systems easier by providing developers all the scaffolding code that takes care of the basic operations of the service. Thereby allowing developers to focus on specifics of their connectivity to the south or north side object without worrying about all the raw plumbing of a micro service.
SDKs are language specific; meaning an SDK is written to create services in a particular programming language. Today, EdgeX offers the following SDKs:
EdgeX\u2019s primary job is to collect data from sensors and devices and make that data available to north side applications and systems. Data is collected from a sensor by a device service that speaks the protocol of that device. Example: a Modbus device service would communicate in Modbus to get a pressure reading from a Modbus pump. The device service translates the sensor data into an EdgeX event object. The device service can then either:
put the event object on a message bus (which may be implemented via Redis Streams or MQTT). Subscribers to the event message on the message bus can be application services or core data or both (see step 1.1 below).
send the event object to the core data service via REST communications (see step 1.2).
When core data receives the event (either via message bus or REST), it persists the sensor data in the local edge database. EdgeX uses Redis as our persistence store. There is an abstraction in place to allow you to use another database (which has allowed other databases to be used in the past). Persistence is not required and can be turned off. Data is persisted in EdgeX at the edge for two basic reasons:
When core data receives event objects from the device service via REST, it will put sensor data events on a message topic destined for application services. Redis Pub/Sub is used as the messaging infrastructure by default (step 2). MQTT or NATS (opt-in during build) can also be used as the messaging infrastructure between core data and the application services.
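If the MessageBus has been switched to MQTT, a quick way to watch this event traffic is to subscribe with any MQTT client; the broker address and the edgex/events/# topic filter below are assumptions based on common EdgeX defaults:
mosquitto_sub -h localhost -p 1883 -t "edgex/events/#"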
The application service transforms the data as needed and pushes the data to an endpoint. It can also filter, enrich, compress, encrypt or perform other functions on the event before sending it to the endpoint (step 3). The endpoint could be an HTTP/S endpoint, an MQTT topic, a cloud system (cloud topic), etc.
"},{"location":"about/#edge-analytics-and-actuation","title":"Edge Analytics and Actuation","text":"In edge computing, simply collecting sensor data is only part of the job of an edge platform like EdgeX. Another important job of an edge platform is to be able to:
Why edge analytics? Local analytics are important for two reasons:
Local analytics allows systems to operate independently, at least for some stretches of time. For example: a shipping container\u2019s cooling system must be able to make decisions locally without the benefit of Internet connectivity for long periods of time when the ship is at sea. Local analytics also allow a system to act quickly in a low latent fashion when critical to system operations. As an extreme case, imagine that your car\u2019s airbag fired on the basis of data being sent to the cloud and analyzed for collisions. Your car has local analytics to prevent such a potentially slow and error prone delivery of the safety actuation in your automobile.
EdgeX is built to act locally on data it collects from the edge. In other words, events are processed by local analytics and can be used to trigger action back down on a sensor/device.
Just as application services prepare data for consumption by north side cloud systems or applications, application services can process and get EdgeX events (and the sensor data they contain) to any analytics package (see step 4). By default, EdgeX ships with a simple rules engine (the default EdgeX rules engine is eKuiper \u2013 an open source rules engine and now a sister project in LF Edge). Your own analytics package (or ML agent) could replace or augment the local rules engine.
The analytic package can explore the sensor event data and make a decision to trigger actuation of a device. For example, it could check that the pressure reading of an engine is greater than 60 PSI. When such a rule is determined to be true, the analytic package calls on the core command service to trigger some action, like \u201copen a valve\u201d on some controllable device (see step 5).
The core command service gets the actuation request and determines which device it needs to act on with the request; then calling on the owning device service to do the actuation (see step 6). Core command allows developers to put additional security measures or checks in place before actuating.
The device service receives the request for actuation, translates that into a protocol specific request and forwards the request to the desired device (see step 7).
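As a concrete sketch of steps 5 through 7, the analytics package (or any authorized client) ultimately issues a command through Core Command's REST API; the device, command and resource names here are hypothetical:
curl -X PUT "http://localhost:59882/api/v3/device/name/engine-controller-01/ValveControl" \
  -H "Content-Type: application/json" \
  -d '{"ValveControl": "open"}'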
"},{"location":"about/#project-release-cadence","title":"Project Release Cadence","text":"Typically, EdgeX releases twice a year; once in the spring and once in the fall. Bug fix releases may occur more often. Each EdgeX release has a code name. The code name follows an alphabetic pattern similar to Android (code names sequentially follow the alphabet).
The code name of each release is named after some geographical location in the world. The honor of naming an EdgeX release is given to a community member deemed to have contributed significantly to the project. A release also has a version number. The release version follows semantic versioning to indicate the release is major or minor in scope. Major releases typically contain significant new features and functionality and are not always backward compatible with prior releases. Minor releases are backward compatible and usually contain bug fixes and fewer new features. See the project Wiki for more information on releases, versions and patches.
Release - Schedule - Version
Barcelona - Oct 2017 - 0.5.0
California - Jun 2017 - 0.6.0
Delhi - Oct 2018 - 0.7.0
Edinburgh - Jul 2019 - 1.0.0
Fuji - Nov 2019 - 1.1.0
Geneva - May 2020 - 1.2.0
Hanoi - November 2020 - 1.3.0
Ireland - Spring 2021 - 2.0.0
Jakarta - Fall 2021 - 2.1.0
Kamakura - Spring 2022 - TBD
Levski - Fall 2022 - TBD
Note: minor releases of the Device Services and Application Services (along with their associated SDKs) can be released independently. The Graphical User Interface, the command line interface (CLI) and other tools can be released independently.
EdgeX community members convene in a meeting right at the time of a release to plan the next release and roadmap future releases.
See the Project Wiki for more detailed information on releases and roadmap.
"},{"location":"about/#edgex-history-and-naming","title":"EdgeX History and Naming","text":"EdgeX Foundry began as a project chartered by Dell IoT Marketing and developed by the Dell Client Office of the CTO as an incubation project called Project Fuse in July 2015. It was initially created to run as the IoT software application on Dell\u2019s introductory line of IoT gateways. Dell entered the project into open source through the Linux Foundation on April 24, 2017. EdgeX was formally announced and demonstrated at Hanover Messe 2017. Hanover Messe is one of the world's largest industrial trade fairs. At the fair, the Linux Foundation also announced the association of 50 founding member organizations \u2013 the EdgeX ecosystem \u2013 to help further the project and the goals of creating a universal edge platform.
The name \u2018foundry\u2019 was used to draw parallels to Cloud Foundry. EdgeX Foundry is meant to be a foundry for solutions at the edge just like Cloud Foundry is a foundry for solutions in the cloud. Cloud Foundry was originated by VMWare (Dell Technologies is a major shareholder of VMWare - recall that Dell Technologies was the original creator of EdgeX). The \u2018X\u2019 in EdgeX represents the transformational aspects of the platform and allows the project name to be trademarked and to be used in efforts such as certification and certification marks.
The EdgeX Foundry Logo represents the nature of its role as transformation engine between the physical OT world and the digital IT world.
The EdgeX community selected the octopus as the mascot or \u201cspirit animal\u201d of the project at its inception. Its eight arms and the suckers on the arms represent the sensors. The sensors bring the data into the octopus. Actually, the octopus has nine brains in a way. It has millions of neurons running down each arm; functioning as mini-brains in each of those arms. The arms of the octopus serve as \u201clocal analytics\u201d like that offered by EdgeX. The mascot is affectionately called \u201cEdgey\u201d by the community.
"},{"location":"api/Ch-APIIntroduction/","title":"Introduction","text":"Each of the EdgeX services (core, supporting, management, device and application) implement a RESTful API. This section provides details about each service's API. You will see there is a common set of API's that all services implement, which are:
Each EdgeX Service's RESTful API is documented via Swagger. A link is provided to the Swagger document in the service-specific documentation.
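For instance, every EdgeX service answers the common ping and version endpoints, so a quick smoke test against Core Data (using its usual default port, an assumption here) looks like this:
curl http://localhost:59880/api/v3/ping
curl http://localhost:59880/api/v3/version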
Also included in this API Reference are a couple of 3rd party services (Configuration/Registry and Rules Engine). These services do not implement the above common APIs and do not have Swagger documentation. Links are provided to their appropriate documentation.
See the left side navigation for the complete list of services to access their API Reference.
"},{"location":"api/applications/Ch-APIAppFunctionsSDK/","title":"Application Services","text":"The App Functions SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a functions pipeline. This functions pipeline processes messages received by the configured trigger. See Application Functions SDK for more details on this SDK.
The App Functions SDK provides a RESTful API that all Application Services inherit from the SDK.
"},{"location":"api/applications/Ch-APIAppFunctionsSDK/#swagger","title":"Swagger","text":""},{"location":"api/applications/Ch-APIRulesEngine/","title":"Rules Engine","text":"EdgeX Foundry Rules Engine Microservice receives data from the instance of App Service Configurable running the rules-engine
profile (aka app-rules-engine) via the EdgeX MessageBus. EdgeX uses eKuiper
for the rules engine, which is a separate LF Edge project. See the eKuiper Website for more details on this rules engine.
eKuiper's documentation
"},{"location":"api/core/Ch-APICoreCommand/","title":"Core Command","text":"EdgeX Foundry's Command microservice is a conduit for other services to trigger action on devices and sensors through their managing Device Services. See Core Command for more details about this service.
The service provides an API to get the list of commands that can be issued for all devices or a single device. Commands are divided into two groups for each device:
EdgeX uses the 3rd party Consul microservice as the implementations for Configuration and Registry. The RESTful APIs are provided by Consul directly, and several communities supply Consul client libraries for different programming languages, including Go (official), Python, Java, PHP, Scala, Erlang/OTP, Ruby, Node.js, and C#.
For the client libraries of different languages, please refer to the list on this page:
https://developer.hashicorp.com/consul/api-docs/libraries-and-sdks
"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#configuration-management","title":"Configuration Management","text":"For the current API documentation, please refer to the official Consul web site:
https://developer.hashicorp.com/consul/api-docs/kv
"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#service-registry","title":"Service Registry","text":"For the current API documentation, please refer to the official Consul web site:
https://developer.hashicorp.com/consul/api-docs/catalog https://developer.hashicorp.com/consul/api-docs/agent https://developer.hashicorp.com/consul/api-docs/agent/check https://developer.hashicorp.com/consul/api-docs/health
Service Registration
While each microservice is starting up, it will connect to Consul to register its endpoint information, including microservice ID, address, port number, and health checking method. After that, other microservices can locate its URL from Consul, and Consul has the ability to monitor its health status. The RESTful API of registration is described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#register-service
Service Deregistration
Before microservices shut down, they have to deregister themselves from Consul. The RESTful API of deregistration is described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#deregister-service
Service Discovery
Service Discovery feature allows client micro services to query the endpoint information of a particular microservice by its microservice IDor list all available services registered in Consul. The RESTful API of querying service by microservice IDis described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#get-local-service-health-by-id
The RESTful API of listing all available services is described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#list-services
Health Checking
Health checking is a critical feature that prevents using services that are unhealthy. Consul provides a variety of methods to check the health of services, including Script + Interval, HTTP + Interval, TCP + Interval, Time to Live (TTL), and Docker + Interval. The detailed introduction and examples of each checking methods are described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks
The health checks should be established during service registration. Please see the paragraph on this page of Service Registration section.
"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#consul-ui","title":"Consul UI","text":"Consul has UI which allows you to view the health of registered services and view/edit services' individual configuration. Learn more about the UI on the following Consul page:
https://learn.hashicorp.com/tutorials/consul/get-started-explore-the-ui
"},{"location":"api/core/Ch-APICoreData/","title":"Core Data","text":"EdgeX Foundry Core Data microservice includes the Events/Readings database collected from devices /sensors and APIs to expose this database to other services. Its APIs to provide access to Add, Query and Delete Events/Readings. See Core Data for more details about this service.
"},{"location":"api/core/Ch-APICoreData/#swagger","title":"Swagger","text":""},{"location":"api/core/Ch-APICoreMetadata/","title":"Core Metadata","text":"The Core Metadata microservice includes the device/sensor metadata database and APIs to expose this database to other services. In particular, the device provisioning service deposits and manages device metadata through this service's API. See Core Metadata for more details about this service.
"},{"location":"api/core/Ch-APICoreMetadata/#swagger","title":"Swagger","text":""},{"location":"api/devices/Ch-APIDeviceSDK/","title":"Device Services","text":"The EdgeX Foundry Device Service Software Development Kit (SDK) takes the Developer through the step-by-step process to create an EdgeX Foundry Device Service microservice. See Device Service SDK for more details on this SDK.
The Device Service SDK provides a RESTful API that all Device Services inherit from the SDK.
"},{"location":"api/devices/Ch-APIDeviceSDK/#swagger","title":"Swagger","text":""},{"location":"api/support/Ch-APISupportNotifications/","title":"Support Notifications","text":"When a person or a system needs to be informed of something discovered on the node by another microservice on the node, EdgeX Foundry's Support Notifications microservice delivers that information. Examples of Alerts and Notifications that other services might need to broadcast include sensor data detected outside of certain parameters, usually detected by a Rules Engine service, or a system or service malfunction usually detected by system management services. See Support Notifications for more details about this service.
"},{"location":"api/support/Ch-APISupportNotifications/#swagger","title":"Swagger","text":""},{"location":"api/support/Ch-APISupportScheduler/","title":"Support Scheduler","text":"EdgeX Foundry's Support Scheduler microservice to schedule actions to occur on specific intervals. See Support Scheduler for more details about this service.
"},{"location":"api/support/Ch-APISupportScheduler/#swagger","title":"Swagger","text":""},{"location":"design/Process/","title":"Use Cases and Design Process","text":"This document describes the EdgeX use case driven requirements engineering and design process.
Approved by consent of the TSC on 2022-07-13
Supersedes the processes documented on the EdgeX Wiki
"},{"location":"design/Process/#use-case-driven-approach-to-requirements-and-design","title":"Use Case Driven Approach to Requirements and Design","text":"Designing an architecture is a very time consuming task. It is best to start that with a solid foundation. The obvious goal is to design an architecture that satisfies the functional requirements, while being secure, flexible, and robust. Requirements are very important factors when designing a system. They should be derived from established, validated, and most importantly, written use cases. To avoid feature creep, the architecture should focus on requirements that are backed by multiple use cases and in the meantime try to remain extensible.
The following figure outlines the EdgeX process around use cases, requirements capture, and architectural design.
"},{"location":"design/Process/#use-cases-and-requirements","title":"Use Cases and Requirements","text":"In any software system, new needs of the software are encountered on a regular basis. Any need that is more than a request to fix a bug or make a minor addition/change to the software should be added as feature requests (on Github) and supported by written use cases. The use cases should be documented in an EdgeX Use Case Record (UCR). UCRs must be reviewed by domain experts and approved by the TSC per the process documented here.
"},{"location":"design/Process/#ucr-template","title":"UCR template","text":"UCRs should be submitted as pull requests against the UCR area of edgex-docs. Use the current UCR template to help create the UCR document.
"},{"location":"design/Process/#ucr-review-and-approval-process","title":"UCR Review and Approval Process","text":"The community can submit UCR. The use cases describe the use case, target users, data, hardware, privacy and security considerations. Each use case should also include a list of functional requirements, the list of existing tools (that satisfy those requirements) and gaps. Use cases and requirements may freely overlap. Submissions get peer reviewed by domain experts and TSC. The TSC approves UCR and allows design work to be conducted based on the requirements. They can be updated to address shortcomings and technological advancements. Once a stable implementation is available addressing all the requirements, the record gets classified as \"supported\".
"},{"location":"design/Process/#designs","title":"Designs","text":"Issues and new requirements lead to design decisions. Design decisions are also made on a regular, if not daily, basis. Some of these decisions are big and impactful to all parts of the system. Other decisions are less significant but still important for everyone to know and understand.
EdgeX has two places to record design decisions.
Note: ADRs should also be documented on the project board with a link to the ADR in edgex-docs in the project board card.
"},{"location":"design/Process/#when-to-use-an-adr","title":"When to use an ADR","text":"\"Significant architectural decisions\" are deemed those that:
Impact more than one EdgeX service and often impact the entire system (such as the definition of a data transfer object used through the system, of a feature that must be supported by all services).\nRequire a lot of manpower (more than two people working over the course of a release or more) to implement the feature outlined in the ADR.\nRequires implementation to be accomplished over multiple releases (either due to the complexity of the feature or dependencies).\n
ADRs must be proceeded by one or more approved UCRs in order to be approved by the TSC - allowing for the design to be implemented in the EdgeX software.
"},{"location":"design/Process/#adr-template","title":"ADR template","text":"ADRs should be submitted as pull requests against the ADR area of edgex-docs. Use the latest current ADR template to help create the ADR document.
"},{"location":"design/Process/#adr-review-and-approval-process","title":"ADR Review and Approval Process","text":"Designs are created to address one or more requirements across one or more use cases. The design would include architecture details as well as references to pre-approved use cases and requirements. The TSC review the proposed design from a technical perspective. Approved designs get added to the EdgeX archive as \"approved\" records. They may get \"deprecated\" before implementation if another design supersedes it or if the requirements become obsolete over time. Designs may also get demoted if experimental implementations prove that they are not suitable (e.g. due to security, performance, dependency deprecation, feasibility). The design, implementation, verification cycles can repeat many times before resulting in a stable release.
"},{"location":"design/Process/#project-board-cards-and-issues","title":"Project Board Cards and Issues","text":"All project design/architectural design decisions captured on the Design Decisions project board will be created as either a:
Issue: for any design decision that will require code and a PR will be submitted against the issue.\nCard: for any design decision that is not itself going to result in code or may need to be broken down into multiple issues (which can be referenced on the card).\n
The template for project board cards documenting each decision is:
When/Where: date of the decision and place where the decision was made (such as TSC meeting, working group meeting, etc.). This section is required.\nDecision Summary: quick write-up on the decision. This section is required.\nNotes/Considerations: any alternatives discussed, any impacts to other decisions or considerations to be considered in the future (which would negate the decision). This section is optional.\n\nRelevant links: link to the meeting recording (if available). Link to ADR if relevant. Link to PRs or Issues if relevant. Required if available.\n
Note there is a Template column on the project board with a single card that specifies this same structure.
"},{"location":"design/Process/#project-board-columns","title":"Project Board Columns","text":"The Design Decisions project board will be permanent and never archived or deleted. For each release, a new column named for that release will be created to hold the decisions (in the form of cards or issues) for that release.
The release columns may be \"frozen\" at the end of a release, but should never be deleted so that all design decisions can be retained for the life of the project.
"},{"location":"design/Process/#ownership-and-cardissue-creation","title":"Ownership and Card/Issue Creation","text":"The TSC chair, vice-chair and product manager will have overall responsibility for the Design Decision project board. These people will also be responsible for capturing any decisions from TSC meetings or the Monthly Architect\u2019s Meeting as cards/issues on the board.
Work Group chairs are responsible for adding new design decision cards/issues that come for their work group or related meetings.
"},{"location":"design/TOC/","title":"Use Cases and Design Records","text":""},{"location":"design/TOC/#use-case-records-ucrs","title":"Use Case Records (UCRs)","text":"Note
UCRs are listed in alphabetical order by title.
Name/Link Short Description Bring Your Own Vault Use Case for bringing your own Vault Common Configuration Use Case for having Common configuration used by all EdgeX services Core Data Retention and Persistent Cap Use Case for capping readings in Core Data Device Parent-Child Relationships Use Case for Device Parent-Child Relationships Extending Device Data Use Case for Extending of Device Data by Application Services Provision Watch via Device Metadata Use Case for Provision Watching via Additional Device Metadata Record and Replay Use Case for Recording and Replaying event/readings System Events for Devices Use Case for System Events for Device add/update/delete Microservice Authentication Use Case for Microservice Authentication URIs for files Use Case for loading service files from URIs"},{"location":"design/TOC/#architectural-design-records-adrs","title":"Architectural Design Records (ADRs)","text":"Note
ADRs are listed in chronological order by sequence number in title.
Name/Link Short Description 0001 Registry Refactor Separate out Registry and Configuration APIs 0002 Array Datatypes Allow Arrays to be held in Readings 0003 V2 API Principles Principles and Goals of V2 API Design 0004 Feature Flags Feature Flag Implementation 0005 Service Self Config Init Service Self Config Init & Config Seed Removal 0006 Metrics Collection Collection of service telemetry data 0007 Release Automation Overview of Release Automation Flow for EdgeX 0008 Secret Distribution Creation and Distribution of Secrets 0009 Secure Bootstrapping Secure Bootstrapping of EdgeX 0011 Device Service REST API The REST API for Device Services in EdgeX v2.x 0012 Device Service Filters Device Service event/reading filters 0013 Device Service Events via Message Bus Device Services send Events via Message Bus 0014 Secret Provider for All Secret Provider for All EdgeX Services 0015 Encryption between microservices Details conditions under which TLS is or is not used 0016 Container Image Guidelines Documents best practices for security of docker images 0017 Securing access to Consul Access control and authorization strategy for Consul 0018 Service Registry Service registry usage for EdgeX services 0019 EdgeX-CLI V2 EdgeX-CLI V2 Implementation 0020 Delay start services (SPIFFE/SPIRE) Secret store tokens for delayed start services 0021 Device Profile Changes Rules on device profile modifications 0022 Unit of Measure Unit of Measure 0023 North South Messaging Provide for messaging from north side systems through command down to device services 0024 System Events System Events (aka Control Plane Events) published to the MessageBus 0025 Record and Replay Record data from various devices and play data back without devices present 0026 Common Configuration Separate out the common configuration setting into a single source for all the services 0027 URIs for Files Add capability to load service files from remote locations using URIs 0028 Microservice communication security Microservice communication security / authentication (token-based)"},{"location":"design/adr/","title":"Architecture Decision Records Folder","text":"This folder contains the EdgeX Foundry architectural decision records (ADR).
At the root of this folder are decisions that are relevant to multiple parts of the project (aka. cross cutting concerns). Sub folders under the ADR folder contain decisions relevant to the specific area of the project and essentially set up along working group lines (security, core, application, etc.).
"},{"location":"design/adr/#naming-and-formatting","title":"Naming and Formatting","text":"ADR documents should follow the RFC (request for comments) naming standard. Specifically, approved ADRs should have a sequentially increasing integer (or serial number) and then the architectural design topic as file names (sequence_number-My-Topic.md). Example: 0001-Separate-Configuration-Interface. The sequence is a global sequence for all EdgeX ADR. Per RFC and Michael Nygard suggestions the makeup of the ADR document should generally include:
EdgeX ADRs should use the template.md file available in this directory.
"},{"location":"design/adr/#ownership","title":"Ownership","text":"EdgeX WG chairman own the sub folder and included documents associated to their work group. The EdgeX TSC chair/vice chair are responsible for the root level, cross cutting concern documents.
"},{"location":"design/adr/#table-of-contents","title":"Table of Contents","text":"A README with a table of contents for current documents is located here. Document authors are asked to keep the TOC updated with each new document entry.
Legacy designs have their own Table of Contents and are located here.
"},{"location":"design/adr/0001-Registy-Refactor/","title":"Registry Refactoring Design","text":"Approved
"},{"location":"design/adr/0001-Registy-Refactor/#context","title":"Context","text":"Currently the Registry Client
in go-mod-registry
module provides Service Configuration and Service Registration functionality. The goal of this design is to refactor the go-mod-registry
module for separation of concerns. The Service Registry functionality will stay in the go-mod-registry
module and the Service Configuration functionality will be separated out into a new go-mod-configuration
module. This allows for implementations of different providers for each, another aspect of separation of concerns.
An aspect of using the current Registry Client
is "Where do the services get the Registry Provider
connection information?" Currently all services either pull this connection information from the local configuration file or from the edgex_registry
environment variable. Device Services also have the option to specify this connection information on the command line. With the refactoring for separation of concerns, this issue changes to "Where do the services get the Configuration Provider
connection information?"
There have been concerns voiced by some in the EdgeX community that storing this Configuration Provider
connection information in the configuration which ultimately is provided by that provider is not the right design.
This design proposes that all services will use the command line option approach with the ability to override with an environment variable. The Configuration Provider
information will not be stored in each service's local configuration file. The edgex_registry
environment variable will be deprecated. The Registry Provider
connection information will continue to be stored in each service's configuration either locally or from the Configuration Provider
same as all other EdgeX Client and Database connection information.
The new -cp/-configProvider
command line option will be added to each service which will have a value specified using the format {type}.{protocol}://{host}:{port}
e.g. consul.http://localhost:8500
. This new command line option will be overridden by the edgex_configuration_provider
environment variable when it is set. This environment variable's value has the same format as the command line option value.
If no value is provided to the -cp/-configProvider
option, i.e. just -cp
, and no environment variable override is specified, the default value of consul.http://localhost:8500
will be used.
If -cp/-configProvider
is not used and no environment variable override is specified, the local configuration file is used, as it is now.
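For illustration only, here is a minimal Go sketch of how a -cp/-configProvider value in the {type}.{protocol}://{host}:{port} format (e.g. consul.http://localhost:8500) could be split into its parts. The providerSpec type and parseProviderSpec function are hypothetical; the actual parsing lives in the EdgeX bootstrap code and may differ.

package main

import (
	"fmt"
	"strings"
)

// providerSpec is a hypothetical holder for the parsed -cp/-configProvider value.
type providerSpec struct {
	Type     string // e.g. "consul"
	Protocol string // e.g. "http"
	Host     string // e.g. "localhost"
	Port     string // e.g. "8500"
}

// parseProviderSpec splits a value of the form {type}.{protocol}://{host}:{port}.
func parseProviderSpec(value string) (providerSpec, error) {
	typeAndRest := strings.SplitN(value, ".", 2)
	if len(typeAndRest) != 2 {
		return providerSpec{}, fmt.Errorf("missing provider type in %q", value)
	}
	protocolAndAddr := strings.SplitN(typeAndRest[1], "://", 2)
	if len(protocolAndAddr) != 2 {
		return providerSpec{}, fmt.Errorf("missing protocol in %q", value)
	}
	hostAndPort := strings.SplitN(protocolAndAddr[1], ":", 2)
	if len(hostAndPort) != 2 {
		return providerSpec{}, fmt.Errorf("missing port in %q", value)
	}
	return providerSpec{
		Type:     typeAndRest[0],
		Protocol: protocolAndAddr[0],
		Host:     hostAndPort[0],
		Port:     hostAndPort[1],
	}, nil
}

func main() {
	spec, _ := parseProviderSpec("consul.http://localhost:8500")
	fmt.Printf("%+v\n", spec) // {Type:consul Protocol:http Host:localhost Port:8500}
}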
All services will log the Configuration Provider
connection information that is used.
The existing -r/-registry
command line option will be retained as a Boolean flag to indicate to use the Registry.
All services in the edgex-go mono repo use the new common bootstrap functionality. The plan is to move this code to a go module for the Device Service and App Functions SDKs to also use. The current bootstrap modules pkg/bootstrap/configuration/registry.go
and pkg/bootstrap/container/registry.go
will be refactored to use the new Configuration Client
and be renamed appropriately. New bootstrap modules will be created for using the revised version of Registry Client
. The current use of useRegistry
and registryClient
for service configuration will be changed to appropriate names for using the new Configuration Client
. The current use of useRegistry
and registryClient
for service registration will be retained for service registration. Call to the new Unregister() API will be added to shutdown code for all services.
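As a rough sketch (not the actual go-mod-bootstrap code), the register/un-register wiring described above could look like the following; the RegistryClient interface is a local stand-in for the revised Registry Client shown later in this ADR, and the run/serve names are illustrative.

package bootstrap

// RegistryClient is a local stand-in for the revised Registry Client interface
// shown later in this ADR; only the methods used here are declared.
type RegistryClient interface {
	Register() error
	UnRegister() error
}

// run sketches how a service could register itself on startup and call the new
// UnRegister() API from its shutdown path.
func run(rc RegistryClient, serve func() error) error {
	if err := rc.Register(); err != nil {
		return err
	}
	defer func() {
		_ = rc.UnRegister() // added to the shutdown code for all services
	}()
	return serve()
}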
The conf-seed
service will have similar changes for specifying the Configuration Provider
connection information since it doesn't use the common bootstrap package. Beyond that it will have minor changes for switching to using the Configuration Client
interface, which will just be imports and appropriate name refactoring.
Since the Configuration Provider
connection information will no longer be in the service's configuration struct, the config
endpoint processing will be modified to add the Configuration Provider
connection information to the resulting JSON created from the service's configuration.
The following is the current Registry Client
Interface
type Client interface {
    Register() error
    HasConfiguration() (bool, error)
    PutConfigurationToml(configuration *toml.Tree, overwrite bool) error
    PutConfiguration(configStruct interface{}, overwrite bool) error
    GetConfiguration(configStruct interface{}) (interface{}, error)
    WatchForChanges(updateChannel chan<- interface{}, errorChannel chan<- error, configuration interface{}, waitKey string)
    IsAlive() bool
    ConfigurationValueExists(name string) (bool, error)
    GetConfigurationValue(name string) ([]byte, error)
    PutConfigurationValue(name string, value []byte) error
    GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)
    IsServiceAvailable(serviceId string) error
}
"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client","title":"New Configuration Client","text":"This following is the new Configuration Client
Interface which contains the Service Configuration specific portion from the above current Registry Client
.
type Client interface {
    HasConfiguration() (bool, error)
    PutConfigurationFromToml(configuration *toml.Tree, overwrite bool) error
    PutConfiguration(configStruct interface{}, overwrite bool) error
    GetConfiguration(configStruct interface{}) (interface{}, error)
    WatchForChanges(updateChannel chan<- interface{}, errorChannel chan<- error, configuration interface{}, waitKey string)
    IsAlive() bool
    ConfigurationValueExists(name string) (bool, error)
    GetConfigurationValue(name string) ([]byte, error)
    PutConfigurationValue(name string, value []byte) error
}
"},{"location":"design/adr/0001-Registy-Refactor/#revised-registry-client","title":"Revised Registry Client","text":"This following is the revised Registry Client
Interface, which contains the Service Registry specific portion from the above current Registry Client
. The UnRegister()
API has been added per issue #20
type Client interface {
    Register() error
    UnRegister() error
    IsAlive() bool
    GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)
    IsServiceAvailable(serviceId string) error
}
"},{"location":"design/adr/0001-Registy-Refactor/#client-configuration-structs","title":"Client Configuration Structs","text":""},{"location":"design/adr/0001-Registy-Refactor/#current-registry-client-config","title":"Current Registry Client Config","text":"The following is the current struct
used to configure the current Registry Client
type Config struct {\nProtocol string\nHost string\nPort int\nType string\nStem string\nServiceKey string\nServiceHost string\nServicePort int\nServiceProtocol string\nCheckRoute string\nCheckInterval string\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client-config","title":"New Configuration Client Config","text":"The following is the new struct
the will be used to configure the new Configuration Client
from the command line option or environment variable values. The Service Registry portion has been removed from the above existing Registry Client Config
type Config struct {\nProtocol string\nHost string\nPort int\nType string\nBasePath string\nServiceKey string\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#new-registry-client-config","title":"New Registry Client Config","text":"The following is the revised struct
the will be used to configure the new Registry Client
from the information in the service's configuration. This is mostly unchanged from the existing Registry Client Config
, except that the Stem
for configuration has been removed
type Config struct {\nProtocol string\nHost string\nPort int\nType string\nServiceKey string\nServiceHost string\nServicePort int\nServiceProtocol string\nCheckRoute string\nCheckInterval string\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#provider-implementations","title":"Provider Implementations","text":"The current Consul
implementation of the Registry Client
will be split up into implementations for the new Configuration Client
in the new go-mod-configuration
module and the revised Registry Client
in the existing go-mod-registry
module.
It was decided to move forward with the above design
After initial ADR was approved, it was decided to retain the -r/--registry
command-line flag and not add the Enabled
field in the Registry provider configuration.
Once the refactoring of go-mod-registry and go-mod-configuration are complete, they will need to be integrated into the new go-mod-bootstrap. Part of this integration will be the Command line option changes above. At this point the edgex-go services will be integrated with the new Registry
and Configuration
providers. The App Services SDK
and Device Services SDK
will then need to integrate go-mod-bootstrap to take advantage of these new providers.
Registry Abstraction - Decouple EdgeX services from Consul (Previous design)
"},{"location":"design/adr/0004-Feature-Flags/","title":"Feature Flag Proposal","text":""},{"location":"design/adr/0004-Feature-Flags/#status","title":"Status","text":"Accepted
"},{"location":"design/adr/0004-Feature-Flags/#context","title":"Context","text":"Out of the proposal for releasing on time, the community suggested that we take a closer look at feature-flags.
Feature-flags are typically intended for users of an application to turn on or off new or unused features. This gives user more control to adopt a feature-set at their own pace \u2013 i.e disabling store and forward in App Functions SDK without breaking backward compatibility.
It can also be used to indicate to developers the features that are more often used than others and can provided valuable feedback to enhance and continue a given feature. To gain that insight of the use of any given feature, we would require not only instrumentation of the code but a central location in the cloud (i.e a TIG stack) for the telemetry to be ingested and in turn reported in order to provide the feedback to the developers. This becomes infeasible primarily because the cloud infrastructure costs, privacy concerns, and other unforeseen legal reasons for sending \u201cUsage Metrics\u201d of an EdgeX installation back to a central entity such as the Linux Foundation, among many others. Without the valuable feedback loop, feature-flags don\u2019t provide much value on their own and they certainly don\u2019t assist in increasing velocity to help us deliver on time.
Putting aside one of the major value propositions listed above, feasibility of a feature flag \u201cmodule\u201d was still evaluated. The simplest approach would be to leverage configuration following a certain format such as FF_[NewFeatureName]=true/false. This is similar to what is done today. Turning on/off security is an example, turning on/off the registry is another. Expanding this further with a module could offer standardization of controlling a given feature such as featurepkg.Register(\u201cMyNewFeature\u201d)
or featurepkg.IsOn(\u201cMyNewFeature\u201d)
. However, this really is just adding complexity on top of the underlying configuration that is already implemented. If we were to consider doing something like this, it lends it self to a central management of features within the EdgeX framework\u2014either its own service or possibly added as part of the SMA. This could help address concerns around feature dependencies and compatibility. Feature A on Service X requires Feature B and Feature C on Service Y. Continuing down this path starts to beget a fairly large impact to EdgeX for value that cannot be fully realized.
The community should NOT pursue a full-fledged feature flag implementation either homegrown or off-the-shelf.
However, it should be encouraged to develop features with a wholistic perspective and consider leveraging configuration options to turn them on/off. In other words, once a feature compiles, can work under common scenarios, but perhaps isn\u2019t fully tested with edge cases, but doesn\u2019t impact any other functionality, should be encouraged.
"},{"location":"design/adr/0004-Feature-Flags/#consequences","title":"Consequences","text":"Allows more focus on the many more competing priorities for this release.
Minimal impact to development cycles and release schedule
"},{"location":"design/adr/0005-Service-Self-Config/","title":"Service Self Config Init & Config Seed Removal","text":""},{"location":"design/adr/0005-Service-Self-Config/#status","title":"Status","text":"approved - TSC vote on 3/25/20 for Geneva release
NOTE: this ADR does not address high availability considerations and concerns. EdgeX, in general, has a number of unanswered questions with regard to HA architecture and this design adds to those considerations.
"},{"location":"design/adr/0005-Service-Self-Config/#context","title":"Context","text":"Since its debut, EdgeX has had a configuration seed service (config-seed) that, on start of EdgeX, deposits configuration for all the services into Consul (our configuration/registry service). For development purposes, or on resource constrained platforms, EdgeX can be run without Consul with services simply reading configuration from the filesystem.
While this process has nominally worked for several releases of EdgeX, there has always been some issues with this extra initialization process (config-seed), not least of which are: - race conditions on the part of the services, as they bootstrap, coming up before the config-seed completes its deposit of configuration into Consul - how to deal with \"overrides\" such as environmental variable provided configuration overrides. As the override is often specific to a service but has to be in place for config-seed in order to take effect. - need for an additional service that is only there for init and then dies (confusing to users)
NOTE - for historical purposes, it should be noted that config-seed only writes configuration into the configuration/registry service (Consul) once on the first start of EdgeX. On subsequent starts of EdgeX, config-seed checks to see if it has already populated the configuration/registry service and will not rewrite configuration again (unless the --overwrite flag is used).
The design/architectural proposal, therefore, is: - removal of the config-seed service (removing cmd/config-seed from the edgex-go repository) - have each EdgeX micro service \"self seed\" - that is seed Consul with their own required configuration on bootstrap of the service. Details of that bootstrapping process are below.
"},{"location":"design/adr/0005-Service-Self-Config/#command-line-options","title":"Command Line Options","text":"All EdgeX services support a common set of command-line options, some combination of which are required on startup for a service to interact with the rest of EdgeX. Command line options are not set by any configuration. Command line options include:
consul.
- for example: -cp=consul.http://localhost:8500
)The distinction of command line options versus configuration will be important later in this ADR.
Two command line options (-o for overwrite and -r for registry) are not overridable by environmental variables.
NOTES: Use of the --overwrite command line option should be used sparingly and with expert knowledge of EdgeX; in particular knowledge of how it operates and where/how it gets its configuration on restarts, etc. Ordinarily, --overwrite is provided as a means to support development needs. Use of --overwrite permanently in production enviroments is highly discouraged.
"},{"location":"design/adr/0005-Service-Self-Config/#configuration-initialization","title":"Configuration Initialization","text":"Each service has (or shall have if not providing it already) a local configuration file. The service may use the local configuration file on initialization of the service (aka bootstrap of the service) depending on command line options and environmental variables (see below) provided at startup.
Using a configuration provider
When the configuration provider is specified, the service will call on the configuration provider (Consul) and check if the top-level (root) namespace for the service exists. If configuratation at the top-level (root) namespace exists, it indicates that the service has already populated its configuration into the configuration provider in a prior startup.
If the service finds the top-level (root) namespace is already populated with configuration information it will then read that configuration information from the configuration provider under namespace for that service (and ignore what is in the local configuration file).
If the service finds the top-level (root) namespace is not populated with configuration information, it will read its local configuration file and populate the configuration provider (under the namespace for the service) with configuration read from the local configuration file.
A configuration provider can be specified with a command line argument (the -cp / --configProvider) or environment variable (the EDGEX_CONFIGURATION_PROVIDER environmental variable which overrides the command line argument).
NOTE: the environmental variables are typically uppercase but there have been inconsistencies in environmental variable casing (example: edgex_registry). This should be considered and made consistent in a future major release.
Using the local configuration file
When a configuration provider isn't specified, the service just uses the configuration in its local configuration file. That is the service uses the configuration in the file associated with the profile, config filename and config file directory command line options or environmental variables. In this case, the service does not contact the configuration service (Consul) for any configuration information.
NOTE: As the services now self seed and deployment specific changes can be made via environment overrides, it will no longer be necessary to have a Docker profile configuration file in each of the service directories (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml). See Consequences below. It will still be possible for users to use the profile mechanism to specify a Docker configuration, but it will no longer be required and not the recommended approach to providing Docker container specific configuration.
"},{"location":"design/adr/0005-Service-Self-Config/#overrides","title":"Overrides","text":"Environment variables used to override configuration always take precedence whether configuration is being sourced locally or read from the config provider/Consul.
Note - this means that a configuration value that is being overridden by an environment variable will always be the source of truth, even if the same configuration is changed directly in Consul.
The name of the environmental variable must match the path names in Consul.
NOTES: - Environmental variables overrides remove the need to change the \"docker\" profile in the res/docker/configuration.toml files - Allowing removal of 50% of the existing configuration.toml files. - The override rules in EdgeX between environmental variables and command line options may be counter intuitive compared to other systems. There appears to be no standard practice. Indeed, web searching \"Reddit & Starting Fights Env Variables vs Command Line Args\" will layout the prevailing differences. - Environment variables used for configuration overrides are named by prepending the the configuration element with the configuration section inclusive of sub-path, where sub-path's \".\"s are replaced with underscores. These configuration environment variable overrides must be specified using camel case. Here are two examples:
Registry_Host for\n[Registry]\nHost = 'localhost'\n\nClients_CoreData_Host for\n[Clients]\n [Clients.CoreData]\n Host = 'localhost'\n
- Going forward, environmental variables that override command line options should be all uppercase. All values overriden get logged (indicating which configuration value or op param and the new value).
"},{"location":"design/adr/0005-Service-Self-Config/#decision","title":"Decision","text":"These features have been implemented (with some minor changes to be done) for consideration here: https://github.com/edgexfoundry/go-mod-bootstrap/compare/master...lenny-intel:SelfSeed2. This code branch will be removed once this ADR is approved and implemented on master.
The implementation for self-seeding services and environmental overrides is already implemented (for Fuji) per this document in the application services and device services (and instituted in the SDKs of each).
"},{"location":"design/adr/0005-Service-Self-Config/#backward-compatibility","title":"Backward compatibility","text":"Several aspects of this ADR contain backward compatibility issues for the device service and application service SDKs. Therefore, for the upcoming minor release, the following guidelines and expections are added to provide for backward compatibility.
As earlier versions of the device service SDKs accepted a URI for --registry, if specified on the command line, use the given URI as the address of the configuration provider. If both --configProvider and --registry specify URIs, then the service should log an error and exit.
If a configProvider URI isn't specified, but --registry (w/out a URI) is specified, then the service will use the Registry provider information from its local configuration file for both configuration and registry providers.
Add it back and use value as if it was EDGEX_CONFIGURATION_PROVIDER and enable use of registry with same settings in URL. Default to http as it is in Fuji.
"},{"location":"design/adr/0005-Service-Self-Config/#consequences","title":"Consequences","text":"There are still high availability concerns that need to be considered and not covered in this ADR at this time.
# all common shared environment variables defined here:\nx-common-env-variables: &common-variables\n EDGEX_SECURITY_SECRET_STORE: \"false\"\n EDGEX_CONFIGURATION_PROVIDER: consul.http://edgex-core-consul:8500\n Clients_CoreData_Host: edgex-core-data\n Clients_Logging_Host: edgex-support-logging\n Logging_EnableRemote: \"true\"\n
Approved Original proposal 10/24/2020 Approved by the TSC on 3/2/22
Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include:
Control plane events (CPE) are defined as events
that occur within an EdgeX instance. Examples of CPE include:
CPE should not be confused with core data Events. Core data Events represent a collection (one or more) of sensor/device readings. Core data Events represent sensing of some measured state of the physical world (temperature, vibration, etc.). CPE represents the detection of some happening inside of the EdgeX software.
This ADR outlines ** metrics (or telemetry) ** collection and handling.
Note
This ADR initially incorporated metrics collection and control plane event processing. The EdgeX architects felt the scope of the design was too large to cover under one ADR. Control plane event processing will be covered under a separate ADR in the future.
"},{"location":"design/adr/0006-Metrics-Collection/#context","title":"Context","text":"System Management services (SMA and executors) currently provide a limited set of \u201cmetrics\u201d to requesting clients (3rd party applications and systems external to EdgeX). Namely, it provides requesting clients with service CPU and memory usage; both metrics about the resource utilization of the service (the executable) itself versus metrics that are about what is happening inside of the service. Arguably, the current system management metrics can be provided by the container engine and orchestration tools (example: by Docker engine) or by the underlying OS tooling.
Info
The SMA has been deprecated (since Ireland release) and will be removed in a future, yet named, release.
Going forward, users of EdgeX will want to have more insights \u2013 that is more metrics telemetry \u2013 on what is happening directly in the services and the tasks that they are preforming. In other words, users of EdgeX will want more telemetry on service activities to include:
Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include:
The collection and dissemination of metric data will require internal service level instrumentation (relevant to that service) to capture and send data about relevant EdgeX operations. EdgeX does not currently offer any service instrumentation.
"},{"location":"design/adr/0006-Metrics-Collection/#metric-use","title":"Metric Use","text":"As a first step in implementation of metrics data, EdgeX will make metric data available to other subscribing 3rd party applications and systems, but will not necessarily consume or use this information itself.
In the future, EdgeX may consume its own metric data. For example, EdgeX may, in the future, use a metric on the number of EdgeX events being sent to core data (or app services) as the means to throttle back device data collection.
In the future, EdgeX application services may optionally subscribe to a service's metrics messages bus (by attaching to the appropriate message pipe for that service). Thus allowing additional filtering, transformation, endpoint control of metric data from that service. At the point where this feature is supported, consideration would need to be made as to whether all events (sensor reading messages and metric messages) go through the same application services.
At this time, EdgeX will not persist the metric data (except as it may be retained as part of a message bus subsystem such as in an MQTT broker). Consumers of metric data are responsible for persisting the data if needed, but this is external to EdgeX. Persistence of metric information may be considered in the future based on requirements and adopter demand for such a feature.
In general, EdgeX metrics are meant to provide internal services and external applications and systems better information about what is happening \"inside\" EdgeX services and the associated devices with which it communicates.
"},{"location":"design/adr/0006-Metrics-Collection/#requirements","title":"Requirements","text":"Writable
area. When a user wishes to change the configuration dynamically (such as turning on/off a metric), then Consul's UI can be used to change it.on
or off
- in other words providing configuration that determines what metrics are collected and reported by default.off
(the default setting) the service does not report the metric. When a metric is turned on
the service collects and sends the metric to the designated message topic.Info
Initially, it was proposed that metrics be associated with a \"level\" and allow metrics to be turned on or off by level (like levels associated to log messages in logging). The level of metrics data seems arbitrary at this time and considered too complex for initial implementation. This may be reconsidered in a future release and based on new requirements/use cases.
It was also proposed to categorize or label metrics - essentially allowing grouping of various metrics. This would allow groups of metrics to be turned on or off, and allow metrics to be organized per the group when reporting. At this time, this feature is also considered beyond the scope of the initial implementation and to be reconsidered in a future release based on requirements/use case needs.
It was also proposed that each service offer a REST API to provide metrics collection information (such as which metrics were being collected) and the ability to turn the collection on or off dynamically. This is deemed out of scope for the first implementation and may be brought back if there are use case requirements / demand for it.
"},{"location":"design/adr/0006-Metrics-Collection/#requested-metrics","title":"Requested Metrics","text":"The following is a list of example metrics requested by the EdgeX community and adopters for various service areas. Again, metrics would generally be collected and pushed to the message topic in some configured interval (example: 1/5/15 minutes or other defined interval). This is just a sample of metrics thought relevant by each work group. It may not reflect the metrics supported by the implementation. The exact metrics collected by each service will be determined by the service implementers (or SDK implementers in the case of the app functions and device service SDKs).
"},{"location":"design/adr/0006-Metrics-Collection/#general","title":"General","text":"The following metrics apply to all (or most) services.
Note
It is envisioned that there may be additional specific metrics for each device service. For example, the ONVIF camera device service may report number of times camera tampering was detected.
"},{"location":"design/adr/0006-Metrics-Collection/#security","title":"Security","text":"Security metrics may be more difficult to ascertain as they are cross service metrics. Given the nature of this design (on a per service basis), global security metrics may be out of scope or security metrics collection has to be copied into each service (leading to lots of duplicate code for now). Also, true threat detection based on metrics may be a feature best provided by 3rd party based on particular threats and security profile needs.
Metric data will be collected and cached by each service. At designated times (kicked off by configurable schedule), the service will collect telemetry data from the cache and push it to a designated message bus topic.
"},{"location":"design/adr/0006-Metrics-Collection/#metrics-messaging","title":"Metrics Messaging","text":"Cached metric data, at the designated time, will be marshaled into a message and pushed to the pre-configured message bus topic.
Each metric message consists of several key/value pairs: - a required name (the name of the metric) such as service-uptime - a required value which is the telemetry value collected such as 120 as the number of hours the service has been up. - a required timestamp is the time (in Epoch timestamp/milliseconds format) at which the data was collected (similar in nature to the origin of sensed data). - an optional collection (array) of tags. The tags are sets of key/value pairs of strings that provide amplifying information about the telemetry. Tags may include: - originating service name - unit of measure associated with the telemetry value - value type of the value - additional values when the metric is more than just one value (example: when using a histogram, it would include min, max, mean and sum values)
The metric name must be unique for that service. Because some metrics are reported from multiple services (such as service uptime), the name is not required to be unique across all services.
All information (keys, values, tags, etc.) is in string format and placed in a JSON array within the message body. Here are some example representations:
Example metric message body with a single value
{\"name\":\"service-up\", \"value\":\"120\", \"timestamp\":\"1602168089665570000\", \"tags\":{\"service\":\"coredata\",\"uom\":\"days\",\"type\":\"int64\"}}\n
Example metric message body with multiple values
{\"name\":\"api-requests\", \"value\":\"24\", \"timestamp\":\"1602168089665570001\", \"tags\":{\"service\":\"coredata\",\"uom\":\"count\",\"type\":\"int64\", \"mean\":\"0.0665\", \"rate1\":\"0.111\", \"rate5\":\"0.150\",\"rate15\":\"0.111\"}}\n
Info
The key or metric name must be unique when using go-metrics as it requires the metric name to be unique per the registry. Metrics are considered immutable.
"},{"location":"design/adr/0006-Metrics-Collection/#configuration","title":"Configuration","text":"Configuration, not unlike that provided in core data or any device service, will specify the message bus type and locations where the metrics messages should be sent. In fact, the message bus configuration will use (or reuse if the service is already using the message bus) the common message bus configuration as defined below.
Common configuration for each service for message queue configuration (inclusive of metrics):
[MessageQueue]\nProtocol = 'redis' ## or 'tcp'\nHost = 'localhost'\nPort = 5573\nType = 'redis' ## or 'mqtt'\nPublishTopicPrefix = \"edgex/events/core\" # standard and existing core or device topic for publishing \n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\n# Client Identifiers\nClientId = \"device-virtual\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\n
Additional configuration must be provided in each service to provide metrics / telemetry specific configuration. This area of the configuration will likely be different for each type of service.
Additional metrics collection configuration to be provided include:
off
and on
. All are false by default. The list of metrics can and likely will be different per service. The keys in this list are the metric name. True and false are used for on
and off
values.[service-name]/[metric-name]
will be appended per metric (allowing subscribers to filter by service or metric name)These metrics configuration options will be defined in the Writable
area of configuration.toml
so as to allow for dynamic changes to the configuration (when using Consul). Specifically, the [Writable].[Writable.Telemetry]
area will dictate metrics collection configuration like this:
[[Writable]]\n[[Writable.Telemetry]]\nInterval = \"30s\"\nPublishTopicPrefix = \"edgex/telemetry\" # /<service-name>/<metric-name> will be added to this Publish Topic prefix\n#available metrics listed here. All metrics should be listed off (or false) by default\nservice-up = false\napi-requests = false\n
Info
It was discussed that in future EdgeX releases, services may want separate message bus connections. For example one for sensor data and one for metrics telemetry data. This would allow the QoS and other settings of the message bus connection to be different. This would allow sensor data collection, for example, to be messaged with a higher QoS than that of metrics. As an alternate approach, we could modify go-mod-messaging to allow setting QoS per topic (and thereby avoid multiple connections). For the initial release of this feature, the service will use the same connection (and therefore configuration) for metrics telemetry as well as sensor data.
"},{"location":"design/adr/0006-Metrics-Collection/#library-support","title":"Library Support","text":"Each service will now need go-mod-messaging support (for GoLang services and the equivalent for C services). Each service would determine when and what metrics to collect and push to the message bus, but will use a common library chosen for each EdgeX language supported (Go or C currently)
Use of go-metrics (a GoLang library to publish application metrics) would allow EdgeX to utilize (versus construct) a library utilized by over 7 thousand projects. It provides the means to capture various types of metrics in a registry (a sophisticated map). The metrics can then be published (reported
) to a number of well known systems such as InfluxDB, Graphite, DataDog, and Syslog. go-metrics is a Go library made from original Java package https://github.com/dropwizard/metrics.
A similar package would need to be selected (or created) for C. Per the Core WG meeting of 2/24/22 - it is important to provide an implementation that is the same in Go or C. The adopter of EdgeX should not see a difference in whether the metrics/telemetry is collected by a C or Go service. Configuration of metrics in a C or Go service should have the same structure. The C based metrics collection mechanism in C services (specifically as provided for in our C device service SDK) may operate differently \"under the covers\" but its configuration and resulting metrics messages on the EdgeX message bus must be formatted/organized the same.
** Considerations in the use of go-metrics **
** Community questions about go-metrics ** Per the Monthly Architect's meeting of 9/20/21):
As an alternative to go-metrics, there is another library called OpenCensus. This is a multi-language metrics library, including Go and C++. This library is more feature rich. OpenCensus is also roughly 5x the size of the go-metrics library.
"},{"location":"design/adr/0006-Metrics-Collection/#additional-open-questions","title":"Additional Open Questions","text":"Writable
configuration and allow Consul to be the means to change the configuration (dynamically). If an adopter chooses not to use Consul, then the configuration with regard to metrics collection, as with all configuration in this circumstance, would be static. If an external API need is requested in the future (such as from an external UI or tool), a REST API may be added. See older versions of this PR for ideas on implementation in this case.reporters
that come with go-metrics that allow for data to be taken directly from go-metrics and pushed to an intermediary for Prometheus and other monitoring/telemetry platforms as referenced above. These capabilities may not be very well supported and is beyond the scope of this EdgeX ADR. However, even without reporters
, it was felt a relatively straightforward exercise (on the part of the adopter) to create an application that listens to the EdgeX metrics message bus and makes that data available via pull REST API for Prometheus if desired.The go-metrics package offers the following types of metrics collection:
g := metrics.NewGauge()\ng.Update(42) // set the value to 42\ng.Update(10) // now set the value to 10\nfmt.Println(g.Value()) // print out the current value in the gauge = 10\n
c := metrics.NewCounter()\nc.Inc(1) // add one to the current counter\nc.Inc(10) // add 10 to the current counter, making it 11\nc.Dec(5) // decrement the counter by 5, making it 6 \nfmt.Println(c.Count()) // print out the current count of the counter = 6\n
m := metrics.NewMeter()\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nfmt.Println(m.Count()) // prints 4\nfmt.Println(m.Rate1()) // prints 0.11075889086811593\nfmt.Println(m.Rate5()) // prints 0.1755318374350548\nfmt.Println(m.Rate15()) // prints 0.19136522498856992\nfmt.Println(m.RateMean()) //prints 0.06665062941438574\n
h := metrics.NewHistogram(metrics.NewUniformSample(4))\nh.Update(10)\nh.Update(20)\nh.Update(30)\nh.Update(40)\nfmt.Println((h.Max())) // prints 40\nfmt.Println(h.Min()) // prints 10\nfmt.Println(h.Mean()) // prints 25\nfmt.Println(h.Count()) // prints 4\nfmt.Println(h.Percentile(0.25)) //prints 12.5\nfmt.Println(h.Variance()) //prints 125\nfmt.Println(h.Sample()) //prints &{4 {0 0} 4 [10 20 30 40]}\n
t := metrics.NewTimer()\nt.Update(10)\ntime.Sleep(15 * time.Second)\nt.Update(20)\ntime.Sleep(15 * time.Second)\nt.Update(30)\ntime.Sleep(15 * time.Second)\nt.Update(40)\ntime.Sleep(15 * time.Second)\nfmt.Println((t.Max())) // prints 40\nfmt.Println(t.Min()) // prints 10\nfmt.Println(t.Mean()) // prints 25\nfmt.Println(t.Count()) // prints 4\nfmt.Println(t.Sum()) // prints 100\nfmt.Println(t.Percentile(0.25)) //prints 12.5\nfmt.Println(t.Variance()) //prints 125\nfmt.Println(t.Rate1()) // prints 0.1116017821771607\nfmt.Println(t.Rate5()) // prints 0.1755821073441404\nfmt.Println(t.Rate15()) // prints 0.1913711954736821\nfmt.Println(t.RateMean()) //prints 0.06665773963998162\n
Note
The go-metrics package does offer some variants of these like the GaugeFloat64 to hold 64 bit floats.
"},{"location":"design/adr/0006-Metrics-Collection/#consequences","title":"Consequences","text":"Possible standards for implementation
Approved (by TSC vote on 3/25/21)
"},{"location":"design/adr/0018-Service-Registry/#context","title":"Context","text":"An EdgeX system may be run with an optional service registry, the use of which (see the related ADR 0001-Registry-Refactor [1]) can be controlled on a per-service basis via the -r/-registry
commmand line options. For the purposes of this ADR, a base assumption is that the registry has been enabled for all services. The default service registry used by EdgeX is Consul [2] from Hashicorp. Consul is also the default configuration provider for EdgeX.
This ADR is meant to address the current usage of the registry by EdgeX services, and in particular whether the EdgeX services are using the registry to determine the location of peer services vs. using static per-service configuration. The reason this is being investigated is that there has been a proposal that EdgeX do away with the registry functionality, as the current implementation is not considered secure, due to the current configuration of Consul as used by the latest version of EdgeX (Hanoi/1.3.0).
According to the original Service Name Design document (v6) [3] written during the California (0.6) release of EdgeX, all EdgeX Foundry microservices should be able to accomplish the following tasks:
The purpose of this design is to ensure that services themselves advertise their location to the rest of the system by first self- registering. Most service registries (including Consul) implement some sort of health check mechanism. If a service is failing one or more health checks, the registry will stop reporting its availability when queried.
Note - the design specifically excludes device services from this service lookup, as Core Metadata maintains a persistent store of DeviceService objects which provide service location for device services.
"},{"location":"design/adr/0018-Service-Registry/#existing-behavior","title":"Existing Behavior","text":"This section documents the existing behavior in the Hanoi (1.3.x) version of EdgeX.
"},{"location":"design/adr/0018-Service-Registry/#device-services","title":"Device Services","text":"Device Virtual's behavior was first tested using the edgexfoundry snap (which is configured to always use the registry) by doing the following:
$ sudo snap install edgexfoundry $ cp /var/snap/edgexfoundry/current/config/device-virtual/res/configuration.toml .
I edited the file, removing the [Client.Data]
section completely and copied the file back into place. Next I enabled device-virtual while monitoring the journal output.
$ sudo cp configuration.toml /var/snap/edgexfoundry/current/config/device-virtual/res/\n$ sudo snap set edgexfoundry device-virtual=on\n
The following error was seen in the journal:
level=INFO app=device-virtual source=httpserver.go:94 msg=\"Web server starting (0.0.0.0:49990)\"\nerror: fatal error; Host setting for Core Data client not configured\n
Next I followed the same steps, but instead of completely removing the client, I instead set the client ports to invalid values. In this case the service logged the following errors and exited:
level=ERROR app=device-virtual source=service.go:149 msg=\"DeviceServicForName failed: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\"\nlevel=ERROR app=device-virtual source=init.go:45 msg=\"Couldn't register to metadata service: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\\n\"\n
Note - in order to run this second test, the easiest way to do so is to remove and reinstall the snap vs. manually wiping out device-virtual's configuration in Consul. I could have also stopped the service, modified the configuration directly in Consul, and restarted the service.
"},{"location":"design/adr/0018-Service-Registry/#registry-client-interface-usage","title":"Registry Client Interface Usage","text":"Next the service's usage of the go-mod-registry Client
interface was examined:
type Client interface {\n // Registers the current service with Registry for discover and health check\n Register() error\n\n // Un-registers the current service with Registry for discover and health check\n Unregister() error\n\n // Simply checks if Registry is up and running at the configured URL\n IsAlive() bool\n\n // Gets the service endpoint information for the target ID from the Registry\n GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)\n\n // Checks with the Registry if the target service is available, i.e. registered and healthy\n IsServiceAvailable(serviceId string) (bool, error)\n}\n
"},{"location":"design/adr/0018-Service-Registry/#summary","title":"Summary","text":"If a device service is started with the registry flag set:
IsServiceAvailable
) on startup. Regardless of the registry setting, the Go SDK always sources the addresses of its dependent services from the Client* configuration stanzas.The same approach was used for Core and Support services (i.e. reviewing the usage of go-mod-bootstrap's Client
interface), and ironically, the SMA seems to be the only service in edgex-go that actually queries the registry for service location:
./internal/system/agent/getconfig/executor.go: ep, err := e.registryClient.GetServiceEndpoint(serviceName)\n./internal/system/agent/direct/metrics.go: e, err := m.registryClient.GetServiceEndpoint(serviceName)\n
In summary, other than the SMA's configuration and metrics logic, the Core and Support services behave in the same manner as device-sdk-go.
Note - the SMA also has a longstanding issue #2486 where it continuousy logs errors if one (or more) of the Support Services are not running. As described in the issue, this could be avoided if the SMA used the registry to determine if the services were actually available. See related issue #1662 ('Look at Driving \"Default Services List\" via Configuration').
"},{"location":"design/adr/0018-Service-Registry/#security-proxy-setup","title":"Security Proxy Setup","text":"The security-proxy-setup service also relies on static service address configuration to configure the server routes for each of the services accessible through the API Gateway (aka Kong). Although it uses the same TOML-based client config keys as the other services, these configuration values are only ever read from the security-proxy-setup's local configuration.toml file, as the security services have never supported using our configuration provider (aka Consul).
Note - Another point worth mentioning with respect to security services is that in the Geneva and Hanoi releases the service health checks registered by the services (and the associated IsServiceAvailable
method) are used to orchestrate the ordered startup of the security services via a set of Consul scripts. This additional orchestration is only performed when EdgeX is deployed via docker, and is slated to to be removed as part of the Ireland release.
After a bit of research reaching as far back as the California (0.6.1) release of EdgeX, I've managed to piece together why the current implementation works the way it does. This history focues solely on the core and support services.
The California release of EdgeX was released in June of 2018 and was the first to include services written using Go. This version of EdgeX as well as versions through the Fuji release all relied on a bootstrapping service called core-config-seed which was responsible for seeding the configuration of all of the core and support services into Consul prior to any of the services being started.
This release actually preceded usage of TOML for configuration files, and instead just used a flat key/value format, with keys converted from legacy Java property names (e.g. meta.db.device.url ) to Camel[Pascal]/Case (e.g. MetaDeviceServiceURL).
I chose the config key mentioned above on purpose:
MetaDeviceURL = \"http://edgex-core-metadata:48081/api/v1/device\"\n
Not only did this config key provide the address of core metadata, it also provided the path of a specific REST endpoint. In later releases of EdgeX, the address of the service and the specific endpoint paths were de-coupled. Instead of following the Service Name design (which was finalized two months earlier), the initial implementation followed the legacy Java implementation and initialized its service clients for each required REST endpoint (belonging to another EdgeX service) directly from the associated *URL config key read from Consul (if enabled) or directly from the configuration file.
The shared client initialization code also created an Endpoint monitor goroutine and passed it a go channel channel used by the service to receive updates to the REST API endpoint URL. This monitor goroutine effectively polled Consul every 15s (this became configurable in later versions) for the client's service address and if a change was detected, would write the updated endpoint URL to the given channel, effectively ensuring that the service started using the new URL.
It wasn't till late in the Geneva development cycle that I noticed log messages which made me aware of the fact that every one of our services was making a REST call to check the address of a service endpoint every 15s, for every REST endpoint it used! An issue was filed (https://github.com/edgexfoundry/edgex-go/issues/2594), and the client monitoring was removed as part of the Geneva 1.2.1 release.
"},{"location":"design/adr/0018-Service-Registry/#problem-statement","title":"Problem Statement","text":"The fundamental problem with the existing implementations (as decribed above), is that there is too much duplication of configuration across services. For instance, Core Data's service port can easily be changed by passing the environment variable SERVICE_PORT to the service on startup. This overrides the configuration read from the configuration provider, and will cause Core Data to listen on the new port, however it has no impact on any services which use Core Data, as the client config for each is read from the configuration provider (excluding security-proxy-setup).
This means in order to change a service port, environment variable overrides (e.g. CLIENTS_COREDARA_PORT) need to set for every client service as well as security-proxy-setup (if required).
"},{"location":"design/adr/0018-Service-Registry/#decision","title":"Decision","text":"Update the core, support, and security-proxy-setup services to use go-mod-registry's Client.GetServiceEndpoint
method (if started with the --registry
option) to determine (a) if a service dependency is available and (b) use the returned address information to initialize client endpoints (or setup the correct route in the case of proxy-setup). The same changes also need to be applied to the App Functions SDK and Go Device SDK, with only minor changes required in the C Device SDK (see previous commments re: the current implementation).
Note - this design only works if service registration occurs before the service initializes its clients. For instance, Core Data and Core Metadata both depend on the other, and thus if both defer service registration till after client initialization, neither will be able to successfully lookup the address of the other service.
"},{"location":"design/adr/0018-Service-Registry/#consquences","title":"Consquences","text":"One impact of this decision is that since the security-proxy-setup service currently runs before any of the core and support services are started, it would not be possible to implement this proposal without also modifying the service to use a lazy initialization of the API Gateway's routes. As such, the implementation of this ADR will require more design work with respect to security-proxy-setup. Some of the issues include:
--registry
command-line support to security-proxy-setup).Route
entries use service-keys instead of arbitrary names (e.g. (Route.core-data
vs. Route.CoreData
).Approved by TSC Vote on 4/28/22
"},{"location":"design/adr/0023-North-South-Messaging/#context-and-proposed-design","title":"Context and Proposed Design","text":"Today, data flowing from sensors/devices (the \u201csouthside\u201d) through EdgeX to enterprise applications, databases and cloud-based systems (the \u201cnorthside\u201d) can be accomplished via REST or Message bus. That is, sensor or device data collected by a device service can be sent via REST or message bus to core data. Core data then relays the data to application services via message bus, but the sensor data can also be sent directly from device services to application services via message bus (bypassing core data). The message bus is implemented via Redis Pub/Sub (default) or via MQTT. From the application services, data can be sent to northside endpoints in any number of ways \u2013 including via MQTT.
So, in summary, data can be collected from a sensor or device and be sent from the southside to the northside entirely using message bus technology when desired.
Today, communications from a 3rd party system (enterprise application, cloud application, etc.) to EdgeX in order to acuate a device or get the latest information from a sensor is accomplished via REST. The 3rd party system makes a REST call of the command service which then relays a request to a device service also using REST. There is no built in means to make a message-based request of EdgeX or the devices/sensors it manages. Note, these REST calls are optionally made via the API Gateway in order to provide access control.
In a future release of EdgeX, there is a desire to allow 3rd party systems to make requests of the southside via message bus. Specifically, a 3rd party system will send a command request to the command service via external message broker. The command service would then relay the request via message bus to the managing device service via one of the allowed internal message bus implementations (which could be MQTT or Redis Pub/Sub today). The device service would use the message to trigger action on the device/sensor as it does when it receives a REST request, and respond via message bus back to the command service. In turn, the command service would relay the response to the 3rd party system via external message bus.
In summary, this ADR proposes that the core command service adds support for an external MQTT connection (in the same manner that app services provide an external MQTT connection), which will allow it to act as a bridge between the internal message bus (implemented via either MQTT or Redis Pub/Sub) and external MQTT message bus.
Note
For the purposes of this initial north-to-south message bus communications, external 3rd party communications to the command service will be limited to use of MQTT.
"},{"location":"design/adr/0023-North-South-Messaging/#core-command-as-message-bus-bridge","title":"Core Command as Message Bus Bridge","text":"The core command service will serve as the EdgeX entry point for external, north-to-south message bus requests to the south side.
3rd party systems should not be granted access to the EdgeX internal message bus. Therefore, in order to implement north to south communications via message bus (specifically MQTT), the command service needs to take messages from the 3rd party or external MQTT topics and pass them internally onto the EdgeX internal message bus where they can eventually be routed to the device services and then on to the devices/sensors (southside).
In reverse, response messages from the southside will also be sent through the internal EdgeX message bus to the command service where they can then be bridged to the external MQTT topics and respond to the 3rd party system requester.
Note
Note that eKuiper is allowed access directly to the internal EdgeX message bus. This is a special circumstance of 3rd party external system communication as eKuiper is a sister project that is deemed the EdgeX reference implementation rules engine. In future releases of EdgeX, even eKuiper may be routed through an external to internal message bus bridge for better decoupling and security.
"},{"location":"design/adr/0023-North-South-Messaging/#message-bus-subscriptions-and-publishing","title":"Message Bus Subscriptions and Publishing","text":"The command service will require the means to publish messages to device services via the EdgeX message bus (internal message bus). It would use the messaging client (go-mod-messaging) to create a new MessageClient, connect to the message bus, and publish to designated request message topics (see topic configuration below).
The command service will also need to connect to the EdgeX message bus (internal message bus) in order to receive responses from the device services after a request by message bus has been made. Again, core command will use the go-mod-messaging MessageClient to subscribe and receive response messages from the device services.
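As an illustration only (not the actual core command implementation), the sketch below shows how a service could use go-mod-messaging's MessageClient to publish a relayed command request and listen for device service responses on the internal bus. The v2 module path, the localhost Redis broker, and the device-virtual/testDevice1/coolingpoint1 names are assumptions for this example; exact import paths and signatures vary by EdgeX release.

// Hedged sketch: publishing a command request and subscribing for responses on the internal EdgeX MessageBus.
package main

import (
	"fmt"

	"github.com/edgexfoundry/go-mod-messaging/v2/messaging"
	"github.com/edgexfoundry/go-mod-messaging/v2/pkg/types"
)

func main() {
	// Connect to the internal EdgeX MessageBus (Redis Pub/Sub assumed here).
	client, err := messaging.NewMessageClient(types.MessageBusConfig{
		PublishHost:   types.HostInfo{Host: "localhost", Port: 6379, Protocol: "redis"},
		SubscribeHost: types.HostInfo{Host: "localhost", Port: 6379, Protocol: "redis"},
		Type:          "redis",
	})
	if err != nil {
		panic(err)
	}
	if err := client.Connect(); err != nil {
		panic(err)
	}

	// Subscribe to device service responses.
	responses := make(chan types.MessageEnvelope)
	messageErrors := make(chan error)
	_ = client.Subscribe([]types.TopicChannel{
		{Topic: "edgex/command/response/#", Messages: responses},
	}, messageErrors)

	// Publish a relayed GET request to the managing device service.
	envelope := types.MessageEnvelope{
		CorrelationID: "14a42ea6-c394-41c3-8bcd-a29b9f5e6835",
		ContentType:   "application/json",
		Payload:       nil, // GET requests typically carry no payload
	}
	topic := "edgex/command/request/device-virtual/testDevice1/coolingpoint1/get"
	if err := client.Publish(envelope, topic); err != nil {
		panic(err)
	}

	// Wait for a response (real code would correlate responses by Correlation-ID).
	select {
	case resp := <-responses:
		fmt.Printf("response payload: %s\n", string(resp.Payload))
	case err := <-messageErrors:
		fmt.Println("subscription error:", err)
	}
}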
In a similar fashion, device services will need to both subscribe and publish to the EdgeX message bus (internal message bus) to get command requests and push back any responses to the command service. Go lang device services will, like the command service, use the go-mod-messaging module and MessagingClient to get command requests and send command responses to and from the EdgeX message bus. C based device services will use a C alternative to subscribe and publish to the EdgeX message bus (internal message bus). Note, device services already use go-mod-messaging when publishing events/readings to the message bus (internal message bus).
The command service will also need to subscribe to 3rd party MQTT topics (external message bus) in order to get command requests from the 3rd party system. The command service will then relay command requests on to the appropriate device service via the internal message bus (forming the message bus to message bus bridge). Likewise, the command service will accept responses from the device services on the EdgeX message bus (internal message bus) and then publish responses to the 3rd party system via the 3rd party MQTT topics (external message bus).
"},{"location":"design/adr/0023-North-South-Messaging/#command-queries-via-command-service","title":"Command Queries via Command Service","text":"Today, 3rd party systems can make a REST call of core command to get the possible commands that can be executed. There are two query REST API endpoints: /device/all (to get the commands for all devices) and device/name/{name} (to get the commands for a specific device by name).
It stands to reason that if a 3rd party system wants to send commands via messaging, it would also want to learn what commands are available via messaging. For this reason, the core command service will also allow message requests to get all commands or to get the commands for a particular device by name. In other words, the core command service must support command \"queries\" via messaging just as it supports command requests via messaging.
In the case of command queries, the REST responses include the actual REST command endpoints. For example, the REST query would return core command paths, URLs and parameters used to construct REST command requests (as shown in the example below).
\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"get\": true,\n\"path\": \"/api/v2/device/name/testDevice1/command/coolingpoint1\",\n\"url\": \"http://localhost:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n}\n]\n}\n]\n
When using messaging to make the \"queries\" the response message must return information about how to pass a message to the appropriate topic to make the command request. Therefore, the query response when using messaging would include something like the following:
\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint/get\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n} ]\n},\n{\n\"name\": \"coolingpoint1\",\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint1/set\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n} ]\n}\n]\n
Note
Per Core WG meeting of 4/7/22 - the JSON above serves as a general example. The implementation will have to address get/set (or read/write) differentiation, but this is considered an implementation detail to be resolved by the developers.
Note
The query response does not contain a URL since it is assumed that the broker address must already be known in order to make the query.
"},{"location":"design/adr/0023-North-South-Messaging/#message-structure","title":"Message Structure","text":"In REST based command requests (and responses), the HTTP request line contains important information such as the path or target of the request, and the HTTP method type (indicating a GET or PUT request). The HTTP status line provides the information such as the response code (ex: 200 for OK). The body or payload of the HTTP message contains the request details (such as parameters to a device PUT call) or response information (such as events and associated readings from a GET call).
Since most message bus protocols lack a generic message header mechanism (as in HTTP), providing request/response metadata is accomplished by defining a message envelope object associated with each request/response. Therefore, messages described in this ADR must provide JSON envelope and payload objects for each request/response.
The message topic names act like the HTTP paths and methods in REST requests. That is, the topic names specify the device receiver of any command request as paths do in the HTTP requests.
"},{"location":"design/adr/0023-North-South-Messaging/#message-envelope","title":"Message Envelope","text":"The messages defined in this ADR are JSON formatted requests and responses that share a common base structure. The outer most JSON object represents the message envelope
, which is used to convey metadata about request/response (e.g. a correlation identifier which will be added to any relayed request message as well as the response message envelope so that the 3rd party system will know to associate the responses to the original request).
Note
A Correlation ID (see this article for a more detailed description) is a unique value that is added to every request and response involved in a transaction, which could include multiple requests/responses between one or more microservices. It's not meant to correlate requests to responses; it's meant to label every message involved in a potentially multi-request transaction.
A Request ID should be an identifier returned on the response to a request (providing traceability between single request/response).
The envelope will also contain the API version (something provided in the HTTP path when using REST).
Command requests in HTTP may also contain ds-pushevent and ds-returnevent query parameters (for GET commands). These will be optionally provided key/value pairs represented in the message envelope's query parameters (which optionally allows for other parameters in the future).
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"API\":\"V2\",\n\"queryParams\": {\n\"ds-pushevent\":\"true\",\n\"ds-returnevent\":\"true\",\n}\n...\n}\n
Note
As with REST requests, if ds-returnevent was no, then a message with an envelope would be returned but with no payload, as there would be no events to return.
The request message payload to the command service and those relayed to the device service would mimic their HTTP/REST request body alternatives. The payload provides details needed in executing the command at the south side.
In the example GET and PUT messages below, note the envelope wraps or encases the message payload. The payload may be empty (as is typical of GET requests).
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"queryParams\": {\n\"ds-pushevent\":\"true\",\n\"ds-returnevent\":\"true\",\n}\n}\n\n{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"payload\": {\n\"AHU-TargetTemperature\": \"28.5\",\n\"AHU-TargetBand\": \"4.0\",\n\"AHU-TargetHumidity\": {\n\"Accuracy\": \"0.2-0.3% RH\",\n\"Value\": 59\n}\n}\n}\n
Note
The payload may be empty and is therefore optional in the message structure, as exemplified in the first example above.
The response message payload would contain the response from the south side, which is typically EdgeX event/reading objects (in the case of GET requests) but would also include any error message details.
Example response messages for a GET and PUT request are shown below. Again, note that the message envelope wraps the response payload.
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"errorCode\": 0,\n\"payload\": {\n\"event\": {\n\"apiVersion\": \"v2\",\n\"id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n\"deviceName\": \"string\",\n\"profileName\": \"string\",\n\"created\": 0,\n\"origin\": 0,\n\"readings\": [\n\"string\"\n],\n\"tags\": {\n\"Gateway-id\": \"HoustonStore-000123\",\n\"Latitude\": \"29.630771\",\n\"Longitude\": \"-95.377603\"\n}\n}\n}\n}\n\n{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"errorCode\": 1,\n\"payload\": {\n\"message\": \"string\"\n}\n}\n
Note
Get command responses may include CBOR data. The message envelope (which has a content type indicator) will indicate that the payload is either CBOR or JSON. The same message envelope content type indicator that is used in REST communications will be used in this message bus communications.
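For illustration, a Go representation of the envelope fields shown in the JSON examples above might look like the sketch below. The struct and its field names are derived from the examples only, not from an actual EdgeX DTO, so treat it as an assumption.

// Sketch of the request/response envelope fields used in the JSON examples above.
// Field names mirror the examples; the real EdgeX type may differ.
package messages

import "encoding/json"

type CommandMessageEnvelope struct {
	CorrelationID string            `json:"Correlation-ID"`
	APIVersion    string            `json:"apiVersion"`
	RequestID     string            `json:"requestId"`
	ErrorCode     int               `json:"errorCode,omitempty"`   // 0 = success (responses only)
	QueryParams   map[string]string `json:"queryParams,omitempty"` // e.g. ds-pushevent, ds-returnevent
	Payload       json.RawMessage   `json:"payload,omitempty"`     // command parameters or event/reading data
}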
Alert
Open discussions per working group meetings and reviews... One open item is whether the response should simply carry an error boolean and then have the message indicate the error condition.
The request message payload to query the command service would mimic its HTTP/REST request body alternatives. The payload provides details needed in executing the command at the south side.
In the example query to get all commands below, note the envelope wraps or encases the message payload. The payload will be empty. The query parameters will include the offset and limit (as per the REST counterparts).
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"queryParams\": {\n\"offset\":0,\n\"limit\":20,\n}\n}\n\nIn the example query to get commands for a specific device by name, the device name would be in the topic, so the query message would be without information (and removed from the message as queryParams will be optional).\n\n{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n}\n
The response message payload for queries would contain the information necessary to make a message-based command request.
An example response message is shown below. Again, note that the message envelope wraps the response payload.
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"errorCode\": 0,\n\"payload\": {\n\"apiVersion\": \"v2\",\n\"deviceCoreCommands\": [\n{\n\"deviceName\": \"testDevice1\",\n\"profileName\": \"testProfile\",\n\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"get\": true,\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint1/get\",\n\"url\": \"broker.address:1883\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n}\n]\n}\n]\n},\n{\n\"deviceName\": \"testDevice1\",\n\"profileName\": \"testProfile\",\n\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"set\": true,\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint1/set\",\n\"url\": \"broker.address:1883\",\n\"parameters\": [\n{\n\"resourceName\": \"resource5\",\n\"valueType\": \"String\"\n},\n{\n\"resourceName\": \"resource6\",\n\"valueType\": \"Bool\"\n}\n]\n}\n]\n}\n]\n}\n}\n
"},{"location":"design/adr/0023-North-South-Messaging/#topic-naming","title":"Topic Naming","text":""},{"location":"design/adr/0023-North-South-Messaging/#3rd-party-system-topics","title":"3rd party system topics","text":"The 3rd party system or application must publish command requests messages to an EdgeX specified MQTT topic (external message bus) and subscribe to responses from the same. Messages topics should follow the following pattern:
/edgex/command/request/<device-name>/<command-name>/<method>
/edgex/command/response/#
For queries, the following topics are used:
Publishing query command request topic: /edgex/commandquery/request
Subscribing query command response topic: /edgex/commandquery/response
The command service must subscribe to the request topics of the 3rd party MQTT broker (external message bus) to get command requests, publish those requests to a topic on the EdgeX message bus (internal message bus) to send them to a device service, subscribe to response messages on topics from device services (internal), and then publish response messages to a topic on the 3rd party MQTT broker (external). Message topics for the command service would follow this standard:
edgex/command/request/#
edgex/command/request/<device-service>/<device-name>/<command-name>/<method>
edgex/command/response/#
edgex/command/response/<device-name>/<command-name>/<method>
For queries, the following topics are used:
edgex/commandquery/request
edgex/commandquery/response
The device services must subscribe to the EdgeX command request topic (internal message bus) and publish response messages to an EdgeX command response topic. The following naming standard will be applied to these topic names:
edgex/command/request/#
edgex/command/response/<device-service>/<device-name>/<command-name>/<method>
Both the EdgeX command service and the device services must contain configuration needed to connect to and publish/subscribe to messages from topics on the EdgeX message bus (internal). This includes configuration to access the message bus when secure or insecure.
The command service must also be provided configuration to connect to the 3rd party MQTT broker's topics (external). Because these communications may be done in a secure or insecure fashion, the core command service will need to be provided access to the 3rd party MQTT broker (external).
Similar to EdgeX application services, the command service will have access to an external MQTT broker to get command requests and send 3rd parties a response. This will require the command service to have two message queue configuration settings (internal and external).
"},{"location":"design/adr/0023-North-South-Messaging/#command-service-configuration","title":"command service configuration","text":"Example command service configuration is provided below.
[MessageQueue]\n[InternalMessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nRequestTopicPrefix = \"edgex/command/request/\" # for publishing requests to the device service; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\nResponseTopic = \"edgex/command/response/#\" # for subscribing to device service responses\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\n[ExternalMQTT]\nProtocol = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nRequestCommandTopic = \"edgex/command/request/#\" # for subscribing to 3rd party command requests\nResponseCommandTopicPrefix = \"edgex/command/response/\" # for publishing responses back to 3rd party systems; /<device-name>/<command-name>/<method> will be added to this publish topic prefix\nRequestQueryTopic = \"edgex/commandquery/request\"\nResponseQueryTopic = \"edgex/commandquery/response\"\n
Note
Core command contains no MessageQueue configuration today. This is all additive/new configuration and therefore backward compatible with EdgeX 2.x implementations.
"},{"location":"design/adr/0023-North-South-Messaging/#device-service-configuration","title":"device service configuration","text":"Example device service configuration is provided below.
[MessageQueue]\n## already existing message queue configuration (for sending events/readings to the message bus)\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\nPublishTopicPrefix = \"edgex/events/device\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\n# Client Identifiers\nClientId = \"device-rest\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\n\n## new configuration to allow device services to also communicate via message bus with core command\nCommandRequestTopic = \"edgex/command/request/#\" # subscribing for inbound command requests\nCommandResponseTopicPrefix = \"edgex/command/response/\" # publishing outbound command responses; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\n
Note
Most of the device service configuration is existing based on its need to already communicate with the message bus for publishing events/readings. The last two lines are added to allow device services to subscribe and publish command messages from/to the message bus.
"},{"location":"design/adr/0023-North-South-Messaging/#edgex-service-internal-message-bus-requests","title":"EdgeX Service (Internal) Message Bus Requests","text":"Application services (or other EdgeX services in the future) may want to also use message communications to make command requests. Application services make command requests today via REST.
In order to support this, the following need to be added:
The command service will also need an internal request topic and internal response topic prefix configuration to allow internal EdgeX services to make command requests (and query requests).
[MessageQueue]\n[InternalMessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nRequestTopicPrefix = \"edgex/command/request/\" # for publishing requests to the device service; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\nResponseTopic = \"edgex/command/response/#\" # for subscribing to device service responses\nInternalRequestCommandTopic = \"/command/request/#\" # for subscribing to internal command requests\nInternalResponseCommandTopicPrefix = \"/command/response/\" # for publishing responses back to internal service; /<device-name>/<command-name>/<method> will be added to this publish topic prefix\nInternalRequestQueryTopic = \"/commandquery/request\"\nInternalResponseQueryTopic = \"/commandquery/response\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\n
A new command message client will need to be created to allow internal services (app services in this instance) to conveniently use the message bus communications with core command. The client service's configuration will also be expanded to include the corresponding topic and UseMessageBus flag that enables the new messaging-based CommandClient to be created. Example client configuration would look something like the following:
[Clients]\n[Clients.core-command]\nUseMessageBus = true\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nCommandRequestTopicPrefix = \"/command/request\" # /<device-name>/<command-name>/<method> will be added to this publish topic prefix\nCommandResponseTopic = \"/command/response/#\"\nCommandQueryRequestTopic = \"/commandquery/request\"\nCommandQueryResponseTopic = \"/commandquery/response\"\n
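As a purely hypothetical sketch of what such a messaging-based command client could look like (the ADR leaves the actual interface to implementation, so every name below is an assumption):

// Hypothetical interface sketch only; the real client created for this ADR may differ.
package clients

import "context"

// CommandMessageClient issues device commands over the EdgeX MessageBus instead of REST.
type CommandMessageClient interface {
	// IssueGetCommand publishes a GET request to <prefix>/<device-name>/<command-name>/get
	// and waits for the correlated response on the configured response topic.
	IssueGetCommand(ctx context.Context, deviceName string, commandName string, queryParams map[string]string) ([]byte, error)

	// IssueSetCommand publishes a SET request with the given settings as the payload.
	IssueSetCommand(ctx context.Context, deviceName string, commandName string, settings map[string]string) error

	// AllDeviceCoreCommands publishes to the command query request topic and returns the raw response payload.
	AllDeviceCoreCommands(ctx context.Context, offset int, limit int) ([]byte, error)
}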
Do we need separate topics for all the devices or would one on the device service suffice?
Would clients (non EdgeX services and applications) want to get a list of available commands via message (instead of calling REST)?
Dynamic configuration of the message subscription is not a user friendly operation today (requiring configuration changes).
Is it acceptable for more than one response to be published by the device service on the same correlation ID? E.g., send back \"Acknowledged\", then \"Scheduled\", then \"Starting\", then \"Done\" statuses?
Would it make sense to echo the command name into the response, as a reality check?
Would sending/receiving binary data (e.g. CBOR) be supported in this north-south message implementation?
Use of the message bus communications (by the non-EdgeX 3rd party service or application) would bypass the API Gateway.
Note a number of open questions in the Message Structure section that still need to be addressed.
Alert
Per TSC meeting of 4/27/22 - the discussion around error response was reopened. There is still some polite disagreement as to whether to keep the error response simple (as documented in this ADR) or to offer errorCode enumerations that are similar to HTTP response codes for common problems (such as ). As part of this discussion, the question is whether the error code enumerations should be exactly that of the HTTP response codes (400, 404, 423, 500, etc.) or more generic (i.e., non-HTTP) response error codes unique to this implementation.
The resolution to this question was to explore some options at implementation time. The use of an enumeration (HTTP or other) can be explored during development and options brought forth via PR.
Info
This ADR does not handle securing the message bus communications between services. This need is to be covered universally in an upcoming ADR.
"},{"location":"design/adr/0023-North-South-Messaging/#future-considerations","title":"Future Considerations","text":"System Events, aka Control Plane Events (CPE), are new to EdgeX. This ADR addresses the System Events for Devices use case with an extensible design that can address other System Event use cases that may be identified in the future. This extensible design approach and the fact that System Events are produced and consumed by different EdgeX services makes it architecturally significant warranting this ADR.
"},{"location":"design/adr/0024-system-events/#proposed-design","title":"Proposed Design","text":"To address the System Events for Devices use case, Core Metadata will publish a new SystemEvent
DTO to the EdgeX MessageBus when a device is added, updated or deleted. Consumers of these System Events will subscribe to the MessageBus to receive the new SystemEvent
DTO .
This new SystemEvent DTO will contain the data describing the System Event, with the associated details carried in the same manner as the ObjectValue in the Reading DTO.
As defined, this DTO should suffice for future System Event use cases.
"},{"location":"design/adr/0024-system-events/#messagebus","title":"MessageBus","text":"Services that publish System Events (Core Metadata) must connect to the EdgeX MessageBus and have MessageBus configuration similar to that of Core Data's here. This design assumes that Core Metadata will have this capability and configuration due to planned implementation of Service Metrics.
The PublishTopicPrefix
property in Core Metadata's MessageQueue
configuration will be used for System Events and set to edgex/system-event
.
The new SystemEvent
DTO will be published to a multi-level topic allowing subscribers to filter by topic. The format of this topic for System Events will be:
\u200b {PublishTopicPrefix}/{source}/{type}/{action}
where
{source}
= Publisher of the System Event, i.e. core-metadata
{type}
= Type of System Event, i.e. device
{action}
= The Action that triggered the System Event, i.e. add
Specific use cases may add additional levels as needed. The Device System Events use case will add the following levels
{owner}
= Owner the data for the System Event, i.e device-onvif-camera
as the device owner`{profile}
= Device profile associated with the Device, i.e onvif-camera
Example - System Event subscription topics
edgex/system-event/# - All system events\nedgex/system-event/core-metadata/# - only system events from Core Metadata\nedgex/system-event/core-metadata/device/# - only device system events from Core Metadata\nedgex/system-event/core-metadata/device/add/device-onvif-camera/# - only add device system events for device-onvif-camera\nedgex/system-event/core-metadata/device/#/#/onvif-camera - only device system events for devices created for the onvif-camera device profile\n
"},{"location":"design/adr/0024-system-events/#consumers","title":"Consumers","text":"Consumers of Device System Events will likely be custom application services as described in System Events for Devices . No changes are required to the App Functions SDK since it already supports processing of different types via the Target Type capability. Developers of custom application services that consume System Events will need to do the following:
&dtos.SystemEvent{}
when creating an instance of ApplicationService
using the NewAppServiceWithTargetType factory function.SystemEvent
DTO and process it accordingly. Similar to how the ToLineProtocol pipeline function expects the Metric DTO.SystemEvent
DTO will be added to this repositoryThis design will satisfy the System Events for Devices use case as well as possibly other future System Event use cases.
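A minimal sketch of such a custom application service is shown below. It assumes the v2 App Functions SDK and go-mod-core-contracts module paths, the NewAppServiceWithTargetType factory named above, and SystemEvent fields matching the topic levels (Source, Type, Action); names and signatures may differ by release.

// Hedged sketch: a custom app service that consumes SystemEvent DTOs from the MessageBus.
package main

import (
	"os"

	"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg"
	"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces"
	"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos"
)

func main() {
	// Expect SystemEvent DTOs instead of the default Event DTO.
	service, ok := pkg.NewAppServiceWithTargetType("app-device-watcher", &dtos.SystemEvent{})
	if !ok {
		os.Exit(-1)
	}

	// The subscribe topic (set in the service's Trigger configuration) would be
	// something like edgex/system-event/core-metadata/device/#.
	_ = service.SetFunctionsPipeline(handleSystemEvent)
	_ = service.MakeItRun()
}

// handleSystemEvent is a custom pipeline function that processes the SystemEvent DTO.
func handleSystemEvent(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {
	systemEvent, ok := data.(*dtos.SystemEvent)
	if !ok {
		ctx.LoggingClient().Error("type received is not a SystemEvent")
		return false, nil
	}
	ctx.LoggingClient().Infof("received %s %s system event from %s",
		systemEvent.Action, systemEvent.Type, systemEvent.Source)
	return false, nil // end of pipeline
}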
"},{"location":"design/adr/0024-system-events/#other-related-adrs","title":"Other Related ADRs","text":"This ADR describes the architecture of the new common configuration capability which impacts all services. Requirements for this new capability are described in the above referenced UCR. This is deemed architecturally significant due to the cross-cutting impacts.
"},{"location":"design/adr/0026-Common%20Configuration/#current-design","title":"Current Design","text":"The following flow chart demonstrates the bootstrapping of each services' configuration in the current Levski release.
"},{"location":"design/adr/0026-Common%20Configuration/#proposed-design","title":"Proposed Design","text":"The configuration settings that are common to all services will be partitioned out into a separate common configuration source. This common configuration source will be pushed into the Configuration Provider by the new core-common-config-bootstrapper
service.
During bootstrapping, each service will either load the common configuration from the Configuration Provider or via URI to some endpoint that provides the common configuration. Each service will have additional private configuration, which may override and/or extend the common configuration.
An additional common configuration setting must be present to indicate all other common settings have been pushed to the Configuration Provider. This setting is stored last and the services must wait for this setting to be present prior to pulling the common settings.
Environment overrides are only applied when configuration is loaded from file. The overridden values are pushed into the Configuration Provider, when used.
"},{"location":"design/adr/0026-Common%20Configuration/#common-config-bootstrapping","title":"Common Config Bootstrapping","text":"The following flow chart demonstrates the bootstrapping (seeding) of the common configuration when using the Configuration Provider.
"},{"location":"design/adr/0026-Common%20Configuration/#service-configuration-bootstrapping","title":"Service Configuration Bootstrapping","text":"The following flow chart demonstrates the bootstrapping of each services' configuration with this new common configuration capability.
"},{"location":"design/adr/0026-Common%20Configuration/#secret-store-configuration","title":"Secret Store Configuration","text":"As part of this design, the Secret Store configuration is being removed from the service configuration (common and private). This is so the Secret Provider can be instantiated prior to processing the service's configuration which may require the Secret Provider. The Secret Store configuration will now be a combination of default values and environment variable overrides. These environment variables will be the same as the ones that are currently used to override the configuration.
"},{"location":"design/adr/0026-Common%20Configuration/#specifying-the-common-configuration-location","title":"Specifying the Common Configuration location","text":"If the -cp/--configProvider
command line option is used, the service will default to pulling the common configuration from a standard path in the Configuration Provider. i.e. edgex/3.0/common/
The -cp/--configProvider
option assumes the usage of the core-common-config-bootstrapper service and cannot be used with the -cc/--commonConfig
option.
The new -cc/--commonConfig
command line option will be added for all services. This option will take the URI that specifies where the common configuration is pulled when not using the Configuration Provider. Authentication will be limited to basic-auth
. In addition, a new environment override variable EDGEX_COMMON_CONFIG
will be added which allows overriding this new command line option.
If the -cp/--configProvider
option is not specified and the -cc/--commonConfig
option is not specified, then the service will start using solely the private configuration. In this scenario, any information in the common configuration must be added to the service's private configuration. The individual bootstrap handlers will need to be enhanced to detect an empty configuration for robust error messaging.
-cp/--configProvider
command line option or the EDGEX_CONFIG_PROVIDER
environment variable.-cc/--commonConfig
command line option or the EDGEX_COMMON_CONFIG
environment variable may be specified using The Writable sections in common and in private configurations will be watched for changes when using the Configuration Provider. When changes to the common Writable are processed, each changed setting must be checked to see if the setting exists in the service's private section. The change will be ignored if the setting exists in the service's private section. This is so that the service's private overrides are always retained.
Changes to the service's private Writable section will be processed as is done currently.
"},{"location":"design/adr/0026-Common%20Configuration/#common-application-and-device-service-settings","title":"Common Application and Device service settings","text":"Any settings that are common to all Application Services and/or to all Device Services will be included in the single common configuration source. These settings will be ignored by services that don't use them when marshaled into the service's configuration struct.
"},{"location":"design/adr/0026-Common%20Configuration/#example-configuration-files","title":"Example Configuration Files","text":""},{"location":"design/adr/0026-Common%20Configuration/#common-configuration_1","title":"Common Configuration","text":"[Writable]\nLogLevel = \"INFO\"\n[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.DB]\npath = \"redisdb\"\n[Writable.InsecureSecrets.DB.Secrets]\nusername = \"\"\npassword = \"\"\n[Writable.Telemetry]\nInterval = \"30s\"\nPublishTopicPrefix = \"edgex/telemetry\" # /<service-name>/<metric-name> will be added to this Publish Topic prefix\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\n# Device SDK Common Service Metrics\nEventsSent = false\nReadingsSent = false\n# App SDK Common Service Metrics\nMessagesReceived = false\nInvalidMessagesReceived = false\nPipelineMessagesProcessed = false # Pipeline IDs are added as the tag for the metric for each pipeline defined\nPipelineMessageProcessingTime = false # Pipeline IDs are added as the tag for the metric for each pipeline defined\nPipelineProcessingErrors = false # Pipeline IDs are added as the tag for the metric for each pipeline defined\nHttpExportSize = false # Single metric used for all HTTP Exports\nMqttExportSize = false # BrokerAddress and Topic are added as the tag for this metric for each MqttExport defined \n# Common Security Service Metrics\nSecuritySecretsRequested = false\nSecuritySecretsStored = false\nSecurityConsulTokensRequested = false\nSecurityConsulTokenDuration = false\n[Writable.Telemetry.Tags] # Contains the service level tags to be attached to all the service's metrics\n# Gateway=\"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only chnage existing value, not added new ones.\n\n# Device Service specifc common Writable configuration\n[Writable.Reading]\nReadingUnits = true\n\n# Application Service specifc common Writable configuration\n[Writable.StoreAndForward]\nEnabled = false\nRetryInterval = \"5m\"\nMaxRetryCount = 10\n\n[Service]\nHealthCheckInterval = \"10s\"\nHost = \"localhost\"\nServerBindAddr = \"\" # Leave blank so default to Host value unless different value is needed.\nMaxResultCount = 1024\nMaxRequestSize = 0 # Not curently used. 
Defines the maximum size of http request body in bytes\nRequestTimeout = \"5s\"\n[Service.CORSConfiguration]\nEnableCORS = false\nCORSAllowCredentials = false\nCORSAllowedOrigin = \"https://localhost\"\nCORSAllowedMethods = \"GET, POST, PUT, PATCH, DELETE\"\nCORSAllowedHeaders = \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\"\nCORSExposeHeaders = \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\"\nCORSMaxAge = 3600\n\n[Registry]\nHost = \"localhost\"\nPort = 8500\nType = \"consul\"\n\n[Databases]\n[Databases.Primary]\nHost = \"localhost\"\nPort = 6379\nTimeout = 5000\nType = \"redisdb\"\n\n[MessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\"\n# Additional Default NATS Specific options that need to be here to enable evnironment variable overrides of them\nFormat = \"nats\"\nRetryOnFailedConnect = \"true\"\nQueueGroup = \"\"\nDurable = \"\"\nAutoProvision = \"true\"\nDeliver = \"new\"\nDefaultPubRetryAttempts = \"2\"\nSubject = \"edgex/#\" # Required for NATS Jetstram only for stream autoprovsioning\n\n# Device Service specifc common configuration\n[Device]\nDataTransform = true\nMaxCmdOps = 128\nMaxCmdValueLen = 256\nProfilesDir = \"./res/profiles\"\nDevicesDir = \"./res/devices\"\nEnableAsyncReadings = true\nAsyncBufferSize = 16\nLabels = []\nUseMessageBus = true\n[Device.Discovery]\nEnabled = false\nInterval = \"30s\"\n\n# Application Service specifc common configuration \n[Trigger]\nType=\"edgex-messagebus\"\n[Trigger.EdgexMessageBus]\nType = \"redis\"\n[Trigger.EdgexMessageBus.SubscribeHost]\nHost = \"localhost\"\nPort = 6379\nProtocol = \"redis\"\n[Trigger.EdgexMessageBus.PublishHost]\nHost = \"localhost\"\nPort = 6379\nProtocol = \"redis\"\n[Trigger.EdgexMessageBus.Optional]\nauthmode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nsecretname = \"redisdb\"\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\"\n# Default NATS Specific options that need to be here to enable environment variable overrides of them\nFormat = \"nats\"\nRetryOnFailedConnect = \"true\"\nQueueGroup = \"\"\nDurable = \"\"\nAutoProvision = \"true\"\nDeliver = \"new\"\nDefaultPubRetryAttempts = \"2\"\nSubject = \"edgex/#\" # Required for NATS JetStream only for stream auto provisioning\n
"},{"location":"design/adr/0026-Common%20Configuration/#core-data-private-configuration","title":"Core Data Private Configuration","text":"MaxEventSize = 25000 # Defines the maximum event size in kilobytes\n\n[Writable]\nPersistData = true\n[Writable.Telemetry]\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\n# Core Data Service Metrics\nEventsPersisted = false\nReadingsPersisted = false\n[Service]\nPort = 59880\nStartupMsg = \"This is the Core Data Microservice\"\n\n[Clients] # Core data no longer dependent on \"Client\" services. Other services will have thier specific clients here\n\n[Databases]\n[Databases.Primary]\nName = \"coredata\"\n\n[MessageQueue]\nPublishTopicPrefix = \"edgex/events/core\" # /<device-profile-name>/<device-name> will be added to this Publish Topic prefix\nSubscribeEnabled = true\nSubscribeTopic = \"edgex/events/device/#\" # required for subscribing to Events from MessageBus\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\nClientId =\"core-data\"\n
"},{"location":"design/adr/0026-Common%20Configuration/#app-rfid-llrp-inventory-private-configuration","title":"App RFID LLRP Inventory Private Configuration","text":"[Service]\nPort = 59711\nStartupMsg = \"RFID LLRP Inventory Service\"\n\n[Clients]\n[Clients.core-data]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59880\n\n[Clients.core-metadata]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59881\n\n[Clients.core-command]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59882\n\n[Trigger]\nType=\"edgex-messagebus\"\n[Trigger.EdgexMessageBus]\n[Trigger.EdgexMessageBus.SubscribeHost]\nSubscribeTopics=\"edgex/events/#/#/#/ROAccessReport,edgex/events/#/#/#/ReaderEventNotification\"\n[Trigger.EdgexMessageBus.PublishHost]\nPublishTopic=\"edgex/events/device/{profilename}/{devicename}/{sourcename}\" # publish to same topic format the Device Services use\n[Trigger.EdgexMessageBus.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\nClientId =\"app-rfid-llrp-inventory\"\n\n[AppCustom]\n# Every device(reader) + antenna port represents a tag location and can be assigned an alias\n# such as Freezer, Backroom etc. to give more meaning to the data. The default alias set by\n# the application has a format of <deviceName>_<antennaId> e.g. Reader-10-EF-25_1 where\n# Reader-10-EF-25 is the deviceName and 1 is the antennaId.\n# See also: https://github.com/edgexfoundry/app-rfid-llrp-inventory#setting-the-aliases\n#\n# In order to override an alias, set the default alias as the key, and the new alias as the value you want, such as:\n# Reader-10-EF-25_1 = \"Freezer\"\n# Reader-10-EF-25_2 = \"Backroom\"\n[AppCustom.Aliases]\n\n# See: https://github.com/edgexfoundry/app-rfid-llrp-inventory#configuration\n[AppCustom.AppSettings]\nDeviceServiceName = \"device-rfid-llrp\"\nAdjustLastReadOnByOrigin = true\nDepartedThresholdSeconds = 600\nDepartedCheckIntervalSeconds = 30\nAgeOutHours = 336\nMobilityProfileThreshold = 6.0\nMobilityProfileHoldoffMillis = 500.0\nMobilityProfileSlope = -0.008\n
"},{"location":"design/adr/0026-Common%20Configuration/#device-mqtt-private-configuration","title":"Device MQTT Private Configuration","text":"MaxEventSize = 0 # value 0 unlimit the maximum event size that can be sent to message bus or core-data\n\n[Writable]\n# InsecureSecrets are required for when Redis is used for message bus\n[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.MQTT]\npath = \"credentials\"\n[Writable.InsecureSecrets.MQTT.Secrets]\nusername = \"\"\npassword = \"\"\n\n[Service]\nPort = 59982\nStartupMsg = \"device mqtt started\"\n\n[Clients]\n[Clients.core-data]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59880\n\n[Clients.core-metadata]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59881\n\n[MessageQueue]\nPublishTopicPrefix = \"edgex/events/device\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n[MessageQueue.Optional]\n# Default MQTT & NATS Specific options that need to be here to enable environment variable overrides of them\nClientId = \"device-mqtt\"\n[MessageQueue.Topics]\nCommandRequestTopic = \"edgex/device/command/request/device-mqtt/#\" # subscribing for inbound command requests\nCommandResponseTopicPrefix = \"edgex/device/command/response\" # publishing outbound command responses; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\n\n[MQTTBrokerInfo]\nSchema = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nQos = 0\nKeepAlive = 3600\nClientId = \"device-mqtt\"\n\nCredentialsRetryTime = 120 # Seconds\nCredentialsRetryWait = 1 # Seconds\nConnEstablishingRetry = 10\nConnRetryWaitTime = 5\n\n# AuthMode is the MQTT broker authentication mechanism. Currently, \"none\" and \"usernamepassword\" is the only AuthMode supported by this service, and the secret keys are \"username\" and \"password\".\nAuthMode = \"none\"\nCredentialsPath = \"credentials\"\n\n# Comment out/remove when using multi-level topics\nIncomingTopic = \"DataTopic\"\nResponseTopic = \"ResponseTopic\"\nUseTopicLevels = false\n\n# Uncomment to use multi-level topics\n# IncomingTopic = \"incoming/data/#\"\n# ResponseTopic = \"command/response/#\"\n# UseTopicLevels = true\n\n[MQTTBrokerInfo.Writable]\n# ResponseFetchInterval specifies the retry interval(milliseconds) to fetch the command response from the MQTT broker\nResponseFetchInterval = 500\n
"},{"location":"design/adr/0026-Common%20Configuration/#modules-and-services-impacted","title":"Modules and Services Impacted","text":"The following modules and services are impacted:
Currently, in the Levski and earlier releases services can only load configuration, units of measurements, device profiles, device definitions, provision watches, etc. from the local file system. As outlined in the reference UCR, there is a need to be able to load files from a remote locations using URIs to specify the locations.
"},{"location":"design/adr/0027-URIs%20for%20Files/#proposed-design","title":"Proposed Design","text":"This ADR proposes a new helper function for loading files be added to go-mod-bootstrap
. This function will provide the logic for loading a file either from local file system (as is today) or from a remote location. As stated in the UCR, only HTTP and HTTPS URIs will be supported. For HTTPS, certificate validation will be performed using the system's built-in trust anchors. The docker images for all services will have the CA certs installed as is done here in App Service Configurable's Dockerfile.
While not recommended, users will be able to specify username-password (<username>:<password>@
) in the URI in plain text. While this is ok network wise when using HTTPS, it isn't good practice to have these credentials specified in configuration or other service files where the URI is specified.
Example plain text username-password
in URI located in configuration
[UoM]\nUoMFile = \"https://myuser:mypassword@example.com/uom.yaml\"\n
"},{"location":"design/adr/0027-URIs%20for%20Files/#secure-credentials","title":"Secure Credentials","text":"In order to provide a secure way for users to specify credentials, the edgexSecretName
query parameter can be specified on the URI. This parameter specifies a Secret Name from the service's Secret Store where the credentials reside and will be processed by the new helper function.
Example URI with edgexSecretName
query parameter
[UoM]\nUoMFile = \"https://example.com/uom.yaml?edgexSecretName=mySecretName\"\n
The type of authentication as well as the credentials will be contained in the secret data specified by the Secret Name. Only one type of authentication will be supported initially, which is httpheader
. The httpheader
type will accommodate various forms of authorization placed in the header. Others types can be added in the future when need is determined.
Note
Digest Auth will not be supported at this time. It can be added in the future based on feedback indicating its need.
When httpheader
is specified as the type in the secret data, the header name and contents from the secret data will be placed in the HTTP header.
Example secret data - Basic Auth
using httpheader
type=httpheader\nheadername=Authorization\nheadercontents=Basic bXl1c2VyOm15cGFzc3dvcmQ=\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\nAuthorization: Basic bXl1c2VyOm15cGFzc3dvcmQ=\n
Example secret data - API-Key
using httpheader
type=httpheader\nheadername=X-API-KEY\nheadercontents=abcdef12345\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\nX-API-KEY: abcdef12345\n
Example secret data - Bearer
using httpheader
type=httpheader\nheadername=Authorization\nheadercontents=Bearer eyJhbGciO...\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\nAuthorization: Bearer eyJhbGciO...\n
All Services will be impacted for enabling the loading the common configuration and private configuration files using URIs. This will be handled in go-mod-bootstrap's
processing of the -cc/--commonConfig
and -cf/--configFile
command line flags.
Core Metadata's loading of the UOM file will be adjusted to use the new file load function.
Device Service's loading of device profiles, device definitions and provision watchers files will be adjusted to load an index file specified by a URI in place of the configured folder name. The contents of the index file will be used to load the individual files by URI by appending the filenames to the original URI. Any authentication specified in the original URI will be used in the subsequent URIs.
Example DevicesDir configuration in service configuration
[Device]\n...\nProfilesDir = \"./res/profiles\"\nDevicesDir = \"http://example.com/devices/index.json\"\nProvisionWatchersDir = \"./res/provisionwatchers\"\n...\n
Example Device Index file http://example.com/devices/index.json
[\n\"device1.yaml\", \"device2.yaml\"\n]\n
Example resulting device file URIs from above example
http://example.com/devices/device1.yaml\nhttp://example.com/devices/device2.yaml\n
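The index-file expansion can be sketched with net/url's reference resolution, as below; this is illustrative only, and the real implementation would also carry over any edgexSecretName query parameter or credentials from the original URI.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	base, _ := url.Parse("http://example.com/devices/index.json")
	for _, name := range []string{"device1.yaml", "device2.yaml"} {
		ref, _ := url.Parse(name)
		// Resolve each index entry relative to the index file's URI.
		fmt.Println(base.ResolveReference(ref).String())
	}
	// Output:
	// http://example.com/devices/device1.yaml
	// http://example.com/devices/device2.yaml
}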
Other files (existing or future) not listed above may also be candidates for using this new URI capability. Those listed above are the most impactful for deployment at scale.
Implement as designed above
"},{"location":"design/adr/0027-URIs%20for%20Files/#other-related-adrs","title":"Other Related ADRs","text":"Approved
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#context","title":"Context","text":"Currently EdgeX Events are sent from Device Services via HTTP to Core Data, which then puts the Events on the MessageBus after optionally persisting them to the database. This ADR details how Device Services will send EdgeX Events to other services via the EdgeX MessageBus.
Note: Though this design is centered on device services, it does have cross cutting impacts with other EdgeX services and modules
Note: This ADR is dependent on the Secret Provider for All to provide the secrets for secure Message Bus connections.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#decision","title":"Decision","text":""},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#which-message-bus-implementations","title":"Which Message Bus implementations?","text":"Multiple Device Services may need to be publishing Events to the MessageBus concurrently. ZMQ
will not be a valid option if multiple Device Services are configured to publish. This is because ZMQ
only allows for a single publisher. ZMQ
will still be valid if only one Device Service is publishing Events. The MQTT
and Redis Streams
are valid options to use when multiple Device Services are required, as they both support multiple publishers. These are the only other implementations currently available for Go services. The C base device services do not yet have a MessageBus implementation. See the C Device SDK below for details.
Note: Documentation will need to be clear when ZMQ
can be used and when it can not be used.
The Go Device SDK will take advantage of the existing go-mod-messaging
module to enable use of the EdgeX MessageBus. A new bootstrap handler will be created which initializes the MessageBus client based on configuration. See Configuration section below for details. The Go Device SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.
The C Device SDK will implement its own MessageBus abstraction similar to the one in go-mod-messaging
. The first implementation type (MQTT or Redis Streams) is TBD. Using this abstraction allows for future implementations to be added when use cases warrant the additional implementations. As with the Go SDK, the C SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.
With this design, Events will be sent directly to Application Services w/o going through Core Data and thus will not be persisted unless changes are made to Core Data. To allow Events to optionally continue to be persisted, Core Data will become an additional or secondary (and optional) subscriber for the Events from the MessageBus. The Events will be persisted when they are received. Core Data will also retain the ability to receive Events via HTTP, persist them and publish them to the MessageBus as is done today. This allows for the flexibility to have some device services to be configured to POST Events and some to be configured to publish Events while we transition the Device Services to all have the capability to publishing Events. In the future, once this new Publish
approach has been proven, we may decide to remove POSTing Events to Core Data from the Device SDKs.
The existing PersistData
setting will be ignored by the code path subscribing to Events since the only reason to do this is to persist the Events.
There is a race condition for Marked As Pushed
when Core Data is persisting Events received from the MessageBus. Core Data may not have finished persisting an Event before the Application Service has processed the Event and requested the Event be Marked As Pushed
. It was decided to remove Mark as Pushed
capability and just rely on time based scrubbing of old Events.
As this development will be part of the Ireland release all Events published to the MessageBus will use the V2 Event DTO. This is already implemented in Core Data for the V2 AddEvent API.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#validation","title":"Validation","text":"Services receiving the Event DTO from the MessageBus will log validation errors and stop processing the Event.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#message-envelope","title":"Message Envelope","text":"EdgeX Go Services currently uses a custom Message Envelope for all data that is published to the MessageBus. This envelope wraps the data with metadata, which is ContentType
(JSON or CBOR), Correlation-Id
and the obsolete Checksum
. The Checksum
is used when the data is CBOR encoded to identify the Event in V1 API to be mark it as pushed. This checksum is no longer needed as the V2 Event DTO requires the ID be set by the Device Services which will always be used in the V2 API to mark the Events as pushed. The Message Envelope will be updated to remove this property.
The C SDK will recreate this Message Envelope.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services","title":"Application Services","text":"As part of the V2 API consumption work in Ireland the App Services SDK will be changed to expect to receive V2 Event DTOs rather than the V1 Event model. It will also be updated to no longer expect or use the Checksum
currently on the Message Envelope. Note these changes must occur for the V2 consumption and are not directly tied to this effort.
The App Service SDK will be enhanced for the secure MessageBus connection described below. See Secure Connections for details
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus-topics","title":"MessageBus Topics","text":"Note: The change recommended here is not required for this design, but it provides a good opportunity to adopt it.
Currently Core Data publishes Events to the simple events
topic. All Application Services running receive every Event published, whether they want them or not. The Events can be filtered out using the FilterByDeviceName
or FilterByResourceName
pipeline functions, but the Application Services still receives every Event and process all the Events to some extent. This could cause load issues in a deployment with many devices and large volume of Events from various devices or a very verbose device that the Application Services is not interested in.
Note: The current FilterByDeviceName
is only good if the device name is known statically and the only instance of the device defined by the DeviceProfileName
. What we really need is FilterByDeviceProfileName
which allows multiple instances of a device to be filtered for, rather than a single instance as it it now. The V2 API will be adding DeviceProfileName
to the Events, so in Ireland this filter will be possible.
Pub/Sub systems have advanced topic schema, which we can take advantage of from Application Services to filter for just the Events the Application Service actual wants. Publishers of Events must add the DeviceProfileName
, DeviceName
and SourceName
to the topic in the form edgex/events/<device-profile-name>/<device-name>/<source-name>
. The SourceName
is the Resource
or Command
name used to create the Event. This allows Application Services to filter for just the Events from the device(s) it wants by only subscribing to those DeviceProfileNames
or the specific DeviceNames
or just the specific SourceNames
Example subscribe topics if above schema is used:
Int16
device resource from devices created from the Random-Integer-Device device profile. HVACValues
device command from devices created from the Modbus-Device device profile.The MessageBus abstraction allows for multiple subscriptions, so an Application Service could specify to receive data from multiple specific device profiles or devices by creating multiple subscriptions. i.e. edgex/Events/Random-Integer-Device/#
and edgex/Events/Random-Boolean-Device/#
. Currently the App SDK only allows for a single subscription topic to be configured, but that could easily be expanded to handle a list of subscriptions. See Configuration section below for details.
Core Data's existing publishing of Events would also need to be changed to use this new topic schema. One challenge with this is Core Data doesn't currently know the DeviceProfileName
or DeviceName
when it receives a CBOR encoded event. This is because it doesn't decode the Event until after it has published it to the MessageBus. Also, Core Data doesn't know of SourceName
at all. The V2 API will be enhanced to change the AddEvent endpoint from /event
to /event/{profile}/{device}/{source}
so that DeviceProfileName
, DeviceName
, and SourceName
are always know no matter how the request is encoded.
This new topic approach will be enabled via each publisher's PublishTopic
having the DeviceProfileName
, DeviceName
and SourceName
added to the configured PublishTopicPrefix
PublishTopicPrefix = \"edgex/events\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n
See Configuration section below for details.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#configuration","title":"Configuration","text":""},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#device-services","title":"Device Services","text":"All Device services will have the following additional configuration to allow connecting and publishing to the MessageBus. As describe above in the MessageBus Topics section, the PublishTopic
will include the DeviceProfileName
and DeviceName
.
A MessageQueue section will be added, which is similar to that used in Core Data today, but with PublishTopicPrefix
instead of Topic
. To enable secure connections, the Username
& Password
have been replaced with ClientAuth & SecretPath
. See the Secure Connections section below for details. The added Enabled
property controls whether the Device Service publishes to the MessageBus or POSTs to Core Data.
[MessageQueue]\nEnabled = true\nProtocol = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nType = \"mqtt\"\nPublishTopicPrefix = \"edgex/events\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\n# Client Identifiers\nClientId =\"<device service key>\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\nClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert`\nSecretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#core-data","title":"Core Data","text":"Core data will also require additional configuration to be able to subscribe to receive Events from the MessageBus. As describe above in the MessageBus Topics section, the PublishTopicPrefix
will have DeviceProfileName
and DeviceName
added to create the actual Publish Topic.
The MessageQueue
section will be changed so that the Topic
property changes to PublishTopicPrefix
and SubscribeEnabled
and SubscribeTopic
will be added. As with device services configuration, the Username
& Password
have been replaced with ClientAuth
& SecretPath
for secure connections. See Secure Connections section below for details. In addition, the Boolean SubscribeEnabled
property will be used to control if the service subscribes to Events from the MessageBus or not.
[MessageQueue]\nProtocol = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nType = \"mqtt\"\nPublishTopicPrefix = \"edgex/events\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\nSubscribeEnabled = true\nSubscribeTopic = \"edgex/events/#\"\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\n# Client Identifiers\nClientId =\"edgex-core-data\"\n# Connection information\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\nClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert`\nSecretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services_1","title":"Application Services","text":""},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus","title":"[MessageBus]","text":"Similar to above, the Application Services MessageBus
configuration will change to allow for secure connection to the MessageBus. The Username
& Password
have been replaced with ClientAuth
& SecretPath
for secure connections. See Secure Connections section below for details.
[MessageBus.Optional]\n# MQTT Specific options\n# Client Identifiers\nClientId =\"<app service key>\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\nClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert`\nSecretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#binding","title":"[Binding]","text":"The Binding
configuration section will require changes for the subscribe topics scheme described in the MessageBus Topics section above to filter for Events from specific device profiles or devices. SubscribeTopic
will change from a string property containing a single topic to the SubscribeTopics
string property containing a comma-separated list of topics. This allows the flexibility for the property to be a single topic with the
wildcard so the Application Service receives all Events as it does today.
Receive only Events from the Random-Integer-Device
and Random-Boolean-Device
profiles
[Binding]\nType=\"messagebus\"\nSubscribeTopics=\"edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device\"\n
Receive only Events from the Random-Integer-Device1
from the Random-Integer-Device
profile [Binding]\nType=\"messagebus\"\nSubscribeTopics=\"edgex/events/Random-Integer-Device/Random-Integer-Device1\"\n
or receive all Events:
[Binding]\nType=\"messagebus\"\nSubscribeTopics=\"edgex/events/#\"\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#secure-connections","title":"Secure Connections","text":"As stated earlier, this ADR is dependent on the Secret Provider for All ADR to provide a common Secret Provider for all Edgex Services to access their secrets. Once this is available, the MessageBus connection can be secured via the following configurable client authentications modes which follows similar implementation for secure MQTT Export and secure MQTT Trigger used in Application Services.
Secret Provider
using the configured SecretPath
. How the secrets are injected into the Secret Provider
is out of scope for this ADR and covered in the Secret Provider for All ADR.
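To make the configurable client authentication modes concrete, here is a hedged Go sketch of how a service might apply the ClientAuth and SecretPath settings when building its MessageBus connection options. The types and function names are hypothetical and are not the actual go-mod-messaging or SDK code.

```go
package main

import (
	"errors"
	"fmt"
)

// secretGetter is a hypothetical stand-in for the Secret Provider's GetSecrets API.
type secretGetter func(path string) (map[string]string, error)

// connectionOptions is an illustrative container for the optional MQTT settings.
type connectionOptions map[string]string

// applyClientAuth fills in credentials or certificates based on the configured
// ClientAuth mode: `none`, `usernamepassword` or `clientcert`.
func applyClientAuth(clientAuth, secretPath string, getSecrets secretGetter, opts connectionOptions) error {
	switch clientAuth {
	case "none", "":
		return nil // no secrets needed
	case "usernamepassword":
		secrets, err := getSecrets(secretPath)
		if err != nil {
			return err
		}
		opts["Username"] = secrets["username"]
		opts["Password"] = secrets["password"]
		return nil
	case "clientcert":
		secrets, err := getSecrets(secretPath)
		if err != nil {
			return err
		}
		opts["ClientCert"] = secrets["clientcert"]
		opts["ClientKey"] = secrets["clientkey"]
		return nil
	default:
		return errors.New("invalid ClientAuth value: " + clientAuth)
	}
}

func main() {
	// Stubbed secret lookup so the sketch runs on its own.
	getSecrets := func(path string) (map[string]string, error) {
		return map[string]string{"username": "user", "password": "pass"}, nil
	}
	opts := connectionOptions{}
	if err := applyClientAuth("usernamepassword", "messagebus", getSecrets, opts); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(opts)
}
```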
ZMQ
or Redis Streams
then there must be a MQTT Broker running when a C Device service is in use and configured to publish to the MessageBus. Since the topics contain the DeviceProfileName
and DeviceName
, the V2 API must restrict the characters used in device names to those allowed in a topic. An issue for the V2 API already exists for restricting the allowable characters to RFC 3986, which will suffice.
InsecureSecrets
SecretProvider
reside?
Approved
"},{"location":"design/adr/014-Secret-Provider-For-All/#context","title":"Context","text":"This ADR defines the new SecretProvider
abstraction that will be used by all EdgeX services, including Device Services. The Secret Provider is used by services to retrieve secrets from the Secret Store. The Secret Store, in secure mode, is currently Vault. In non-secure mode it is configuration in some form, i.e. DatabaseInfo
configuration or InsecureSecrets
configuration for Application Services.
The Secret Provider abstraction defined in this ADR is based on the Secret Provider abstraction implementations in the Application Functions SDK (App SDK) for Application Services and the one in go-mod-bootstrap (Bootstrap) used by the Core, Support & Security services in edgex-go. Device Services do not currently use secure secrets. The App SDK implementation was initially based on the Bootstrap implementation.
The similarities and differences between these implementations are:
SecretClient
from go-mod-secretsSecretClient
based on the SecretStore
configuration(s)GetDatabaseCredentials
APICredentialsProvider
& CertificateProvider
) while the App SDK's use a single interface (SecretProvider
) for the abstraction GetCertificateKeyPair
API, which the App SDK's does notInitialize
API (Bootstrap's initialization is done by the bootstrap handler)StoreSecrets
API GetSecrets
APIInsecureSecretsUpdated
APISecretsLastUpdated
APISecretClient
for the Application Service instance's exclusive secrets.StoreSecrets
& GetSecrets
APIsSecretClient
is considered the shared client for secrets that all Application Service instances share. It is only used by the GetDatabaseCredentials
APIInsecureSecrets
A secret is a collection of key/value pairs stored in a SecretStore
at specified path whose values are sensitive in nature. Redis database credentials are an example of a Secret
which contains the username
and password
key/values stored at the redisdb
path.
Service Exclusive secrets are those that are exclusive to the instance of the running service. An example of exclusive secrets are the HTTP Auth tokens used by two running instances of app-service-configurable (http-export) which export different device Events to different endpoints with different Auth tokens in the HTTP headers. Service Exclusive secrets are seeded by POSTing the secrets to the /api/vX/secrets
endpoint on the running instance of each Application Service.
Service Shared secrets are those that all instances of a class of service, such as Application Services, share. Think of Core Data as its own class of service. An example of shared secrets are the database credentials for the single database instance for Store and Forward data that all Application Services may need to access. Another example is the database credentials for each instance of Core Data. It is shared, but only one instance of Core Data is currently ever run. Service Shared secrets are seeded by security-secretstore-setup using static configuration for static secrets for known services. Currently database credentials are the only shared secrets. In the future we may have Message Bus credentials as shared secrets, but these will be truly shared secrets for all services to securely connect to the Message Bus, not just shared between instances of a service.
Application Services currently have the ability to configure SecretStores
for Service Exclusive and/or Service Shared secrets depending on their needs.
Known Services are those identified in the static configuration by security-secretstore-setup
These currently are Core Data, Core Metadata, Support Notifications, Support Scheduler and Application Service (class)
Unknown Services are those not known in the static configuration that become known when added to the Docker compose file or Snap.
Application Service (instance) are examples of these services.
Service exclusive SecretStore
can be created for these services by adding the services' unique name , i.e. appservice-http-export, to the EDGEX_ADD_SECRETSTORE_TOKENS
environment variable for security-secretstore-setup
EDGEX_ADD_SECRETSTORE_TOKENS: \"appservice-http-export, appservice-mqtt-export\"\n
This creates an exclusive secret store token for each service listed. The name provided for each service must be used in the service's SecretStore
configuration and Docker volume mount (if applicable). Typically the configuration is set via environment overrides or is already in an existing configuration profile (http-export profile for app-service-configurable).
Example docker-compose file entries:
environment:\n...\nSecretStoreExclusive_Path: \"/v1/secret/edgex/appservice-http-export/\"\nTokenFile: \"/tmp/edgex/secrets/appservice-http-export/secrets-token.json\"\n\nvolumes:\n...\n- /tmp/edgex/secrets/appservice-http-export:/tmp/edgex/secrets/appservice-http-export:ro,z\n
Database credentials are currently the only secrets of this type
Runtime Secrets are those not known in the static configuration and that become known during run time. These secrets are seeded at run time via the Application Services /api/vX/secrets
endpoint
type CredentialsProvider interface {\nGetDatabaseCredentials(database config.Database) (config.Credentials, error)\n}\n
and
type CertificateProvider interface {\nGetCertificateKeyPair(path string) (config.CertKeyPair, error)\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods","title":"Factory and bootstrap handler methods","text":"type SecretProvider struct {\nsecretClient pkg.SecretClient\n}\n\nfunc NewSecret() *SecretProvider {\nreturn &SecretProvider{}\n}\n\nfunc (s *SecretProvider) BootstrapHandler(\nctx context.Context,\n_ *sync.WaitGroup,\nstartupTimer startup.Timer,\ndic *di.Container) bool {\n...\nIntializes the SecretClient and adds it to the DIC for both interfaces.\n...\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#app-sdks-current-implementation","title":"App SDK's current implementation","text":""},{"location":"design/adr/014-Secret-Provider-For-All/#interface","title":"Interface","text":"type SecretProvider interface {\nInitialize(_ context.Context) bool\nStoreSecrets(path string, secrets map[string]string) error\nGetSecrets(path string, _ ...string) (map[string]string, error)\nGetDatabaseCredentials(database db.DatabaseInfo) (common.Credentials, error)\nInsecureSecretsUpdated()\nSecretsLastUpdated() time.Time\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods_1","title":"Factory and bootstrap handler methods","text":"type SecretProviderImpl struct {\nSharedSecretClient pkg.SecretClient\nExclusiveSecretClient pkg.SecretClient\nsecretsCache map[string]map[string]string // secret's path, key, value\nconfiguration *common.ConfigurationStruct\ncacheMuxtex *sync.Mutex\nloggingClient logger.LoggingClient\n//used to track when secrets have last been retrieved\nLastUpdated time.Time\n}\n\nfunc NewSecretProvider(\nloggingClient logger.LoggingClient, configuration *common.ConfigurationStruct) *SecretProviderImpl {\nsp := &SecretProviderImpl{\nsecretsCache: make(map[string]map[string]string),\ncacheMuxtex: &sync.Mutex{},\nconfiguration: configuration,\nloggingClient: loggingClient,\nLastUpdated: time.Now(),\n}\n\nreturn sp\n}\n
type Secrets struct {\n}\n\nfunc NewSecrets() *Secrets {\nreturn &Secrets{}\n}\n\nfunc (_ *Secrets) BootstrapHandler(\nctx context.Context,\n_ *sync.WaitGroup,\nstartupTimer startup.Timer,\ndic *di.Container) bool {\n...\nCreates NewNewSecretProvider, calls Initailizes() and adds it to the DIC\n...\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#secret-store-for-non-secure-mode","title":"Secret Store for non-secure mode","text":"Both Bootstrap's and App SDK's implementation use the DatabaseInfo
configuration for GetDatabaseCredentials
API in non-secure mode. The App SDK only uses it, for backward compatibility, if the database credentials are not found in the new InsecureSecrets
configuration section. For Ireland it was planned to only use the new InsecureSecrets
configuration section in non-secure mode.
Note: Redis credentials are blank
in non-secure mode
Core Data
[Databases]\n[Databases.Primary]\nHost = \"localhost\"\nName = \"coredata\"\nUsername = \"\"\nPassword = \"\"\nPort = 6379\nTimeout = 5000\nType = \"redisdb\"\n
Application Services
[Database]\nType = \"redisdb\"\nHost = \"localhost\"\nPort = 6379\nUsername = \"\"\nPassword = \"\"\nTimeout = \"30s\"\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#insecuresecrets-configuration","title":"InsecureSecrets Configuration","text":"The App SDK defines a new Writable
configuration section called InsecureSecrets
. This structure mimics that of the secure SecretStore
when EDGEX_SECURITY_SECRET_STORE
environment variable is set to false
. Having the InsecureSecrets
in the Writable
section allows for the secrets to be updated without restarting the service. Some minor processing must occur when the InsecureSecrets
section is updated. This is to call the InsecureSecretsUpdated
API. This API simply sets the time the secrets were last updated. The SecretsLastUpdated
API returns this timestamp so pipeline functions that use credentials for exporting know if their client needs to be recreated with new credentials, e.g. MQTT export.
type WritableInfo struct {\nLogLevel string\n...\nInsecureSecrets InsecureSecrets\n}\n\ntype InsecureSecrets map[string]InsecureSecretsInfo\n\ntype InsecureSecretsInfo struct {\nPath string\nSecrets map[string]string\n}\n
[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.DB]\npath = \"redisdb\"\n[Writable.InsecureSecrets.DB.Secrets]\nusername = \"\"\npassword = \"\"\n[Writable.InsecureSecrets.mqtt]\npath = \"mqtt\"\n[Writable.InsecureSecrets.mqtt.Secrets]\nusername = \"\"\npassword = \"\"\ncacert = \"\"\nclientcert = \"\"\nclientkey = \"\"\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#decision","title":"Decision","text":"The new SecretProvider
abstraction defined by this ADR is a combination of the two implementations described above in the Existing Implementations section.
To simplify the SecretProvider
abstraction, we need to reduce to using only exclusive SecretStores
. This allows all the APIs to deal with a single SecretClient
, rather than the split up way we currently have in Application Services. This requires that the current Application Service shared secrets (database credentials) must be copied into each Application Service's exclusive SecretStore
when it is created.
The challenge is how to seed static secrets for unknown services when they become known. As described in the Known and Unknown Services section above, services currently identify themselves for exclusive SecretStore
creation via the EDGEX_ADD_SECRETSTORE_TOKENS
environment variable on security-secretstore-setup. This environment variable simply takes a comma separated list of service names.
EDGEX_ADD_SECRETSTORE_TOKENS: \"<service-name1>,<service-name2>\"\n
If we expanded this to add an optional list of static secret identifiers for each service, i.e. appservice/redisdb
, the exclusive store could also be seeded with a copy of static shared secrets, in this case the Redis database credentials for the Application Services' shared database. The environment variable name will change to ADD_SECRETSTORE
now that it is more than just tokens.
ADD_SECRETSTORE: \"app-service-xyz[appservice/redisdb]\"\n
Note: The secret identifier here is the short path to the secret in the existing appservice SecretStore
. In the above example this expands to the full path of /secret/edgex/appservice/redisdb
The above example results in the Redis credentials being copied into app-service-xyz's SecretStore
at /secret/edgex/app-service-xyz/redis
.
A similar approach could be taken for Message Bus credentials where a common SecretStore
is created with the Message Bus credentials saved. The services request that the credentials be copied into their exclusive SecretStore
using common/messagebus
as the secret identifier.
Full specification for the environment variable's value is a comma separated list of service entries defined as:
<service-name1>[optional list of static secret IDs sperated by ;],<service-name2>[optional list of static secret IDs sperated by ;],...\n
Example with one service specifying IDs for static secrets and one without static secrets
ADD_SECRETSTORE: \"appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export\"\n
When the ADD_SECRETSTORE
environment variable is processed to create these SecretStores
, it will copy the specified saved secrets from the initial SecretStore
into the service's SecretStore
. This all depends on the completion of database or other credential bootstrapping and the secrets having been stored prior to the environment variable being processed. security-secretstore-setup will need to be refactored to ensure this sequencing.
The following will be the new SecretProvider
abstraction interface used by all Edgex services
type SecretProvider interface {\n// Stores new secrets into the service's exclusive SecretStore at the specified path.\nStoreSecrets(path string, secrets map[string]string) error\n// Retrieves secrets from the service's exclusive SecretStore at the specified path.\nGetSecrets(path string, _ ...string) (map[string]string, error)\n// Sets the secrets lastupdated time to current time. \nSecretsUpdated()\n// Returns the secrets last updated time\nSecretsLastUpdated() time.Time\n}\n
Note: The GetDatabaseCredentials
and GetCertificateKeyPair
APIs have been removed. These are no longer needed since insecure database credentials will no longer be stored in the DatabaseInfo
configuration and certificate key pairs are secrets like any others. This allows these secrets to be retrieved via the GetSecrets
API.
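A hedged example of what replacing a GetDatabaseCredentials call with the generic GetSecrets API might look like. The secret path and key names follow the redisdb example earlier in this ADR; the surrounding code is illustrative only and not the actual bootstrap implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// SecretProvider mirrors the abstraction proposed above; only GetSecrets is used here.
type SecretProvider interface {
	GetSecrets(path string, keys ...string) (map[string]string, error)
}

// credentials is a simple holder for what GetDatabaseCredentials used to return.
type credentials struct {
	Username string
	Password string
}

// databaseCredentials retrieves the database credentials from the "redisdb"
// secret path instead of using the removed GetDatabaseCredentials API.
func databaseCredentials(provider SecretProvider) (credentials, error) {
	secrets, err := provider.GetSecrets("redisdb", "username", "password")
	if err != nil {
		return credentials{}, err
	}
	username, ok := secrets["username"]
	if !ok {
		return credentials{}, errors.New("username not found in redisdb secret")
	}
	password, ok := secrets["password"]
	if !ok {
		return credentials{}, errors.New("password not found in redisdb secret")
	}
	return credentials{Username: username, Password: password}, nil
}

// fakeProvider is a stub used only so this sketch runs on its own.
type fakeProvider struct{}

func (fakeProvider) GetSecrets(path string, keys ...string) (map[string]string, error) {
	return map[string]string{"username": "app", "password": "secret"}, nil
}

func main() {
	creds, err := databaseCredentials(fakeProvider{})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("connecting as %s\n", creds.Username)
}
```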
The factory method and bootstrap handler will follow the current Bootstrap implementation with some tweaks. Rather than putting the two split interfaces into the DIC, it will put just the single interface instance into the DIC. See details in the Interfaces and factory methods section above under Existing Implementations.
"},{"location":"design/adr/014-Secret-Provider-For-All/#caching-of-secrets","title":"Caching of Secrets","text":"Secrets will be cached as they are currently in the Application Service implementation
"},{"location":"design/adr/014-Secret-Provider-For-All/#insecure-secrets","title":"Insecure Secrets","text":"Insecure Secrets will be handled as they are currently in the Application Service implementation. DatabaseInfo
configuration will no longer be an option for storing the insecure database credentials. They will be stored in the InsecureSecrets
configuration only.
[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.DB]\npath = \"redisdb\"\n[Writable.InsecureSecrets.DB.Secrets]\nusername = \"\"\npassword = \"\"\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#handling-on-the-fly-changes-to-insecuresecrets","title":"Handling on-the-fly changes to InsecureSecrets
","text":"All services will need to handle the special processing when InsecureSecrets
are changed on-the-fly via Consul. Since this will now be a common configuration item in Writable
it can be handled in go-mod-bootstrap
along with existing log level processing. This special processing will be taken from App SDK.
Proper mock of the SecretProvider
interface will be created with Mockery
to be used in unit tests. The current mock in the App SDK is hand written rather than generated with Mockery
.
SecretProvider
reside?","text":""},{"location":"design/adr/014-Secret-Provider-For-All/#go-services","title":"Go Services","text":"The final decision to make is where will this new SecretProvider
abstraction reside? Originally it was assumed that it would reside in go-mod-secrets
, which seems logical. If we were to attempt this with the implementation including the bootstrap handler, go-mod-secrets
would have a dependency on go-mod-bootstrap
which will likely create a circular dependency.
Refactoring the existing implementation in go-mod-bootstrap
and having it reside there now seems to be the best choice.
The C Device SDK will implement the same SecretProvider
abstraction, InsecureSecrets configuration and the underlying SecretStore
client.
Writable.InsecureSecrets
section added to their configurationInsecureSecrets
definition will be moved from App SDK to go-mod-bootstrapSecretStore
configuration section will be added to all Device ServicesSecretProvider
interface from the DIC in place of current usage of the GetDatabaseCredentials
and GetCertificateKeyPair
interfaces.GetDatabaseCredentials
and GetCertificateKeyPair
will be replaced with calls to GetSecrets
API and appropriate processing of the returned secrets will be added. GetSecrets
API in place of the GetDatabaseCredentials
APISecretProvider
bootstrap handlerSecretStoreExclusive
configuration and just use the existing SecretStore
configurationSecretStore
requires stopping and restarting all the services. This is because security-secretstore-setup has completed but not stopped. If it is rerun without stopping the other services, their tokens and static secrets will have changed. The planned refactor of security-secretstore-setup
will attempt to resolve this.
This design involves creating a new Application Service that is responsible for the requirements in the above-referenced UCR. This document is created as a means of formal design review.
"},{"location":"design/adr/application/0025-Record-and-Replay/#proposed-design","title":"Proposed Design","text":"A new Application Service will be created with a RESTful API to handle the Record, Replay, Export and Import capabilities. An Application Service has been chosen since the Record capability requires a service that can connect to the MessageBus and consume Events over a long period of time (just like other App Services). The service will not create or start a Functions Pipeline on start-up as normally done in Application Services. It will wait until the Record request has been received. Once the recording is complete the Functions Pipeline will be stopped.
Note
Application Services do not receive data when the Functions Pipelines are stopped.
"},{"location":"design/adr/application/0025-Record-and-Replay/#record-endpoint","title":"Record Endpoint","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#post","title":"POST","text":"This POST
API will start recording data as specified in the request Data Transfer Object (DTO) defined below. The request handler will validate the DTO and then create a new Functions Pipeline and start it to process incoming data. An error is returned if a recording is already in progress.
The Functions Pipeline will contain the following pipeline functions in the following order
The async function receiving the data will first stop the Functions Pipeline and then save the data for later replay and/or export. It will also determine the list of unique Device Profile and Device Names from the data and store them alongside the recorded data. Since app services can receive Events out of order per their timestamps, the saved Event data must be sorted by the Event timestamps. All data will be saved in in-memory storage.
Note
Starting a new recording will overwrite any previous recorded data.
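Since Events can arrive out of order, the recorded data must be sorted by Event timestamp before it is stored. A minimal Go sketch of that step is shown below, using a simplified Event stand-in rather than the real core-contracts DTO.

```go
package main

import (
	"fmt"
	"sort"
)

// recordedEvent is a simplified stand-in for the EdgeX Event DTO; Origin is the
// event's creation timestamp, which is what the sort is based on.
type recordedEvent struct {
	DeviceName string
	Origin     int64
}

// sortByTimestamp orders the captured Events oldest-to-newest so replay can
// step through them in the order they were originally produced.
func sortByTimestamp(events []recordedEvent) {
	sort.Slice(events, func(i, j int) bool {
		return events[i].Origin < events[j].Origin
	})
}

func main() {
	captured := []recordedEvent{
		{DeviceName: "Random-Integer-Device1", Origin: 1700000000300},
		{DeviceName: "Random-Integer-Device1", Origin: 1700000000100},
		{DeviceName: "Random-Boolean-Device1", Origin: 1700000000200},
	}
	sortByTimestamp(captured)
	fmt.Println(captured) // now ordered by Origin
}
```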
"},{"location":"design/adr/application/0025-Record-and-Replay/#record-request-dto","title":"Record Request DTO","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#duration","title":"Duration","text":"Time duration in which to record data. Required if Event Limit is not specified.
"},{"location":"design/adr/application/0025-Record-and-Replay/#event-limit","title":"Event Limit","text":"Maximum number Events
to record. Required if Duration is not specified
Optional list of Device Profile Names to filter for
"},{"location":"design/adr/application/0025-Record-and-Replay/#include-device-names","title":"Include Device Names","text":"Optional list of Device Names to filter for
"},{"location":"design/adr/application/0025-Record-and-Replay/#exclude-device-profile-names","title":"Exclude Device Profile Names","text":"Optional list of Device Profile Names to filter out
"},{"location":"design/adr/application/0025-Record-and-Replay/#exclude-device-names","title":"Exclude Device Names","text":"Optional list of Device Names to filter out
"},{"location":"design/adr/application/0025-Record-and-Replay/#delete","title":"DELETE","text":"The DELETE
API will cancel current in progress recording. An error is returned if a recording is not in progress.
This GET
API will return the status of Record. If Record is not active the status will be for the last Record session that was run. The API response will be the following DTO:
Boolean indicating if Record is in progress or not.
"},{"location":"design/adr/application/0025-Record-and-Replay/#event-count","title":"Event Count","text":"Count of Events that have been captured. 0 if not running and no past Record has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#duration_1","title":"Duration","text":"Duration that the recording has been active. 0 if not running and no past Record has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#replay-endpoint","title":"Replay endpoint","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#post_1","title":"POST","text":"This POST
API will start replaying the recorded data as specified in the request Data Transfer Object (DTO) defined below. An error is retuned is there is already a replay session in progress. The request handler will validate the DTO and that the appropriate Device Profiles and Devices from the data exist. It will then start an async Go function to handle the replay so the request doesn't timeout on long replays.
The replay async Go function will use the Background Publishing capability to send the recorded Events to the EdgeX MessageBus using the same publish topic scheme used by Device Services, which is edgex/events/device/<device-profile-name>/<device-name>/<source-name>
. The App SDK has the Publish Topic Placeholders capability built-in to facilitate this. The data for these topics is available from the Event DTO. The timestamps in the Events and Readings published will be set to the current date/time. This requires a copy be made of the Event/Readings as they are published in order to not corrupt the original data.
Once the first event is published the replay function will calculate the wait time to use before sending the next Event from the recorded data. This will be based on the time difference from the original timestamp of the previous event published and the timestamp of the next event multiplied by the inverse of the Replay Rate
specified in the request DTO.
Examples - Replay Rate wait time calculation
Delta time between original Events is 800ms Replay rate is 2.0 (100% faster) making wait time 400ms (800ms * (1 / 2.0)) Replay rate is 0.5 (100% slower) making wait time 1600ms (800ms * (1 / 0.5))
The replay function will repeat publishing the recorded data per the Repeat Count
in from the DTO.
Required rate at which to replay the data compared to the rate the data was recorded. Float value greater than 0 where 1 is the same rate, less than 1 is slower rate and greater than 1 is faster rate than the rate the data was recorded.
"},{"location":"design/adr/application/0025-Record-and-Replay/#repeat-count","title":"Repeat Count","text":"Optional count of number of times to repeat the replay. Defaults to 1 if not specified or is set to 0.
"},{"location":"design/adr/application/0025-Record-and-Replay/#delete_1","title":"DELETE","text":"This DELETE
API will cancel current in progress replay. An error is returned if a replay is not in progress.
This GET
API will return the status of Replay. If Replay is not active the status will be for the last Replay that was run. The API response will be the following DTO:
Boolean indicating if a Replay is in progress or not
"},{"location":"design/adr/application/0025-Record-and-Replay/#event-count_1","title":"Event Count","text":"Count of Events that have been replayed. 0 if not running and no past Replay has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#duration_2","title":"Duration","text":"Duration that the Replay has been active. 0 if not running and no past Replay has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#repeat-count_1","title":"Repeat Count","text":"Count of repeats. Value indicates the Replay in progress or competed. 0 if not running and no past Replay has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#download-endpoint-export","title":"Download endpoint (Export)","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#get_2","title":"GET","text":"This GET
API will request that the previously recorded data be exported as a file download. It will accept an optional query parameter to specify compression (NONE, ZIP or GZIP). An error is returned if no data has been recorded or invalid compression type requested.
The file content will be the Recorded Data DTO as define below. The request handler will build the DTO described below by extracting the recorded Events
from in-memory storage, pulling the referenced Device Profiles
and Devices
from Core Metadata using the names from in-memory storage. The file extension used will be .json
, .zip
or .gzip
depending on the compression selected.
List of Events
(with Readings
) that were recorded
List of Device Profiles
(complete profiles) that are referenced in the recorded Events
List of Device defintions
that are referenced in the recorded Events
This POST
API will upload previously exported recorded data file. It will accept an optional Boolean query parameter to specify to not overwrite existing Device Profiles and/or Devices if they already exist. Default is to overwrite existing with those captured with the recorded data.
The request handler will receive the file as a Recorded Data DTO described above and detect if it is compressed and un-compress the contents if needed before un-marshaling the JSON into the DTO. The compression will be determined based the Content-Encoding
from the request header. The Event
data from the DTO will then be saved to the in-memory storage along with the Device Profile and Device Names. The Device Profiles
and Devices
will be pushed to Core Metadata if they don't exist or if overwrite is enabled.
Note
Import will overwrite any previous recorded data.
"},{"location":"design/adr/application/0025-Record-and-Replay/#considerations","title":"Considerations","text":"Implement this design as outlined above using a RESTful API and in-memory storage
"},{"location":"design/adr/application/0025-Record-and-Replay/#other-related-adrs","title":"Other Related ADRs","text":"Accepted by EdgeX Foundry working groups as of Core Working Group meeting 16-Jan-2020
Note
This ADR was written pre-Geneva with an assumption that the V2 APIs would be available in Geneva. In actuality, the full V2 APIs will be delivered in the Ireland release (Spring 2020)
"},{"location":"design/adr/core/0003-V2-API-Principles/#context","title":"Context","text":"A redesign of the EdgeX Foundry API is proposed for the Geneva release. This is understood by the community to warrant a 2.0 release that will not be backward compatible. The goal is to rework the API using solid principles that will allow for extension over the course of several release cycles, avoiding the necessity of yet another major release version in a short period of time.
Briefly, this effort grew from the acknowledgement that the current models used to facilitate requests and responses via the EdgeX Foundry API were legacy definitions that were once used as internal representations of state within the EdgeX services themselves. Thus if you want to add or update a device, you populate a full device model rather than a specific Add/UpdateDeviceRequest. Currently, your request model has the same definition, and thus validation constraints, as the response model because they are one and the same! It is desirable to separate and be specific about what is required for a given request, as well as its state validity, and the bare minimum that must be returned within a response.
Following from that central need, other considerations have been used when designing this proposed API. These will be enumerated and briefly explained below.
1.) Transport-agnostic Define the request/response data transfer objects (DTO) in a manner whereby they can be used independent of transport. For example, although an OpenAPI doc is implicitly coupled to HTTP/REST, define the DTOs in such a way that they could also be used if the platform were to evolve to a pub/sub architecture.
2.) Support partial updates via PATCH Given a request to, for example, update a device the user should be able to update only some properties of the device. Previously this would require an endpoint for each individual property to be updated since the \"update device\" endpoint, facilitated by a PUT, would perform a complete replacement of the device's data. If you only wanted to update the LastConnected timestamp, then a separate endpoint for that property was required. We will leverage PATCH in order to update an entity and only those properties populated on the request will be considered. Properties that are missing or left blank will not be touched.
3.) Support multiple requests at once Endpoints for the addition or updating of data (POST/PATCH) should accept multiple requests at once. If it were desirable to add or update multiple devices with one request, for example, the API should facilitate this.
4.) Support multiple correlated responses at once Following from #3 above, each request sent to the endpoint must result in a corresponding response. In the case of HTTP/REST, this means if four requests are sent to a POST operation, the return payload will have four responses. Each response must expose a \"code\" property containing a numeric result for what occurred. These could be equivalent to HTTP status codes, for example. So while the overall call might succeed, one or more of the child requests may not have. It is up to the caller to examine each response and handle accordingly.
In order to correlate each response to its original request, each request must be assigned its own ID (in GUID format). The caller can then tie a response to an individual request and handle the result accordingly, or otherwise track that a response to a given request was not received.
5.) Use of 207 HTTP Status (Multi-Result) In the case where an endpoint can support multiple responses, the returned HTTP code from a REST API will be 207 (Multi-status)
6.) Each service should provide a \"batch\" request endpoint In addition to use-case specific endpoints that you'd find in any REST API, each service should provide a \"batch\" endpoint that can take any kind of request. This is a generic endpoint that allows you to group requests of different types within a single call. For example, instead of having to call two endpoints to get two jobs done, you can call a single endpoint passing the specific requests and have them routed appropriately within the service. Also, when considering agnostic transport, the batch endpoint would allow for the definition and handling of \"GET\" equivalent DTOs which are now implicit in the format of a URL.
7.) GET endpoints returning a list of items must support pagination URL parameters must be supported for every GET endpoint to support pagination. These parameters should indicate the current page of results and the number of results on a page.
"},{"location":"design/adr/core/0003-V2-API-Principles/#decision","title":"Decision","text":"Commnunity has accepted the reasoning for the new API and the design principles outlined above. The approach will be to gradually implement the V2 API side-by-side with the current V1 APIs. We believe it will take more than a single release cycle to implement the new specification. Releases of that occur prior to the V2 API implementation completion will continue to be major versioned as 1.x. Subsequent to completion, releases will be major versioned as 2.x.
"},{"location":"design/adr/core/0003-V2-API-Principles/#consequences","title":"Consequences","text":"Approved (by TSC vote on 10/6/21)
"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#context","title":"Context","text":"This ADR presents a technical plan for creation of a 2.0 version of edgex-cli which supports the new V2 REST APIs developed as part of the Ireland release of EdgeX.
"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#existing-behavior","title":"Existing Behavior","text":"The latest version of edgex-cli (1.0.1) only supports the V1 REST APIs and thus cannot be used with V2 releases of EdgeX.
As the edgex-cli was developed organically over time, the current implementation has a number of bugs mostly involving a lack of consistent behavior, especially with respect to formatting of output.
Other issues with the existing client include:
The original Hanoi V1 client was created by a team at VMWare which is no longer participating in the project. Canonical will lead the development of the Ireland/Jakarta V2 client.
"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#decision","title":"Decision","text":"-d
, --debug
show additional output for debugging purposes (e.g. REST URL, request JSON, \u2026). This command-line arg will replace -v, --verbose and will no longer trigger output of the response JSON (see -j, --json). -j
, --json
output the raw JSON response returned by the EdgeX REST API and nothing else. This output mode is used for script-based usage of the client. --version
output the version of the client and if available, the version of EdgeX installed on the system (using the version of the metadata data service) Restructure the Go code hierarchy to follow the most recent recommended guidelines. For instance /cmd should just contain the main application for the project, not an implementation for each command - that should be in /internal/cmd
Take full advantage of the features of the underlying command-line library, Cobra, such as tab-completion of commands.
Allow overlap of command names across services by supporting an argument to specify the service to use: -m/--metadata
, -c/--command
, -n/--notification
, -s/--scheduler
or --data
(which is the default). Examples:
edgex-cli ping --data
edgex-cli ping -m
edgex-cli version -c
Implement all required V2 endpoints for core services
Core Command - edgex-cli command
read | write | list
Core Data - edgex-cli event
add | count | list | rm | scrub**
- edgex-cli reading
count | list
Metadata - edgex-cli device
add | adminstate | list | operstate | rm | update
- edgex-cli deviceprofile
add | list | rm | update
- edgex-cli deviceservice
add | list | rm | update
- edgex-cli provisionwatcher
add | list | rm | update
Support Notifications - edgex-cli notification
add | list | rm
- edgex-cli subscription
add | list | rm
Support Scheduler - edgex-cli interval
add | list | rm | update
**Common endpoints in all services**\n- **`edgex-cli version`**\n- **`edgex-cli ping`**\n- **`edgex-cli metrics`**\n- **`edgex-cli status`**\n\nThe commands will support arguments as appropriate. For instance:\n- `event list` using `/event/all` to return all events\n- `event list --device {name}` using `/event/device/name/{name}` to return the events sourced from the specified device.\n
Currently, some commands default to always displaying GUIDs in objects when they're not really needed. Change this so that by default GUIDs aren't displayed, but add a flag which causes them to be displayed.
scrub may not work with Redis being secured by default. That might also apply to the top-level db
command (used to wipe the entire db). If so, then the commands will be disabled in secure mode, but permitted in non-secure mode.
Have built-in defaults with port numbers for all core services and allow overrides, avoiding the need for static configuration file or configuration provider.
(Stretch) implement a -o
/--output
argument which could be used to customize the pretty-printed objects (i.e. non-JSON).
(Stretch) Implement support for use of the client via the API Gateway, including being able to connect to a remote EdgeX instance. This might require updates in go-mod-core-contracts.
** Approved ** By TSC Vote on 2/14/22
Please see a prior PR on this topic that detailed much of the debate and context on this issue. For clarity and simplicity, that PR was closed in favor of this simpler ADR.
"},{"location":"design/adr/core/0021-Device-Profile-Changes/#context","title":"Context","text":"While the device profile has always been the way to describe a device/sensor and template its communications to the rest of the EdgeX platform, over the course of EdgeX evolution there have been changes in what could change in a profile (often based on its associations to other EdgeX objects). This document is meant to address the issue of change surrounding device profiles in EdgeX going forward \u2013 specifically when can a device profile (or its sub-elements such as device resources) be added, modified or removed.
"},{"location":"design/adr/core/0021-Device-Profile-Changes/#summary-of-device-profile-rules","title":"Summary of Device Profile Rules","text":"These rules will be implemented in core metadata on device profile API calls.
The following APIs would be added to the metadata REST service in order to meet the design specified above.
Some adopters may not view event/reading data as ephemeral or short lived. These adopters may choose not to allow device profiles to be modified or removed when associated to an event or reading. For this reason, two new configuration options, in the [Writable.ProfileChange]
section, will be added to metadata configuration that are used to reject modifications or deletions.
When either of these config settings are set to true, metadata would accordingly reject changes to or removal of profiles (note: metadata will not check that there are actually events or readings - or any object - associated to the device profile when these are set to true. It simply rejects all modification or deletes to device profiles with the assumption that there could be events, readings or other objects associated and which need to be preserved).
"},{"location":"design/adr/core/0021-Device-Profile-Changes/#consequencesconsiderations","title":"Consequences/Considerations","text":"In order to allow device profiles to be updated or removed even when associated to an EdgeX event/reading, a new property needs to be added to the reading object.
ReadingUnits
(set true by default) will allow adopters to indicate they do not want units to be added to the readings (for cases where there is a concern about the number of readings and the extra data of adding units).ReadingUnits
configuration option will be added to the [Writable.Reading]
section of device services (and addressed in the device service SDKs).Approved by TSC Vote on 3/16/2022
This ADR began under a different ADR pull request. The prior ADR recommended a UoM per device resource and just allowed for the association of an arbitrary set of unit of measure references against the resource. However, it did not include any specific units of measure or validation of those units against the actual profiles (and ultimately the associated readings). See the previous UoM ADR for details and prior debate.
Implementation: to be determined, but could be as soon as Kamakura (Spring 2022).
"},{"location":"design/adr/core/0022-UoM/#context","title":"Context","text":"Unit of measurement (UoM) is defined as \"a standard amount of a physical quantity, such as length, mass, energy, etc, specified multiples of which are used to express magnitudes of that physical quantity\". In EdgeX, data collected from sensors are physical quantities which should be associated to some unit of measure to express magnitude of that physical quantity. For example, if EdgeX collected a temperature reading from a thermostat as 45
, the user of that sensor reading would want to know if the unit of measure for the 45
quantity was expressed in Celsius, Fahrenheit or even the Kelvin scale.
Since the founding of the project, there has been consensus that a unit of measure should be associated to any sensor or metric quantity collected by EdgeX. Also since the founding of the project, a unit of measure has therefore been specified (directly or indirectly) to each device resource (found in device profiles) and associated values collected as part of readings.
The unit of measure was, however, in all cases just a string reference to some arbitrary unit (which may or may not be in a UoM standard) to be interpreted by the consumer of EdgeX data. The reporting sensor/device or programmer of the device service could choose what UoM string was associated to the device resources (and readings produced by the device service) as the unit of measure for any piece of data. Per the temperature example above, the unit of measure could have been \"F\" or \"C\", \"Celsius\" or \"Fahrenheit\", or any other representation. In other words, the associated unit of measure for all data in EdgeX was left to agreement and interpretation by the data provider/producer and EdgeX data consumer.
There are various specifications and standards around unit of measure. Specifically, there are several options to choose from as it relates to the exchange of data in electronic communications - and units of measure associated in that exchange. As examples, two big competing standards around EDI (electronic data exchange) that both have associated unit of measure codes are:
The Unified Code for Units of Measure provides an alternative list (not a standard) that is used by various organizations like OSGI and the Eclipse Foundation.
While standards exist, use by various open source projects (especially IoT/edge projects) is inconsistent and haphazard. Groups like oneM2M seem to define their own selection of units in specifications per vertical (home for example) while Kura doesn't even appear to use the UoM JSR (a Java related unit of measure specification for Java applications like Kura).
"},{"location":"design/adr/core/0022-UoM/#decision","title":"Decision","text":"It would be speculative and inappropriate for EdgeX to select a unit of measure standard which is not widely adopted in the industry or choose a static unit of measure list that is incomplete with regard to possible IoT / edge use case needs. At this time, there does not appear to be a single and unequivocal standard for units of measure that encompasses all EdgeX related use cases (now and in the future).
Therefore, EdgeX chooses not to select or adopt a unit of measure specification, standard, or code list to apply across the platform. Instead, EdgeX adopters will be allowed to optionally specify which unit of measure specification, standard, or unit of measure code list they would like used in their instance(s) of EdgeX.
"},{"location":"design/adr/core/0022-UoM/#specifying-the-units-of-measure","title":"Specifying the Units of Measure","text":"Units of measure allowed by the instance of EdgeX will be specified in a configuration file (in YAML format called uom.yaml
by default). Note: the UoM configuration is a separate configuration YAML file (separate from the metadata service configuration file - configuration.yaml
).
EdgeX 3.0
For EdgeX 3.0 the UoM definition file is changed to YAML instead of TOML format.
The units of measure in the configuration file can be attributed, optionally, to a specification, document, or other UoM definition source. The source
only helps provide the location of documentation about the origins and details of the units specified for the reader, but it will not be used or checked by EdgeX. An optional default source can be provided at the top level configuration (as shown in the examples below) so that other sources are only needed when there are specific units used that are not found in the default source.
The units of measure can be categorized for better organization and to allow for different sources to be specified for different units. The categories are defined by the YAML section names (the UoM dot labels).
Sample YAML unit of measure configuration
Source: reference to source for all UoM if not specified below\nUnits:\ntemperature:\nSource: www.weather.com\nValues:\n- C\n- F\n- K\nweights:\nSource: www.usa.gov/federal-agencies/weights-and-measures-division\nValues:\n- lbs\n- ounces\n- kilos\n- grams\n
"},{"location":"design/adr/core/0022-UoM/#specifying-the-uom-file-location","title":"Specifying the UoM File Location","text":"The location of the UoM file will be specified in core metadata's configuration (currently in res/configuration.yaml
) - see example A below.
Example Metadata Configuration - location of of the UoM configuration file
Writable:\nUoM:\nValidation: false ## false (meaning off) by default\n\n## in the non-writable area - example file specified to units of measure\nUoM:\nUoMFile: ./res/uom.yaml # the UoMFile location can be either absolute or relative path location\n
The location of the UoM file should point to an accessible file (relative to application executable or absolute path). The file must be something that the service can reach (ex: in shared volume, volume mount, etc.) in order to allow for the adopter to provide the units of measure independently during configuration/setup of the EdgeX instance without requiring a build of the metadata service or a reconstruction of the Docker image/container.
Info
In future versions, multiple UoM definition files might be specified. This may help the organization of the units in the future.
Note
The environmental overrides can be used to specify and override the location of the UoM configuration file.
Info
It was discussed that the file location could be done via URI and even allow for HTTP, HTTPS or other protocol access of the file. For this first implementation, it was decided (per Monthly Architect's meeting of 2/28/22) to only allow for a simple file path reference (relative or absolute). Future implementation can consider URI use.
"},{"location":"design/adr/core/0022-UoM/#specifying-validation-on-or-off","title":"Specifying Validation on or off","text":"Additionally, in metadata's configuration, a configuration option for unit of measure validation being on
or off
will be provided (note Validation
in both example above). The location of the UoM file is static, but the ability to turn validation on/off is dynamic and therefore in the writable area of configuration. For backward compatibility, validation will be off by default.
Note
on
and off
are specified by boolean values true
and false
in the configuration file.
Core metadata will read the units of measure from its configuration file. Like all configuration information, this data will be stored in the configuration service (Consul today) on initial startup of the core metadata service.
When validation is turned on
(Writable.UoM.validation is set to true), all device profile units
(in device resource, device properties) will be validated against the list of units of measure by core metadata. In other words, when a device profile is created or updated or when a device resource is added or updated via the core metadata API, the units specified in the device resource's units
field (see resource example below) will be checked against the valid list of UoM provided via core metadata configuration. If the units
value matches any one of the configuration units of measure, then the device resource is considered valid - allowing the create or update operation to continue.
If the units
value does not match any one of the configuration units of measure, then the device profile or device resource operation (create or update) is rejected (error code 500 is returned) and an appropriate error message is returned in the response to the caller of the core metadata API.
Note
Importantly (as discussed in Core WG 2/17/22), the units
field on a profile is and shall remain optional. If the units
field is not specified in the device profile, then it is assumed that the device resource does not have well defined units of measure. In other words, core metadata will not fail a profile with no units
field specified on a device resource.
In the example device resource below, core metadata would check that C
is in the list of units of measure in the configuration.
deviceResources:\n-\nname: \"RoomTemperature\"\nisHidden: false\ndescription: \"Room Temperature x10 \u00b0C (Read Only)\"\nattributes:\n{ primaryTable: \"INPUT_REGISTERS\", startingAddress: 3, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: 0.1\nunits: \"C\" ## core metadata checks this value against its list of valid units of measure\n
By checking the units
property of the device resources (on creation or updates of the device profile or create/update of the device resources), and rejecting any additions or changes that include non-valid units of measure, then we can be assured that all readings created by the device service will contain valid units by default (assuming that validation of the units of measure is always on) or that the units are inconsequential (when the units
field is not specified for a device resource). This means, the units in a reading do not need to be validated separately.
Based on discussion in the Core WG meeting of 2/3/22, it was decided that without validation and some valid list of actual UoM, the ADR was just adding metadata to the profile and thus did not even rise to the level of \"significant\" architectural decision. It was further felt that in order to really provide any value to adopters and to get adherence to their chosen units of measure, EdgeX had to allow for a valid list of units of measure to be specified and be used to check profile units - but in a way that is easy to configure/provide without having to rebuild a service for example. If the units of measure were defined just in the standard configuration file, it would make it hard to change this list in deployments.
This new UoM ADR is the result of that discussion. In general, it specifies, through adopter-provided configuration, the exact units of measure that are allowed for the EdgeX instance and an optional reference (such as a specification) where those units are defined. It does so through a separate core metadata configuration file, making it easier to change.
"},{"location":"design/adr/core/0022-UoM/#use-of-senml","title":"Use of SenML","text":"SenML was suggested as a specification (currently a proposed standard) from which EdgeX may draw some guidance or inspiration with regard to unit of measure representation in \"simple sensor measurements and device parameters.\"
In fact, SenML defines a simple data model (in JSON, CBOR, XML, EXI) for the exchange of what EdgeX would call readings. A JSON example is below:
[{\"n\":\"urn:dev:ow:10e2073a01080063\",\"u\":\"Cel\",\"v\":23.1}]\n
In the example above, the array (what EdgeX would consider a collection of readings) has a single SenML Record with a measurement for a sensor named \"urn:dev:ow:10e2073a01080063\" with a current value of 23.1 for degrees measured in Celsius (Cel) unit of measure. However, SenML suggests the use of short names for the keys in most cases, but long names could be used. In which case, the JSON SenML reading would look like the following:
[{\"Name\":\"urn:dev:ow:10e2073a01080063\",\"Unit\":\"Cel\",\"Value\":23.1}]\n
In this way, the parallels to the EdgeX model are, by accident, uncanny - at least in the JSON instance. SenML goes to much more depth to provide extensions and more definitions around measurements. But at its base, the EdgeX format is not unlike SenML and could easily be aligned with SenML in the future (or an application service could export SenML fairly easily with an additional function, if there were demand).
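As a rough illustration of how easily the formats align, the following Go sketch marshals a simplified reading into a SenML short-name record; the Reading struct is a stand-in and not the EdgeX contract type:

// senml_export.go: illustrative sketch of exporting a simple reading as a SenML record.
package main

import (
	"encoding/json"
	"fmt"
)

type Reading struct {
	ResourceName string
	Value        float64
	Units        string
}

// SenML short-name fields: n = name, u = unit, v = numeric value.
type SenMLRecord struct {
	Name  string  `json:"n"`
	Unit  string  `json:"u,omitempty"`
	Value float64 `json:"v"`
}

func main() {
	r := Reading{ResourceName: "urn:dev:ow:10e2073a01080063", Value: 23.1, Units: "Cel"}
	pack := []SenMLRecord{{Name: r.ResourceName, Unit: r.Units, Value: r.Value}}
	out, _ := json.Marshal(pack)
	fmt.Println(string(out)) // [{"n":"urn:dev:ow:10e2073a01080063","u":"Cel","v":23.1}]
}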
However, on the basis of \"unit of measure\", SenML is actually light on details. With regard to UoM, the SenML specification only says:
Quote
If the Record has no Unit, the Base Unit is used as the Unit. Having no Unit and no Base Unit is allowed; any information that may be required about units applicable to the value then needs to be provided by the application context.
A SenML Units Registry provides for a list of unit symbols (the \"SenML Units registry\"). This list could be used as one of the sources for EdgeX UoM definition.
SenML should be examined for future versions of EdgeX with regard to data model, but its relevance to unit of measure is believed to be minimal at this time.
"},{"location":"design/adr/core/0022-UoM/#future-considerationsadditionsimprovements","title":"Future Considerations/Additions/Improvements","text":"In the future, validation may be turned on
or off
per device service; allowing the decision to validate units of measure to be accomplished on a service or even allow the device service to validate/not validate based on particular devices.
In the future, additional criteria may be added to the unit of measure information to allow for more specific (more granular) validation. For example, the category of units of measure could be specified in a device resource so that a profile's units are validated against specific sources or collections of units of measure.
Use of URI to specify the unit of measures file was discussed. This would be novel with regard to providing EdgeX information. Per core working group of 2/17/22 and then again at the monthly architect's meeting of 2/28/22, we may look to use a URI to specify a configuration file to specify UoM in the future. Indeed, URIs may be used (an EdgeX 3.0 consideration) to point to device profiles, configuration files, and other information in the future. This would even allow multiple EdgeX instances to use the same configuration or profile (multiple EdgeX instances using the same URI to use a shared profile for example). However, it was deemed scope creep and too much to do for this first iteration.
Initially, this ADR allowed for the UoM to also or alternately be defined in the standard metadata service configuration file (configuration.yaml). During the Core WG meeting of 3/3/22, it was decided to simplify the design and strictly limit UoM to a separate configuration file. If future use cases or adopters request inline definition, this can be implemented in a future release.
"},{"location":"design/adr/core/0022-UoM/#consequences","title":"Consequences","text":"Approved
"},{"location":"design/adr/device-service/0002-Array-Datatypes/#context","title":"Context","text":"The current data model does not directly provide for devices which provide array data. Small fixed-length arrays may be handled by defining multiple device resources - one for each element - and aggregating them via a resource command. Other array data may be passed using the Binary type. Neither of these approaches is ideal: the binary data is opaque and any service processing it would need specific knowledge to do so, and aggregation presents the device service implementation with a multiple-read request that could in many cases be better handled by a single request.
This design adds arrays of primitives to the range of supported types in EdgeX. It comprises an extension of the DeviceProfile model, and an update to the definition of Reading.
"},{"location":"design/adr/device-service/0002-Array-Datatypes/#decision","title":"Decision","text":""},{"location":"design/adr/device-service/0002-Array-Datatypes/#deviceprofile-extension","title":"DeviceProfile extension","text":"The permitted values of the Type
field in PropertyValue
are extended to include: "BoolArray", "Uint8Array", "Uint16Array", "Uint32Array", "Uint64Array", "Int8Array", "Int16Array", "Int32Array", "Int64Array", "Float32Array", "Float64Array"
In the API (v1 and v2), Reading.Value
is a string representation of the data. If this is maintained, the representation for Array types will follow the JSON array syntax, ie [\"value1\", \"value2\", ...]
Any service which processes Readings will need to be reworked to account for the new Reading type.
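A small Go sketch of the string representation described above (helper names are illustrative only):

// array_value.go: sketch of representing an Int16 array reading as a JSON array of strings.
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// encodeArrayValue renders a slice of integers as a JSON array of strings,
// suitable for a string-typed Reading.Value field.
func encodeArrayValue(values []int16) (string, error) {
	strs := make([]string, len(values))
	for i, v := range values {
		strs[i] = strconv.FormatInt(int64(v), 10)
	}
	b, err := json.Marshal(strs)
	return string(b), err
}

func main() {
	s, _ := encodeArrayValue([]int16{1, 34, -5})
	fmt.Println(s) // ["1","34","-5"]
}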
"},{"location":"design/adr/device-service/0002-Array-Datatypes/#device-service-considerations","title":"Device Service considerations","text":"The API used for interfacing between device SDKs and devices service implementations contains a local representation of reading values. This will need to be updated in line with the changes outlined here. For C, this will involve an extension of the existing union type. For Go, additional fields may be added to the CommandValue
structure.
Processing of numeric data in the device service, ie offset
, scale
etc will not be applied to the values in an array.
Approved
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#context","title":"Context","text":"This ADR details the REST API to be provided by Device Service implementations in EdgeX version 2.x. As such, it supercedes the equivalent sections of the earlier \"Device Service Functional Requirements\" document. These requirements should be implemented as far as possible within the Device Service SDKs, but they also apply to any Device Service implementation.
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#decision","title":"Decision","text":""},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#common-endpoints","title":"Common endpoints","text":"The DS should provide the REST endpoints that are expected of all EdgeX microservices, specifically:
Endpoint | Methods
callback/device | PUT and POST
callback/device/name/{name} | DELETE
callback/profile | PUT
callback/watcher | PUT and POST
callback/watcher/name/{name} | DELETE

Parameter | Meaning
{name} | the name of the device or watcher

These endpoints are used by the Core Metadata service to inform the device service of metadata updates. Endpoints are defined for each of the objects of interest to a device service, ie Devices, Device Profiles and Provision Watchers. On receipt of calls to these endpoints the device service should update its internal state accordingly. Note that the device service does not need to be informed of the creation or deletion of device profiles, as these operations may only occur where no devices are associated with the profile. To avoid stale profile entries the device service should delete a profile from its cache when the last device using it is deleted.
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#object-deletion","title":"Object deletion","text":"When an object is deleted, the Metadata service makes a DELETE
request to the relevant callback/{type}/name/{name} endpoint.
When an object is created or updated, the Metadata service makes a POST
or PUT
request respectively to the relevant callback/{type} endpoint. The payload of the request is the new or updated object, ie one of the Device, DeviceProfile or ProvisionWatcher DTOs.
GET
and PUT

Parameter | Meaning
{name} | the name of the device
{command} | the command name

The command specified must match a deviceCommand or deviceResource name in the device's profile.
body (for PUT): An application/json SettingRequest, which is a set of key/value pairs where the keys are valid deviceResource names, and the values provide the command argument for that resource. Example: {"AHU-TargetTemperature": "28.5", "AHU-TargetBand": "4.0"}
response body: A successful GET operation will return a JSON-encoded EventResponse object, which contains one or more Readings. Example: {"apiVersion":"v2","deviceName":"Gyro","origin":1592405201763915855,"readings":[{"deviceName":"Gyro","name":"Xrotation","value":"124","origin":1592405201763915855,"valueType":"int32"},{"deviceName":"Gyro","name":"Yrotation","value":"-54","origin":1592405201763915855,"valueType":"int32"},{"deviceName":"Gyro","name":"Zrotation","value":"122","origin":1592405201763915855,"valueType":"int32"}]}
This endpoint is used for obtaining readings from a device, and for writing settings to a device.
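For illustration, a minimal Go client issuing a PUT setting request to this endpoint; the host, port, device name and command name are assumptions, not values from a real deployment:

// put_setting.go: sketch of writing a setting through the device command endpoint.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical device service address, device name and deviceCommand name.
	url := "http://localhost:59999/api/v2/device/name/AHU-01/AHU-Setpoints"
	body := []byte(`{"AHU-TargetTemperature": "28.5", "AHU-TargetBand": "4.0"}`)

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // expect success, or 423 if the device is locked/down
}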
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#data-formats","title":"Data formats","text":"The values obtained when readings are taken, or used to make settings, are expressed as strings.
Type | EdgeX types | Representation
Boolean | Bool | "true" or "false"
Integer | Uint8-Uint64, Int8-Int64 | Numeric string, eg "-132"
Float | Float32, Float64 | Decimal with exponent, eg "1.234e-5"
String | String | string
Binary | Bytes | octet array
Array | BoolArray, Uint8Array-Uint64Array, Int8Array-Int64Array, Float32Array, Float64Array | JSON Array, eg ["1", "34", "-5"]

Notes:
- The presence of a Binary reading will cause the entire Event to be encoded using CBOR rather than JSON
- Arrays of String and Binary data are not supported
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#readings-and-events","title":"Readings and Events","text":"A Reading represents a value obtained from a deviceResource. It contains the following fields
Field name | Description
deviceName | The name of the device
profileName | The name of the Profile describing the Device
resourceName | The name of the deviceResource
origin | A timestamp indicating when the reading was taken
value | The reading value
valueType | The type of the data

Or for binary Readings, the following fields:

Field name | Description
deviceName | The name of the device
profileName | The name of the Profile describing the Device
resourceName | The name of the deviceResource
origin | A timestamp indicating when the reading was taken
binaryValue | The reading value
mediaType | The MIME type of the data

An Event represents the result of a GET command. If the command names a deviceResource, the Event will contain a single Reading. If the command names a deviceCommand, the Event will contain as many Readings as there are deviceResources listed in the deviceCommand.
The fields of an Event are as follows:

Field name | Description
deviceName | The name of the Device from which the Readings are taken
profileName | The name of the Profile describing the Device
origin | The time at which the Event was created
readings | An array of Readings
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#query-parameters","title":"Query Parameters","text":"Calls to the device endpoints may include a Query String in the URL. This may be used to pass parameters relating to the request to the device service. Individual device services may define their own parameters to control specific behaviors. Parameters beginning with the prefix ds-
are reserved to the Device SDKs and the following parameters are defined for GET requests:
Parameter | Valid Values | Default | Meaning
ds-pushevent | "true" or "false" | "false" | If set to true, a successful GET will result in an event being pushed to the EdgeX system
ds-returnevent | "true" or "false" | "true" | If set to false, there will be no Event returned in the http response

EdgeX 3.0
The valid values of ds-pushevent and ds-returnevent are changed to true/false
instead of yes/no
in EdgeX 3.0.
A Device in EdgeX has two states associated with it: the Administrative state and the Operational state. The Administrative state may be set to LOCKED
(normally UNLOCKED
) to block access to the device for administrative reasons. The Operational state may be set to DOWN
(normally UP
) to indicate that the device is not currently working. In either case access to the device via this endpoint will be denied and HTTP 423 (\"Locked\") will be returned.
A number of simple data transformations may be defined in the deviceResource. The table below shows these transformations in the order in which they are applied to outgoing data, ie Readings. The transformations are inverted and applied in reverse order for incoming data.
Transform | Applicable reading types | Effect
mask | Integers | The reading is masked (bitwise-and operation) with the specified value.
shift | Integers | The reading is bit-shifted by the specified value. Positive values indicate right-shift, negative for left.
base | Integers and Floats | The reading is replaced by the specified value raised to the power of the reading.
scale | Integers and Floats | The reading is multiplied by the specified value.
offset | Integers and Floats | The reading is increased by the specified value.

The operation of the mask transform on incoming data (a setting) is that the value to be set on the resource is the existing value bitwise-anded with the complement of the mask, bitwise-ored with the value specified in the request.
ie, new-value = (current-value & !mask) | request-value
The combination of mask and shift can therefore be used to access data contained in a subdivision of an octet.
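A Go sketch of the mask and shift transforms as described above, applied to an outgoing (read) value and inverted for an incoming (set) value; this is an illustration, not the SDK implementation:

// transforms.go: illustrative mask/shift handling for a 16-bit register.
package main

import "fmt"

// applyOutgoing masks then shifts a raw register value for a Reading.
// A positive shift is a right-shift, matching the table above.
func applyOutgoing(raw, mask uint16, shift int) uint16 {
	v := raw & mask
	if shift >= 0 {
		return v >> uint(shift)
	}
	return v << uint(-shift)
}

// applyIncoming inverts the transforms for a setting:
// new = (current &^ mask) | request, with the shift reversed first.
func applyIncoming(current, request, mask uint16, shift int) uint16 {
	if shift >= 0 {
		request <<= uint(shift)
	} else {
		request >>= uint(-shift)
	}
	return (current &^ mask) | (request & mask)
}

func main() {
	// Extract bits 4..7 of a register on read, and write them back on set.
	fmt.Println(applyOutgoing(0x00A0, 0x00F0, 4))         // 10
	fmt.Println(applyIncoming(0x120F, 0x000A, 0x00F0, 4)) // 4783 (0x12AF)
}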
It is possible that following the application of the specified transformations, a value may exceed the range that may be represented by its type. Should this occur on a set operation, a suitable error should be logged and returned, along with the Bad Request
http code 400. If it occurs as part of a get operation, the Reading's value should be set to the String \"overflow\"
and its valueType to String
.
Assertions are another attribute in a device resource's PropertyValue, which specify a string against which the reading value is compared. If the comparison fails, then the http request returns a string of the form "Assertion failed for device resource: <name>, with value: <value>"; this also has the side effect of setting the device operating state to DISABLED
. A 500 status code is also returned. Note that the error response and status code should be returned regardless of the ds-returnevent
setting.
Assertions are also checked where an event is being generated due to an AutoEvent, or asynchronous readings are pushed. In these cases if the assertion is triggered, an error should be logged and the operating state should be set as above.
Assertions are not checked for settings, only for readings.
Mappings may be defined in a deviceCommand. These allow Readings of string type to be remapped. Mappings are applied after assertions are checked, and are the final transformation before Readings are created. Mappings are also applied, but in reverse, to settings (PUT
request data).
Each Device has as part of its metadata a timestamp named lastConnected
; this indicates the most recent occasion on which the device was successfully interacted with. The device service should update this timestamp every time a GET or PUT operation succeeds, unless it has been configured not to do so (eg for performance reasons).
POST
A call to this endpoint triggers the device discovery process, if enabled. See Discovery Design for details.
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#consequences","title":"Consequences","text":""},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#changes-from-v1x-api","title":"Changes from v1.x API","text":"GET
requests take parameters controlling what is to be done with resulting Events, and the default behavior does not send the Event to core-data.
OpenAPI definition of v2 API: https://github.com/edgexfoundry/device-sdk-go/blob/master/openapi/v2/device-sdk.yaml
Device Service Functional Requirements (Geneva) : https://wiki.edgexfoundry.org/download/attachments/329488/edgex-device-service-requirements-v11.pdf?version=1&modificationDate=1591621033000&api=v2
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/","title":"Device Service Filters","text":""},{"location":"design/adr/device-service/0012-DeviceService-Filters/#status","title":"Status","text":"** Approved ** (by TSC vote on 3/15/21)
In EdgeX today, sensor/device data collected can be \"filtered\" by application services before being exported or sent to some north side application or system. Built-in application service functions (available through the app services SDK) allow EdgeX event/reading objects to be filtered by device name or by device ResourceName. That is, event/readings can be filtered by:
There are potentially two places where \"filtering\" in a device service could be useful.
Event/Reading
objects and pushes those to core data). A sensor data filter would allow the device service to essentially ignore some of the raw sensed data. This would allow for some device service optimization in that the device service would not have to perform type transformations and creation of event/reading objects if the data can be eliminated at this early stage. This first level filtering would, if put in place, likely occur in the code associated with the read command 'gets' performed by the ProtocolDriver
.Event/Reading
objects, there is a desire to filter some of the Readings
based on the Reading
values or Reading
name (which is the device ResourceName) or some combination of value and name.At this time, this design only addresses the need for the second filter (Reading Filter). At the time of this writing, no applicable use case has yet to be defined to warrant the Sensor Data Filter.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#reading-filters","title":"Reading Filters","text":"Reading filters will allow, not unlike application service filter functions today, to have Readings
in an Event
to be removed if:
the value was outside or inside some range, or the value was greater than, less than or equal to some value
Reading
value (numeric) of a Reading
outside a specified range (min/max) described in the service configuration. Thus avoiding sending in outlier or jittery data Readings
that could negatively affect analytics.
value (numeric) equal to or near (with in some specified range) the last reading. This allows a device service to reduce sending in Event/Readings
that do not represent any significant change. This differs from the already implemented onChangeOnly in that it is filtering Readings
within a specified degree of change. Note: this feature would require caching of readings which has not fully been implemented in the SDK. The existing mechanism for autoevents
provides a partial cache. Added for future reference, but this feature would not be accomplished in the initial implementation; requiring extra design work on caching to be implemented.the value was the same as some or not the same as some specified value or values (for strings, boolean and other non-numeric values)
temperature
or humidity
as example device resources.Unlike application services, there is not a need to filter on a device name (or identifier). Simply disable the device in the device service if all Event/Readings
are to be stopped for the device.
In the case that all Readings
of an Event
are filtered, it is assumed the entire Event
is deemed to be worthless and not sent to core data by the device service. If only some Readings
from and Event
are filtered, the Event
minus the filtered Readings
would be sent to core data.
The filter behaves the same whether the collection of Readings
and Events
is triggered by a scheduled collection of data from the underlying sensor/device or triggered by a command request (as from the command service). Therefore, the call for a command request still results in a successful status code and a return of no results (or partial results) if the filter causes all or some of the readings to be removed.
A new function interface shall be defined that, when implemented, performs a Reading Filter operation. A ReadingFilter function would take a parameter (an Event
containing readings), check whether the Readings
of the Event
match on the filtering configuration (see below) and if they do then remove them from the Event
. The ReadingFilter function would return the Event
object (minus filtered Readings
) or nil
if the Event
held no more Readings
. Pseudo code for the generic function is provided below. The results returned will include a boolean to indicate whether any Reading
objects were removed from the Event
(allowing the receiver to know if some were filtered from the original list).
func (f Filter) ReadingFilter(lc logger.LoggingClient, event *models.Event) (*models.Event, error, bool) {
    // depending on impl; filtering for values in/out of a range, >, <, =, same, not same, from a particular name (device resource), etc.
    // The bool indicates whether any Readings were filtered from the Event.
    if len(event.Readings) > 0 {
        if len(filteredReadings) > 0 {
            return event, nil, true
        }
        return event, nil, false
    }
    return nil, nil, true
}
Based on current needs/use cases, implementations of the function interface could include the following filter functions:
func (f Filter) FilterByValue(lc logger.LoggingClient, event *models.Event) (*models.Event, error, bool) {}

func (f Filter) FilterByResourceNamesMatch(lc logger.LoggingClient, event *models.Event) (*models.Event, error, bool) {}
Note
The app functions SDK comes with FilterByDeviceName
and FilterByResourceName
functions today. The FilterByResourceName would behave similarly to FilterByResourceNameMatch.
The Filter structure houses the configuration parameters that the filter functions work with and filter on.
Note
The app functions SDK uses a fairly simple Filter structure.
type Filter struct {
    FilterValues []string
    FilterOut    bool
}
Given the collection of filter operations (in range, out of range, equal or not equal), the following structure is proposed:
type Filter struct {
    FilterValues       []string
    TargetResourceName string
    FilterOp           string // enum of in (in range inclusive), out (outside a range exclusive), eq (equal) or ne (not equal)
}
Example uses of the Filter structure to specify filtering:
Filter {FilterValues: {10, 20}, "Int64", FilterOp: "in"}       // filter for those Int64 readings with values between 10-20 inclusive
Filter {FilterValues: {10, 20}, "Int64", FilterOp: "out"}      // filter for those Int64 readings with values outside of 10-20.
Filter {FilterValues: {8, 10, 12}, "Int64", FilterOp: "eq"}    // filter for those Int64 readings with values of 8, 10, or 12.
Filter {FilterValues: {8, 10}, "Int64", FilterOp: "ne"}        // filter for those Int64 readings with values not equal to 8 or 10
Filter {FilterValues: {"Int32", "Int64"}, nil, FilterOp: "eq"} // filter to be used with FilterByResourceNameMatch. Filter for resource names of Int32 or Int64.
Filter {FilterValues: {"Int32"}, nil, FilterOp: "ne"}          // filter to be used with FilterByResourceNameMatch. Filter for resource names not equal to (excluding) Int32.
A NewFilter function creates, initializes and returns a new instance of the filter based on the configuration provided.
func NewReadingNameFilter(filterValues []string, targetResourceName string, filterOp string) Filter {
    return Filter{FilterValues: filterValues, TargetResourceName: targetResourceName, FilterOp: filterOp}
}
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#sharing-filter-functions","title":"Sharing filter functions","text":"If one were to explore the filtering functions in the app functions SDK filter.go (both FilterByDeviceName
and FilterByValueDescriptor
), the filters operate on the Event
model object and return the same objects (Event
or nil). Ideally, since both app services and device services generally share the same interface model (from go-mod-core-contracts
), it would be desirable to share the same filter functions between SDKs and associated services.
Decisions on how to do this in Go - whether by shared module for example - is left as a future release design and implementation task - and as the need for common filter functions across device services and application services are identified in use cases. C needs are likely to be handled in the SDK directly.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#additional-design-considerations","title":"Additional Design Considerations","text":"As Device Services do not have the concept of a functions pipeline like application services do, consideration must be given as to how and where to:
At this time, custom filters will not be supported as the custom filters would not be known by the SDK and therefore could not be specified in configuration. This is consistent with the app functions SDK and filtering.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#function-inflection-point","title":"Function Inflection Point","text":"It is precisely after the convert to Event/Reading
objects (after the async readings are assembled into events) and before returning that result in common.SendEvent
(in utils.go) function that the device service should invoke the required filter functions. In the existing V1 implementation of the device-sdk-go, commands, async readings, and auto-events all call the function common.SendEvent()
. Note: V2 implementation will require some re-evaluation of this inflection point. Where possible, the implementation should locate a single point of inflection if possible. In the C SDK, it is likely that the filters will be called before conversion to Event/Reading objects - they will operate on commandresult objects (equivalent to CommandValues).
The order in which functions are called is important when more than one filter is provided. The order that functions are called should be reflected in the order listed in the configuration of the filters.
Events containing binary values (event.HasBinaryValue), will not be filtered. Future releases may include binary value filters.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#setting-filter-function-and-configuration","title":"Setting Filter Function and Configuration","text":"When filter functions are shared (or appear to be doing the same type of work) between SDKs, the configuration of the similar filter functions should also look similar. The app functions SDK configuration model for filters should therefore be followed.
While device services do not have pipelines, the inclusion and configuration of filters for device services should have a similar look (to provide symmetry with app services). The configuration has to provide the functions required and parameters to make the functions work - even though the association to a pipeline is not required. Below is the common app service configuration as it relates to filters:
[Writable.Pipeline]
ExecutionOrder = "FilterByDeviceName, TransformToXML, SetOutputData"

[Writable.Pipeline.Functions.FilterByDeviceName]
[Writable.Pipeline.Functions.FilterByDeviceName.Parameters]
DeviceNames = "Random-Float-Device,Random-Integer-Device"
FilterOut = "false"
Suggested and hypothetical configuration for the device service reading filters should look something like that below.
[Writable.Filters]
# filter readings where resource name equals Int32
ExecutionOrder = "FilterByResourceNamesMatch, FilterByValue"

[Writable.Filter.Functions.FilterByResourceNamesMatch]
[Writable.Filter.Functions.FilterByResourceNamesMatch.Parameters]
FilterValues = "Int32"
FilterOps = "eq"

# filter readings where the resource name is Int64 and the values are between 10 and 20
[Writable.Filter.Functions.FilterByValue]
[Writable.Filter.Functions.FilterByValue.Parameters]
TargetResourceName = "Int64"
FilterValues = {10,20}
FilterOp = "in"
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#decision","title":"Decision","text":"To be determined
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#consequences","title":"Consequences","text":"This design does not take into account potential changes found with the V2 API.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#references","title":"References","text":""},{"location":"design/adr/devops/0007-Release-Automation/","title":"Release Automation","text":""},{"location":"design/adr/devops/0007-Release-Automation/#status","title":"Status","text":"Approved by TSC 04/08/2020
"},{"location":"design/adr/devops/0007-Release-Automation/#context","title":"Context","text":"EdgeX Foundry is a framework composed of microservices to ease development of IoT/Edge solutions. With the framework getting richer, project growth, the number of artifacts to be released has increased. This proposal outlines a method for automating the release process for the base artifacts.
"},{"location":"design/adr/devops/0007-Release-Automation/#requirements","title":"Requirements","text":""},{"location":"design/adr/devops/0007-Release-Automation/#release-artifact-definition","title":"Release Artifact Definition","text":"For the scope of Hanoi release artifact types are defined as:
This list is likely to expand in future releases.
*The building and publishing of snaps was removed from community scope in September 2020 and is managed outside the community by Canonical.
"},{"location":"design/adr/devops/0007-Release-Automation/#general-requirements","title":"General Requirements","text":"As the EdgeX Release Czar I gathered the following requirements for automating this part of the release.
The code that will manage the release automation for EdgeX Foundry will live in a repository called cd-management
. This repository will have a branch named release
that will track the releases of artifacts off the main
branch of the EdgeX Foundry repositories.
EdgeX Foundry has this idea of multple release streams that basically coincides with different named branches in GitHub. For the majority of the main releases we will be targeting those off the main
branch. In our cd-management
repository we will have a release
branch that will track the main
branches EdgeX repositories. In the future we will mark a specific release for long term support (LTS). When this happens we will have to branch off main
in the EdgeX repositories and create a separate release stream for the LTS. The suggestion at that point will be to branch off the release
branch in cd-management
as well and use this new release branch to track the LTS branches in the EdgeX repositories.
Go modules, Application and Device SDKs only release a GitHub tag as their release. Go modules, Application and Device SDKs are set up to automatically increment a developmental version tag on each merge to main
. (IE: 1.0.0-dev.1 -> 1.0.0-dev.2)
The release automation for Go Modules, Device and Application SDKs is used to set the final release version git tag. (IE: 1.0.0-dev.X -> 1.0.0) For each release, the Go Modules, Device and Application SDK repositories will be tagged with the release version.
"},{"location":"design/adr/devops/0007-Release-Automation/#core-services-including-security-and-system-management-services-application-services-device-services-and-supporting-docker-images","title":"Core Services (Including Security and System Management services), Application Services, Device Services and Supporting Docker Images","text":""},{"location":"design/adr/devops/0007-Release-Automation/#during-development_1","title":"During Development","text":"For the Core Services, Application Services, Device Services and Supporting Docker Images we release Github tags and docker images. On every merge to the main
branch we will do the following; increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2), stage docker images in our Nexus repository (docker.staging).
The release automation will need to do the following:
For supporting release assets (e.g. edgex-cli) we release GitHub tags on every merge to the main
branch. For every merge to main
we will do the following; increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) and store the build artifacts in our Nexus repository.
For EdgeX releases the release automation will set the final release version by creating a git tag (e.g. 1.0.0-dev.X -> 1.0.0) and produce a Github Release containing the binary assets targeted for release.
"},{"location":"design/adr/devops/0010-Release-Artifacts/","title":"Release Artifacts","text":""},{"location":"design/adr/devops/0010-Release-Artifacts/#status","title":"Status","text":"Approved
"},{"location":"design/adr/devops/0010-Release-Artifacts/#context","title":"Context","text":"During the Geneva release of EdgeX Foundry the DevOps WG transformed the CI/CD process with new Jenkins pipeline functionality. After this new functionality was added we also started adding release automation. This new automation is outlined in ADR 0007 Release Automation. However, in ADR 0007 Release Automation only two release artifact types are outlined. This document is meant to be a living document to try to outlines all currently supported artifacts associated with an EdgeX Foundry release, and should be updated if/when this list changes.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#release-artifact-types","title":"Release Artifact Types","text":""},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-images","title":"Docker Images","text":"Tied to Code Release? Yes
Docker images are released for every named release of EdgeX Foundry. During development the community releases images to the docker.staging
repository in Nexus. At the time of release we promote the last tested image from docker.staging
to docker.release
. In addition to that we will publish the docker image on DockerHub.
Retention Policy: 90 days since last download
Contains: Docker images that are not expected to be released. This contains images to optimize the builds in the CI infrastructure. The definitions of these docker images can be found in the edgexfoundry/ci-build-images Github repository.
Docker Tags Used: Version, Latest
"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerstaging","title":"docker.staging","text":"Retention Policy: 180 days since last download
Contains: Docker images built for potential release and testing purposes during development.
Docker Tags Used: Version (ie: v1.x), Release Branch (master, fuji, etc), Latest
"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerrelease","title":"docker.release","text":"Retention Policy: No automatic removal. Requires TSC approval to remove images from this repository.
Contains: Officially released docker images for EdgeX.
Docker Tags Used:\u2022Version (ie: v1.x), Latest
Nexus Cleanup Policies Reference
"},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-compose-files","title":"Docker Compose Files","text":"Tied to Code Release? Yes
Docker compose files are released alongside the docker images for every release of EdgeX Foundry. During development the community maintains compose files a folder named nightly-build
. These compose files are meant to be used by our testing frameworks. At the time of release the community makes compose files for that release in a folder matching it's name. (ie: geneva
)
Tied to Code Release? No
After Docker images are published to DockerHub, automation should be run to update the image Overviews and Descriptions of the necessary images. This automation is located in the edgex-docker-hub-documentation
branch of the cd-management repository. In preparation for the release the community makes changes to the Overview and Description metadata as appropriate. The Release Czar will coordinate the execution of the automation near the release time.
Tied to Code Release? No
EdgeX Foundry releases a set of documentation for our project at http://docs.edgexfoundry.org. This page is a Github page that is managed by the edgex/foundry/edgex-docs Github repository. As a community we make our best effort to keep these docs up to date. On this page we are also versioning the docs with the semantic versions of the named releases. As a community we try to version our documentation site shortly after the official release date but documentation changes are addressed as we find them throughout the release cycle.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-tags","title":"GitHub Tags","text":"Tied to Code Release? Yes, for the final semantic version
Github tags are used to track the releases of EdgeX Foundry. During development the tags are incremented automatically for each commit using a development suffix (ie: v1.1.1-dev.1
-> v1.1.1-dev.2
). At the time of release we release a tag with the final semantic version (ie: v1.1.1
).
Tied to Code Release? Yes
The building of snaps was removed from community scope in September 2020 but are still available on the snapcraft store.
Canonical publishes daily arm64 and amd64 releases of the following snaps to latest/edge in the Snap Store. These builds take place on the Canonical Launchpad platform and use the latest code from the master branch of each EdgeX repository, versioned using the latest git tag.
edgexfoundry edgex-app-service-configurable edgex-device-camera edgex-device-rest edgex-device-modbus edgex-device-mqtt edgex-device-grove edgex-cli (work-in-progress) Note - this list may expand over time.
At code freeze the edgexfoundry snap revision in the edge channel is promoted to latest/beta and $TRACK/beta. Publishing to beta will trigger the Canonical checkbox automated tests, which include tests on a variety of hardware hosted by Canonical.
When the project tags a release of any of the snaps listed above, the resulting snap revision is first promoted from the edge channel to latest/candidate and $TRACK/candidate. Canonical tests this revision, and if all looks good, releases to latest/stable and $TRACK/stable.
Canonical may also publish updates to the EdgeX snaps after release to address high/critical bugs and CVEs (common vulnerabilities and exposures).
Note - in the above descriptions, $TRACK corresponds to the named release tracks (e.g. fuji, geneva, hanoi, ...) which are created for every major/minor release of EdgeX Foundry.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#swaggerhub-api-docs","title":"SwaggerHub API Docs","text":"Tied to Code Release? No
In addition to our documentation site EdgeX foundry also releases our API specifications on Swaggerhub.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#testing-framework","title":"Testing Framework","text":"Tied to Code Release? Yes
The EdgeX Foundry community has a set of tests we maintain to do regression testing during development this framework is tracking the master
branch of the components of EdgeX. At the time of release we will update the testing frameworks to point at the released Github tags and add a version tag to the testing frameworks themselves. This creates a snapshot of testing framework at the time of release for validation of the official release.
Tied to Code Release? Yes
GitHub release functionality is utilized on some repositories to release binary artifacts/assets (e.g. zip/tar files). These are versioned with the semantic version and found on the repository's GitHub Release page under 'Assets'.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#known-build-dependencies-for-edgex-foundry","title":"Known Build Dependencies for EdgeX Foundry","text":"There are some internal build dependencies within the EdgeX Foundry organization. When building artifacts for validation or a release you will need to take into the account the build dependencies to make sure you build them in the correct order.
This document is meant to be a living document of all the release artifacts of EdgeX Foundry. With this ADR we would have a good understanding on what needs to be released and when they are released. Without this document this information will remain tribal knowledge within the community.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/","title":"Creation and Distribution of Secrets","text":""},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#status","title":"Status","text":"Approved
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#context","title":"Context","text":"This ADR seeks to clarify and prioritize the secret handling approach taken by EdgeX.
EdgeX microservices need a number of secrets to be created and distributed in order to create a functional, secure system. Among these secrets are:
There is a lack of consistency on how secrets are created and distributed to EdgeX microservices, and when developers need to add new components to the system, it is unclear on what the preferred approach should be.
This document assumes a threat model wherein the EdgeX services are sandboxed (such as in a snap or a container) and the host system is trusted, and all services running in a single snap share a trust boundary.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#terms","title":"Terms","text":"The following terms will be helpful for understading the subsequent discussion:
While EdgeX implements a sophisticated secret handling mechanism, that mechanism itself requires secrets. For example, every microservice that talks to Vault must have its own unique secret to authenticate: Vault itself cannot be used to distribute these secrets. SECRETSLOC fulfills the role that the non-routable instance data IP address, 169.254.169.254, fulfills in the public cloud: delivery of bootstrapping secrets. As EdgeX does not have a hypervisor nor virtual machines for this purpose, a protected file system path is used instead.
SECRETSLOC is implementation-dependent. A desirable feature of SECRETSLOC would be that data written here is kept in RAM and is not persisted to storage media. This property is not achieveable in all circumstances.
For Docker, a list of suggested paths--in preference order--is:
/run/edgex/secrets
(a tmpfs
volume on a Linux host)/tmp/edgex/secrets
(a temporary file area on Linux and MacOS hosts)For snaps, a list of suggested paths-in preference order--is: * /run/snap.
$SNAP_NAME/
(a tmpfs
volume on a Linux host) * $SNAP_DATA/secrets
(a snap-specific persistent data area) * TBD (a content interface that allows for sharing of secrets from the core snap)
A survey on the existing EdgeX secrets reveals the following appoaches.
A designation of \"compliant\" means that the current implementation is aligned with the recommended practices documented in the next section. A designation of \"non-compliant\" means that the current implementation uses an implemention mechanism outside of the recommended practices documented in the next section. A \"non-compliant\" implementation is a candidate for refactoring to bring the implementation into conformance with the recommended practices.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#system-managed-secrets","title":"System-managed secrets","text":"Snaps: PKI generated by standalone utility every cold start of the framework. Deployed to SECRETSLOC. (Compliant.)
Secret store master password
Snaps: Stored in $SNAP_DATA/config/security-secrets-setup/res
. (Non-compliant.)
Secret store per-service authentication tokens
Snaps: Distribution via SECRETSLOC, generated every cold start of the framework. (Compliant.)
Postgres superuser password
Snaps: Generated at snap install time via \"apg\" (\"automatic password generator\") tool, installed into Postgres, cached to $SNAP_DATA/config/postgres/kongpw
(non-compliant), and passed to Kong via $KONG_PG_PASSWORD
.
MongoDB service account passwords
Snaps: Direct consumption from secret store. (Compliant.)
Redis authentication password
Snaps: Server--staged to $SNAP_DATA/secrets/edgex-redis/redis5-password
and injected via command line. (Non-compliant.). Clients--direct consumption from secret store. (Compliant.)
Kong client authentication tokens
Note: in the current implementation, Consul is being operated as a public service. Consul will be a subject of a future \"bootstrapping ADR\" due to its role in serivce location.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#user-managed-secrets","title":"User-managed secrets","text":"User-managed secrets functionality is provided by app-functions-sdk-go
.
If security is enabled, secrets are retrieved from Vault. If security is disabled, secrets are retreived from the configuration provider. If the configuration provider is not available, secrets are read from the underlying .toml
. It is taken as granted in this ADR that secrets originating in the configuration provider or from .toml
configuration files are not secret. The fallback mechanism is provided as a convienience to the developer, who would otherwise have to litter their code with \"if (isSecurityEnabled())\" logic leading to implementation inconsistencies.
The central database credential is supplied by GetDatabaseCredentials()
and returns the database credential assigned to app-service-configurable
. If security is enabled, database credentials are retreived using the standard flow. If security is disabled, secrets are retreived from the configuration provider from a special section called [Writable.InsecureSecrets]
. If not found there, the configuration provider is searched for credentials stored in the legacy [Databases.Primary]
section using the Username
and Password
keys.
Each user application has its own exclusive-use area of the secret store that is accessed via GetSecrets()
. If security is enabled, secret requests are passed along to go-mod-secrets
using an application-specific access token. If security is disabled, secret requets are made to the configuration provider from the [Writable.InsecureSecrets]
section. There is no fallback configuration location.
As user-managed secrets have no framework support for initialization, a special StoreSecrets()
method is made available to the application for the application to initialize its own secrets. This method is only available in security-enabled mode.
No changes to user-managed secrets are being proposed in this ADR.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#decision","title":"Decision","text":""},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#creation-of-secrets","title":"Creation of secrets","text":"Management of hardware-bound secrets is platform-specific and out-of-scope for the EdgeX framework. EdgeX open source will contain only the necessary hooks to integrate platform-specific functionality.
For software-managed secrets, the system of reference of secrets in EdgeX is the EdgeX secret store. The EdgeX secret store provides for encryption of secrets at rest. This term means that if a secret is replicated, the EdgeX secret store is the authoritative source of truth of the secret. Whenever possible, the EdgeX secret store should also be the record of origin of a secret as well. This means creating secrets inside of the EdgeX secret store is preferable to importing an externally-created secret into the secret store. This can often be done for framework-managed secrets, but not possible for user-managed secrets.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#choosing-between-alternative-forms-of-secrets","title":"Choosing between alternative forms of secrets","text":"When given a choice between plain-text secrets and cryptographic keys, cryptographic keys should be preferred.
An example situation would be the introduction of an MQTT message broker. A broker may support both TLS client authentication as well as username/password authentication. In such a situation, TLS client authentication would be preferred:
TLS client authentication should not be used unless there is a capability to revoke a compromised certificate, such as by replacing the certificate authority, or providing a certificate revokation list to the server. If certificate revokation is not supported, plain-text secrets (such as username/password) should be used instead, as they are typically easier to revoke.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#distribution-and-consumption-of-secrets","title":"Distribution and consumption of secrets","text":""},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#prohibited-practices","title":"Prohibited practices","text":"Use of hard-coded secrets is an instance of CWE-798: Use of hard-coded credentials and is not allowed. A hard-coded secret is a secret that is the same across multiple EdgeX instances. Hard-coded secrets make devices susceptible to BORE (break-once-run-everywhere) attacks, where collections of machines can compromised by a single replicated secret. Specific cases where this is likely to come up are:
EdgeX is an open-source project. Any secret that is present in an EdgeX repository is public to the world, and therefore not a secret, by definition. Configuration files, such as .toml files, .json files, .yaml files (including docker-compose.yml
) are specific instances of this practice.
Binaries are usually not protected against confidentiality threats, and binaries can be easily reverse-engineered to find any secrets therein. Binaries included compile executables as well as Docker images.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#recommended-practices","title":"Recommended practices","text":"This approach is only possible for components that have native support for Hashicorp Vault. This includes any EdgeX service that links to go-mod-secrets.
For example, if secretClient is an instance of the go-mod-secrets secret store client:
secrets, err := secretClient.GetSecrets(\"myservice\", \"username\", \"password\")\n
The above code will retrieve the username
and password
properties of the myservice
secret.
Environment variables are part of a process' environment block and are mapped into a process' memory. In this scenario, an intermediary makes a connection to the secret store to fetch a secret, store it into an environment variable, and then launches a target executable, thereby passing the secret in-memory to the target process.
Existing examples of this functionality include vaultenv, envconsul, or env-aws-params. These tools authenticate to a remote network service, inject secrets into the process environment, and then exec's a replacment process that inherits the secret-enriched enviornment block.
There are a few potential risks with this approach:
Environment-variable-sniffing malware (introduced by compromised 3rd party libaries) is a proven attack method.
Dynamic injection of secret into container-scoped tmpfs
volume
An example of this approach is consul-template. This approach is useful when a secret is required to be in a configuration file and cannot be passed via an environment variable or directly consumed from a secret store.
This option is the most widely supported secret distribution mechanism by container orchestrators.
EdgeX supports runtime environments such as standard Docker and snaps that have no built-in secret management features.
Generic Docker does not have a built-in secrets mechanism. Manual configuration of a SECRETSLOC should utilize either a host file file system path or a Docker volume.
Snaps also do not have a built-in secrets mechanism. The options for SECRETSLOC are limited to designated snap-writable directories.
For comparison:
Docker Swarm: Swarm swarm mode is not officially supported by the EdgeX project. Docker Swarm secrets are shared via the /run/secrets
volume, which is a Linux tmpfs
volume created on the host and shared with the container. For an example of Docker Swarm secrets, see the docker-compose secrets stanza. Secrets distributed in this manner become part of the RaftDB, and thus it becomes necessary to enable swarm autolock mode, which prevents the Raft database encryption key from being stored plaintext on disk. Swarm secrets have an additional limitation in that they are not mutable at runtime.
Kubernetes: Kubernetes is not officially supported by the EdgeX project. Kubernetes also supports the secrets volume approach, though the secrets volume can be mounted anywhere in the container namespace. For an example of Kubernetes secrets volumes, see the Kubernetes secrets documentation. Secrets distributed in this manner become part of the etcd
database, and thus it becomes necessary to specify a KMS provider for data encryption to prevent etcd
from storing plaintext versions of secrets.
As the existing implementation is not fully-compliant with this ADR, significant scope will be added to current and future EdgeX releases in order to bring the project into compliance.
List of needed improvements:
security-secrets-setup
utility.All: Investigate hardware protection of cached Consul and Vault PKI secret keys. (Vault cannot unseal its own TLS certificate.)
Special case: Bring-your-own external Kong certificate and key
The Kong external certificate and key is already stored in Vault, however, additional metadata is needed to signal whether these are auto-generated or manually-installed. A manually-installed certificate and key would not be overwritten by the framework bringup logic. Installing a custom certificate and key can then be implemented by overwriting the system-generated ones and setting a flag indicating that they were manually-installed.
Secret store master password
All: Enable hooks for hardware protection of secret store master password.
Secret store per-service authentication tokens
No changes required.
Postgres superuser password
Cache in Vault and inject into Kong using environment variable injection.
MongoDB service account passwords
No changes required.
Redis(v5) authentication password
No changes on client side.
Redis(v6) passwords (v6 adds multiple user support)
No changes on client side (each service accesses its own credential)
Kong authentication tokens
** Approved **
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#context","title":"Context","text":"Docker-compose, the tool used by EdgeX to manage its Docker-based stack, lags in its support for initialization logic.
Docker-compose v2.x used to have a depends_on / condition
directive that would test a service's HEALTHCHECK and block startup until the service was \"healthy\". Unfortunately, this feature was removed in 3.x docker-compose. (This feature is also unsuppported in swarm mode as well.)
Snaps have an explicit install phase and Kubernetes PODs have optional init containers. In other frameworks, initialization is allowed to run to completion prior to application components being started in production mode. This functionality does not exist in Docker nor docker-compose.
The current lack of an initialization phase is a blocking issue for implementing microservice communication security, as critical EdgeX core components that are involved with microservice communication (specifically Consul) are being brought up in an insecure configuration. (Consul's insecure configuration is will be addressed in a separate ADR.)
Activities that are best done in the initialization phase include the following:
Workarounds when an installation phase is not present include:
EdgeX does not have a manual installation flow, and uses a combination of the last three approaches.
The objective of this ADR is to define a framework for Docker-based initialization logic in EdgeX. This will enable the removal of certain hard-coded secrets in EdgeX and enable certain components (such as Consul) to be started in a secure configuration. These improvement are necessary pre-requisites to implementing microservice communication security.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#history","title":"History","text":"In previous releases, container startup sequencing has been primarily been driven by Consul service health checks backed healthcheck endpoints of particular services or by sentinel files placed in the file system when certain intialization milestones are reached.
The implementation has been plagued by several issues:
Sentinel files are not cleaned up if the framework fails or is shut down. Invalid state left over from previous instantiations of the framework causes difficult-to-resolve race conditions. (Implementation of this ADR will try to remove as many as possible, focusing on those that are used to gate startup. Some use of sentinel files may still be required to indicate completion of initialization steps so that they are not re-done if there is no API-based mechanism to determine if such initialization has been completed.)
Consul health checks are reported in a difficult-to-parse JSON structure, which has led to the creation of specialized tools that are insensitive to the libc implementations used by different container images.
Consul is being used not only for service health, but for service location and configuration as well. The requirement to synchronize framework startup for the purpose of securely initializing Consul means that a non-Consul mechanism must be used to stage-gate EdgeX initialization.
This last point is the primary motivator of this ADR.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#decision","title":"Decision","text":""},{"location":"design/adr/security/0009-Secure-Bootstrapping/#stage-gate-mechanism","title":"Stage-gate mechanism","text":"The stage-gate mechanism must work in the following environments:
Startup sequencing will be driven by two primary mechanisms:
Use of entrypoint scripts to:
Block on stage-gate and service dependencies
The bootstrap container will inject entrypoint scripts into the other containers in the case where EdgeX is directly consuming an upstream container. If a container's entrypoint script has not yet been injected, the container will fail to start and Docker will automatically retry it.
The use of TCP sockets for startup sequencing is common in Docker environments. Due to its popularity, there are several existing tools for this, including wait-for-it, dockerize, and wait-for. The TCP mechanism is portable across platforms and will work in distributed multi-node scenarios.
At least three new ports will be added to EdgeX for sequencing purposes:
bootstrap
port. This port will be opened once first-time initialization has been completed.
tokens_ready
port. This port signals that secret-store tokens have been provisioned and are valid.
ready_to_run
port. This port will be opened once stateful services have completed initialization and it is safe for the majority of EdgeX core services to start.
The stateless EdgeX services should block on ready_to_run
port.
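As an illustration of the stage-gate pattern, the following minimal Go sketch shows how a stateless service could block on the ready_to_run port before continuing its own startup. The host name, port number, and timeout are assumptions, not the actual security-bootstrapper implementation.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForGate dials host:port until a listener accepts the connection,
// treating an open socket as the "go" signal.
func waitForGate(host string, port int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	addr := fmt.Sprintf("%s:%d", host, port)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // gate is open; safe to proceed
		}
		time.Sleep(time.Second) // gate not open yet; retry
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// Host name and port are illustrative; real values would come from configuration.
	if err := waitForGate("edgex-security-bootstrapper", 54329, 5*time.Minute); err != nil {
		panic(err)
	}
	// ...start the service...
}
```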
The following diagram shows the \"as-is\" startup flow.
Several components are being removed by activity unrelated to this ADR; these proposed edits are reflected here to reduce clutter in the TO-BE diagram:
* secrets-setup is being eliminated through a separate ADR that removes TLS for single-node usage.
* kong-migrations is being combined with the kong service via an entrypoint script.
* bootstrap-redis will be incorporated into the Redis entrypoint script so that the Redis password is set before Redis starts, eliminating the window in which Redis runs without a password.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#to-be-startup-flow","title":"\"To-be\" startup flow","text":"The following diagram shows the \"to-be\" startup flow. Note that the bootstrap flows are always processed, but can be short-circuited.
Another difference to note in the \"to-be\" diagram is that the Vault dependency on Consul is reversed in order to provide better security.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#new-bootstraprtr-container","title":"New Bootstrap/RTR container","text":"The purpose of this new container is to:
bootstrap
semaphore
ready_to_run
semaphore (these are the stateful components, such as databases; this step also blocks until secret store tokens have been provisioned)
ready_to_run
semaphore
This ADR is expected to yield the following benefits after completion of the related engineering tasks:
Introduction of a new container into the startup flow (but other containers are eliminated or combined).
Expanded scope and responsibility of entrypoint scripts, which must not only block component startup, but now must also configure a component for secure operation.
In this scenario, instead of a service waiting on a TCP-socket semaphore created by another service, services would open a socket and wait for a coordinator/controller to issue a \"go\" command.
This solution was not chosen for several reasons:
In this scenario, the system management agent is responsible for bringing up the EdgeX framework. Since the system management agent has access to the Docker socket, it has the ability to start services in a prescribed order, and as a management agent, has knowledge about the desired state of the framework.
This solution was not chosen for several reasons:
This alternative would create a mega-install container that has locally installed versions of critical components needed for bootstrapping, such as Vault, Consul, PostgreSQL, and others.
A sequential script would start each component in turn, initializing each to run in a secure configuration, and then shut them all down again.
The same stage-gate mechanism would be used to block startup of these same components, but Docker would start them in production configuration.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#manual-secret-provisioning","title":"Manual secret provisioning","text":"A typical cloud-based microservice architecture typically has a manual provisioning step. This step would include activities such as configuring Vault, installing a database schema, setting up database service account passwords, and seeding initial secrets such as PKI private keys that have been generated offline (possibly requiring several days of lead time). A cloud team may have weeks or months to prepare for this event, and it might take the greater part of a day.
In contrast, EdgeX up to this point has been a \"turnkey\" middleware framework: it can be deployed with the same ease as an application, such as via a docker-compose file, or via a snap install. This means that most of the secret provisioning must be automated and the provisioning logic must be built into the framework in some way. The proposals presented in this ADR are compatible with the continuance of this functionality.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#references","title":"References","text":"** Approved **
"},{"location":"design/adr/security/0015-in-cluster-tls/#context","title":"Context","text":"This ADR seeks to define the EdgeX direction on using encryption to secure \"in-cluster\" EdgeX communications, that is, internal microservice-to-microservice communication.
This ADR will seek to clarify the EdgeX direction in several aspects with regard to:
This ADR will be used to triage EdgeX feature requests in this space.
"},{"location":"design/adr/security/0015-in-cluster-tls/#background","title":"Background","text":""},{"location":"design/adr/security/0015-in-cluster-tls/#why-encrypt","title":"Why encrypt?","text":"Why consider encryption in the first place? Simple. Encryption helps with the following problems:
Client authentication of servers. The client knows that it is talking to the correct server. This is typically achieved using TLS server certificates that the client checks against a trusted root certificate authority. Since the client is not in charge of network routing, TLS server authentication provides a good assurance that the requests are being routed to the correct server.
Server authentication of clients. The server knows the identity of the client that has connected to it. There are a variety of mechanisms to achieve this, such as usernames and passwords, tokens, claims, et cetera, but the mechanism under consideration by this ADR is TLS client authentication using TLS client certificates.
Confidentiality of messages exchanged between services. Confidentiality is needed to protect authentication data flowing between communicating microservices as well as to protect the message payloads if they contain nonpublic data. TLS provides communication channel confidentiality.
Integrity of messages exchanged between services. Integrity is needed to ensure that messages between communicating microservices are not maliciously altered, such as inserting or deleting data in the middle of the exchange. TLS provides communication channel integrity.
A microservice architecture normally strives for all of the above protections.
Besides TLS, there are other mechanisms that can be used to provide some of the above properties. For example, IPSec tunnels provide confidentity, integrity, and authentication of the hosts (network-level protection). SSH tunnels provide confidentiality, integrity, and authentication of the tunnel endpoints (also network-level protection). TLS, however, is preferred, because it operates in-process at the application level and provides better point-to-point security.
"},{"location":"design/adr/security/0015-in-cluster-tls/#why-to-not-encrypt","title":"Why to not encrypt?","text":"In the case of TLS communications, microservices depend on an asymmetric private key to prove their identity. To be of value, this private key must be kept secret. Applications typically depend on process-level isolation and/or file system protections for the private key. Moreover, interprocess communication using sockets is mediated by the operating system kernel. An attacker running at the privilege of the operating system has the ability to compromise TLS protections, such as by substituting a private key or certificate authority of their choice, accessing the unencrypted data in process memory, or intercepting the network communications that flow through the kernel. Therefore, within a single host, TLS protections may slow down an attacker, but are not likely to stop them. Additionally, use of TLS requires management of additional security assets in the form of TLS private keys.
Microservice communcation across hosts, however, is vulnerable to intereception, and must be protected via some mechanism such as, but not limited to: IPSec or SSH tunnels, encrypted overlay networks, service mesh middlewares, or application-level TLS.
Another reason to not encrypt is that TLS adds overhead to microservice communication in the form of additional network around-trips when opening connections and performing cryptographic public key and symmetric key operations.
"},{"location":"design/adr/security/0015-in-cluster-tls/#decision","title":"Decision","text":"At this time, EdgeX is primarily a single-node IoT application framework. Should this position change, this ADR should be revisited. Based on the single-node assumption:
This ADR if approved would close the following issues as will-not-fix.
It would also close https://github.com/edgexfoundry/edgex-go/issues/1925 as there is no current need for TLS as a mutual authentication strategy.
"},{"location":"design/adr/security/0015-in-cluster-tls/#alternatives","title":"Alternatives","text":""},{"location":"design/adr/security/0015-in-cluster-tls/#encrypted-overlay-networks","title":"Encrypted overlay networks","text":"Encrypted overlay networks provide varying protection based on the product used. Some can only encrypt data, such as an IPsec tunnel. Some can encrypt and provide for network microsegmentation, such as Docker Swarm networks with encryption enabled. Some can encrypt and enforce network policy such as restrictions on ingress traffic or restrictions on egress traffic.
"},{"location":"design/adr/security/0015-in-cluster-tls/#service-mesh-middleware","title":"Service mesh middleware","text":"Service mesh middleware is an alternative that should be investigated if EdgeX decides to fully support a Kubernetes-based deployment using distributed Kubernetes pods.
A service mesh typically achieves most of the security objectives of security microservice commuication by intercepting microservice communications and imposing a configuration-driven policy that typically includes confidentiality and integrity protection.
These middlewares typically rely on the Kubernetes pod construct and are difficult to support for non-Kubernetes deployments.
"},{"location":"design/adr/security/0015-in-cluster-tls/#edgex-public-key-infrastructure","title":"EdgeX public key infrastructure","text":"An EdgeX public key infrastructure that is natively supported by the architecture should be considered if EdgeX decides to support an out-of-box distributed deployment on non-Kubernetes platforms.
Native support of TLS requires a significant amount of glue logic, and exceeds the availble resources in the security working group to implement this strategy. The following text outlines a proposed strategy for supporting native TLS in the EdgeX framework:
EdgeX will use Hashicorp Vault to secure the EdgeX PKI, through the use of the Vault PKI secrets engine. Vault will be configured with a root CA at initialization time, and a Vault-based sub-CA for dynamic generation of TLS leaf certificates. The root CA will be restricted to be used only by the Vault root token.
EdgeX microservices that are based on third-party containers require special support unless they can talk natively to Vault for their secrets. Certain tools, such as those mentioned in the \"Creation and Distribution of Secrets\" ADR (envconsul
, consul-template
, and others) can be used to facilitiate third-party container integration. These services are:
Consul: Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container.
Vault: As Vault's database is encrypted, Vault cannot natively bootstrap its own TLS certificate. Requires TLS certificate to be injected into container and its location set in a configuration file.
PostgreSQL: Requires TLS certificate to be injected into '$PGDATA' (default: /var/lib/postgresql/data
) which is where the writable database files are kept.
Kong (admin): Requires environment variable to be set to secure admin port with TLS, with a TLS certificates injected into the container.
Kong (external): Requires a bring-your-own (BYO) external certificate, or as a fallback, a default one should be generated using a configurable external hostname. (The Kong ACME plugin could possibly be used to automate this process.)
Redis (v6): Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container.
Mosquitto: Requires TLS certificate set by configuration file, with a TLS certificate injected into the container.
Additionally, every EdgeX microservice consumer will require access to the root CA for certificate verification purposes, and every EdgeX microservice server will need a TLS leaf certificate and private key.
Note that Vault bootstrapping its own PKI is tricky and not natively supported by Vault. Expect that a non-trivial amount of effort will need to be put into starting Vault in non-secure mode to create the CA hierarchy and a TLS certificate for Vault itself, and then restarting Vault in a TLS-enabled configuration. Periodic certificate rotation is a non-trivial challenge as well.
The Vault bootstrapping flow would look something like this:
There are no current plans for mutual auth TLS. Supporting mutual auth TLS would require creation of a separate PKI hierarchy for generation of TLS client certificates and glue logic to persist the certificates in the service's key-value secret store and provide them when connecting to other EdgeX services.
"},{"location":"design/adr/security/0016-docker-image-guidelines/","title":"Docker image guidelines","text":""},{"location":"design/adr/security/0016-docker-image-guidelines/#status","title":"Status","text":"Approved
"},{"location":"design/adr/security/0016-docker-image-guidelines/#context","title":"Context","text":"When deploying the EdgeX Docker containers some security measures are recommended to ensure the integrity of the software stack.
"},{"location":"design/adr/security/0016-docker-image-guidelines/#decision","title":"Decision","text":"When deploying Docker images, the following flags should be set for heightened security.
no-new-privileges
option in their Docker compose file (example below). More details about this flag can be found here. This follows Rule #4 for Docker security found here.security_opt:\n - \"no-new-privileges:true\"\n
NOTE: Alternatively an AppArmor security profile can be used to isolate the docker container. More details about apparmor profiles can be found here
security_opt: [ \"apparmor:unconfined\" ]\n
--user=<userid>
or -u=<userid>
option in their Docker compose file (example below). More details about this flag can be found here. This follows Rule #2 for Docker security found here.services:\n device-virtual:\n image: ${REPOSITORY}/docker-device-virtual-go${ARCH}:${DEVICE_VIRTUAL_VERSION}\nuser: $CONTAINER-PORT:$CONTAINER-PORT # user option using an unprivileged user\n ports:\n - \"127.0.0.1:49990:49990\"\ncontainer_name: edgex-device-virtual\n hostname: edgex-device-virtual\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-virtual\n depends_on:\n - consul\n - data\n - metadata\n
NOTE: exception Sometimes containers will require root access to perform their fuctions. For example the System Management Agent requires root access to control other Docker containers. In this case you would allow it run as default root user.
resource limits
should be set for each container. More details about resource limits
can be found here. This follows Rule #7 for Docker security found here.services:\n device-virtual:\n image: ${REPOSITORY}/docker-device-virtual-go${ARCH}:${DEVICE_VIRTUAL_VERSION}\nuser: 4000:4000 # user option using an unprivileged user\n ports:\n - \"127.0.0.1:49990:49990\"\ncontainer_name: edgex-device-virtual\n hostname: edgex-device-virtual\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-virtual\n depends_on:\n - consul\n - data\n - metadata\n deploy: # Deployment resource limits\n resources:\n limits:\n cpus: '0.001'\nmemory: 50M\n reservations:\n cpus: '0.0001'\nmemory: 20M\n
--read_only
flag should be set. More details about this flag can be found here. This follows Rule #8 for Docker security found here. device-rest:\n image: ${REPOSITORY}/docker-device-rest-go${ARCH}:${DEVICE_REST_VERSION}\nports:\n - \"127.0.0.1:49986:49986\"\ncontainer_name: edgex-device-rest\n hostname: edgex-device-rest\n read_only: true # read_only option\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-rest\n depends_on:\n - data\n - command\n
NOTE: exception If a container is required to have write permission to function, then this flag will not work. For example, the vault needs to run setcap in order to lock pages in memory. In this case the --read_only
flag will not be used.
NOTE: Volumes If writing persistent data is required then a volume can be used. A volume can be attached to the container in the following way
device-rest:\n image: ${REPOSITORY}/docker-device-rest-go${ARCH}:${DEVICE_REST_VERSION}\nports:\n - \"127.0.0.1:49986:49986\"\ncontainer_name: edgex-device-rest\n hostname: edgex-device-rest\n read_only: true # read_only option\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-rest\n depends_on:\n - data\n - command\n volumes:\n - consul-config:/consul/config:z\n
NOTE: alternatives If writing non-persistent data is required (ex. a config file) then a temporary filesystem mount can be used to accomplish this goal while still enforcing --read_only
. Mounting a tmpfs
in Docker gives the container a temporary location in the host systems memory to modify files. This location will be removed once the container is stopped. More details about tmpfs
can be found here
For additional Docker security rules and guidelines, please check the Docker security cheatsheet.
"},{"location":"design/adr/security/0016-docker-image-guidelines/#consequences","title":"Consequences","text":"Create a more secure Docker environment
"},{"location":"design/adr/security/0016-docker-image-guidelines/#references","title":"References","text":"** Approved **
"},{"location":"design/adr/security/0017-consul-security/#context","title":"Context","text":"This ADR defines the motiviation and approach used to secure access to the Consul component in the EdgeX architecture for security-enabled configurations only. Non-secure configuations continue to use Consul in anonymous read-write mode. As this Consul security feature requires Vault to function, if EDGEX_SECURITY_SECRET_STORE=false
and Vault is not present, the legacy behavior (unauthenticated Consul access) will be preserved.
Consul provides several services for the EdgeX architecture:
Use of the services provided by Consul is optional on a service-by-service basis. Use of the registry is controlled by the -r
or --registry
flag provided to an EdgeX service. Use of mutable configuration data is controlled by the -cp
or --configProvider
flag provided to an EdgeX service. When Consul is enabled as a configuration provider, the configuration.toml
is parsed into individual settings and seeded into the Consul key-value store on the first start of a service. Configuration reads and writes are then done to Consul if it is specified as the configuration provider, otherwise the static configuration.toml
is used. Writes to the [Writable]
section in Consul trigger per-service callbacks notifying the application of the changed data. Updates to non-[Writable]
sections are parsed only once at startup and require a service restart to take effect.
Since configuration data can affect the runtime behavior of services, compensating controls must be introduced in order to mitigate the risks introduced by moving configuration from a static file into to an HTTP-accessible service with mutable state.
The current practice is that Consul is exposed via unencrypted HTTP in anonymous read/write mode to all processes and EdgeX services running on the host machine.
"},{"location":"design/adr/security/0017-consul-security/#decision","title":"Decision","text":"Consul will be configured with access control list (ACL) functionality enabled, and each EdgeX service will utilize a Consul access token to authenticate to Consul. Consul access tokens will be requested from the Vault Consul secrets engine (to avoid introducing additional bootstrapping secrets).
DNS will be disabled via configuration as it is not used in EdgeX.
Consul Access Via API Gateway
In security enabled EdgeX, the API gateway will be configured to proxy the Consul service over the /consul
path, using the request-transformer
plugin to add the global management token to incoming requests via the X-Consul-Token
HTTP header. Thus, ability to access remote APIs also grants the ability to modify Consul's key-value store. At this time, service access via API gateway is all-or-nothing, but this does not preclude future fine-grained authorization at the API gateway layer to specific microservices, including Consul.
Proxying of the Consul UI is problematic and there is no current solution, which would involve proper balacing of the externally-visible URL, the path-stripping effect (or not) of the proxy, Consul's ui_content_path
, and UI authentication (the request-transfomer
does not work on the UI).
Full implementation of this ADR will deny Consul access to all existing Consul clients. To limit the impacts of the change, deployment will take place in phases. Phase 1 is basic plumbing work and leaves Consul configured in a permissive mode and thus is not a breaking change. Phase 2 will affect the APIs of Go modules and will change the default policy to \"deny\", both of which are breaking changes. Phase 3 is a refinement of access control; presuming the existing services are \"well-behaved\", that is, they do not access configuration of other services, Phase 3 will not introduce any breaking changes on top of the Phase 2 breaking changes.
"},{"location":"design/adr/security/0017-consul-security/#phase-1-completed-in-ireland-release","title":"Phase 1 (completed in Ireland release)","text":"ready_to_run
signal.)/acl/token/self
).Migtigations:
** Approved ** via TSC vote on 2021-12-14
"},{"location":"design/adr/security/0020-spiffe/#context","title":"Context","text":"In security-enabled EdgeX, there is a component called security-secretstore-setup
that seeds authentication tokens for Hashicorp Vault--EdgeX's secret store--into directories reserved for each EdgeX microservice. The implementation is provided by a sub-component, security-file-token-provider
, that works off of a static configuration file (token-config.json
) that configures known EdgeX services, and an environment variable that lists additional services that require tokens. The token provider creates a unique token for each service and attaches a custom policy to each token that limits token access in a manner that paritions the secret store's namespace.
The current solution has some problematic aspects:
These tokens have an initial TTL of one hour (1h) and become invalid if not used and renewed within that time period. It is not possible to delay the start of EdgeX services until a later time (that is, greater than the default token TTL), as they will not be able to connect to the EdgeX secret store to obtain required secrets.
Transmission of the authentication token requires one or more shared file systems between the service and security-secretstore-setup
. In the Docker implementation, this shared file system is constructed by bind-mounting a host-based directory to multiple containers. The snap implementation is similar, utilizing a content-interface between snaps. In a Kubernetes implementation limited to a single worker node, a CSI storage driver that provided RWO volumes would suffice.
The current approach cannot support distributed services without an underlying distributed file system to distribute tokens, such as GlusterFS, running across the participating nodes. For Kubernetes, the requirement would be a remote shared file system persistent volume (RWX volume).
EdgeX will create a new service, security-spiffe-token-provider
. This service will be a mutual-auth TLS service that exchanges a SPIFFE X.509 SVID for a secret store token.
An SPIFFE identifier is a URI of the format spiffe://trust domain/workload identifier
. For example: spiffe://edgexfoundry.org/service/core-data
. A SPIFFE Verifiable Identity Document (SVID) is a cryptographically-signed version of a SPIFFE ID, typically a X.509 certificate with the SPIFFE ID encoded into the subjectAltName
certificate extension, or a JSON web token (encoded into the sub
claim). The EdgeX implementation will use a naming convention on the path component, such as the above, in order to be able to extract the requesting service from the SPIFFE ID.
The SPIFFE token provider will take three parameters:
An X.509 SVID used in mutual-auth TLS for the token provider and the service to cross-authenticate.
The reqested service key. If blank, the service key will default to the service name encoded in the SVID. If the service name follows the pattern device-(name)
, then the service key must follow the format device-(name)
or device-name-*
. If the service name is app-service-configurable
, then the service key must follow the format app-*
. (This is an accomodation for the Unix workload attester not being able to distingish workloads that are launched using the same executable binary. Custom app services that support multiple instances won't be supported unless they name the executable the same as the standard app service binary or modify this logic.)
A list of \"known secret\" identifiers that will allow new services to request database passwords or other \"known secrets\" to be seeded into their service's partition in the secret store.
The go-mod-secrets
module will be modified to enable a new mode whereby a secret store token is obtained by:
Obtaining an X.509 SVID by contacting a local SPIFFE agent's workload API on a local Unix domain socket.
Connecting to the security-spiffe-token-provider
service using the X.509 SVID to request a secret store token.
The SPIFFE authentication mode will be an opt-in feature.
The SPIFFE implementation will be user-replaceable; specifically, the workload API socket will be configurable, as well as the parsing of the SPIFFE ID. Reasons for doing so might include: changing the name of the trust domain in the SPIFFE ID, or moving the SPIFFE server out of the edge.
This feature is estimated to be a \"large\" or \"extra large\" effort that could be implemented in a single release cycle.
"},{"location":"design/adr/security/0020-spiffe/#technical-architecture","title":"Technical Architecture","text":"The work flow is as follows:
token generate
and shared to the EdgeX secrets volume.security-secret-store-setup
initializes it and creates an admin token for security-spiffe-token-provider
to use.security-spiffe-token-provider
service is started. It obtains an SVID from the SIFFE agent and uses it as a TLS server certificate.security-spiffe-token-provider
service. The EdgeX microservice uses the trust bundle as a server CA to verify the TLS certificate of the remote service.security-spiffe-token-provider
verifies the SVID using the trust bundle as client CA to verify the client, extracts the service key, and issues an appropriate Vault service token.The server uses a workload registration Unix domain socket that allows authorization entries to be added to the authorization database. This socket is protected by Unix file system permissions to control who is allowed to add entries to the database.
In this proposal, a subcommand will be added to the EdgeX secrets-config
utility to simplify the process of registering new services that uses the registration socket above.
The agent uses a workload attesation Unix domain socket that is open to the world. This socket is shared via a snap content-interface of via a shared host bind mount for Docker. There is one agent per node.
"},{"location":"design/adr/security/0020-spiffe/#trust-bundle","title":"Trust Bundle","text":"SVID's must be traceable back to a known issuing authority (certificate authority) to determine their validity.
In the proposed implementation, we will generate a CA on first boot and store it persistently. This root CA will be distributed as the trust bundle. The SPIFFE server will then generate a rotating sub-CA for issuing SVIDs, and the issued SVID will include both the leaf certificate and the intermediate certificate.
This implementation differs from the default implementation, which uses a transient CA that is rotated periodically and that keeps a log of past CA's. The default implementation is not suitable because only the Kubernetes reference implementation of the SPIRE server has a notification hook that is invoked when the CA is rotated. CA rotation would just result in issuing of SVIDs that are not trusted by microservices that received only the initial CA.
The SPIFFE implementation is replaceable. The user is free to replace this default implementation with potentally a cloud-based SPIFFE server and a cloud-based CA.
"},{"location":"design/adr/security/0020-spiffe/#workload-authorization","title":"Workload Authorization","text":"Workloads are authenticated by connecting to the spiffe-agent
via a Unix domain socket, which is capable of identifying the process ID of the remote client. The process ID is fed into one of following workload attesters, which gather additional metadata about the caller:
docker:label:com.docker.compose.service:edgex-core-data
where the service label is the key value in the services
section of the docker-compose.yml
. It is also possible to refer to labels built-in to the container image.Once authenticated, the metadata is sent to the SPIFFE server to authorize the workload.
Workloads are authorized via an authorization database connected to the SPIFFE server. Supported databases are SQLite (default), PostgreSQL, and MySQL. Due to startup ordering issues, SQLite will be used. (Disclaimer: SQlite, according for the Turtle book is intended for development and test only. We will use SQlite anyway because because Redis is not supported.)
The only service that needs to be seeded to the database as this time is security-spiffe-token-provier
. For example:
spire-server entry create -parentID \"${local_agent_svid}\" -dns edgex-spiffe-token-provider -spiffeID \"${svid_service_base}/edgex-spiffe-token-provider\" -selector \"docker:label:com.docker.compose.service:edgex-spiffe-token-provider\"\n
The above command associates a SPIFFE ID with a selector, in this case, a container label, and configures a DNS subjectAltName in the X.509 certificate for server-side TLS.
A snap-based installation of EdgeX would use a unix:path
or unix:sha256
selector instead.
There are two extension mechanims for authorization additional workloads:
spire-server entry create
commands for each additional service.edgex-secrets-config
utility (that will wrap the spire-server entry create
command) for ad-hoc authorization of new services.The authorization database is persistent across reboots.
"},{"location":"design/adr/security/0020-spiffe/#consequences","title":"Consequences","text":"This proposal will require addition of several new, optional, EdgeX microservices:
security-spiffe-token-provider
, running on the main nodespiffe-agent
, running on the main node and each remote nodespiffe-server
, running on the main nodespiffe-config
, a one-shot service running on the main nodeNote that like Vault, the recommended SPIFFE configuration is to run the SPIFFE server on a dedicated node. If this is a concern, bring your own SPIFFE implementation.
Minor changes will be needed to security-secretstore-setup
to preserve the token-creating-token used by security-file-token-provider
so that it can be used by security-spiffe-token-provider
.
The startup flow of the framework will be adjusted as follows:
spiffe-server
spiffe-config
(can be combined with spifee-server
)spiffe-agent
security-spiffe-token-provider
There is no direct dependency between spiffe-server
and any other microservice. security-spiffe-token-provider
requires an SVID from spiffe-agent
and a Vault admin token.
None of these new services will be proxied via the API gateway.
In the future, this mechanism may become the default secret store distribution mechanism, as it eliminates several secrets volumes used to share secrets between security-secretstore-setup
and various EdgeX microservices.
The EdgeX automation will only configure the SPIFEE agent on the main node. Additional nodes can be manually added by the operator by obtaining a join token from the main node and using it to bootstrap a remote node.
SPIFFE/SPIRE has native support for Kubernetes and can distribute the trust bundle via a Kubernetes ConfigMap to more easily enable distributed scenarios, removing a major roadblock to usage of EdgeX in a Kubernetes environment.
"},{"location":"design/adr/security/0020-spiffe/#footprint","title":"Footprint","text":"NOTE: This data is limited by the fact that the pre-built SPIRE reference binaries are compiled with CGO enabled.
"},{"location":"design/adr/security/0020-spiffe/#spire-server","title":"SPIRE Server","text":" 69 MB executable, dynamically linked\n 151 MB inside of a Debian-slim container\n 30 MB memory usage, as container\n
"},{"location":"design/adr/security/0020-spiffe/#spire-agent","title":"SPIRE Agent","text":" 33 MB executable, dynamically linked\n 114 MB inside of a Debian-slim container\n 64 MB memory usage, as container\n
"},{"location":"design/adr/security/0020-spiffe/#spiffe-base-secret-store-token-provider","title":"SPIFFE-base Secret Store Token Provider","text":"The following is the minimum size:
> 6 MB executable (likely much larger)\n > 29 MB memory usage, as container\n
"},{"location":"design/adr/security/0020-spiffe/#limitations","title":"Limitations","text":"The following are known limitations with this proposal:
The capabilities enabled by this solution would only be enabled on Linux platforms. SIFFE/SPIRE Agent is not available for native Windows and pre-built binaries are only avaiable for Linux. (It is unclear as to whether other *nix'es are supported.)
The capabilities enabled by this solution would only be supported for Go-based services. The SPIFFE API's are implemented in gRPC, which is only ported to C#, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, and Ruby. Notably, the C language is not supported, and the only other EdgeX supported language is Go.
That default TTL of an x.509 SVID is one hour. As such, all SVID consumers must be capable of auto-renewal of SVIDs on both the client and server side.
Leave C-SDK device services behind. In this option, C device services would be unable to participate in the delayed-start services architecture.
Fork a grpc-c library. Forking a grpc-c library and rehabilitating it is one option. There is at least one grpc-c library that has been proven to work, but it requires additional features to make it compatible with the SPIRE workload agent. However, the project is extremely large and it is unlikely that EdgeX is big enough to carry the project. Available libraries include:
https://github.com/lixiangyun/grpc-c
This library is several years out-of-date, does not compile on current Linux distributions without some rework, and does not pass per-request metadata tags. Proved to work via manual patching. Not supportable.
https://github.com/Juniper/grpc-c
This library is several years out-of-date and also does not compile on current Linux distributions without some rework. Uses hard-coded Unix domain socket paths. May support per-request metadata tags, but this was not tested. Not supportable.
https://github.com/HewlettPackard/c-spiffe
This library is yet untested. Rather than a gRPC library, this library implements the workload API client directly. Ultimately, this library also wraps the gRPC C++ library, and statically links to it. There is no benefit to the EdgeX project to use this library as we can call the underlying library directly.
Hybrid device services. In this model, device services would always be written in Go, but in the case where linking to a C language library is required, CGO features would be used to invoke native C functions from golang. This option would commit the EdgeX project to a one-time investment to port the existing C device services to the new hybrid model. This option is the best choice if the long-term strategy is to end-of-life the C Device SDK.
Bridge. In this model, the C++ implementation to invoke the SPIFFE/SPIRE workload API would be hidden behind a dynamic shared library with C linkage. This would require minimal change to the existing C SDK. However, the resulting binaries would have be based on GLIBC vs MUSL in order to get dlopen()
support. This will also limit the choice of container base images for containerized services.
Modernize. In this model, the Device SDK would be rewritten either partially or in-full in C++. Under this model, the SPIFFE/SPIRE workload API could be accessed via a community-supported C++ GRPC SDK. There are many implementation options:
A \"C++ compilation-switch\" where the C SDK could be compiled in C-mode or C++-mode with enhanced functionality.
A C++ extension API. The original C SDK would remain as-is, but if compiling with __cplusplus
defined, additional API methods would be exposed. The SDK could thus be composed of a mixture of .c
files with C linkage and .cc
files with C++ linkage. The linker would ultimately determine whether or not the C++ runtime library needed to be linked in.
Native C++ device SDK with legacy C wrapper facade.
Compile existing code in C++ mode, with optional C++ facade.
If one of the following things were to happen, it would push this proposal \"over the edge\" from being an optional opt-in feature to a required standard feature for security:
The \"on-demand\" method of obtaining a secret store token is the default method of obtaining a token for non-core EdgeX services.
The \"on-demand\" method of obtaining a secret store token is the default method for all EdgeX services.
SPIFFE SVID's become the implementation mechanism for microservice-level authentication. (Not in scope for this ADR.)
Keeping these as separate executables clearly separates the on-demand secret store tokens feature as an optional service. It is possible to combine the services, but there would need to be a configuration switch in order to enable the SPIFFE feature. It would also increase the base executable size to include the extra logic.
"},{"location":"design/adr/security/0020-spiffe/#alternatives-regarding-spiffe-ca","title":"Alternatives regarding SPIFFE CA","text":""},{"location":"design/adr/security/0020-spiffe/#transient-ca-option","title":"Transient CA option","text":"The SPIFFE server can be configured with no \"upstream authority\" (certificate authority), and the server will periodically generate a new, transient CA, and keep a bounded history of previous CA's. A rotating trust bundle only practically works in a Kubernetes environment, since a configmap can be updated real-time. For everyone else, we need a static CA that can be pre-distributed to remote nodes. Thus, this solution was not chosen.
"},{"location":"design/adr/security/0020-spiffe/#vault-based-ca-option","title":"Vault-based CA option","text":"The SPIFFE server can be configured to make requests to a Hashicorp Vault PKI secrets engine to generate intermediate CA certificates for signing SVID's. This is an option for future integrations, but is omitted from this proposal due to the jump in implementation complexity and the desire that the current proposal be on add-on feature. The current implementation allows the SPIFFE server and Vault to be started simultaneously. Using a Vault-based CA would require a complex interlocking sequence of steps.
"},{"location":"design/adr/security/0020-spiffe/#references","title":"References","text":"The AS-IS Architecture figure below depicts the current state of microservice communication security prior to EdgeX 3.0, when security is enabled:
As shown in the diagram, many of the foundational services used by EdgeX Foundry have already been secured:
Communication with EdgeX's secret store, as implemented by Hashicorp Vault, is secured over a local HTTP socket with token-based authentication. An access control list limits access to the keyspace of the key value store.
Communication with EdgeX's service registry and configuration provider, as implemented by Hashicorp Consul, is secured over a local HTTP socket with token-based authentication, with the token being mediated by Hashicorp Vault. An access control list limits access to the keyspace of the configuration store.
Communication with EdgeX's default database, Redis, is secured using username/password authentication, with the password stored in Hashicorp Vault. An access control list limits the commands that clients are allowed to issue to the server.
External access to EdgeX microservices has also been secured. EdgeX microservices only bind to local ports, and are only exposed externally through a Kong API gateway. This gateway is configured to use TLS 1.3, using RS256 or ES256 JWT authentication (at the user's discretion). All external requests are filtered at the API gateway. URL rewriting is used to concentrate microservices on a single HTTP-accessible port.
Behind the proxy, it is not possible to verify Kong as the origin of local network traffic because mutual-auth TLS is not supported in the open source version of Kong. Although the Kong JWT plugin will set request headers on the backend request that identify the caller, there is no mechanism by which Kong can prove to a backend service that it was the component that performed the authentication step. Even though the original JWT passes through the proxy, the Kong authentication plugins do not expose token introspection endpoints that the backend service could use to check token validity independently.
The consequence of having an API gateway that performs all microservice authentication is that communication between EdgeX microservices running behind the API gateway are not authenticated in any way. EdgeX microservices are unable to distinguish malicious traffic that has evaded the API gateway from legitimate microservice traffic.
"},{"location":"design/adr/security/0028-authentication/#proposed-design","title":"Proposed Design","text":"This ADR proposes an implementation of the Microservice Authentication UCR that uses a token-based authentication mechanism.
This ADR proposes to relieve the Kong API gateway of its JWT management responsibility, and instead use Hashicorp Vault for this purpose, which is already used as EdgeX's secret store. This change requires minimal modification of existing clients written to perform JWT-based authentication at the Kong gateway: they simply use a Vault-issued JWT instead of a Kong-issued JWT or a self-issued JWT.
This ADR proposes a layered authentication scheme, with the reverse proxy performing an initial check for all external requests, and EdgeX services themselves authenticating all internal and external requests. There are three reasons for the layered approach:
Authentication at the proxy layer provides a choke point and policy enforcement points for incoming requests. By customizing the behavior of the proxy-auth component, it is possible to allow access to some URLs and deny access to other URLs based on arbitrary criteria, such as source IP address, JWT-based claims, or user identity and role mappings.
It means that individual microservices do not immediately need to implement fine-grained authorization to get the same effect as having custom policy enforcement at the proxy.
It provides defense-in-depth against microservice implementation bugs and other technical debt that might otherwise put EdgeX microservices at risk. Getting a known response to /core-data/api/v2/ping
as a result of an anonymous HTTP request would positively identify an EdgeX installation. Similarly, an adopter porting their custom services to EdgeX 3.0 without adding authentication hooks could be vulnerable to outside attacks that might be mitigated by the additional check at the proxy layer.
EdgeX microservices shall utilize Vault to assess JWT validity and an NGINX reverse proxy shall use the ngx_http_auth_request_module to delegate confirmation of JWT validity. TLS termination at the reverse proxy shall be enabled by default so as to be consistent with ADR 0015 - Encryption between microservices.
Behind the proxy, there are two major changes:
Every EdgeX service, when security is enabled, requires a JWT be passed as part of the HTTP request that is validated using Vault's token introspection endpoint, or manually validated based on published signature keys.
Every EdgeX service, when security is enabled, uses a Vault-supplied JWT to authenticate outgoing calls to peer EdgeX services. The original caller's identity may be passed through at the developers' discretion for microservice chaining scenarios.
The new TO-BE architecture is diagrammed in the following figure:
"},{"location":"design/adr/security/0028-authentication/#implementation-pre-requisites","title":"Implementation pre-requisites","text":"This ADR assumes a minor refactoring to the security bootstrapping components use the Vault identity API and one or more authentication engines to issue identity-based Vault tokens instead of raw Vault tokens. Affected services include, go-mod-secrets
(configure identity, issue and validate JWT's), security-secretstore-setup
, security-file-token-provider
, and security-spiffe-token-provider
.
This refactoring results in several benefits:
It de-privileges security-secretstore-setup
's use of Vault, which currently requires Vault \"sudo\" capability to issue raw Vault tokens. (This is a blocking issue for customers that want to bring their own Vault.)
An external user identity could be authenticated by an external service, such as Auth0. Alternatively, username/password or AppRole authentication could be used if an external source of identity is not available. This is viewed as beneficial, as downstream EdgeX deployments are already building their own similar integrations.
An internal service identity could be authenticated by a Kubernetes service account token. This could eliminate the requirement to pre-distribute Vault tokens to services via a shared filesystem volume, simplifying Kubernetes-based deployments of EdgeX.
As an added bonus, Vault supports longer JWT key sizes than the Kong JWT plugin.
Additionally, security-bootstrapper
will need to modified to not block on availability of Postgres before issuing the ready-to-run signal. (This change is already completed.)
The following list of changes is derived from the proof of concept implementation to actually effect the change (besides the prerequisite changes above):
Kong and Postgres is removed from compose files and snaps.
Add an NGINX reverse proxy with using the proxy auth module.
Create a new security-proxy-auth
service to check the incoming JWT for validity. (NGINX will be configured to delegate to this service for authentication checks. NGINX could also delegate to a minimal function like /api/v2/version, but the reason as to why the function was called wouldn't be as clear as having a separate authentication service.)
The security-proxy-setup
container remains, with the binary replaced with a small shell script to create a default TLS certificate and key.
The secrets-config
utility will create new users in Vault instead of Kong, and update TLS configuration for NGINX on disk instead of the Kong API.
Modifications to go-mod-core-contracts
to support an injectable authentication interface to add JWT's to outgoing HTTP requests.
Modifications to go-mod-bootstrap
to realize the go-mod-secrets
changes, create common JWT authentication handlers, and inject JWT authentication to the core-contracts clients.
Modifications to individual EdgeX services to authenticate selected routes (that is, every route except /api/v2/ping
, which remains anonymous).
Modifications to security-bootstrapper
to build an entrypoint script for NGINX and a default NGINX configuration.
Documentation updates.
Token-based authentication is flexible and works in a wide variety of use cases, but does not address issues of network security.
For scenarios where all EdgeX services are running on the same host, or there is an existing solution to network security already in place, such as an encrypted network overlay as might be found in some Kubernetes deployments of EdgeX, the token-based solution offers significant memory and disk savings over the Kong-based solution used in EdgeX releases prior to 3.0.
For scenarios where token-based authentication credentials can be exposed over a network, an authentication solution based on end-to-end encryption would be more appropriate.
"},{"location":"design/adr/security/0028-authentication/#considerations","title":"Considerations","text":""},{"location":"design/adr/security/0028-authentication/#size-and-space-impact-of-kong-postgres-versus-alternatives","title":"Size and Space Impact of Kong + Postgres Versus Alternatives","text":""},{"location":"design/adr/security/0028-authentication/#disk-space","title":"Disk space","text":"A savings of up to ~300 MB in docker images can be expected, depending on specific selection of container images used. (The POC implementation successfully used the smallest NGINX available, alpine-slim.)
Image Tag Image ID Age Size nginx alpine 2bc7edbc3cf2 6 days ago 40.7MB nginx alpine-slim c59097225492 6 days ago 11.5MB nginx latest 3f8a00f137a0 8 days ago 142MB kong 2.8 0affcb95d383 6 days ago 139MB postgres 13.8-alpine 551b13d106b4 4 months ago 213MB edgexfoundry/security-proxy-auth 0.0.0-dev b2ee5c21efba 8 days ago 16.2MBImage data collected on 2023-02-17.
"},{"location":"design/adr/security/0028-authentication/#memory","title":"Memory","text":"A memory savings of up to ~150 MB has been observed in the POC implementation upon initial startup of the framework.
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS cad71e71ab32 edgex-kong 0.03% 109.4MiB / 15.61GiB 0.68% 255kB / 263kB 0B / 69.6kB 2 9ab4de1e5448 edgex-kong-db 0.11% 64.51MiB / 15.61GiB 0.40% 232kB / 183kB 32.2MB / 53.9MB 18 ff1e97c16e55 edgex-nginx 0.00% 4.289MiB / 15.61GiB 0.03% 3.24kB / 248B 0B / 0B 5 42629157e65c edgex-proxy-auth 0.00% 6.258MiB / 15.61GiB 0.04% 22.9kB / 16.2kB 7.3MB / 0B 11"},{"location":"design/adr/security/0028-authentication/#alternative-using-kong-to-mediate-edgex-internal-microservice-interactions","title":"Alternative: Using Kong to Mediate EdgeX Internal Microservice Interactions","text":"One approach that is seen in some microservice architectures is to force all communication between microservices to go through the external API gateway. There are two problems with this approach:
In the typical EdgeX runtime environment, there is no mechanism to block direct microservice-to-microservice communication.
The external address of the API gateway may not be known to internal code, increasing implementation difficulty for the programmer.
Neither the JWT nor OAuth2 plugins offer a token introspection endpoint, though it would be possible to create a fake service that EdgeX microservices could call to validate a bearer token. Using the Kong Admin API to obtain a public key for JWT validation via database dump would be unnecessarily complex. Validation of an opaque OAuth2 token would require direct access to Kong's backend database and is also unnecessarily complex.
"},{"location":"design/adr/security/0028-authentication/#other-related-adrs","title":"Other Related ADRs","text":"Some device protocols allow for devices to be discovered automatically. A Device Service may include a capability for discovering devices and creating the corresponding Device objects within EdgeX. A framework for doing so will be implemented in the Device Service SDKs.
The discovery process will operate as follows:
A boolean configuration value Device/Discovery/Enabled
defaults to false. If this value is set true, and the DS implementation supports discovery, discovery is enabled.
The SDK will respond to POST requests on the the /discovery endpoint. No content is required in the request. This call will return one of the following codes:
In each of the failure cases a meaningful error message should be returned.
In the case where discovery is triggered, the discovery process will run in a new thread or goroutine, so that the REST call may return immediately.
An integer configuration value Device/Discovery/Interval
defaults to zero. If this value is set to a positive value, and discovery is enabled, the discovery process will be triggered at the specified interval (in seconds).
When discovery is triggered, the SDK calls the implementation function provided by the Device Service. This should perform whatever protocol-specific procedure is necessary to find devices, and pass these devices into the SDK by calling the SDK's filtered device addition function.
Note: The implementation should call back for every device found. The SDK is to take responsibility for filtering out devices which have already been added.
The information required for a found device is as follows:
The filtered device addition function will take as an argument a collection of structs containing the above data. An implementation may choose to make one call per discovered device, but implementors are encouraged to batch the devices if practical, as in future EdgeX versions it will be possible for the SDK to create all required new devices in a single call to core-metadata.
Rationale: An alternative design would have the implementation function return the collection of discovered devices to the SDK. Using a callback mechanism instead has the following advantages:
The filter criteria for discovered devices are represented by Provision Watchers. A Provision Watcher contains the following fields:
Identifiers
: A set of name-value pairs against which a new device's ProtocolProperties are matchedBlockingIdentifiers
: A further set of name-value pairs which are also matched against a new device's ProtocolPropertiesProfile
: The name of a DeviceProfile which should be assigned to new devices which pass this ProvisionWatcherAdminState
: The initial Administrative State for new devices which pass this ProvisionWatcherA candidate new device passes a ProvisionWatcher if all of the Identifiers
match, and none of the BlockingIdentifiers
.
For devices with multiple Device.Protocols
, each Device.Protocol
is considered separately. A pass (as described above) on any of the protocols results in the device being added.
The values specified in Identifiers
are regular expressions.
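A minimal Go sketch of how regular-expression matching of Identifiers and BlockingIdentifiers against a device's ProtocolProperties could work. This only illustrates the matching rule described above; whether patterns are anchored to the full value, as done here, is an assumption rather than the SDKs' documented behaviour.

```go
package provision

import "regexp"

// matchesIdentifiers reports whether every identifier pattern matches the
// corresponding protocol property of a candidate device. A missing property
// or a non-matching value means the device does not pass.
func matchesIdentifiers(identifiers map[string]string, props map[string]string) bool {
	for key, pattern := range identifiers {
		value, ok := props[key]
		if !ok {
			return false
		}
		matched, err := regexp.MatchString("^(?:"+pattern+")$", value)
		if err != nil || !matched {
			return false
		}
	}
	return true
}

// passes applies the full rule: all Identifiers match and no BlockingIdentifiers match.
func passes(identifiers, blocking, props map[string]string) bool {
	if !matchesIdentifiers(identifiers, props) {
		return false
	}
	for key, pattern := range blocking {
		if value, ok := props[key]; ok {
			if matched, _ := regexp.MatchString("^(?:"+pattern+")$", value); matched {
				return false
			}
		}
	}
	return true
}
```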
Note: If a discovered Device is manually removed from EdgeX, it will be necessary to adjust the ProvisionWatcher via which it was added, either by making the Identifiers
more specific or by adding BlockingIdentifiers
, otherwise the Device will be re-added the next time Discovery is initiated.
Note: ProvisionWatchers are stored in core-metadata. A facility for managing ProvisionWatchers is needed, eg edgex-cli
could be extended.
This document sets out the required functionality of a Device SDK other than the implementation of its REST API (see ADR 0011) and the Dynamic Discovery mechanism (see Discovery).
This functionality is categorised into three areas - actions required at startup, configuration options to be supported, and support for push-style event generation.
"},{"location":"design/legacy-requirements/device-service/#startup","title":"Startup","text":"When the device service is started, in addition to any actions required to support functionality defined elsewhere, the SDK must:
The core-metadata service maintains an extent of device service registrations so that it may route requests relating to particular devices to the correct device service. The SDK should create (on first run) or update its record appropriately. Device service registrations contain the following fields:
Name - the name of the device service
Description - an optional brief description of the service
Labels - optional string labels
BaseAddress - URL of the base of the service's REST API
The default device service Name
is to be hardcoded into every device service implementation. A suffix may be added to this name at runtime by means of commandline option or environment variable. Service names must be unique in a particular EdgeX instance; the suffix mechanism allows for running multiple instances of a given device service.
The Description
and Labels
are configured in the [Service]
section of the device service configuration.
BaseAddress
may be constructed using the [Service]/Host
and [Service]/Port
entries in the device service configuration.
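For illustration, a small Go sketch of how BaseAddress could be assembled from the [Service]/Host and [Service]/Port values; the URL scheme and exact format used by a given SDK may differ.

```go
package registration

import (
	"fmt"
	"net"
	"strconv"
)

// baseAddress builds the service's REST base URL from the configured Host and Port.
// net.JoinHostPort keeps IPv6 literals correctly bracketed.
func baseAddress(host string, port int) string {
	return fmt.Sprintf("http://%s", net.JoinHostPort(host, strconv.Itoa(port)))
}
```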
During startup the SDK must supply to the implementation that part of the service configuration which is specific to the implementation. This configuration is held in the Driver
section of the configuration file or registry.
The SDK must also supply a logging facility at this stage. This facility should by default emit logs locally (configurable to file or to stdout) but instead should use the optional logging service if the configuration element Logging/EnableRemote
is set true
. Note: the logging service is deprecated and support for it will be removed in EdgeX v2.0
The implementation on receipt of its configuration should perform any necessary initialization of its own. It may return an error in the event of unrecoverable problems; this should cause the service startup itself to fail.
"},{"location":"design/legacy-requirements/device-service/#configuration","title":"Configuration","text":"Configuration should be supported by the SDK, in accordance with ADR 0005
"},{"location":"design/legacy-requirements/device-service/#commandline-processing","title":"Commandline processing","text":"The SDK should handle commandline processing on behalf of the device service. In addition to the common EdgeX service options, the --instance
/ -i
flag should be supported. This specifies a suffix to append to the device service name.
The SDK should also handle environment variables. In addition to the common EdgeX variables, EDGEX_INSTANCE_NAME
should, if set, override the --instance
setting.
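A hedged Go sketch of the name-suffix resolution described above: the --instance/-i flag supplies a suffix, and EDGEX_INSTANCE_NAME, if set, overrides it. The flag wiring and the underscore separator shown here are illustrative assumptions; each SDK has its own option handling.

```go
package startup

import (
	"flag"
	"os"
)

// resolveServiceName appends an instance suffix to the hardcoded default name.
// The environment variable takes precedence over the command-line flag.
func resolveServiceName(defaultName string) string {
	instance := flag.String("instance", "", "suffix to append to the device service name")
	flag.StringVar(instance, "i", *instance, "shorthand for --instance")
	flag.Parse()

	suffix := *instance
	if env := os.Getenv("EDGEX_INSTANCE_NAME"); env != "" {
		suffix = env
	}
	if suffix == "" {
		return defaultName
	}
	// Separator is an assumption made for this sketch.
	return defaultName + "_" + suffix
}
```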
The SDK should use (or for non-Go implementations, re-implement) the standard mechanisms for obtaining configuration from a file or registry.
The configuration parameters to be supported are:
"},{"location":"design/legacy-requirements/device-service/#service-section","title":"Service section","text":"Option Type Notes Host String This is the hostname to use when registering the service in core-metadata. As such it is used by other services to connect to the device service, and therefore must be resolvable by other services in the EdgeX deployment. Port Int Port on which to accept the device service's REST API. The assigned port for experimental / in-development device services is 49999. Timeout Int Time (in milliseconds) to wait between attempts to contact core-data and core-metadata when starting up. ConnectRetries Int Number of times to attempt to contact core-data and core-metadata when starting up. StartupMsg String Message to log on successful startup. CheckInterval String The checking interval to request if registering with Consul. Consul will ping the service at this interval to monitor its liveliness. ServerBindAddr String The interface on which the service's REST server should listen. By default the server is to listen on the interface to which theHost
option resolves. A value of 0.0.0.0
means listen on all available interfaces."},{"location":"design/legacy-requirements/device-service/#clients-section","title":"Clients section","text":"Defines the endpoints for other microservices in an EdgeX system. Not required when using Registry.
"},{"location":"design/legacy-requirements/device-service/#data","title":"Data","text":"Option Type Notes Host String Hostname on which to contact the core-data service. Port Int Port on which to contact the core-data service."},{"location":"design/legacy-requirements/device-service/#metadata","title":"Metadata","text":"Option Type Notes Host String Hostname on which to contact the core-metadata service. Port Int Port on which to contact the core-metadata service."},{"location":"design/legacy-requirements/device-service/#device-section","title":"Device section","text":"Option Type Notes DataTransform Bool For enabling/disabling transformations on data between the device and EdgeX. Defaults to true (enabled). Discovery/Enabled Bool For enabling/disabling device discovery. Defaults to true (enabled). Discovery/Interval Int Time between automatic discovery runs, in seconds. Defaults to zero (do not run discovery automatically). MaxCmdOps Int Defines the maximum number of resource operations that can be sent to the driver in a single command. MaxCmdResultLen Int Maximum string length for command results returned from the driver. UpdateLastConnected Bool If true, update the LastConnected attribute of a device whenever it is successfully accessed (read or write). Defaults to false."},{"location":"design/legacy-requirements/device-service/#logging-section","title":"Logging section","text":"Option Type Notes LogLevel String Sets the logging level. Available settings in order of increasing severity are:TRACE
, DEBUG
, INFO
, WARNING
, ERROR
."},{"location":"design/legacy-requirements/device-service/#driver-section","title":"Driver section","text":"This section is for options specific to the protocol driver. Any configuration specified here will be passed to the driver implementation during initialization.
"},{"location":"design/legacy-requirements/device-service/#push-events","title":"Push Events","text":"The SDK should implement methods for generating Events other than on receipt of device GET requests. The AutoEvent mechanism provides for generating Events at fixed intervals. The asynchronous event queue enables the device service to generate events at arbitrary times, according to implementation-specific logic.
"},{"location":"design/legacy-requirements/device-service/#autoevents","title":"AutoEvents","text":"Each device may have as part of its definition in Metadata a number of AutoEvents
associated with it. An AutoEvent
has the following fields:
The device SDK should schedule device readings from the implementation according to these AutoEvent
defininitions. It should use the same logic as it would if the readings were being requested via REST.
The SDK should provide a mechanism whereby the implementation may submit device readings at any time without blocking. This may be done in a manner appropriate to the implementation language, eg the Go SDK provides a channel on which readings may be pushed, the C SDK provides a function which submits readings to a workqueue.
"},{"location":"design/ucr/","title":"Use Case Records Folder","text":"This folder contains the EdgeX Foundry use case records (UCRs).
"},{"location":"design/ucr/#naming-and-formatting","title":"Naming and Formatting","text":"UCR documents should include the title in their file name as Use-Case-Title.md
. E
.
"},{"location":"design/ucr/#table-of-contents","title":"Table of Contents","text":"A README with a table of contents for current documents is located here. Document authors are asked to keep the TOC updated with each new document entry.
Legacy requirements have their own Table of Contents and are located here.
"},{"location":"design/ucr/Bring-Your-Own-Vault/","title":"Bring Your Own Vault (BYOV) Use Case Requirements","text":""},{"location":"design/ucr/Bring-Your-Own-Vault/#submitters","title":"Submitters","text":"Any segments using EdgeX in secure mode (using Vault to secure EdgeX secrets) and wanting to incorporate their pre-existing or non-EdgeX Vault store.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#motivation","title":"Motivation","text":"Hashicorp Vault is a secure store to manage and protect sensitive (secret) data. Open-source Vault is used in EdgeX to secure any EdgeX micro service secrets (API keys, passwords, database credentials, service credentials, tokens, certificates etc.). The Vault secret store serves as the central repository to keep these secrets in an EdgeX deployment.
Vault provides a unified interface to any secret, while providing tight access control and multiple authentication mechanisms (token, LDAP, etc.). Additionally, Vault supports pluggable \"secrets engines\". EdgeX uses three secrets engines today: key-value secrets engine, Consul secrets engine, and identity secrets engine. EdgeX uses the Consul secrets engine to allow Vault to issue Consul access tokens to EdgeX microservices. See EdgeX Secret Store for more details.
Today, when the secret store is in place and used as the EdgeX secret store, EdgeX requires adopters to use a new instance of Vault provided by the deployment options offered by the EdgeX community (i.e. Docker Compose files, Kubernetes examples, Snaps, etc.). In other words, EdgeX must totally own the Vault install.
In some edge environments where EdgeX may run, Vault is already in place and could be shared by EdgeX. Additionally, adopters may find several applications running at the edge and want these applications to share a single instance of Vault. However, having an existing or new instance of Vault that EdgeX uses but does not instantiate and run (a concept the community has called \u201cbringing your own Vault\u201d) is not straightforward.
If an adopter wishes to use an instance of Vault that they stand up or pre-exists in their environment, the EdgeX project does not provide any guidance or recipe for how to do this. While technically possible, it would require a lot of work on the part of the adopter. See the original issue driving this requirement for a potential list of changes that would be required. In short, this is some tedious work and work that is not documented well (or in some cases at all). It would require an adopter to study the secretstore-setup code and rework or replace the secretstore-setup service with new code to use the existing Vault instance.
Therefore, the motivation for this EdgeX change is to make it easier to allow adopters to \u201cbring their own Vault\u201d instance and have EdgeX use that instance without any changes to the overall function of the EdgeX platform.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#target-users","title":"Target Users","text":"Any adopter that runs EdgeX in secure mode and with a pre-existing Vault or intention to share a Vault instance among edge applications.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#description","title":"Description","text":"Adopters running EdgeX in an environment that has (or will have) an existing Vault instance not setup by EdgeX:
There are no existing solutions for BYOV.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#requirements","title":"Requirements","text":"The basic requirements are straightforward:
Currently the configuration for all the EdgeX services have many common settings. Most of these common settings have the same value for every service deployed in a single EdgeX based solution and possible across identical deployments of the same solution. The motivation for the UCR is to limit this redundancy by having common settings in one location which are then used across all EdgeX services.
"},{"location":"design/ucr/Common%20Configuration/#description","title":"Description","text":"See Common Configuration for complete list of common configuration sections. As stated above most of the values for these common settings are the same across all the EdgeX Services. Below are a couple examples.
Example - Common configuration - Service & Registry
[Service]\nHealthCheckInterval = \"10s\"\nHost = \"localhost\" <overriden in compose file for service specific>\nPort = <Service Specific>\nServerBindAddr = \"\" # Leave blank so default to Host value unless different value is needed.\nStartupMsg = <Service Specific>\nMaxResultCount = 1024\nMaxRequestSize = 0 # Not curently used. Defines the maximum size of http request body in bytes\nRequestTimeout = \"5s\"\n[Service.CORSConfiguration]\nEnableCORS = false\nCORSAllowCredentials = false\nCORSAllowedOrigin = \"https://localhost\"\nCORSAllowedMethods = \"GET, POST, PUT, PATCH, DELETE\"\nCORSAllowedHeaders = \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\"\nCORSExposeHeaders = \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\"\nCORSMaxAge = 3600\n
...
[Registry] Host = \"localhost\" Port = 8500 Type = \"consul\" ```
In the above example only the Port and StartupMsg settings have unique values for each EdgeX Service.
In the Levski release the additional common security metrics require all services must have the Writable.Telemetry and MessageQueue and sections.
Example - Common configuration - Writable.Telemetry and MessageQueue
...\n[Writable.Telemetry]\nInterval = \"30s\"\nPublishTopicPrefix = \"edgex/telemetry\" # /<service-name>/<metric-name> will be added to this Publish Topic prefix\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\n# Service Specifc Metrics\n<Service Specific metric name> = false\n...\n# Common Security Service Metrics\nSecuritySecretsRequested = false\nSecuritySecretsStored = false\nSecurityConsulTokensRequested = false\nSecurityConsulTokenDuration = false\n[Writable.Telemetry.Tags] # Contains the service level tags to be attached to all the service's metrics\n# Gateway=\"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only chnage existing value, not added new ones.\n...\n[MessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\" <override in compose file same for every service>\nPort = 6379\nType = \"redis\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\nPublishTopicPrefix = <Service Specific>\nSubscribeEnabled = <Service Specific>\nSubscribeTopic = <Service Specific>\n[MessageQueue.Topics]\n<service specific name> = <Service specific value>\n...\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\nClientId = <Service Specific>\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\"\n# Additional Default NATS Specific options that need to be here to enable evnironment variable overrides of them\nFormat = \"nats\"\nRetryOnFailedConnect = \"true\"\nQueueGroup = \"\"\nDurable = \"\"\nAutoProvision = \"true\"\nDeliver = \"new\"\nDefaultPubRetryAttempts = \"2\"\nSubject = \"edgex/#\" # Required for NATS Jetstram only for stream autoprovsioning\n
In the above example only the PublishTopicPrefix, SubscribeTopic, SubscribeEnabled, MessageQueue.Topics and ClientId settings have unique values to that of the default EdgeX deployment values.
Note
In Levski release App Services don't have the MessageQueue section and Core Command's is MessageQueue.Internal. These inconstancies will be rectified in EdgeX 3.0 so all EdgeX services have the same MessageQueue section specified in the same manner. Also in EdgeX 3.0, the PublishTopicPrefix and SubscribeTopic settings will be replaced by entries in MessageQueue.Topics.
There are other similar common sections not shown above. As can be seen from the two examples above there is much duplication of configuration settings across all the EdgeX services. This gives rise to the need to have all these common duplicate configuration settings in a single global source.
In addition to the above common settings, Application services and Device services have their own common configuration settings that may have the same values across deployed application or devices services. For Application services these are the Trigger, Writable.Telemetry.Metrics and Clients.core-metadata configuration sections. For Device services these are the Device, Clients and Writable.Telemetry.Metrics configuration sections.
"},{"location":"design/ucr/Common%20Configuration/#existing-solutions","title":"Existing solutions","text":"There are no existing solutions for global configuration that would apply to EdgeX since the current configuration implementation is specific to EdgeX. See 0005-Service-Self-Config for more details on current configuration design.
"},{"location":"design/ucr/Common%20Configuration/#requirements","title":"Requirements","text":""},{"location":"design/ucr/Common%20Configuration/#general","title":"General","text":"Services shall be able reference a common configuration in a manner that is flexible for use with and without the Configuration Provider
Services must be able to override any of the common configuration settings with private service specific configuration values
Example Core Data specific Writable.Telemetry and Service configuration settings in private configuration file
[Writable.Telemetry]\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\nEventsPersisted = false\nReadingsPersisted = false\n
...\n\n [Service]\n Port = 59880\n StartupMsg = \"This is the Core Data Microservice\"\n
Application services shall be able to load separate common configuration specific to Application services
Device services shall be able to load separate common configuration specific to Device services
Service shall have a common way to specify the common configurations to load.
Secret Store configuration shall no longer be part of the each services' standard configuration as it is needed prior to connecting to the Configuration Provider.
Jim White (IOTech Systems)
"},{"location":"design/ucr/Core-Data-Retention/#status","title":"Status","text":"Approved By TSC vote on 1/31/23
Per Architect's meeting of 2/1/23, it was decided that this requirement does not require and ADR (it is not architecturally significant and can be accomplished in core data revisions). Also note that the existing core data clean up in the scheduler service will remain and that it is up to the user to configure this such that it does not conflict with the core data clean up schedule (see other related issues below).
"},{"location":"design/ucr/Core-Data-Retention/#change-log","title":"Change Log","text":"Formerly referred to as Core Data Cache
"},{"location":"design/ucr/Core-Data-Retention/#market-segments","title":"Market Segments","text":"Any/All
"},{"location":"design/ucr/Core-Data-Retention/#motivation","title":"Motivation","text":"Reduction in the amount of data that is persisted at the edge. Reduction in the amount of data sent to the north. Reduction in the amount of data sent to edge analytics (rules engines, etc.).
"},{"location":"design/ucr/Core-Data-Retention/#target-users","title":"Target Users","text":"In cases where there is a need to store data at the edge and that data is subsequently sent to the \u201cnorth\u201d (cloud or enterprise systems, rules engines, AI/ML, etc.), there may be a need to keep (persist) only the latest readings. \u201cLatest\u201d should be configurable and defined by the user \u2013 allowing for a cap on the number of readings for a particular device resource. Queries of core data should also allow for requesting the \u201clatest\u201d N readings as well.
For example, as a temperature sensor may report the current temperature (the device resource) very frequently (say once every 5 seconds), that data may only be sent to other services or systems every minute. The user may wish to have only the last two readings persisted and subsequently sent north during the minute interval (batch and send). Thus, a retention cap is placed on core data for a certain number of readings.
"},{"location":"design/ucr/Core-Data-Retention/#existing-solutions","title":"Existing solutions","text":"Today, core data will persist all data sent to it. The scheduler can be used to \u201cclean\u201d older data (data collected with a timestamp exceeding a specific timeframe). However, there is no way to retain only X number or latest readings. Query methods do not, by default, provide a simple way to query for \u201clatest\u201d readings. On most core data query methods, one could set the limit parameter = 1 (or some other number) and thereby return the latest event or reading since the results are sorted based on origin.
"},{"location":"design/ucr/Core-Data-Retention/#requirements","title":"Requirements","text":"Per the Architect's meeting of 2/1/23, it was determined that this can be implemented in Core Data without the need for additional ADR write up. This feature shall be implemented such that it is off by default (meaning that core data retention will be as is without any cap as specified in the requirements above). The existing scheduler ability to clean older data in core data shall remain in place (with current defaults). It will be up to the user to turn this data retention feature on (setting the hard cap and purging interval) and it will also be up to the user to ensure the standard scheduled data clean up does not conflict with this new data retention feature.
"},{"location":"design/ucr/Core-Data-Retention/#references","title":"References","text":"This UCR describes Use Cases for new Device metadata for Parent to Child Relationships for a given Device.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#submitters","title":"Submitters","text":"Any that deploy EdgeX systems to manage multiple devices. In particular, Industrial Gateway systems that connect to multiple south-bound devices and provide their data to north-bound services.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#motivation","title":"Motivation","text":"It is frequently important to north-bound services to know the parent-child relationships of the devices found in an EdgeX system. This information is generally used for either protocol data constructs or for display purposes.
If not know or provided by the south-bound Device Service, this information might be added to the Device instance's metadata by the north-bound or analytics services, or by the user.
It is desirable that the means of conveying this information become standardized for those systems which provide and use it, so that application services can rely on it, hence proposing here that there be a common definition and usage of this metadata.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#target-users","title":"Target Users","text":"Some north-bound protocols and some UI designs present the system devices in a hierarchial manner, where it is necessary to know which devices are parents and which are their children.
These considerations are most important for gateways that are implemented with the EdgeX framework, since there are potentially many south-bound devices connected to a system.
Examples are * North-bound BACnet Service - where only one \"main\" device is present at the point of external connection (eg, UDP port 0xBAC0) and all other devices must be presented as \"virtually routed devices\" connected to that main \"virtual router\" device. * Azure IoT Hub - where the normal connection for IoT Plug and Play / Digital Twin is for a single device, and any other devices need to somehow fall under that device (eg, with Device Twin \"Modules\") * UI device presentation - where child devices can be shown grouped under their parent, often rolled up until they are expanded to show their data * Multi-tenant deployments of multi-point energy meters - where a main meter has up to 80 Branch Circuit Monitoring (BCM) points connected to it, each BCM modeled as a Device consisting of the same 6 or so energy channels (Device Resources), and each BCM is assigned to a particular tenant. Tenants will be given access to the data from their BCM point(s) but not those of other tenants. A gateway may connect more than one of these multi-point energy meters.
Since there are multiple similar uses for this relationship information on the north side, it is proposed to locate this relationship metadata in the Device object as accessed from core-metadata by all services, rather than to locate it in each north-bound service (which would be particularly problematic for the UI, which gets its data through REST APIs).
The south-bound Device Service that creates a Device is ideally the service which establishes this relationship data, though it is possible that it is unaware of the parent-child relationship. It should be permitted, therefore, for this relationship information to also be set by north-bound services (most likely the UI) and simply ignored by the south-bound Device Service.
It is also necessary to indicate which device is the \"main\" or \"publisher\" device (ie, the gateway device), as any devices without a configured relationship can be inferred to be children of that device.
It is frequently a pattern in data servers to \"walk the device tree\", starting with the main device, then recursively processing its direct child devices, and then the child devices (if any) of those devices, until all devices have been processed. This is normally part of the initialization of device data for a server, since the parent must be processed and initialized before its child devices. Consequently, there is a need for a means to answer the question \"What are the child devices (if any) of device x.y.z?\"; this is commonly done either with the device structure listing its children, or by providing a query that can answer this question.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#extensions-to-the-main-use-case","title":"Extensions to the main Use Case","text":"The Device structure in Eaton's legacy products indicated this parent-child relationship bidirectionally: each device indicated its parent device (if any) with one field, and its child devices (if any) with a list of IDs.
The Device structure in Eaton's cloud solution is a \"DeviceTree\", which is a recursive, hierarchial structure of the connected devices, starting with the \"publisher\" device and its first-level child devices.
There is the BACnet \"virtual routed devices\" model, but I would not recommend it, as it is too convoluted for this simple relationship.
The existing EdgeX UIs group devices by their Device Service, which is a good approach for simple devices without children of their own, but fails if those devices have child devices too.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#requirements","title":"Requirements","text":"Not a requirement: inheritance of device status via the parent-child relationship. Apparently this was a point over which past consideration of parent-child relationships in EdgeX foundered, but it seems complicated for independent services, and can generally be inferred by other services anyway.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#other-related-issues","title":"Other Related Issues","text":"Use Case for Application Services Extending Device Data Extending Device Data later (./Extending-Device-Data.md) may be related, as, depending on its solution, it may have to indicate a different Device Relationship (\"Extends\").
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#references","title":"References","text":"Azure IoT Edge Gateways and Child Devices
BACnet Virtual Devices: The full BACnet spec is paywalled by ASHRAE. But the relevant snippet is from Annex H, section H.1.1.2 Multiple \"Virtual\" BACnet Devices in a Single Physical Device:
A BACnet device is one that possesses a Device object and communicates using the procedures specified in this standard. In some instances, however, it may be desirable to model the activities of a physical building automation and control device through the use of more than one BACnet device. Each such device will be referred to as a virtual BACnet device. This can be accomplished by configuring the physical device to act as a router to one or more virtual BACnet networks. The idea is that each virtual BACnet device is associated with a unique DNET and DADR pair, i.e. a unique BACnet address. The physical device performs exactly as if it were a router between physical BACnet networks.
"},{"location":"design/ucr/Extending-Device-Data/","title":"Extending Device Data","text":""},{"location":"design/ucr/Extending-Device-Data/#extending-device-data","title":"Extending Device Data","text":"This UCR describes the Use Case for Extending of Device Data by Application Services for a given south-bound Device.
"},{"location":"design/ucr/Extending-Device-Data/#submitters","title":"Submitters","text":"Any that deploy EdgeX systems with analytics, utility, or north-bound microservices that add new Device Resources that are extensions of the original south-bound or service-based Device data.
"},{"location":"design/ucr/Extending-Device-Data/#motivation","title":"Motivation","text":"We find a consistent need as we design microservices for our industrial products: The new analytics, utility, and north-bound microservices almost always need to add Device Resources to manage their configuration, transforms, and status reporting. These Resources are usually needed on a per-Device basis (rather than just overall service configuration or status), which can be seen as extending (adding to) the data of the original south-bound devices.
Adding configuration and status via Resources that extend the original south-bound Device make this configuration and status data easily accessible and translatable to other Application Services and to the UI via REST; we think that this general solution is better than disparate solutions which add custom APIs in each Application Service to Get and Set this data.
What is needed is a common means of showing the relationship between these added Resources, their owning service, and the original south-bound Device Resources; that is, to indicate that these Resources \"extend\" the original Device data.
It is desirable that the means of conveying this information become standardized for those EdgeX microservices which provide and use it, hence proposing here that there be a common EdgeX way defined to do this.
"},{"location":"design/ucr/Extending-Device-Data/#target-users","title":"Target Users","text":"Picture the extremely simple case of a south-bound sensor device that just measures Temperature and Humidity and provides these as Device Resources. If we then add analytics and north-bound microservices: - A Trending service that needs Device Resources to indicate that Temperature and Humidity are trended for, eg, Minimum, Average, and Maximum over a 1 hour trend interval. - An Alarming service that needs Device Resources to describe the Alarm Rules used to monitor Temperature and Humidity, plus a device-level InAlarm status. - A Cloud service that reports not just the Temperature and Humidity but also their Trend configuration and Alarm Rule Resources. In addition, the Cloud service adds its own Resources to direct the Cadence with which this Device's data is reported.
Now scale this up to 100 such Temperature/Humidity sensors, and, if not using extended devices as described here, it would grow difficult to match all of the free-standing (unassociated) Resources to their original sensor data. And add the requirement that all these resources must be able to be seen and managed locally via REST or Message Bus, and potentially from north-bound services like Modbus/TCP, and from the Cloud (because everybody wants to control everything from the Cloud).
Furthermore, from the end user's point of view, the Trend configuration, Alarm Rules, and Cloud Cadence that are added for a given Device are all seen as aspects of the Temperature/Humidity Device, as is common for Digital Twin representations, and not as separated, free-standing entities. So there must be some means to relate the extended Device Resources to the original south-bound Device and its Device Resources.
"},{"location":"design/ucr/Extending-Device-Data/#existing-solutions","title":"Existing solutions","text":"In EdgeX today, Devices and their Resources such as those described in the last section can be added, but they are not seen as related to the south-bound Device or to each other, except perhaps by well-chosen Labels or Tags.
The existing south-bound Device Profiles could be extended to simply add the new Resources, but nothing connects these Resources to their owning Service (ie, so core-command could be used to manage them).
"},{"location":"design/ucr/Extending-Device-Data/#requirements","title":"Requirements","text":"Not a requirement: means of using or combining Resources from multiple south-bound Devices into one Extended Resource.
"},{"location":"design/ucr/Extending-Device-Data/#other-related-issues","title":"Other Related Issues","text":""},{"location":"design/ucr/Extending-Device-Data/#references","title":"References","text":""},{"location":"design/ucr/Microservice-Authentication/","title":"Microservice Authentication","text":""},{"location":"design/ucr/Microservice-Authentication/#microservice-authentication","title":"Microservice Authentication","text":""},{"location":"design/ucr/Microservice-Authentication/#submitters","title":"Submitters","text":"Modern cybersecurity standards for IoT require peer-to-peer authentication of software components. Representative IoT security standards make explicit reference to authentication of both human and non-human interactions between components:
CR 1.2 (Requirement): Components shall provide the capability to identify itself and authenticate with any other component (software application, embedded device, host device and network devices), according to ISA-62443-3-3 SR 1.2.
SR 1.2 (Requirement): The control system shall provide the capability to identify and authenticate all software processes and devices. This capability shall enforce such identification and authentication on all interfaces which provide access to the control system to support least privilege in accordance with applicable security policies and procedures.
PR.AC-1: Identities and credentials are issued, managed, verified, revoked, and audited for authorized devices, users, and processes.
"},{"location":"design/ucr/Microservice-Authentication/#target-users","title":"Target Users","text":"Microservice authentication provides the following benefits, which are potentially valuable to all of the listed target users:
Provides a defense against malware running on the device, as currently there is no mechanism to ensure that only authorized users or processes are allowed to invoke EdgeX services.
Provides greater auditability as to who initiated a particular action on the device.
Depending on implementation, may provide a way to revoke access that was previously granted, or allow customers to tie in to enterprise identity management systems.
For purposes of this UCR, microservice authentication implies that the receiving microservice has access to the identity of the caller and can write program logic based on that identity.
"},{"location":"design/ucr/Microservice-Authentication/#existing-solutions","title":"Existing solutions","text":"Microservice authentication is currently implemented around two primary vectors:
Initiator sends an identifier along with a request to the receiver. The identifier is cryptographically validated using a key trusted by the receiver, or the receiver asks a trusted third party to verify the identifier.
A benefit of token-based authentication schemes is identity delegation, whereby the identifier can be passed through a chain of calls to preserve the identity of the original initiator. The identifier can often be tunneled through other protocols. Another benefit of token-based authentication is that it flows easily through a web application firewall.
A drawback of token-based authentication is that due to MITM threats, token-based authentication over an unencrypted network is insecure. Another drawback of token-based authentication is that it is unidirectional: the receiver can authenticate the initiator, but not vice-versa.
End-to-end encryption implies that only the original sender and the final intended receiver ever see the unencrypted message contents. If a message is simply encrypted from process-to-process or machine-to-machine, where an intermediary can decrypt the message, even if the entire flow encrypted point-to-point, then the message is simply said to be \"encrypted in-transit.\" If the architecture of the system requires a server-based intermediary between two clients, then in a E2EE system, only the two communicating clients have access the unencrypted data.
"},{"location":"design/ucr/Microservice-Authentication/#requirements","title":"Requirements","text":"When an EdgeX service is running in secure mode, unauthenticated inbound requests shall be rejected.
When an EdgeX service is running in secure mode and initiating an outbound request to a peer EdgeX service, the outbound request shall be authenticated.
Authentication shall work in the context of bare-metal deployments, snap-based deployments, docker-based deployments, and Kubernetes-based deployments.
This UCR does not prescribe what layer in the software stack performs authentication.
"},{"location":"design/ucr/Microservice-Authentication/#other-related-issues","title":"Other Related Issues","text":"Including identity and access management in EdgeX system (edgex-go#3845): Expresses the desire to integrate human identity into the EdgeX system. The BSI presentation to EdgeX TSC also explicitly mentions Auth0 integration.
Investigate alternatives to Kong that have better platform support and use less memory (edgex-go#3747): Expresses the concern over the size of the Kong+Postgres implementation, and a desire to find something more efficient.
None
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/","title":"Provision Watch via Device Metadata","text":""},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#provision-watch-via-device-metadata","title":"Provision Watch via Device Metadata","text":"This UCR describes the Use Case for Provision Watching via Additional Device Metadata, beyond the protocol properties currently used exclusively for matching in Provision Watchers.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#submitters","title":"Submitters","text":"Any that deploy EdgeX systems with south-bound Device Services where Provisioning is dependent on device data discovered in devices, not just their protocol properties. Any that deploy EdgeX systems with analytics, utility, or north-bound microservices that must \"discover\" Devices added to the EdgeX core-metadata by south-bound Device Services.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#motivation","title":"Motivation","text":"The autodiscovery of Devices using Provision Watchers is a useful feature of Device Services; currently, the Provision Watcher implementation in the two Device SDKs uses only the protocol properties of a discovered Device to match against the \"identifiers\" specified in the Provision Watcher metadata. The implementations use regular expression matching against the \"identifiers\", and also filter out any Devices whose protocol properties match the \"blockingIdentifiers\" of the Provision Watcher metadata.
Provisioning for south-bound services today must have a strict knowledge of the devices that will be discovered, but some protocols (eg, BACnet) have discoverable device properties which can provide a further discrimination, for example, to use the device's modelName to determine which Device Profile should be applied to it. We would like that the metadata from the Device (not necessarily from core-metadata, but properties of the Device) can be selected to match for provisioning, and not limit the property names to a fixed set of properties.
We are finding that Hybrid App-Device Services later (./Hybrid-App-Device-Services.md) also want to use Provision Watchers, so that they can be configured at run-time to work with new Devices, but these do not need or want to match the protocol properties of a Device; instead, they want to match or exclude based on Device instance metadata properties such as the \"modelName\", \"profileName\", \"name\", and \"labels\".
This UCR describes the Use Case for using these additional properties for Provision Watching.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#target-users","title":"Target Users","text":"Application Services using the Device SDKs (ie, Hybrid App-Device Services) can take advantage of the Provision Watching feature and APIs to \"discover\" new EdgeX devices from the south-bound Device Services, match them to app-specific Device Profiles, and handle their data with analysis or transforms.
A south-bound Device Service may discover devices across a range of protocol properties, and those devices may need different Device Profiles depending upon metadata properties of the discovered devices, for example, the ModelName field of BACnet data. While the \"modelName\" is an obvious target, the Device Service may want to use other device metadata for Provisioning as well for inclusion or exclusion.
For another example, consider the case where each of three Hybrid App-Device Services (a Trending Service, an Alarm Monitoring Service, and a Cloud Service) want to handle the data originating in a south-bound Modbus service for any \"Watt-o-Meter\" (Model Name) Device. So each service is configured with a Provision Watcher that will try to match that \"modelName\", or else \"profileName\" of \"Watt-o-Meter-Modbus-Profile-01\", of devices discovered in core-metadata or shown as added via the control plane events and, if a match is found, add a new \"extended\" Device to each service using the appropriate Device Profile (eg, \"Watt-o-Meter-Trends-Profile-01\" for the Trending Service), and giving the new extended Device a name, for example based on the original and the service (eg, \"Meter-333-Trending\").
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#extensions","title":"Extensions","text":"Other Device metadata properties appear to be good candidates for a user to choose from: - name: The Device name may be good for regular expression matching, eg \"name\":\"Meter-*\" - labels: Since this one is free-form and open to the owner to add labels of their choosing, this one should be good for both matching and the exclusion list. Eg, if a Device had \"labels\": \"meter, basement, energy\", then it could be matched or excluded for \"labels\":\"basement\". - serial number: with regular expressions, this can be a powerful matching choice. - MAC address: similar to serial number for a specific range of vendor devices.
The Device Service which discovers the Device will probably want to permit specific metadata properties to be used.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#existing-solutions","title":"Existing solutions","text":"In EdgeX today, as noted, Provision Watchers match only the protocol properties, using regular expression matching and excluding. The example given for the REST API is a good one:
\"identifiers\": {\n\"address\": \"localhost\",\n\"port\": \"3[0-9]{2}\"\n},\n
Note its use of regular expression matching for the port number. "},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#requirements","title":"Requirements","text":"Currently one must have physical devices and appropriate environment to produce real device data (Event/Readings) into an EdgeX solution for other EdgeX services (Core Data, App Services, eKuiper Rules Engine) to consume. This is often not the case when someone is developing/testing one of these consuming EdgeX services. A good example of this is the RFID LLRP Inventory App Service . Testing this service is dependent on the RFID LLRP Device Service, physical LLRP RFID readers, RFID Tags and environment where these are deployed. Having a way to record EdgeX the Event/Readings from an actual deployment that then could be replayed in development environment for testing would be very valuable.
Other potential uses are:
Target users have the need to be able to replay recorded EdgeX Event/Readings for functional, performance or reproducible testing. This UCR describes a new capability that allows user to first record Event/Readings from real devices in real-time and be able to replay the Event/Readings as if it was in real-time. The static device profile and device definition files at time of capture will need to be available and loaded at the time the captured data is replayed.
"},{"location":"design/ucr/Record-and-Replay/#existing-solutions","title":"Existing solutions","text":"There are simulators for some devices (i.e. Modbus), but there isn't an general solution to reproduce real device data into EdgeX without the physical devices being present. These simulators also don't have a way to produce a specific set of results in a timeline as do physical devices.
"},{"location":"design/ucr/Record-and-Replay/#requirements","title":"Requirements","text":""},{"location":"design/ucr/Record-and-Replay/#record","title":"Record","text":"Shall have capability to record device Event/Reading data as received from the Device Service(s).
Recorded Event/Reading timestamps shall be sufficient to determine the captured rate.
Recorded Events shall be sufficient to determine the major version of EdgeX used, i.e. ApiVersion
Record must allow the duration of capture and/or the max number of Readings to capture to be specified
Record must capture all Device definitions for devices referenced in the captured Event/Readings
Record must capture all Device Profiles for devices referenced in the captured Event/Readings
Record must have the ability to record data only from specified target devices or device profiles.
Shall have capability to export captured data for use at a later time or to send to other users
Exported data shall be in JSON format with option for it to be compressed (ZIP or GZIP).
Exported binary Event/Readings shall not be re-encoded in CBOR.
Shall have capability to import data that was previously recorded and exported
Import must validate that the current EdgeX version is compatible with the APIVersion captured in the recorded Events.
Import must add any captured Device definitions and Device Profiles to the system prior to play back.
Import shall have option to overwrite or use existing Device definitions and Device Profiles
Shall have capability to replay previously captured data
Replay must allow captured data to be replayed at captured, slower or faster speeds
Replay must adjust Event/Reading timestamps to be current time when published.
Replay must allow for the repeat replay option with number of times to repeat the captured data.
Replay must allow recorded data from multiple sources at the same time (this mimics more device services feeding EdgeX )
When a new camera device is added to the system only Core Metadata and the Device Service managing the new camera device are aware. There are use cases when other parts of the system need to know when a new device has been added to the system. This UCR will focus on the camera management use case which illustrates the need for these new System Events.
"},{"location":"design/ucr/System-Events-for-Devices/#target-users","title":"Target Users","text":"System Events (aka Control Plane Events - CPE) are events generated by the system when there are changes in part of the system that are important for other parts of the system to know about. This UCR will focus on the Device System Events use case as related to camera management. These Device System Events could be utilized by many other use cases in similar manner.
The new EdgeX USB and ONVIF camera Device Services (not yet released) implement auto provisioning which detects when a new camera device has been connected to USB or added to the network. New Device objects are created in Core Metadata for the new camera devices that have been auto provisioned. The auto provisioning also detects existing known camera devices, determines if there have been changes to the device details, such as IP address, and updates the Device object in Core Metadata with any changes. Device objects can also be manually deleted from Core Metadata once camera devices have been permanently disconnected.
A camera management application service needs to know when a new camera has been added so that it can initiate AI/ML processing on the stream from the new camera. The service also needs to know when an existing camera device has been updated so that it can make any needed adjustments such as restarting the AI/ML processing using the new IP address of the camera. Finally the service needs to know when an existing camera device has been removed so that it can stop the AI/ML processing for the removed camera.
"},{"location":"design/ucr/System-Events-for-Devices/#existing-solutions","title":"Existing solutions","text":"Parts of the system (i.e. application service) must poll Core Metadata for list of devices to determine if a device has been added, update or deleted. To do this it must keep its own list of Device objects to make these determinations.
As a temporary stop gap for the initial upcoming release of the new ONVIF camera device service, an enhancement was added which publishes an EdgeX Event/Reading when a new camera device has been added, updated or modified. The Reading contains the information about the event type and the device name. This is improper use of the EdgeX Event/Reading which is intended for readings from devices, not System Events. This feature in the ONVIF Camera Device Service will be removed once System Events for Devices are in place.
"},{"location":"design/ucr/System-Events-for-Devices/#requirements","title":"Requirements","text":"Subscription shall allow filtering for Device System Events for the following:
Device Service (i.e. only want Events for which the device is owned by device-onvif-camera)
Device Profile (i.e. only want Events for which the device is for a specific device profile)
Event Type, (i.e. only want Add events)
Each Device System Event must contain at a minimum the following, which is all that is needed to send a command to the device to get the stream URL or stop the AI/ML processing:
Event Type: Added, Updated or Deleted
Note
Other details about the device, if not present in the System Event, can be queried from Core Metadata using the Device Name
"},{"location":"design/ucr/System-Events-for-Devices/#other-related-issues","title":"Other Related Issues","text":"Deployment at scale, i.e. identical or almost identical deployments across many locations, would benefit from the ability to load service files from a central location. This would allow the maintainer to make changes once to a shared file and have them apply to all or a subset of deployments. The following are some EdgeX service files that would benefit for this capability:
Unit of Measure file used by Core Metadata
Service Configuration files
./res/configuration.toml
, but can be overridden via -cf/--configFile command line flag.Token Configuration file for Security File Token Provider
Device Profiles, Device Definition and Provision Watchers
These files can reside in a device services local file system and are pushed to Core Metadata the first time the service starts. Example here
These files are found by scanning the folders specified in configuration here
Note
These files are only pushed to Core Metadata the first time the device service is loaded. They are not currently re-pushed once they exist in Core Metadata even when the files have changed locally. Thus updating the files locally or in a shared location will not result in changing the contents of these files in Core Metadata. They still benefit from this capability during initial deployment and when new files are added.
Currently all files loaded by services are expected to be on the local file system, thus are duplicated many times when deploying at scale.
"},{"location":"design/ucr/URIs-for-Files/#target-users","title":"Target Users","text":"This UCR proposes to enhance loading of files in EdgeX by allowing the location of the file to be optionally specified as an URI.
"},{"location":"design/ucr/URIs-for-Files/#existing-solutions","title":"Existing solutions","text":"Loading shared files via a URI is not new in the software industry. Here is the Wiki page for Uniform Resource Identifier
"},{"location":"design/ucr/URIs-for-Files/#requirements","title":"Requirements","text":"username:password@
http
and https
schemes from the above spec shall be supported as well as plain paths
as is todayThe file
scheme shall not be supported as it doesn't allow for relative paths
The URI spec shall be extended to allow the specifying of EdgeX service secrets from the service's Secret Store in order to avoid credentials in plain text. Details on how are left to the ADR.
-cc/--commonConfig
flag can be a URI to a remote files. The implementation of this portion of the ADR is dependent on the UCR and following ADR.In addition to the examples listed in this section of the documentation, you will find other examples in the EdgeX Examples Repository.
The tabs below provide a listing (may be partial based on latest updates) for reference.
Application ServicesDeploymentDevice ServicesSecuritySee App Service Examples for a listing of custom and configurable application service examples.
Example Location Helm (Kubernetes) Github - examples, deployment Raspberry Pi 4 Github - examples, raspberry-pi-4 Example Location TBD Example Location security-enabled EdgeX Remote Device Service Github - examples, securityWarning
Not all the examples in the EdgeX Examples repository are available for all EdgeX releases. Check the documentation for details.
"},{"location":"examples/AppServiceExamples/","title":"App Service Examples","text":"The following is a list of examples we currently have available that demonstrate various ways that the Application Functions SDK or App Service Configurable can be used. All of the examples can be found here in the edgex-examples repo. They focus on how to leverage various built in provided functions as mentioned above as well as how to write your own in the case that the SDK does not provide what is needed.
Example Name Description Camera Management Utilizes the ONVIF and USB device services and demonstrates the management of these cameras and their integration with video inferencing Simple Filter XML Demonstrates Filtering of Events by Device names and transforming data to XML Simple Filter XML HTTP Same example as #1, but result published to HTTP Endpoint Simple Filter XML MQTT Same example as #1, but result published to MQTT Broker Simple CBOR Filter Demonstrates Filtering of Events by Resource names for Event that is CBOR encoded containing a binary reading Advanced Filter Convert Publish Demonstrates Filtering of Events by Resource names, custom function to convert the reading and them publish the modified Event back to the MessageBus under a different topic. Advanced Target Type Demonstrates use of custom Target Type and use of HTTP Trigger Cloud Export MQTT Demonstrates simple custom Cloud transform and exporting to Cloud MQTT Broker. Cloud Event Transform Demonstrates custom transforms that convert Event/Readings to and from Cloud Events Send Command Demonstrates sending commands to a Device via the Command Client. Secrets Demonstrates how to retrieve secrets from the service SecretStore Custom Trigger Demonstrates how to create and use a custom trigger NATS RPC Demonstrates how to create a synchronous request/reply trigger using NATS messaging Fledge Export Demonstrates custom conversion of Event/Reading to Fledge format and then exporting to Fledge service REST endpoint Influxdb Export Demonstrates custom conversion of Event/Reading to InfluxDB timeseries format and then exporting to InFluxDB via MQTT Json Logic Demonstrates using the built in JSONLogic Evaluate pipeline function IBM Export Profile Demonstrates a custom App Service Configurable profile for exporting to IBM Cloud"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/","title":"Command Devices with eKuiper Rules Engine","text":""},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#overview","title":"Overview","text":"This document describes how to actuate a device with rules trigger by the eKuiper rules engine. To make the example simple, the virtual device device-virtual is used as the actuated device. The eKuiper rules engine analyzes the data sent from device-virtual services, and then sends a command to virtual device based a rule firing in eKuiper based on that analysis. It should be noted that an application service is used to route core data through the rules engine.
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#use-case-scenarios","title":"Use Case Scenarios","text":"Rules will be created in eKuiper to watch for two circumstances:
Random-UnsignedInteger-Device
device (one of the default virtual device managed devices), and if a uint8
reading value is found larger than 20
in the event, then send a command to Random-Boolean-Device
device to start generating random numbers (specifically - set random generation bool to true).Random-Integer-Device
device (another of the default virtual device managed devices), and if the average for int8
reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device
device to stop generating random numbers (specifically - set random generation bool to false).These use case scenarios do not have any real business meaning, but easily demonstrate the features of EdgeX automatic actuation accomplished via the eKuiper rule engine.
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#prerequisite-knowledge","title":"Prerequisite Knowledge","text":"This document will not cover basic operations of EdgeX or LF Edge eKuiper. Readers should have basic knowledge of:
Make sure you read the EdgeX eKuiper Rule Engine Tutorial and successfully run eKuiper with EdgeX.
First create a stream that can consume streaming data from the EdgeX application service (rules engine profile). This step is not required if you already finished the EdgeX eKuiper Rule Engine Tutorial.
curl -X POST \\\nhttp://$ekuiper_docker:59720/streams \\\n-H 'Content-Type: application/json' \\\n-d '{\"sql\": \"create stream demo() WITH (FORMAT=\\\"JSON\\\", TYPE=\\\"edgex\\\")\"}'\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#get-and-test-the-command-url","title":"Get and Test the Command URL","text":"Since both use case scenario rules will send commands to the Random-Boolean-Device
virtual device, use the curl request below to get a list of available commands for this device.
curl http://127.0.0.1:59882/api/v3/device/name/Random-Boolean-Device | jq\n
It should print results like those below.
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"deviceCoreCommand\": {\n\"deviceName\": \"Random-Boolean-Device\",\n\"profileName\": \"Random-Boolean-Device\",\n\"coreCommands\": [\n{\n\"name\": \"WriteBoolValue\",\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Bool\",\n\"valueType\": \"Bool\"\n},\n{\n\"resourceName\": \"EnableRandomization_Bool\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"WriteBoolArrayValue\",\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/WriteBoolArrayValue\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"BoolArray\",\n\"valueType\": \"BoolArray\"\n},\n{\n\"resourceName\": \"EnableRandomization_BoolArray\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"Bool\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/Bool\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Bool\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"BoolArray\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/BoolArray\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"BoolArray\",\n\"valueType\": \"BoolArray\"\n}\n]\n}\n]\n}\n}\n
From this output, look for the URL associated with the PUT command (the first URL listed). This is the command eKuiper will use to call on the device. There are two parameters for this command:
- Bool: Sets the returned value when other services want to get device data. This parameter is used only when EnableRandomization_Bool is set to false.
- EnableRandomization_Bool: Enables/disables the randomized generation of bool values. If this value is set to true, then the first parameter is ignored.
You can test calling this command with its parameters using curl as shown below.
curl -X PUT \\\nhttp://edgex-core-command:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue \\\n-H 'Content-Type: application/json' \\\n-d '{\"Bool\":\"true\", \"EnableRandomization_Bool\": \"true\"}'\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#create-rules","title":"Create rules","text":"Now that you have EdgeX and eKuiper running, the EdgeX stream defined, and you know the command to actuate Random-Boolean-Device
, it is time to build the eKuiper rules.
Again, the 1st rule is to monitor for events coming from the Random-UnsignedInteger-Device
device (one of the default virtual device managed devices), and if a uint8
reading value is found larger than 20
in the event, then send the command to Random-Boolean-Device
device to start generating random numbers (specifically - set random generation bool to true). Given the URL and parameters to the command, below is the curl command to declare the first rule in eKuiper.
curl -X POST \\\nhttp://$ekuiper_server:59720/rules \\\n-H 'Content-Type: application/json' \\\n-d '{\n \"id\": \"rule1\",\n \"sql\": \"SELECT uint8 FROM demo WHERE uint8 > 20\",\n \"actions\": [\n {\n \"rest\": {\n \"url\": \"http://edgex-core-command:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n \"method\": \"put\",\n \"dataTemplate\": \"{\\\"Bool\\\":\\\"true\\\", \\\"EnableRandomization_Bool\\\": \\\"true\\\"}\",\n \"sendSingle\": true\n }\n },\n {\n \"log\":{}\n }\n ]\n}'\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#the-second-rule","title":"The second rule","text":"The 2nd rule is to monitor for events coming from the Random-Integer-Device
device (another of the default virtual device managed devices), and if the average for int8
reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device
device to stop generating random numbers (specifically - set random generation bool to false). Here is the curl request to setup the second rule in eKuiper. The same command URL is used as the same device action (Random-Boolean-Device's PUT bool command
) is being actuated, but with different parameters.
curl -X POST \\\nhttp://$ekuiper_server:59720/rules \\\n-H 'Content-Type: application/json' \\\n-d '{\n \"id\": \"rule2\",\n \"sql\": \"SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0\",\n \"actions\": [\n {\n \"rest\": {\n \"url\": \"http://edgex-core-command:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n \"method\": \"put\",\n \"dataTemplate\": \"{\\\"Bool\\\":\\\"false\\\", \\\"EnableRandomization_Bool\\\": \\\"false\\\"}\",\n \"sendSingle\": true\n }\n },\n {\n \"log\":{}\n }\n ]\n}'\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#watch-the-ekuiper-logs","title":"Watch the eKuiper Logs","text":"Both rules are now created in eKuiper. eKuiper is busy analyzing the event data coming for the virtual devices looking for readings that match the rules you created. You can watch the edgex-kuiper container logs for the rule triggering and command execution.
docker logs edgex-kuiper\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#explore-the-results","title":"Explore the Results","text":"You can also explore the eKuiper analysis that caused the commands to be sent to the service. To see the the data from the analysis, use the SQL below to query eKuiper filtering data.
SELECT int8, \"true\" AS randomization FROM demo WHERE uint8 > 20\n
The output of the SQL should look similar to the results below.
[{\"int8\":-75, \"randomization\":\"true\"}]\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#extended-reading","title":"Extended Reading","text":"Use these resources to learn more about the features of LF Edge eKuiper.
EdgeX - Levski Release
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview","title":"Overview","text":"In this example, we use a script to simulate a custom-defined MQTT device, instead of a real device. This provides a straight-forward way to test the device-mqtt features using an MQTT-broker.
Note
Multi-Level Topics move metadata (i.e. device name, command name,... etc) from the payload into the MQTT topics. Notice the sections marked with Using Multi-level Topic: for relevant input/output throughout this example.
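As a rough illustration (this is not device-mqtt code), the Go sketch below splits a multi-level command topic of the shape used by the simulator later in this example into the metadata pieces that would otherwise have to travel in the JSON payload:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Multi-level topic shape used later in this example:
	// command/<device-name>/<command-name>/<method>/<request-id>
	topic := "command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f"

	parts := strings.Split(topic, "/")
	device, cmd, method, requestID := parts[1], parts[2], parts[3], parts[4]

	// With single-level topics this metadata would have to be carried in the payload instead.
	fmt.Println(device, cmd, method, requestID)
}
```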
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-the-custom-device-configuration","title":"Prepare the Custom Device Configuration","text":"In this section, we create folders that contain files required for deployment of a customized device configuration to work with the existing device service:
- custom-config\n |- devices\n |- my.custom.device.config.yaml\n |- profiles\n |- my.custom.device.profile.yml\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#device-configuration","title":"Device Configuration","text":"Use this configuration file to define devices and schedule jobs. device-mqtt generates a relative instance on start-up.
Create the device configuration file, named my.custom.device.config.yaml
, as shown below:
# Pre-define Devices\ndeviceList:\n- name: \"my-custom-device\"\nprofileName: \"my-custom-device-profile\"\ndescription: \"MQTT device is created for test purpose\"\nlabels: [ \"MQTT\", \"test\" ]\nprotocols:\nmqtt:\nCommandTopic: \"command/my-custom-device\"\nautoEvents:\n- interval: \"30s\"\nonChange: false\nsourceName: \"message\"\n
Note
CommandTopic
is used to publish the GET or SET command request
The DeviceProfile defines the device's values and operation method, which can be Read or Write.
Create a device profile, named my.custom.device.profile.yml
, with the following content:
name: \"my-custom-device-profile\"\nmanufacturer: \"iot\"\nmodel: \"MQTT-DEVICE\"\ndescription: \"Test device profile\"\nlabels:\n- \"mqtt\"\n- \"test\"\ndeviceResources:\n-\nname: randnum\nisHidden: true\ndescription: \"device random number\"\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\n-\nname: ping\nisHidden: true\ndescription: \"device awake\"\nproperties:\nvalueType: \"String\"\nreadWrite: \"R\"\n-\nname: message\nisHidden: false\ndescription: \"device message\"\nproperties:\nvalueType: \"String\"\nreadWrite: \"RW\"\n-\nname: json\nisHidden: false\ndescription: \"JSON message\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"RW\"\nmediaType: \"application/json\"\n\ndeviceCommands:\n-\nname: values\nreadWrite: \"R\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"randnum\" }\n- { deviceResource: \"ping\" }\n- { deviceResource: \"message\" }\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-docker-compose-file","title":"Prepare docker-compose file","text":"$ git clone git@github.com:edgexfoundry/edgex-compose.git\n$ cd edgex-compose\n$ git checkout main\n
Note: Use the main branch until levski is released.
$ cd compose-builder\n$ make gen ds-mqtt mqtt-broker no-secty ui\n
$ ls | grep 'docker-compose.yml'\ndocker-compose.yml\n
Create a docker-compose file named docker-compose.override.yml to extend the compose file that was generated by the compose-builder. In this file, we add the volume path and environment variables as shown below:
# docker-compose.override.yml\n\nversion: '3.7'\n\nservices:\ndevice-mqtt:\nenvironment:\nDEVICE_DEVICESDIR: /custom-config/devices\nDEVICE_PROFILESDIR: /custom-config/profiles\nvolumes:\n- /path/to/custom-config:/custom-config\n
Note
Replace the /path/to/custom-config
in the example with the correct path
Deploy EdgeX using the following commands:
$ cd edgex-compose/compose-builder\n$ docker compose pull\n$ docker compose -f docker-compose.yml -f docker-compose.override.yml up -d\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#using-a-mqtt-device-simulator","title":"Using a MQTT Device Simulator","text":""},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview_1","title":"Overview","text":""},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#expected-behaviors","title":"Expected Behaviors","text":"Using the detailed script below as a simulator, there are three behaviors:
The simulator publishes the data to the MQTT broker with topic incoming/data/my-custom-device/values
and the message is similar to the following:
{\n\"randnum\" : 4161.3549,\n\"ping\" : \"pong\",\n\"message\" : \"Hello World\"\n}\n
Receive the reading request, then return the response.
The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f
and message returned is similar to the following:
{\"randnum\":\"42.0\"}\n
The simulator returns the response to the MQTT broker, the topic is command/response/#
and the message is similar to the following:
{\"randnum\":\"4.20e+01\"}\n
Receive the set request, then change the device value.
The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/testmessage/set/293d7a00-66e1-4374-ace0-07520103c95f
and the message is similar to the following:
{\"message\":\"test message...\"}\n
The simulator changes the device value and returns the response to the MQTT broker, the topic is command/response/#
and the message is similar to the following:
{\"message\":\"test message...\"}\n
To implement the simulated custom-defined MQTT device, create a javascript, named mock-device.js
, with the following content:
function getRandomFloat(min, max) {\nreturn Math.random() * (max - min) + min;\n}\n\nconst deviceName = \"my-custom-device\";\nlet message = \"test-message\";\nlet json = {\"name\" : \"My JSON\"};\n\n// DataSender sends async value to MQTT broker every 15 seconds\nschedule('*/15 * * * * *', ()=>{\nvar data = {};\ndata.randnum = getRandomFloat(25,29).toFixed(1);\ndata.ping = \"pong\"\ndata.message = \"Hello World\"\n\npublish( 'incoming/data/my-custom-device/values', JSON.stringify(data));\n});\n\n// CommandHandler receives commands and sends response to MQTT broker\n// 1. Receive the reading request, then return the response\n// 2. Receive the set request, then change the device value\nsubscribe( \"command/my-custom-device/#\" , (topic, val) => {\nconst words = topic.split('/');\nvar cmd = words[2];\nvar method = words[3];\nvar uuid = words[4];\nvar response = {};\nvar data = val;\n\nif (method == \"set\") {\nswitch(cmd) {\ncase \"message\":\nmessage = data[cmd];\nbreak;\ncase \"json\":\njson = data[cmd];\nbreak;\n}\n}else{\nswitch(cmd) {\ncase \"ping\":\nresponse.ping = \"pong\";\nbreak;\ncase \"message\":\nresponse.message = message;\nbreak;\ncase \"randnum\":\nresponse.randnum = 12.123;\nbreak;\ncase \"json\":\nresponse.json = json;\nbreak;\n}\n}\nvar sendTopic =\"command/response/\"+ uuid;\npublish( sendTopic, JSON.stringify(response));\n});\n
To run the device simulator, enter the commands shown below with the following changes: $ mv mock-device.js /path/to/mqtt-scripts\n$ docker run --rm --name=mqtt-scripts \\\n -v /path/to/mqtt-scripts:/scripts --network host \\\n dersimn/mqtt-scripts --dir /scripts\n
Note
Replace the /path/to/mqtt-scripts
in the example mv command with the correct path
Then the mqtt-scripts show logs as below:
2022-08-12 09:52:42.086 <info> mqtt-scripts 1.2.2 starting\n2022-08-12 09:52:42.227 <info> mqtt connected mqtt://127.0.0.1\n2022-08-12 09:52:42.733 <info> /scripts/mock-device.js loading\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-commands","title":"Execute Commands","text":"Now we're ready to run some commands.
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#find-executable-commands","title":"Find Executable Commands","text":"Use the following query to find executable commands:
$ curl http://localhost:59882/api/v3/device/all | json_pp\n\n{\n\"deviceCoreCommands\" : [\n{\n\"profileName\" : \"my-custom-device-profile\",\n\"coreCommands\" : [\n{\n\"name\" : \"values\",\n\"get\" : true,\n\"path\" : \"/api/v3/device/name/my-custom-device/values\",\n\"url\" : \"http://edgex-core-command:59882\",\n\"parameters\" : [\n{\n\"resourceName\" : \"randnum\",\n\"valueType\" : \"Float32\"\n},\n{\n\"resourceName\" : \"ping\",\n\"valueType\" : \"String\"\n},\n{\n\"valueType\" : \"String\",\n\"resourceName\" : \"message\"\n}\n]\n},\n{\n\"url\" : \"http://edgex-core-command:59882\",\n\"parameters\" : [\n{\n\"resourceName\" : \"message\",\n\"valueType\" : \"String\"\n}\n],\n\"name\" : \"message\",\n\"get\" : true,\n\"path\" : \"/api/v3/device/name/my-custom-device/message\",\n\"set\" : true\n},\n{\n\"name\": \"json\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/MQTT-test-device/json\",\n\"url\" : \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"json\",\n\"valueType\": \"Object\"\n}\n]\n}\n],\n\"deviceName\" : \"my-custom-device\"\n}\n],\n\"apiVersion\" : \"v2\",\n\"statusCode\" : 200\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-set-command","title":"Execute SET Command","text":"Execute a SET command according to the url and parameterNames, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v3/device/name/my-custom-device/message \\\n -H \"Content-Type:application/json\" -X PUT \\\n -d '{\"message\":\"Hello!\"}'\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-get-command","title":"Execute GET Command","text":"Execute a GET command as follows:
$ curl http://localhost:59882/api/v3/device/name/my-custom-device/message | json_pp\n\n{\n\"apiVersion\":\"v2\",\n\"event\":{\n\"apiVersion\":\"v2\",\n\"deviceName\":\"my-custom-device\",\n\"id\":\"13164041-2e6c-4454-9bc3-8e8987e85311\",\n\"origin\":1660298227470009014,\n\"profileName\":\"my-custom-device-profile\",\n\"readings\":[\n{\n\"deviceName\":\"my-custom-device\",\n\"id\":\"c58e65b4-62f0-4e41-b368-645993ec0bfd\",\n\"origin\":1660298227470005426,\n\"profileName\":\"my-custom-device-profile\",\n\"resourceName\":\"message\",\n\"value\":\"Hello!\",\n\"valueType\":\"String\"\n}\n],\n\"sourceName\":\"message\"\n},\n\"statusCode\":200\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#schedule-job","title":"Schedule Job","text":"The schedule job is defined in the autoEvents
section of the device definition file:
autoEvents:\n- interval: \"30s\"\nonChange: false\nsourceName: \"message\"\n
After the service starts, query core-data's reading API. The results show that the service auto-executes the command every 30 secs, as shown below:
$ curl http://localhost:59880/api/v3/reading/resourceName/message | json_pp\n\n{\n\"statusCode\" : 200,\n\"readings\" : [\n{\n\"value\" : \"test-message\",\n\"id\" : \"e91b8ca6-c5c4-4509-bb61-bd4b09fe835c\",\n\"resourceName\" : \"message\",\n\"origin\" : 1624418361324331392,\n\"profileName\" : \"my-custom-device-profile\",\n\"deviceName\" : \"my-custom-device\",\n\"valueType\" : \"String\"\n},\n{\n\"resourceName\" : \"message\",\n\"value\" : \"test-message\",\n\"id\" : \"1da58cb7-2bf4-47f0-bbb8-9519797149a2\",\n\"deviceName\" : \"my-custom-device\",\n\"valueType\" : \"String\",\n\"profileName\" : \"my-custom-device-profile\",\n\"origin\" : 1624418330822988843\n},\n...\n],\n\"apiVersion\" : \"v2\"\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#async-device-reading","title":"Async Device Reading","text":"The device-mqtt
subscribes to a DataTopic
, which waits for the real device to send value to MQTT broker, then device-mqtt
parses the value and forward to the northbound.
The data format contains the following values:
The following results show that the mock device sent the reading every 15 secs:
$ curl http://localhost:59880/api/v3/reading/resourceName/randnum | json_pp\n\n{\n\"readings\" : [\n{\n\"origin\" : 1624418475007110946,\n\"valueType\" : \"Float32\",\n\"deviceName\" : \"my-custom-device\",\n\"id\" : \"9b3d337e-8a8a-4a6c-8018-b4908b57abb8\",\n\"resourceName\" : \"randnum\",\n\"profileName\" : \"my-custom-device-profile\",\n\"value\" : \"2.630000e+01\"\n},\n{\n\"deviceName\" : \"my-custom-device\",\n\"valueType\" : \"Float32\",\n\"id\" : \"06918cbb-ada0-4752-8877-0ef8488620f6\",\n\"origin\" : 1624418460007833720,\n\"profileName\" : \"my-custom-device-profile\",\n\"value\" : \"2.570000e+01\",\n\"resourceName\" : \"randnum\",\n},\n...\n],\n\"statusCode\" : 200,\n\"apiVersion\" : \"v2\"\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mqtt-device-service-configuration","title":"MQTT Device Service Configuration","text":"MQTT Device Service has the following configurations to implement the MQTT protocol.
| Configuration | Default Value | Description |
|---|---|---|
| MQTTBrokerInfo.Schema | tcp | The URL schema |
| MQTTBrokerInfo.Host | localhost | The URL host |
| MQTTBrokerInfo.Port | 1883 | The URL port |
| MQTTBrokerInfo.Qos | 0 | Quality of Service: 0 (At most once), 1 (At least once) or 2 (Exactly once) |
| MQTTBrokerInfo.KeepAlive | 3600 | Seconds between client pings when no data is flowing, to avoid the client being disconnected. Must be greater than 2 |
| MQTTBrokerInfo.ClientId | device-mqtt | ClientId to connect to the broker with |
| MQTTBrokerInfo.CredentialsRetryTime | 120 | The number of retries to get the credential |
| MQTTBrokerInfo.CredentialsRetryWait | 1 | The wait time (seconds) when retrying to get the credential |
| MQTTBrokerInfo.ConnEstablishingRetry | 10 | The number of retries to establish the MQTT connection |
| MQTTBrokerInfo.ConnRetryWaitTime | 5 | The wait time (seconds) when retrying to establish the MQTT connection |
| MQTTBrokerInfo.AuthMode | none | Indicates what to use when connecting to the broker. Must be one of "none", "usernamepassword" |
| MQTTBrokerInfo.CredentialsPath | credentials | Name of the path in the secret provider to retrieve your secrets. Must be non-blank |
| MQTTBrokerInfo.IncomingTopic | DataTopic (incoming/data/#) | IncomingTopic is used to receive the async value |
| MQTTBrokerInfo.ResponseTopic | ResponseTopic (command/response/#) | ResponseTopic is used to receive the command response from the device |
| MQTTBrokerInfo.UseTopicLevels | false (true) | Boolean setting to use multi-level topics |
| MQTTBrokerInfo.Writable.ResponseFetchInterval | 500 | ResponseFetchInterval specifies the retry interval (milliseconds) to fetch the command response from the MQTT broker |
Note
Using Multi-level Topic: Remember to change the defaults in parentheses in the table above.
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overriding-with-environment-variables","title":"Overriding with Environment Variables","text":"The user can override any of the above configurations using environment:
variables to meet their requirement, for example:
# docker-compose.override.yml\n\nversion: '3.7'\n\nservices:\ndevice-mqtt:\nenvironment:\nMQTTBROKERINFO_CLIENTID: \"my-device-mqtt\"\nMQTTBROKERINFO_CONNRETRYWAITTIME: \"10\"\nMQTTBROKERINFO_USETOPICLEVELS: \"false\"\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/","title":"Modbus","text":"EdgeX - Ireland Release
This page describes how to connect Modbus devices to EdgeX. In this example, we simulate the temperature sensor instead of using a real device. This provides a straightforward way to test the device service features.
To address issue #61, an important incompatible change was introduced after v2 (Ireland release). In the Device Profile attributes section, startingAddress is now an integer data type and a zero-based value. In v1, startingAddress was a string data type and a one-based value.
You can use any operating system that can install docker and docker-compose. In this example, we use Ubuntu to deploy EdgeX using docker.
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-device-simulator","title":"Modbus Device Simulator","text":"1.Download ModbusPal
Download the fixed version of ModbusPal from https://sourceforge.net/p/modbuspal/discussion/899955/thread/72cf35ee/cd1f/attachment/ModbusPal.jar.
2.Install required lib:
sudo apt install librxtx-java\n
3.Startup the ModbusPal: sudo java -jar ModbusPal.jar\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-register-table","title":"Modbus Register Table","text":"You can find the available registers in the user manual.
Modbus TCP \u2013 Holding Registers
| Address | Name | R/W | Description |
|---|---|---|---|
| 4000 | ThermostatL | R/W | Lower alarm threshold |
| 4001 | ThermostatH | R/W | Upper alarm threshold |
| 4002 | Alarm mode | R/W | 1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher |
| 4004 | Temperature x10 | R | Temperature x 10 (e.g. 10.5 degrees C is reported as 105) |
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#setup-modbuspal","title":"Setup ModbusPal","text":"To simulate the sensor, do the following:
Add registers according to the register table:
Add the ModbusPal support value auto-generator, which can bind to the registers:
Enable the value generator and click the Run
button.
The following sections describe how to complete the set up before starting the services. If you prefer to start the services and then add the device, see Set Up After Starting Services
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#create-a-custom-configuration-folder","title":"Create a Custom configuration folder","text":"Run the following command:
mkdir -p custom-config\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-profile","title":"Set Up Device Profile","text":"Run the following command to create your device profile:
cd custom-config\nnano temperature.profile.yml\n
Fill in the device profile according to the Modbus Register Table, as shown below:
name: \"Ethernet-Temperature-Sensor\"\nmanufacturer: \"Audon Electronics\"\nmodel: \"Temperature\"\nlabels:\n- \"Web\"\n- \"Modbus TCP\"\n- \"SNMP\"\ndescription: \"The NANO_TEMP is a Ethernet Thermometer measuring from -55\u00b0C to 125\u00b0C with a web interface and Modbus TCP communications.\"\n\ndeviceResources:\n-\nname: \"ThermostatL\"\nisHidden: true\ndescription: \"Lower alarm threshold of the temperature\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 3999, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"RW\"\nscale: 0.1\n-\nname: \"ThermostatH\"\nisHidden: true\ndescription: \"Upper alarm threshold of the temperature\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4000, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"RW\"\nscale: 0.1\n-\nname: \"AlarmMode\"\nisHidden: true\ndescription: \"1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4001 }\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"RW\"\n-\nname: \"Temperature\"\nisHidden: false\ndescription: \"Temperature x 10 (np. 10,5 st.C to 105)\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4003, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: 0.1\n\ndeviceCommands:\n-\nname: \"AlarmThreshold\"\nreadWrite: \"RW\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"ThermostatL\" }\n- { deviceResource: \"ThermostatH\" }\n-\nname: \"AlarmMode\"\nreadWrite: \"RW\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"AlarmMode\", mappings: { \"1\":\"OFF\",\"2\":\"Lower\",\"3\":\"Higher\",\"4\":\"Lower or Higher\"} }\n
In the Modbus protocol, we provide the following attributes:
1. primaryTable: HOLDING_REGISTERS, INPUT_REGISTERS, COILS, DISCRETES_INPUT
2. startingAddress: This attribute defines the zero-based starting address in the Modbus device. For example, the GET command requests data from Modbus address 4004 to get the temperature data, so the starting register address should be 4003.
3. IS_BYTE_SWAP, IS_WORD_SWAP: To handle different Modbus binary data orders, we support the swap operation for Int32, Uint32 and Float32 before decoding the binary data. For example: { primaryTable: \"INPUT_REGISTERS\", startingAddress: \"4\", isByteSwap: \"false\", isWordSwap: \"true\" }
4. RAW_TYPE: This attribute defines the binary data type read from the Modbus device; the value type then indicates the data type that the user wants to receive. We only support Int16, Int32 and Uint16 for rawType, and the corresponding value type must be Float32 or Float64. For example:
deviceResources:\n-\nname: \"Temperature\"\nisHidden: false\ndescription: \"Temperature x 10 (np. 10,5 st.C to 105)\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4003, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: 0.1\n
In the device-modbus, the property rawType (or valueType if rawType is not defined) decides how many registers will be read. A register, such as a holding register, has 16 bits. If the Modbus device's user manual specifies that a value occupies two registers, define it as Float32, Int32 or Uint32 in the deviceProfile.
Once we execute a command, device-modbus knows the value type, register type, starting address, and register length, so it can read or write the value using the Modbus protocol.
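As a rough sketch of that idea (this is not the actual device-modbus source), the register count could be derived from the declared types along these lines:

```go
package main

import "fmt"

// registerCount derives how many 16-bit registers a value occupies from the
// declared rawType, falling back to valueType when rawType is not defined.
// Illustrative only; device-modbus implements this internally.
func registerCount(rawType, valueType string) int {
	t := rawType
	if t == "" {
		t = valueType
	}
	switch t {
	case "Int16", "Uint16":
		return 1 // one 16-bit register
	case "Int32", "Uint32", "Float32":
		return 2 // two 16-bit registers
	case "Int64", "Uint64", "Float64":
		return 4 // four 16-bit registers
	default:
		return 1
	}
}

func main() {
	fmt.Println(registerCount("Int16", "Float32")) // 1: rawType decides
	fmt.Println(registerCount("", "Float32"))      // 2: valueType decides
}
```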
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-service-configuration","title":"Set Up Device Service Configuration","text":"Run the following command to create your device configuration:
cd custom-config\nnano device.config.yaml\n
Fill in the device.config.yaml file, as shown below: deviceList:\nname: \"Modbus-TCP-Temperature-Sensor\"\nprofileName: \"Ethernet-Temperature-Sensor\"\ndescription: \"This device is a product for monitoring the temperature via the ethernet\"\nlabels: - \"temperature\"\n- \"modbus\"\n- \"TCP\"\nprotocols:\nmodbus-tcp:\nAddress: \"172.17.0.1\"\nPort: \"502\"\nUnitID: \"1\"\nTimeout: \"5\"\nIdleTimeout: \"5\"\nautoEvents:\ninterval: \"30s\"\nonChange: false\nsourceName: \"Temperature\"\n
The address 172.17.0.1 points to the docker bridge network, which means requests can be forwarded from the docker network to the host.
Use this configuration file to define devices and AutoEvents. The device-modbus service then creates the corresponding device instances on startup.
The device-modbus offers two types of protocol, Modbus TCP and Modbus RTU, which can be defined as shown below:
| protocol | Name | Protocol | Address | Port | UnitID | BaudRate | DataBits | StopBits | Parity | Timeout | IdleTimeout |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Modbus TCP | Gateway address | TCP | 10.211.55.6 | 502 | 1 | | | | | 5 | 5 |
| Modbus RTU | Gateway address | RTU | /tmp/slave | 502 | 2 | 19200 | 8 | 1 | N | 5 | 5 |
$ git clone git@github.com:edgexfoundry/edgex-compose.git\n
$ cd edgex-compose/compose-builder\n$ make gen ds-modbus\n
Add prepared configuration files to docker-compose file, you can mount them using volumes and change the environment for device-modbus internal use.
Open the docker-compose.yml
file and then add volumes path and environment as shown below:
device-modbus:\n...\nenvironment:\n...\nDEVICE_DEVICESDIR: /custom-config\nDEVICE_PROFILESDIR: /custom-config\nvolumes:\n...\n- /path/to/custom-config:/custom-config\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#start-edgex-foundry-on-docker","title":"Start EdgeX Foundry on Docker","text":"Since we generate the docker-compose.yml
file at the previous step, we can deploy EdgeX as shown below:
$ cd edgex-compose/compose-builder\n$ docker compose up -d\nCreating network \"compose-builder_edgex-network\" with driver \"bridge\"\nCreating volume \"compose-builder_consul-acl-token\" with default driver\n...\nCreating edgex-core-metadata ... done\nCreating edgex-core-command ... done\nCreating edgex-core-data ... done\nCreating edgex-device-modbus ... done\nCreating edgex-app-rules-engine ... done\nCreating edgex-sys-mgmt-agent ... done\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-after-starting-services","title":"Set Up After Starting Services","text":"If the services are already running and you want to add a device, you can use the Core Metadata API as outlined in this section. If you set up the device profile and Service as described in Set Up Before Starting Services, you can skip this section.
To add a device after starting the services, complete the following steps:
Upload the device profile above to metadata with a POST to http://localhost:59881/api/v3/deviceprofile/uploadfile and add the file as key \"file\" to the body in form-data format, and the created ID will be returned. The following example command uses curl to send the request:
$ curl http://localhost:59881/api/v3/deviceprofile/uploadfile \\\n -F \"file=@temperature.profile.yml\"\n
Ensure the Modbus device service is running, adjust the service name below to match if necessary or if using other device services.
Add the device with a POST to http://localhost:59881/api/v3/device, the body will look something like:
$ curl http://localhost:59881/api/v3/device -H \"Content-Type:application/json\" -X POST \\\n -d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\" :\"Modbus-TCP-Temperature-Sensor\",\n \"description\":\"This device is a product for monitoring the temperature via the ethernet\",\n \"labels\":[ \n \"Temperature\",\n \"Modbus TCP\"\n ],\n \"serviceName\": \"device-modbus\",\n \"profileName\": \"Ethernet-Temperature-Sensor\",\n \"protocols\":{\n \"modbus-tcp\":{\n \"Address\" : \"172.17.0.1\",\n \"Port\" : \"502\",\n \"UnitID\" : \"1\",\n \"Timeout\" : \"5\",\n \"IdleTimeout\" : \"5\"\n }\n },\n \"autoEvents\":[ \n { \n \"Interval\":\"30s\",\n \"onChange\":false,\n \"SourceName\":\"Temperature\"\n }\n ],\n \"adminState\":\"UNLOCKED\",\n \"operatingState\":\"UP\"\n }\n }\n ]'\n
The service name must match/refer to the target device service, and the profile name must match the device profile name from the previous steps.
Now we're ready to run some commands.
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#find-executable-commands","title":"Find Executable Commands","text":"Use the following query to find executable commands:
$ curl http://localhost:59882/api/v3/device/all | json_pp\n\n{\n\"apiVersion\" : \"v2\",\n\"deviceCoreCommands\" : [\n{\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\",\n\"coreCommands\" : [\n{\n\"url\" : \"http://edgex-core-command:59882\",\n\"name\" : \"AlarmThreshold\",\n\"get\" : true,\n\"set\" : true,\n\"parameters\" : [\n{\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"ThermostatL\"\n},\n{\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"ThermostatH\"\n}\n],\n\"path\" : \"/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold\"\n},\n{\n\"get\" : true,\n\"url\" : \"http://edgex-core-command:59882\",\n\"name\" : \"AlarmMode\",\n\"set\" : true,\n\"path\" : \"/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmMode\",\n\"parameters\" : [\n{\n\"resourceName\" : \"AlarmMode\",\n\"valueType\" : \"Int16\"\n}\n]\n},\n{\n\"get\" : true,\n\"url\" : \"http://edgex-core-command:59882\",\n\"name\" : \"Temperature\",\n\"path\" : \"/api/v3/device/name/Modbus-TCP-Temperature-Sensor/Temperature\",\n\"parameters\" : [\n{\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"Temperature\"\n}\n]\n}\n]\n}\n],\n\"statusCode\" : 200\n}\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-set-command","title":"Execute SET command","text":"Execute SET command according to url
and parameterNames
, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold \\\n -H \"Content-Type:application/json\" -X PUT \\\n -d '{\"ThermostatL\":\"15\",\"ThermostatH\":\"100\"}'\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-get-command","title":"Execute GET command","text":"Replace \\<host> with the server IP when running the GET command.
$ curl http://localhost:59882/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold | json_pp\n\n{\n\"statusCode\" : 200,\n\"apiVersion\" : \"v2\",\n\"event\" : {\n\"origin\" : 1624324686964377495,\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"f3d44a0f-d2c3-4ef6-9441-ad6b1bfb8a9e\",\n\"sourceName\" : \"AlarmThreshold\",\n\"readings\" : [\n{\n\"resourceName\" : \"ThermostatL\",\n\"value\" : \"1.500000e+01\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"9aa879a0-c184-476b-8124-34d35a2a51f3\",\n\"valueType\" : \"Float32\",\n\"mediaType\" : \"\",\n\"binaryValue\" : null,\n\"origin\" : 1624324686963970614,\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n},\n{\n\"value\" : \"1.000000e+02\",\n\"resourceName\" : \"ThermostatH\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"bf7df23b-4338-4b93-a8bd-7abd5e848379\",\n\"valueType\" : \"Float32\",\n\"mediaType\" : \"\",\n\"binaryValue\" : null,\n\"origin\" : 1624324686964343768,\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n}\n],\n\"apiVersion\" : \"v2\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n}\n}\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#autoevent","title":"AutoEvent","text":"The AutoEvent is defined in the autoEvents
section of the device definition file:
deviceList:\nautoEvents:\ninterval: \"30s\"\nonChange: false\nsourceName: \"Temperature\"\n
After service startup, query core-data's API. The results show that the service auto-executes the command every 30 seconds. $ curl http://localhost:59880/api/v3/event/device/name/Modbus-TCP-Temperature-Sensor | json_pp\n\n{\n\"events\" : [\n{\n\"readings\" : [\n{\n\"value\" : \"5.300000e+01\",\n\"binaryValue\" : null,\n\"origin\" : 1624325219186870396,\n\"id\" : \"68a66a35-d3cf-48a2-9bf0-09578267a3f7\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"mediaType\" : \"\",\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"Temperature\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n}\n],\n\"apiVersion\" : \"v2\",\n\"origin\" : 1624325219186977564,\n\"id\" : \"4b235616-7304-419e-97ae-17a244911b1c\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"sourceName\" : \"Temperature\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n},\n{\n\"readings\" : [\n{\n\"profileName\" : \"Ethernet-Temperature-Sensor\",\n\"resourceName\" : \"Temperature\",\n\"valueType\" : \"Float32\",\n\"id\" : \"56b7e8be-7ce8-4fa9-89e2-3a1a7ef09050\",\n\"origin\" : 1624325189184675483,\n\"value\" : \"5.300000e+01\",\n\"binaryValue\" : null,\n\"mediaType\" : \"\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\"\n}\n],\n\"profileName\" : \"Ethernet-Temperature-Sensor\",\n\"sourceName\" : \"Temperature\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"fbab44f5-9775-4c09-84bd-cbfb00001115\",\n\"origin\" : 1624325189184721223,\n\"apiVersion\" : \"v2\"\n},\n...\n],\n\"apiVersion\" : \"v2\",\n\"statusCode\" : 200\n}\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-the-modbus-rtu-device","title":"Set up the Modbus RTU Device","text":"This section describes how to connect the Modbus RTU device. We use Ubuntu OS and a Modbus RTU device for this example.
Connect the device to your machine(laptop or gateway,etc.) via RS485/USB adaptor and power on.
Execute a command on the machine, and you can find a message like the following:
$ dmesg | grep tty\n...\n...\n[18006.167625] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0\n
It shows the USB attach to ttyUSB0, then you can check whether the device path exists:
$ ls /dev/ttyUSB0\n/dev/ttyUSB0\n
For security reason, the EdgeX set up the user permission as below:
device-modbus:\n...\nuser: 2002:2001 # UID:GID\n
So we need to change the owner for the specified group by the following command: sudo chown :2001 /dev/ttyUSB0\n\n# Or change the permissions for multiple files\nsudo chown :2001 /dev/tty*\n
Note
Since the owner will reset after the system reboot, we can add this script to the startup script. For Raspberry Pi as example, add script to /etc/rc.local
, then the Pi will run this script at bootup.
Modify the docker-compose.yml file to mount the device path to the device-modbus, and here are two ways to mount the device path:
Using devices
:
device-modbus:\n...\ndevices:\n- /dev/ttyUSB0\n
Or using volumes
and device_cgroup_rules
:
device-modbus:\n...\nvolumes:\n...\n- /dev:/dev\ndevice_cgroup_rules:\n- 'c 188:* rw'
$ docker compose up -d\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#add-device-to-edgex","title":"Add device to EdgeX","text":"$ nano modbus.rtu.demo.profile.yml\n
name: \"Modbus-RTU-IO-Module\"\nmanufacturer: \"icpdas\"\nmodel: \"M-7055\"\nlabels:\n- \"Modbus RTU\"\n- \"IO Module\"\ndescription: \"This IO module offers 8 isolated channels for digital input and 8 isolated channels for digital output.\"\n\ndeviceResources:\n-\nname: \"DO0\"\nisHidden: true\ndescription: \"On/Off , 0-OFF 1-ON\"\nattributes:\n{ primaryTable: \"COILS\", startingAddress: 0 }\nproperties:\nvalueType: \"Bool\"\nreadWrite: \"RW\"\n-\nname: \"DO1\"\nisHidden: true\ndescription: \"On/Off , 0-OFF 1-ON\"\nattributes:\n{ primaryTable: \"COILS\", startingAddress: 1 }\nproperties:\nvalueType: \"Bool\"\nreadWrite: \"RW\"\n-\nname: \"DO2\"\nisHidden: true\ndescription: \"On/Off , 0-OFF 1-ON\"\nattributes:\n{ primaryTable: \"COILS\", startingAddress: 2 }\nproperties:\nvalueType: \"Bool\"\nreadWrite: \"RW\"\n\ndeviceCommands:\n-\nname: \"DO\"\nreadWrite: \"RW\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"DO0\" }\n- { deviceResource: \"DO1\" }\n- { deviceResource: \"DO2\" }\n
Upload the device profile
$ curl http://localhost:59881/api/v3/deviceprofile/uploadfile \\\n -F \"file=@modbus.rtu.demo.profile.yml\"\n
Create the device entity to the EdgeX. You can find the Modbus RTU setting on the device or the user manual.
$ curl http://localhost:59881/api/v3/device -H \"Content-Type:application/json\" -X POST \\\n-d '[\n{\n\"apiVersion\" : \"v3\",\n\"device\": {\n\"name\" :\"Modbus-RTU-IO-Module\",\n\"description\":\"The device can be used to monitor the status of the digital input and digital output channels.\",\n\"labels\":[ \"IO Module\",\n\"Modbus RTU\"\n],\n\"serviceName\": \"device-modbus\",\n\"profileName\": \"Ethernet-Temperature-Sensor\",\n\"protocols\":{\n\"modbus-tcp\":{\n\"Address\" : \"/dev/ttyUSB0\",\n\"BaudRate\" : \"19200\",\n\"DataBits\" : \"8\",\n\"StopBits\" : \"1\",\n\"Parity\" : \"N\",\n\"UnitID\" : \"1\",\n\"Timeout\" : \"5\",\n\"IdleTimeout\" : \"5\"\n}\n},\n\"adminState\":\"UNLOCKED\",\n\"operatingState\":\"UP\"\n}\n}\n]'\n
EdgeX - Ireland Release
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#overview","title":"Overview","text":"In this example, you add a new Patlite Signal Tower which communicates via SNMP. This example demonstrates how to connect a device through the SNMP Device Service.
Patlite Signal Tower, model NHL-FB2
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#setup","title":"Setup","text":""},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#hardware-needed","title":"Hardware needed","text":"In order to exercise this example, you will need the following hardware
In addition to the hardware, you will need the following software
If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry.
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-service-to-your-docker-composeyml","title":"Add the SNMP Device Service to your docker-compose.yml","text":"The EdgeX docker-compose.yml file used to run EdgeX must include the SNMP device service for this example. You can either:
See Getting Started using Docker if you need assistance running EdgeX once you have your Docker Compose file.
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-profile-and-device","title":"Add the SNMP Device Profile and Device","text":"SNMP devices, like the Patlite Signal Tower, provide a set of managed objects to get and set property information on the associated device. Each managed object has an address call an object identifier (or OID) that you use to interact with the SNMP device's managed object. You use the OID to query the state of the device or to set properties on the device. In the case of the Patlite, there are managed object for the colored lights and the buzzer of the device. You can read the current state of a colored light (get) or turn the light on (set) by making a call to the proper OIDs for the associated managed object.
For example, on the NH series signal towers used in this example, a \"get\" call to the 1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1
OID returns the current state of the Red
signal light. A return value of 1 would signal the light is off. A return value of 2 says the light is on. A return value of 3 says the light is flashing. Read this SNMP tutorial to learn more about the basics of the SNMP protocol. See the Patlite NH Series User's Manual for more information on the SNMP OIDs and function calls and parameters needed for some requests.
A device profile has been created for you to get and set the signal tower's three colored lights and to get and set the buzzer. The patlite-snmp
device profile defines three device resources for each of the lights and the buzzer.
Note that the attributes of each device resource specify the SNMP OID that the device service will use to make a request of the signal tower. For example, the device resource YAML below (taken from the profile) provides the means to get the current Red
light state. Note that a specific OID is provided that is unique to the RED
light, current state property.
-\nname: \"RedLightCurrentState\"\nisHidden: false\ndescription: \"red light current state\"\nattributes:\n{ oid: \"1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1\", community: \"private\" } properties:\nvalueType: \"Int32\"\nreadWrite: \"R\"\ndefaultValue: \"1\"\n
Below is the device resource definitions for the Red
light control state and timer. Again, unique OIDs are provided as attributes for each property.
-\nname: \"RedLightControlState\"\nisHidden: true\ndescription: \"red light state\"\nattributes:\n{ oid: \"1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1\", community: \"private\" } properties:\nvalueType: \"Int32\"\nreadWrite: \"W\"\ndefaultValue: \"1\"\n-\nname: \"RedLightTimer\"\nisHidden: true\ndescription: \"red light timer\"\nattributes:\n{ oid: \"1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1\", community: \"private\" } properties:\nvalueType: \"Int32\"\nreadWrite: \"W\"\ndefaultValue: \"1\"\n
In order to set the Red
light on, one would need to send an SNMP request to set OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1
to a value of 2 (on state) along with a number of seconds delay to the time at OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1
. Sending a zero value (0) to the timer would say you want to turn the light on immediately.
Because setting a light or buzzer requires both of the control state and timer OIDs to be set together (simultaneously), the device profile contains deviceCommands
to set the light and timer device resources (and therefore their SNMP property OIDs) in a single operation. Here is the device command to set the Red
light.
-\nname: \"RedLight\"\nreadWrite: \"W\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"RedLightControlState\" }\n- { deviceResource: \"RedLightTimer\" }\n
You will need to upload this profile into core metadata. Download the Patlite device profile to a convenient directory. Then, using the following curl
command, request the profile be uploaded into core metadata.
curl -X 'POST' 'http://localhost:59881/api/v3/deviceprofile/uploadfile' --form 'file=@\"/home/yourfilelocationhere/patlite-snmp.yml\"'\n
Alert
Note that the curl command above assumes that core metadata is available at localhost
. Change localhost
to the host address of your core metadata service. Also note that you will need to replace the /home/yourfilelocationhere
path with the path where the profile resides.
With the Patlite device profile now in metadata, you can add the Patlite device in metadata. When adding the device, you typically need to provide the name, description, labels and admin/op states of the device when creating it. You will also need to associate the device to a device service (in this case the device-snmp
device service). You will ned to associate the new device to a profile - the patlite profile just added in the step above. And you will need to provide the protocol information (such as the address and port of the device) to tell the device service where it can find the physical device. If you wish the device service to automatically get readings from the device, you will also need to provide AutoEvent properties when creating the device.
The curl command to POST the new Patlite device (named patlite1
) into metadata is provide below. You will need to change the protocol Address
(currently 10.0.0.14
) and Port
(currently 161
) to point to your Patlite on your network. In this request to add a new device, AutoEvents are setup to collect the current state of the 3 lights and buzzer every 10 seconds. Notice the reference to the current state device resources in setting up the AutoEvents.
curl -X 'POST' 'http://localhost:59881/api/v3/device' -d '[{\"apiVersion\" : \"v3\", \"device\": {\"name\": \"patlite1\",\"description\": \"patlite #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"patlite\"],\"serviceName\": \"device-snmp\",\"profileName\": \"patlite-snmp-profile\",\"protocols\": {\"TCP\": {\"Address\": \"10.0.0.14\",\"Port\": \"161\"}}, \"AutoEvents\":[{\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"RedLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"GreenLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"AmberLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"BuzzerCurrentState\"}]}}]'\n
Info
Rather than making a REST API call into metadata to add the device, you could alternately provide device configuration files that define the device. These device configuration files would then have to be provided to the service when it starts up. Since you did not create a new Docker image containing the device configuration and just used the existing SNMP device service Docker image, it was easier to make simple API calls to add the profile and device. However, this would mean the profile and device would need to be added each time metadata's database is cleaned out and reset.
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#test","title":"Test","text":"If the device service is up and running and the profile and device have been added correctly, you should now be able to interact with the Patlite via the core command service (and SNMP under the covers via the SNMP device service).
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#get-the-current-state","title":"Get the Current State","text":"To get the current state of a light (in the example below the Green
light), make a curl request like the following of the command service.
curl 'http://localhost:59882/api/v3/device/name/patlite1/GreenLightCurrentState' | json_pp\n
Alert
Note that the curl command above assumes that the core command service is available at localhost
. Change the host address of your core command service if it is not available at localhost
.
The results should look something like that below.
{\n\"statusCode\" : 200,\n\"apiVersion\" : \"v2\",\n\"event\" : {\n\"origin\" : 1632188382048586660,\n\"deviceName\" : \"patlite1\",\n\"sourceName\" : \"GreenLightCurrentState\",\n\"id\" : \"1e2a7ba1-c273-46d1-b919-207aafbc60ba\",\n\"profileName\" : \"patlite-snmp-profile\",\n\"apiVersion\" : \"v2\",\n\"readings\" : [\n{\n\"origin\" : 1632188382048586660,\n\"resourceName\" : \"GreenLightCurrentState\",\n\"deviceName\" : \"patlite1\",\n\"id\" : \"a41ac1cf-703b-4572-bdef-8487e9a7100e\",\n\"valueType\" : \"Int32\",\n\"value\" : \"1\",\n\"profileName\" : \"patlite-snmp-profile\"\n}\n]\n}\n}\n
Info
Note the value
will be one of 4 numbers indicating the current state of the light
To turn a signal tower light or the buzzer on, you can issue a PUT device command via the core command service. The example below turns on the Green
light.
curl --location --request PUT 'http://localhost:59882/api/v3/device/name/patlite1/GreenLight' --header 'cont: application/json' --data-raw '{\"GreenLightControlState\":\"2\",\"GreenLightTimer\":\"0\"}'\n
This command sets the light on (solid versus flashing) immediate (as denoted by the GreenLightTimer parameter is set to 0). The timer value is the number of seconds delay in making the request to the light or buzzer. Again, the control state can be set to one of four values as listed in the table above.
Alert
Again note that the curl command above assumes that the core command service is available at localhost
. Change the host address of your core command service if it is not available at localhost
.
Did you notice that EdgeX obfuscates almost all information about SNMP, and managed objects and OIDs? The power of EdgeX is to abstract away protocol differences so that to a user, getting data from a device or setting properties on a device such as this Patlite signal tower is as easy as making simple REST calls into the command service. The only place that protocol information is really seen is in the device profile (where the attributes specify the SNMP OIDs). Of course, the device service must be coded to deal with the protocol specifics and it must know how to translate the simple command REST calls into protocol specific requests of the device. But even device service creation is made easier with the use of the SDKs which provide much of the boilerplate code found in almost every device service regardless of the underlying device protocol.
"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/","title":"Modbus - Data Type Conversion","text":"In use cases where the device resource uses an integer data type with a float scale, precision can be lost following transformation.
For example, a Modbus device stores the temperature and humidity in an Int16 data type with a float scale of 0.01. If the temperature is 26.53, the read value is 2653. However, following transformation, the value is 26.
To avoid this scenario, the device resource data type must differ from the value descriptor data type. This is achieved using the optional rawType
attribute in the device profile to define the binary data read from the Modbus device, and a valueType
to indicate what data type the user wants to receive.
If the rawType
attribute exists, the device service parses the binary data according to the defined rawType
, then casts the value according to the valueType
defined in the properties
of the device resources.
The following extract from a device profile defines the rawType
as Int16 and the valueType
as Float32:
Example - Device Profile
deviceResources:\n- name: \"humidity\"\ndescription: \"The response value is the result of the original value multiplied by 100.\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: \"1\", rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: \"0.01\"\nunits: \"%RH\"\n\n- name: \"temperature\"\ndescription: \"The response value is the result of the original value multiplied by 100.\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: \"2\", rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: \"0.01\"\nunits: \"degrees Celsius\"\n
"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#read-command","title":"Read Command","text":"A Read command is executed as follows:
A Write command is executed as follows:
You generally need to transform data when scaling readings between a 16-bit integer and a float value.
The following limitations apply:
rawType
supports only Int16, Uint16 and Int32 data typesvalueType
must be Float32 or Float64If an unsupported data type is defined for the rawType
attribute, the device service throws an exception similar to the following:
Read command failed. Cmd:temperature err:the raw type Int64 is not supported\n
"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#supported-transformations","title":"Supported Transformations","text":"The supported transformations are as follows:
FromrawType
To valueType
Int16 Float32 Int16 Float64 Int32 Float64 Uint16 Float32 Uint16 Float64"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/","title":"Sending and Consuming Binary Data From EdgeX Device Services","text":"EdgeX - Ireland Release
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#overview","title":"Overview","text":"In this example, we will demonstrate how to send EdgeX Events and Readings that contain arbitrary binary data.
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#deviceservice-implementation","title":"DeviceService Implementation","text":""},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-profile","title":"Device Profile","text":"To indicate that a deviceResource represents a Binary type, the following format is used:
deviceResources:\n-\nname: \"camera_snapshot\"\nisHidden: false\ndescription: \"snapshot from camera\"\nproperties:\nvalueType: \"Binary\"\nreadWrite: \"R\"\nmediaType: \"image/jpeg\"\ndeviceCommands:\n-\nname: \"OnvifSnapshot\"\nisHidden: false\nreadWrite: \"R\"\nresourceOperations:\n- { deviceResource: \"camera_snapshot\" }\n
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-service","title":"Device Service","text":"Here is a snippet from a hypothetical Device Service's HandleReadCommands()
method that produces an event that represents a JPEG image captured from a camera:
if req.DeviceResourceName == \"camera_snapshot\" {\ndata, err := cameraClient.GetSnapshot() // returns ([]byte, error)\ncheck(err)\n\ncv, err := sdkModels.NewCommandValue(reqs[i].DeviceResourceName, common.ValueTypeBinary, data)\ncheck(err)\n\nresponses[i] = cv\n}\n
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#calling-device-service-command","title":"Calling Device Service Command","text":"Querying core-metadata for the Device's Commands and DeviceName provides the following as the URL to request a reading from the snapshot command: http://localhost:59990/api/v3/device/name/camera-device/OnvifSnapshot
Unlike with non-binary Events, making a request to this URL will return an event in CBOR representation. CBOR is a binary data serialization format loosely based on the JSON data model. This Event will not be human-readable.
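If you would rather fetch and save the CBOR response programmatically than with a command-line HTTP client, a minimal Go sketch along these lines (reusing the example URL and output path from this page) writes the raw response body to a file for the decoding step below:
package main\n\nimport (\n\"io\"\n\"net/http\"\n\"os\"\n)\n\nfunc main() {\nresp, err := http.Get(\"http://localhost:59990/api/v3/device/name/camera-device/OnvifSnapshot\")\nif err != nil {\npanic(err)\n}\ndefer resp.Body.Close()\n\nout, err := os.Create(\"/Users/johndoe/Desktop/image.cbor\")\nif err != nil {\npanic(err)\n}\ndefer out.Close()\n\n// The body is the CBOR-encoded Event; write it to disk unmodified\nif _, err := io.Copy(out, resp.Body); err != nil {\npanic(err)\n}\n}\n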
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#parsing-cbor-encoded-events","title":"Parsing CBOR Encoded Events","text":"To access the data enclosed in these Events and Readings, they will first need to be decoded from CBOR. The following is a simple Go program that reads in the CBOR response from a file containing the response from the previous HTTP request. The Go library recommended for parsing these events can be found at https://github.com/fxamacker/cbor/
package main\n\nimport (\n\"io/ioutil\"\n\n\"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos/requests\"\n\"github.com/fxamacker/cbor/v2\"\n)\n\nfunc check(e error) {\nif e != nil {\npanic(e)\n}\n}\n\nfunc main() {\n// Read in our cbor data\nfileBytes, err := ioutil.ReadFile(\"/Users/johndoe/Desktop/image.cbor\")\ncheck(err)\n\n// Decode into an EdgeX Event\neventRequest := &requests.AddEventRequest{}\nerr = cbor.Unmarshal(fileBytes, eventRequest)\ncheck(err)\n\n// Grab binary data and write to a file\nimgBytes := eventRequest.Event.Readings[0].BinaryValue\nioutil.WriteFile(\"/Users/johndoe/Desktop/image.jpeg\", imgBytes, 0644)\n}\n
In the code above, the CBOR data is read into a byte array, an EdgeX Event struct is created, and cbor.Unmarshal
parses the CBOR-encoded data and stores the result in the Event struct. Finally, the binary payload is written to a file from the BinaryValue
field of the Reading.
This method would work as well for decoding Events off the EdgeX message bus.
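The only difference in the message bus case is where the bytes come from. A small helper such as this sketch (transport details omitted; it assumes the payload carries the same AddEventRequest encoding as the HTTP response above and reuses the imports from the previous program) captures the shared decoding logic:
// decodeBinaryReading decodes a CBOR payload -- whether read from a file or\n// received from the message bus -- and returns the first reading's binary value.\nfunc decodeBinaryReading(payload []byte) ([]byte, error) {\neventRequest := &requests.AddEventRequest{}\nif err := cbor.Unmarshal(payload, eventRequest); err != nil {\nreturn nil, err\n}\nreturn eventRequest.Event.Readings[0].BinaryValue, nil\n}\n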
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#encoding-arbitrary-structures-in-events","title":"Encoding Arbitrary Structures in Events","text":"The Device SDK's NewCommandValue()
function above only accepts a byte slice as binary data. Any arbitrary Go structure can be encoded in a binary reading by first encoding the structure into a byte slice using CBOR. The following illustrates this method:
// DeviceService HandleReadCommands() code:\nfoo := struct {\nX int\nY int\nZ int\nBar string\n} {\nX: 7,\nY: 3,\nZ: 100,\nBar: \"Hello world!\",\n}\n\ndata, err := cbor.Marshal(&foo)\ncheck(err)\n\ncv, err := sdkModels.NewCommandValue(reqs[i].DeviceResourceName, common.ValueTypeBinary, data)\nresponses[i] = cv\n
This code takes the anonymous struct with fields X, Y, Z, and Bar (of different types), serializes it into a byte slice using the same cbor
library, and then passes the output to NewCommandValue()
.
When consuming these events, another level of decoding will need to take place to get the structure out of the binary payload.
func main() {\n// Read in our cbor data\nfileBytes, err := ioutil.ReadFile(\"/Users/johndoe/Desktop/foo.cbor\")\ncheck(err)\n\n// Decode into an EdgeX Event\neventRequest := &requests.AddEventRequest{}\nerr = cbor.Unmarshal(fileBytes, eventRequest)\ncheck(err)\n\n// Decode into arbitrary type\nfoo := struct {\nX int\nY int\nZ int\nBar string\n}{}\n\nerr = cbor.Unmarshal(eventRequest.Event.Readings[0].BinaryValue, &foo)\ncheck(err)\nfmt.Println(foo)\n}\n
This code takes a command response in the same format as the previous example, but uses the cbor
library to decode the CBOR data inside the EdgeX Reading's BinaryValue
field.
Using this approach, an Event can be sent containing an arbitrary, flexible structure. Use cases include a Reading containing multiple images, a variable-length list of integer read-outs, etc.
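As a sketch of such a flexible structure (the field names and the frontImage/rearImage byte slices below are hypothetical placeholders), a reading bundling several images and a variable-length list of read-outs could be encoded the same way:
// DeviceService HandleReadCommands() code:\nbundle := struct {\nImages   [][]byte // e.g. several JPEG frames; frontImage and rearImage are placeholder byte slices\nReadouts []int32  // variable-length integer read-outs\n} {\nImages:   [][]byte{frontImage, rearImage},\nReadouts: []int32{42, 17, 256},\n}\n\ndata, err := cbor.Marshal(&bundle)\ncheck(err)\n\ncv, err := sdkModels.NewCommandValue(reqs[i].DeviceResourceName, common.ValueTypeBinary, data)\ncheck(err)\nresponses[i] = cv\n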
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/","title":"Using the Virtual Device Service","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#overview","title":"Overview","text":"The Virtual Device Service GO can simulate different kinds of devices to generate Events and Readings to the Core Data Micro Service. Furthermore, users can send commands and get responses through the Command and Control Micro Service. The Virtual Device Service allows you to execute functional or performance tests without any real devices. This version of the Virtual Device Service is implemented based on Device SDK GO, and uses ql (an embedded SQL database engine) to simulate virtual resources.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#introduction","title":"Introduction","text":"For information on the virtual device service see virtual device under the Microservices tab.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#working-with-the-virtual-device-service","title":"Working with the Virtual Device Service","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-container","title":"Running the Virtual Device Service Container","text":"The virtual device service depends on the EdgeX core services. By default, the virtual device service is part of the EdgeX community provided Docker Compose files. If you use one of the community provide Compose files, you can pull and run EdgeX inclusive of the virtual device service without having to make any changes.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-natively-in-development-mode","title":"Running the Virtual Device Service Natively (in development mode)","text":"If you're going to download the source code and run the virtual device service in development mode, make sure that the EdgeX core service containers are up before starting the virtual device service. See how to work with EdgeX in a hybrid environment in order to run the virtual device service outside of containers. This same file will instruct you on how to get and run the virtual device service code.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#get-command-example","title":"GET command example","text":"The virtual device service is configured to send simulated data to core data every few seconds (from 10-30 seconds depending on device - see the device configuration file for AutoEvent details). You can exercise the GET
request on the command service to see the generated value produced by any of the virtual device's simulated devices. Use the curl command below to exercise the virtual device service API (via core command service).
curl -X GET localhost:59882/api/v3/device/name/Random-Integer-Device/Int8\n
Warning
The example above assumes your core command service is available on localhost
at the default service port of 59882. Also, you must replace your device name and command name in the example above with your virtual device service's identifiers. If you are not sure of the identifiers to use, query the command service for the full list of commands and devices at http://localhost:59882/api/v3/device/all
.
The virtual device should respond (via the core command service) with event/reading JSON similar to that below.
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"id\": \"3beb5b83-d923-4c8a-b949-c1708b6611c1\",\n\"deviceName\": \"Random-Integer-Device\",\n\"profileName\": \"Random-Integer-Device\",\n\"sourceName\": \"Int8\",\n\"origin\": 1626227770833093400,\n\"readings\": [\n{\n\"id\": \"baf42bc7-307a-4647-8876-4e84759fd2ba\",\n\"origin\": 1626227770833093400,\n\"deviceName\": \"Random-Integer-Device\",\n\"resourceName\": \"Int8\",\n\"profileName\": \"Random-Integer-Device\",\n\"valueType\": \"Int8\",\n\"binaryValue\": null,\n\"mediaType\": \"\",\n\"value\": \"-5\"\n}\n]\n}\n}\n
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#put-command-example-assign-a-value-to-a-resource","title":"PUT command example - Assign a value to a resource","text":"The virtual devices managed by the virtual device can also be actuated. The virtual device can be told to enable or disable random number generation. When disabled, the virtual device services can be told what value to respond with for all GET
operations. When setting the fixed value, the value must be valid for the data type of the virtual device. For example, the minimum value of Int8 cannot be less than -128 and the maximum value cannot be greater than 127.
Below is example actuation of one of the virtual devices. In this example, it sets the fixed GET
return value to 123 and turns off random generation.
curl -X PUT -d '{\"Int8\": \"123\", \"EnableRandomization_Int8\": \"false\"}' localhost:59882/api/v3/device/name/Random-Integer-Device/Int8\n
Note
The value of the resource's EnableRandomization property is simultaneously updated to false when sending a PUT command to assign a specified value to the resource. Therefore, explicitly setting EnableRandomization_Int8 to false is not actually required in the call above.
Return the virtual device to randomly generating numbers with another PUT
call.
curl -X PUT -d '{\"EnableRandomization_Int8\": \"true\"}' localhost:59882/api/v3/device/name/Random-Integer-Device/Int8\n
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#reference","title":"Reference","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#architectural-diagram","title":"Architectural Diagram","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#sequence-diagram","title":"Sequence Diagram","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#virtual-resource-table-schema","title":"Virtual Resource Table Schema","text":"Column Type DEVICE_NAME STRING COMMAND_NAME STRING DEVICE_RESOURCE_NAME STRING ENABLE_RANDOMIZATION BOOL DATA_TYPE STRING VALUE STRING"},{"location":"examples/Ch-OSImageWithEdgeX/","title":"Creating an EdgeX Ubuntu Core Image","text":""},{"location":"examples/Ch-OSImageWithEdgeX/#introduction","title":"Introduction","text":"This guide walks you through creating an Ubuntu Core OS image that is preloaded with an EdgeX stack. We use Ubuntu Core as the Linux distribution because it is optimized for IoT and is secure by design. We configure the image and bundle the current snapped versions of EdgeX components. After the deployment the snaps will continue to receive updates for the latest security and bug fixes (depending on the selected channel).
This guide is divided into three chapters to create:
Each chapter results in a working Ubuntu Core OS image that can be flashed on a disk and booted with the expected EdgeX stack.
In this example, we will create an amd64
image for Intel and AMD processors. The instructions can be adapted to other architectures and even for a Raspberry Pi. We will use the Device Virtual service to simulate devices and produce synthetic events.
Note
This guide has been tested on an amd64
Ubuntu 22.04 as the desktop OS. It may work on other Linux distributions and Ubuntu versions.
Some commands are executed on the desktop computer, but some others on the target Ubuntu Core system. For clarity, we use \ud83d\udda5 Desktop and \ud83d\ude80 Ubuntu Core titles for code blocks to distinguish where those commands are being executed.
An Intel NUC11TNH with 8GB RAM and 250GB NAND flash storage has been used as the target amd64
hardware.
We use the following tools on the desktop machine:
Install them using the following commands: \ud83d\udda5 Desktop
sudo snap install snapcraft --classic\nsudo snap install yq\nsudo snap install ubuntu-image --classic --channel=2/stable\n
Before we start, it is a good idea to read through the following documents:
In this chapter, we will create an OS image that includes the expected EdgeX components.
"},{"location":"examples/Ch-OSImageWithEdgeX/#create-an-ubuntu-core-model-assertion","title":"Create an Ubuntu Core model assertion","text":"The model assertion is a digitally signed document that describes the content of the OS image.
Refer to this article for details on how to sign the model assertion. Here are the needed steps:
1) Create a developer account
Follow the instructions here to create a developer account, if you don't already have one.
2) Create and register a key
\ud83d\udda5 Desktop
snap login\nsnap keys\n# continue if you have no existing keys\n# you'll be asked to set a passphrase which is needed before signing\nsnap create-key edgex-demo\nsnapcraft register-key edgex-demo\n
We now have a registered key named edgex-demo
which we'll use later. 3) Create the model assertion
First, make yourself familiar with the Ubuntu Core model assertion.
Find your developer ID using the Snapcraft CLI: \ud83d\udda5 Desktop
$ snapcraft whoami\n...\ndeveloper-id: <developer-id>\n
or from the Snapcraft Dashboard.
YAML Model Assertion
Unlike the official documentation which uses JSON, we use YAML serialization for the model. This is for consistency with all the other serialization formats in this tutorial. Moreover, it allows us to comment out some parts for testing or add comments to describe the details inline.
Create model.yaml
with the following content, replacing authority-id
, brand-id
, and timestamp
:
type: model\nseries: '16'\n\n# set authority-id and brand-id to your developer-id\nauthority-id: <developer-id>\nbrand-id: <developer-id>\n\nmodel: ubuntu-core-22-amd64\narchitecture: amd64\n\n# timestamp should be within your signature's validity period\ntimestamp: '2022-06-21T10:45:00+00:00'\nbase: core22\n\ngrade: dangerous\n\nsnaps:\n- name: pc\ntype: gadget\ndefault-channel: 22/stable\nid: UqFziVZDHLSyO3TqSWgNBoAdHbLI4dAH\n\n- name: pc-kernel\ntype: kernel\ndefault-channel: 22/stable\nid: pYVQrBcKmBa0mZ4CCN7ExT6jH8rY1hza\n\n- name: snapd\ntype: snapd\ndefault-channel: latest/candidate # temporary for latest pc-gadget compatibility; see https://github.com/canonical/edgex-ubuntu-core-testing/issues/1\nid: PMrrV4ml8uWuEUDBT8dSGnKUYbevVhc4\n\n# Snap base for EdgeX snaps\n- name: core22\ntype: base\ndefault-channel: latest/stable\nid: amcUKQILKXHHTlmSa7NMdnXSx02dNeeT\n\n- name: edgexfoundry\ntype: app\ndefault-channel: latest/edge # replace with latest/stable after EdgeX v3 release\nid: AZGf0KNnh8aqdkbGATNuRuxnt1GNRKkV\n\n- name: edgex-device-virtual\ntype: app\ndefault-channel: latest/edge # replace with latest/stable after EdgeX v3 release\nid: AmKuVTOfsN0uEKsyJG34M8CaMfnIqxc0\n
Note
We use the gadget and kernel snaps for 64bit personal computers using Intel or AMD processors. For a Raspberry Pi, you need to change the model, architecture, as well as the gadget and kernel snaps.
Finding Snap IDs
Query the unique store ID of a snap, for example the edgexfoundry
snap:
$ snap info edgexfoundry | grep snap-id\nsnap-id: AZGf0KNnh8aqdkbGATNuRuxnt1GNRKkV\n
4) Sign the model assertion
We sign the model using the edgex-demo
key created and registered earlier.
The snap sign
command takes JSON as input and produces YAML as output! We use the YQ app to convert our model assertion to JSON before passing it in for signing.
# sign\nyq eval model.yaml -o=json | snap sign -k edgex-demo > model.signed.yaml\n\n# check the signed model\ncat model.signed.yaml\n
Note
You need to repeat the signing every time you change the input model, because the signature is calculated based on the model.
"},{"location":"examples/Ch-OSImageWithEdgeX/#build-the-ubuntu-core-image","title":"Build the Ubuntu Core image","text":"We use ubuntu-image and set the path to signed model assertion YAML file.
This will download all the snaps specified in the model assertion and build an image file called pc.img
.
If you plan to use an emulator to install and run Ubuntu Core from the resulting image, it is a good idea to allocate additional writable storage. This is necessary only if you want to install additional snaps interactively or upgrade existing ones on the emulator.
The default size of the ubuntu-data
partition is 1G
as defined in the gadget snap. When installing on actual hardware, this partition extends automatically to take the whole remaining space on the disk volume. However, when using QEMU, the partition will have the exact same size because the image size is calculated based on the defined partition structure. The 1GB ubuntu-data
partition will be mostly full after first boot. You can configure the image to be larger so that the installer expands the partition automatically as with a large disk volume.
To extend the image size, use the --image-size
flag in the following command. For example, to add 500MB extra (the original image is around 3.5GB), set --image-size=4G
.
$ ubuntu-image snap model.signed.yaml --validation=enforce\nFetching snapd\nFetching pc-kernel\nFetching core22\nFetching pc\nFetching edgexfoundry\nFetching edgex-device-virtual\n\n# check the created image file\n$ file pc.img\npc.img: DOS/MBR boot sector, extended partition table (last)\n
Done
The image file is now ready to be flashed on a medium to create a bootable drive with the needed applications!
"},{"location":"examples/Ch-OSImageWithEdgeX/#boot-into-the-os","title":"Boot into the OS","text":"You can now flash the image on your disk and boot to start the installation. However, during development it is best to boot in an emulator to quickly detect and diagnose possible issues.
Instead of flashing and installing the OS on actual hardware, we will continue this guide using an emulator. Every other step will be similar to when image is flashed and installed on actual hardware.
Refer to the following to:
In this step, we connect to the machine that has the image installed over SSH, validate the installation, and do some manual configurations.
We SSH to the emulator from the previous step: \ud83d\udda5 Desktop
ssh <user>@localhost -p 8022\n
If you used the default approach (using console-conf
) and entered your Ubuntu account email address at the end of the installation, then <user>
is your Ubuntu account ID. If you don't know your ID, look it up using a browser from here or programmatically from https://login.ubuntu.com/api/v2/keys/<email>
. List the installed snaps and their services: \ud83d\ude80 Ubuntu Core
$ snap list\nName Version Rev Tracking Publisher Notes\ncore22 20230503 634 latest/stable canonical\u2713 base\nedgex-device-virtual 3.0.0-dev.50 669 latest/edge canonical\u2713 -\nedgexfoundry 3.0.0-dev.163 4452 latest/edge canonical\u2713 -\npc 22-0.3 127 22/stable canonical\u2713 gadget\npc-kernel 5.15.0-71.78.1 1281 22/stable canonical\u2713 kernel\nsnapd 2.59.4 19361 latest/candidate canonical\u2713 snapd\n\n$ snap services\nService Startup Current Notes\nedgex-device-virtual.device-virtual disabled inactive -\nedgexfoundry.consul disabled inactive -\nedgexfoundry.core-command disabled inactive -\nedgexfoundry.core-common-config-bootstrapper disabled inactive -\nedgexfoundry.core-data disabled inactive -\nedgexfoundry.core-metadata disabled inactive -\nedgexfoundry.nginx disabled inactive -\nedgexfoundry.redis disabled inactive -\nedgexfoundry.security-bootstrapper-consul disabled inactive -\nedgexfoundry.security-bootstrapper-nginx disabled inactive -\nedgexfoundry.security-bootstrapper-redis disabled inactive -\nedgexfoundry.security-proxy-auth disabled inactive -\nedgexfoundry.security-secretstore-setup disabled inactive -\nedgexfoundry.support-notifications disabled inactive -\nedgexfoundry.support-scheduler disabled inactive -\nedgexfoundry.vault disabled inactive -\n
Everything is inactive by default. Let start the platform: \ud83d\ude80 Ubuntu Core
$ snap start --enable edgexfoundry\nStarted.\n
We need to also start Device Virtual, but before doing so, increase the logging verbosity using snap options to add logging for the produced data: \ud83d\ude80 Ubuntu Core
$ snap set edgex-device-virtual config.writable-loglevel=DEBUG\n$ snap start --enable edgex-device-virtual\nStarted.\n
Inspect the logs: \ud83d\ude80 Ubuntu Core
$ snap logs edgexfoundry\n...\n2023-05-24T15:43:54Z edgexfoundry.consul[2785]: 2023-05-24T15:43:54.667Z [INFO] agent: Synced check: check=support-notifications\n2023-05-24T15:43:54Z edgexfoundry.consul[2785]: 2023-05-24T15:43:54.801Z [INFO] agent: Synced check: check=core-data\n2023-05-24T15:43:55Z edgexfoundry.consul[2785]: 2023-05-24T15:43:55.220Z [INFO] agent: Synced check: check=core-command\n2023-05-24T15:43:55Z edgexfoundry.consul[2785]: 2023-05-24T15:43:55.368Z [INFO] agent: Synced check: check=core-metadata\n2023-05-24T15:43:56Z edgexfoundry.consul[2785]: 2023-05-24T15:43:56.208Z [INFO] agent: Synced check: check=support-scheduler\n2023-05-24T15:44:03Z edgexfoundry.consul[2785]: 2023-05-24T15:44:03.596Z [INFO] agent: Synced check: check=device-virtual\n\n\n$ snap logs -f edgex-device-virtual\n...\n2023-05-24T15:44:14Z edgex-device-virtual.device-virtual[3369]: level=DEBUG ts=2023-05-24T15:44:14.269393977Z app=device-virtual source=utils.go:80 msg=\"Event(profileName: Random-UnsignedInteger-Device, deviceName: Random-UnsignedInteger-Device, sourceName: Uint64, id: 77701381-5bbc-404d-a9b5-f30d58182ac6) published to MessageBus on topic: edgex/events/device/device-virtual/Random-UnsignedInteger-Device/Random-UnsignedInteger-Device/Uint64\"\n2023-05-24T15:44:19Z edgex-device-virtual.device-virtual[3369]: level=DEBUG ts=2023-05-24T15:44:19.066059149Z app=device-virtual source=reporter.go:195 msg=\"Publish 0 metrics to the 'edgex/telemetry/device-virtual' base topic\"\n2023-05-24T15:44:19Z edgex-device-virtual.device-virtual[3369]: level=DEBUG ts=2023-05-24T15:44:19.06612871Z app=device-virtual source=manager.go:123 msg=\"Reported metrics...\"\n^C\n
All services appear healthy. The Device Virtual logs show that the service is producing the expected synthetic data.
Let's exit the SSH session: \ud83d\ude80 Ubuntu Core
$ exit\nlogout\nConnection to localhost closed.\n
... and query data from outside via the API Gateway: \ud83d\udda5 Desktop
curl --insecure https://localhost:8443/core-data/api/v3/reading/all?limit=2\n
Since the security is enabled, the access is not authorized. You can follow the instructions from the getting started to add a user to API Gateway, and generate a JWT token to access the API securely.
In this chapter, we demonstrated how to build an image that is pre-loaded with some EdgeX snaps. We then connected into a (virtual) machine instantiated with the image, verified the setup and performed additional steps to interactively start and configure the services.
In the next chapter, we walk you through creating an image that comes pre-loaded with this configuration, so it boots into a working EdgeX environment.
"},{"location":"examples/Ch-OSImageWithEdgeX/#b-override-configurations","title":"B. Override configurations","text":"In this chapter, we will improve our OS image so that:
Overriding the snap configurations upon installation is possible with gadget snaps.
The pc
gadget is available as a prebuilt snap in the store, however, in this chapter, we need to build our own to include custom configurations, passed in as default values to snaps. We will use the source code for Core22 AMD64 gadget from here as basis.
Tip
For a Raspberry Pi, you need to use the pi-gadget instead.
Clone the repo branch: \ud83d\udda5 Desktop
git clone https://github.com/snapcore/pc-amd64-gadget.git --branch=22\n
Add the following root level object to pc-amd64-gadget/gadget.yml
:
defaults:\n# edgexfoundry\nAZGf0KNnh8aqdkbGATNuRuxnt1GNRKkV: # snap id\n# automatically start all the services\nautostart: true\n# disable security\nsecurity: false\n# override a single service's startup message\napps.core-data.config.service-startupmsg: \"Core Data Startup message from gadget!\"\n# set bind address of services to all interfaces via the common config\napps.core-common-config-bootstrapper.config.all-services-service-serverbindaddr: 0.0.0.0\n\n# edgex-device-virtual\nAmKuVTOfsN0uEKsyJG34M8CaMfnIqxc0: # snap id\n# automatically start the service\nautostart: true\nconfig:\n# configure the service so it does not use the secret store\nedgex-security-secret-store: false\n# override the startup message\nservice-startupmsg: \"Startup message from gadget!\"\n
For service startup and other configuration overrides, refer to Managing services and Config Overrides.
Build: \ud83d\udda5 Desktop
$ cd pc-amd64-gadget\n$ snapcraft -v\n...\nCreated snap package pc_22-0.3_amd64.snap\n\n$ cd ..\n
Note
You need to rebuild the snap every time you change the gadget.yaml
file.
Use ubuntu-image tool again to build a new image. Use the same instructions as before but with an additional flag to set the path to gadget snap that we locally built above.
\ud83d\udda5 Desktop$ ubuntu-image snap model.signed.yaml --validation=enforce \\\n--snap pc-amd64-gadget/pc_22-0.3_amd64.snap # sideload the gadget\nFetching snapd\nFetching pc-kernel\nFetching core22\nFetching edgexfoundry\nFetching edgex-device-virtual\nWARNING: \"pc\" installed from local snaps disconnected from a store cannot be refreshed subsequently!\nCopying \"pc-amd64-gadget/pc_22-0.3_amd64.snap\" (pc)\n
The warning is because we sideloaded the gadget instead of pulling it from a store.
Tip
In production settings, a custom gadget would need to be uploaded to the IoT App Store to also receive OTA updates.
Note
You need to repeat the build every time you change and sign the model or rebuild the gadget.
Done
The image file is now ready to be flashed on a medium to create a bootable drive with the needed applications and basic configurations.
"},{"location":"examples/Ch-OSImageWithEdgeX/#try-it-out_1","title":"TRY IT OUT","text":"Refer to the following to:
This time, as set in the gadget defaults, services are started by default and security is disabled.
SSH to the Ubuntu Core machine as before and verify some of the seeded configurations:
\ud83d\ude80 Ubuntu Core$ snap services\nService Startup Current Notes\nedgex-device-virtual.device-virtual enabled active -\nedgexfoundry.consul enabled active -\nedgexfoundry.core-command enabled active -\nedgexfoundry.core-common-config-bootstrapper enabled inactive -\nedgexfoundry.core-data enabled active -\nedgexfoundry.core-metadata enabled active -\nedgexfoundry.nginx disabled inactive -\nedgexfoundry.redis enabled active -\nedgexfoundry.security-bootstrapper-consul disabled inactive -\nedgexfoundry.security-bootstrapper-nginx disabled inactive -\nedgexfoundry.security-bootstrapper-redis disabled inactive -\nedgexfoundry.security-proxy-auth disabled inactive -\nedgexfoundry.security-secretstore-setup disabled inactive -\nedgexfoundry.support-notifications enabled active -\nedgexfoundry.support-scheduler enabled active -\nedgexfoundry.vault disabled inactive -\n\n$ snap get edgex-device-virtual -d\n{\n \"autostart\": true,\n \"config\": {\n \"edgex-security-secret-store\": false,\n \"service-startupmsg\": \"Startup message from gadget!\"\n }\n}\n
Verify that Device Virtual has the startup message set from the gadget: \ud83d\ude80 Ubuntu Core
$ snap logs -n=all edgex-device-virtual | grep \"Startup message\"\n2023-05-24T16:52:05Z edgex-device-virtual.device-virtual[2807]: level=INFO ts=2023-05-24T16:52:05.791386915Z app=device-virtual source=variables.go:457 msg=\"Variables override of 'Service/StartupMsg' by environment variable: SERVICE_STARTUPMSG=Startup message from gadget!\"\n2023-05-24T16:52:22Z edgex-device-virtual.device-virtual[3010]: level=INFO ts=2023-05-24T16:52:22.342760716Z app=device-virtual source=message.go:55 msg=\"Startup message from gadget!\"\n
Since security is disabled and Core Data has been configured to listen on all interfaces (instead of just the loopback), we can now query data (insecurely) from outside: \ud83d\udda5 Desktop
$ curl --no-progress-meter http://localhost:59880/api/v3/reading/all?limit=2 | jq\n{\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"totalCount\": 86,\n \"readings\": [\n{\n\"id\": \"66c0e3ae-70a5-41b1-931f-bf680b2814ed\",\n \"origin\": 1684948755626088200,\n \"deviceName\": \"Random-Boolean-Device\",\n \"resourceName\": \"Bool\",\n \"profileName\": \"Random-Boolean-Device\",\n \"valueType\": \"Bool\",\n \"value\": \"true\"\n},\n {\n\"id\": \"94ec2182-7a0b-4515-8bcd-5445b8d59d2d\",\n \"origin\": 1684948755624763400,\n \"deviceName\": \"Random-UnsignedInteger-Device\",\n \"resourceName\": \"Uint32\",\n \"profileName\": \"Random-UnsignedInteger-Device\",\n \"valueType\": \"Uint32\",\n \"value\": \"2463192424\"\n}\n]\n}\n
We can do that only for servers that have their ports forwarded to the emulator's host as configured in Run in an emulator. Query all registered devices from Core Metadata: \ud83d\udda5 Desktop
$ curl --no-progress-meter http://localhost:59881/api/v3/device/all | jq '.devices[].name'\n\"Random-Boolean-Device\"\n\"Random-Float-Device\"\n\"Random-UnsignedInteger-Device\"\n\"Random-Binary-Device\"\n\"Random-Integer-Device\"\n
The response shows 5 virtual devices, registered by Device Virtual. In this chapter, we created an OS image which comes with EdgeX components that have overridden server configurations. We can extend the server configurations by setting other defaults in the gadget. This mechanism is made possible via a combination of snap options and environment variable overrides implemented for EdgeX services.
Overriding configuration fields is sufficient in most scenarios. However, there are situations in which we need to override entire configuration files instead of just some fields:
There are different ways to tackle the above situations, such as pre-populating the EdgeX Config Provider and Core Metadata with the needed data, or deploying a local agent which takes care of the provisioning at runtime. In the next chapter, we will address the above requirements by deploying a snap which supplies custom configuration files to applications.
"},{"location":"examples/Ch-OSImageWithEdgeX/#c-replace-configuration-files","title":"C. Replace configuration files","text":"This chapter builds on top of what we did previously and shows how to override entire configuration files supplied via a snap package, called the config provider snap.
"},{"location":"examples/Ch-OSImageWithEdgeX/#create-a-config-provider-for-device-virtual","title":"Create a config provider for Device Virtual","text":"The EdgeX Device Virtual service cannot be fully configured using environment variables / snap options. Because of that, we need to package the modified config files and replace the defaults. Moreover, it is tedious to override many configurations one by one, compared to having a file which contains all the needed modifications.
Since we want to create an OS image pre-loaded with the configured system, we need to make sure the configurations are there without any manual user interaction. We do that by creating a snap which provides the configuration files to the Device Virtual snap.
For this exercise, we will replace the default Device Virtual configurations with a new set of files, containing just one virtual device and profile.
We use the config provider snap example as basis which already includes the mentioned configuration files:
\ud83d\udda5 Desktop$ git clone https://github.com/canonical/edgex-config-provider.git\n\n$ tree edgex-config-provider/examples/device-virtual/res/\nedgex-config-provider/examples/device-virtual/res/\n\u251c\u2500\u2500 configuration.yaml\n\u251c\u2500\u2500 devices\n\u2502 \u2514\u2500\u2500 devices.yaml\n\u251c\u2500\u2500 profiles\n\u2502 \u2514\u2500\u2500 device.virtual.float.yaml\n\u2514\u2500\u2500 README.md\n
This example includes only Device Virtual configurations. However, it is structured to allow the supply of configuration files to multiple EdgeX app and device services.
We'll continue with this example snap which is named edgex-config-provider-example
.
Tip
In production settings, you would create your own snap under a unique name and release it to the public snap store or a private IoT App Store along with your gadget. This will allow OTA updates as well as secure control of the provided configuration.
Build: \ud83d\udda5 Desktop
$ cd edgex-config-provider\n$ snapcraft -v\n...\nCreated snap package edgex-config-provider-example_<...>.snap\n\n$ cd ..\n
This will build for our host architecture which is amd64
. You can perform remote builds to build for other architectures.
Let's upload the snap and release it to the latest/edge
channel: \ud83d\udda5 Desktop
snapcraft upload --release=latest/edge edgex-config-provider/edgex-config-provider-example_<...>.snap\n
Uploading to the store is necessary because we need to define a connection contract on the OS between the config provider and Device Virtual snaps. Query the snap ID from the store: \ud83d\udda5 Desktop
$ snap info edgex-config-provider-example | grep snap-id\nsnap-id: WWPGZGi1bImphPwrRfw46aP7YMyZYl6w\n
"},{"location":"examples/Ch-OSImageWithEdgeX/#add-the-config-provider-to-the-image","title":"Add the config provider to the image","text":"Perform the following:
1) Add the config provider snap to model.yaml
:
- name: edgex-config-provider-example\ntype: app\ndefault-channel: latest/edge\nid: WWPGZGi1bImphPwrRfw46aP7YMyZYl6w\n
2) Sign the model as before: \ud83d\udda5 Desktop
yq eval model.yaml -o=json | snap sign -k edgex-demo > model.signed.yaml\n
3) Add the following root level object to pc-amd64-gadget/gadget.yaml
:
connections:\n- # Connect edgex-device-virtual's plug (consumer)\nplug: AmKuVTOfsN0uEKsyJG34M8CaMfnIqxc0:device-virtual-config\n# to edgex-config-provider-example's slot (provider) to override the default configuration files.\nslot: WWPGZGi1bImphPwrRfw46aP7YMyZYl6w:device-virtual-config\n
This tells the system to connect the device-virtual-config
plug of the Device Virtual snap to the slot of the same name on the config provider snap. 4) Rebuild the gadget: \ud83d\udda5 Desktop
$ cd pc-amd64-gadget\n$ snapcraft -v\n...\nCreated snap package pc_22-0.3_amd64.snap\n\n$ cd ..\n
"},{"location":"examples/Ch-OSImageWithEdgeX/#build-the-image_1","title":"Build the image","text":"Use ubuntu-image tool again to build a new image. Use the same instructions as before to build:
\ud83d\udda5 Desktop$ ubuntu-image snap model.signed.yaml --validation=enforce \\\n--snap pc-amd64-gadget/pc_22-0.3_amd64.snap\nFetching snapd\nFetching pc-kernel\nFetching core22\nFetching edgexfoundry\nFetching edgex-device-virtual\nFetching edgex-config-provider-example\nWARNING: \"pc\" installed from local snaps disconnected from a store cannot be refreshed subsequently!\nCopying \"pc-amd64-gadget/pc_22-0.3_amd64.snap\" (pc)\n
Note the addition of our config provider snap in the output.
Done
The image file is now ready to be flashed on a medium to create a bootable drive with the needed applications and custom configuration files.
"},{"location":"examples/Ch-OSImageWithEdgeX/#try-it-out_2","title":"TRY IT OUT","text":"Refer to the following to:
SSH to the Ubuntu Core machine and verify the installations:
List of snaps: \ud83d\ude80 Ubuntu Core
$ snap list\nName Version Rev Tracking Publisher Notes\ncore22 20230503 634 latest/stable canonical\u2713 base\nedgex-config-provider-example v3.0.0-beta+git6.1778bd4 29 latest/edge farshidtz -\nedgex-device-virtual 3.0.0-dev.51 673 latest/edge canonical\u2713 -\nedgexfoundry 3.0.0-dev.164 4455 latest/edge canonical\u2713 -\npc 22-0.3 x1 - - gadget\npc-kernel 5.15.0-71.78.1 1281 22/stable canonical\u2713 kernel\nsnapd 2.59.4 19361 latest/candidate canonical\u2713 snapd\n
Note that we now also have edgex-config-provider-example
in the list. Verify that Device Virtual has the startup message overridden via the gadget defaults: \ud83d\ude80 Ubuntu Core
$ snap logs -n=all edgex-device-virtual | grep \"Startup message\"\n2023-05-25T10:24:50Z edgex-device-virtual.device-virtual[2924]: level=INFO ts=2023-05-25T10:24:50.447466922Z app=device-virtual source=variables.go:457 msg=\"Variables override of 'Service/StartupMsg' by environment variable: SERVICE_STARTUPMSG=Startup message from gadget!\"\n2023-05-25T10:25:03Z edgex-device-virtual.device-virtual[3136]: level=INFO ts=2023-05-25T10:25:03.761993667Z app=device-virtual source=message.go:55 msg=\"Startup message from gadget!\"\n
From the host machine, query the device metadata to ensure that Device Virtual has registered only a single virtual device: \ud83d\udda5 Desktop
$ curl --no-progress-meter http://localhost:59881/api/v3/device/all | jq '.devices[].name'\n\"Random-Float-Device\"\n
Congratulations! You deployed a system that is pre-configured to have:
Running the image in an emulator makes it easier to quickly try the image and find out possible issues.
We use a amd64
QEMU emulator. Refer to Testing Ubuntu Core with QEMU to setup the dependencies and learn about the various emulation options. Here, we provide the command to run without TPM emulation.
Warning
The pc.img
file passed to the emulator is used as the secondary storage. It persists any changes made to the partitions during the installation and any user modifications after the boot. You can stop and re-start the emulator at a later time without losing your changes.
To do a fresh start or to flash this image on disk, your need to rebuild the image. Alternatively, you can make a copy before using it in QEMU.
Run the following command and wait for the boot to complete: \ud83d\udda5 Desktop
sudo qemu-system-x86_64 \\\n-smp 4 \\\n-m 4096 \\\n-drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \\\n-drive file=pc.img,cache=none,format=raw,id=disk1,if=none \\\n-device virtio-blk-pci,drive=disk1,bootindex=1 \\\n-machine accel=kvm \\\n-serial mon:stdio \\\n-net nic,model=virtio \\\n-net user,hostfwd=tcp::8022-:22,hostfwd=tcp::8443-:8443,hostfwd=tcp::59880-:59880,hostfwd=tcp::59881-:59881\n
The above command forwards:
22
of the emulator to 8022
on the host8433
for external access in chapter A59880
59881
Could not set up host forwarding rule 'tcp::8443-:8443'
This means that the port 8443 is not available on the host. Try stopping the service that uses this port or change the host port (left hand side) to another port number, e.g. tcp::18443-:8443
.
Success
Once the installation is complete, you'll see the initialization interface; Refer here for details.
"},{"location":"examples/Ch-OSImageWithEdgeX/#flash-the-image-on-disk","title":"Flash the image on disk","text":"Warning
If you have used pc.img
to install in QEMU, the image has changed. You need to rebuild a new copy before continuing.
The installation instructions are device specific. You may refer to Ubuntu Core section in this page. For example:
A precondition to continue with some of the instructions is to compress pc.img
. This speeds up the transfer and makes the input file similar to official images, improving compatibility with the available instructions.
To compress with the lowest compression rate of zero: \ud83d\udda5 Desktop
$ xz -vk -0 pc.img\npc.img (1/1)\n100 % 817.2 MiB / 3,309.0 MiB = 0.247 10 MiB/s 5:30 \n\n$ ls -lh pc.*\n-rw-rw-r-- 1 ubuntu ubuntu 3.3G Sep 16 17:03 pc.img\n-rw-rw-r-- 1 ubuntu ubuntu 818M Sep 16 17:03 pc.img.xz\n
A higher compression rate significantly increases the processing time and needed resources, with very little gain. Follow the device specific instructions.
Success
You may refer here for the initialization steps appearing by default.
"},{"location":"examples/Ch-OSImageWithEdgeX/#initialization","title":"Initialization","text":"Once the installation is complete, you will see the interface of the console-conf
program. It will walk you through the networking and user account setup. You'll need to enter the email address of your Ubuntu account to create a OS user account with your registered username and have your SSH public keys deployed as authorized SSH keys for that user. If you haven't done so, follow the instructions here to add your SSH keys before doing this setup.
Read here to know how the manual account setup looks like and how it can be automated.
"},{"location":"examples/Ch-OSImageWithEdgeX/#references","title":"References","text":"Use the Camera Management Example application service to auto discover and connect to nearby ONVIF and USB based cameras. This application will also control cameras via commands, create inference pipelines for the camera video streams and publish inference results to MQTT broker.
This app uses EdgeX compose, Edgex Onvif Camera device service, Edgex USB Camera device service, Edgex MQTT device service and Edge Video Analytics Microservice.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-dependencies","title":"Install Dependencies","text":""},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#environment","title":"Environment","text":"This example has been tested with a relatively modern Linux environment - Ubuntu 20.04 and later
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-docker","title":"Install Docker","text":"Install Docker from the official repository as documented on the Docker site.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#configure-docker","title":"Configure Docker","text":"To enable running Docker commands without the preface of sudo, add the user to the Docker group.
Warning
The docker group grants root-level privileges to the user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.
Create Docker group:
sudo groupadd docker\n
Note
If the group already exists, groupadd
outputs a message: groupadd: group docker
already exists. This is OK.
Add User to group:
sudo usermod -aG docker $USER\n
Restart your computer for the changes to take effect.
To verify the Docker installation, run hello-world
:
docker run hello-world\n
A Hello from Docker! greeting indicates successful installation. Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n2db29710123e: Pull complete \nDigest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n...\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-docker-compose","title":"Install Docker Compose","text":"Install Docker Compose from the official repository as documented on the Docker Compose site.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-golang","title":"Install Golang","text":"Install Golang from the official Golang website.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-tools","title":"Install Tools","text":"Install build tools:
sudo apt install build-essential\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#steps-for-running-this-example","title":"Steps for running this example:","text":""},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#1-start-the-edgex-core-services-and-device-services","title":"1. Start the EdgeX Core Services and Device Services.","text":"Clone edgex-compose
from github.com.
git clone https://github.com/edgexfoundry/edgex-compose.git\n
Navigate to the edgex-compose
directory:
cd edgex-compose\n
Checkout the latest release (main):
git checkout main\n
Navigate to the compose-builder
subdirectory:
cd compose-builder/\n
(Optional) Update the add-device-usb-camera.yml
file:
Note
This step is only required if you plan on using USB cameras.
a. Add enable rtsp server and the rtsp server hostname environment variables to the device-usb-camera
service, where your-local-ip-address
is the ip address of the machine running the device-usb-camera
service.
Snippet from add-device-usb-camera.yml
services:\n device-usb-camera:\n environment:\n DRIVER_ENABLERTSPSERVER: \"true\"\n DRIVER_RTSPSERVERHOSTNAME: \"your-local-ip-address\"\n
b. Under the ports
section, find the entry for port 8554 and change the host_ip from 127.0.0.1
to either 0.0.0.0
or the ip address you put in the previous step.
Clone the EdgeX Examples repository :
git clone https://github.com/edgexfoundry/edgex-examples.git\n
Navigate to the edgex-examples
directory:
cd edgex-examples\n
Checkout the latest release (main):
git checkout main\n
Navigate to the application-services/custom/camera-management
directory
cd application-services/custom/camera-management\n
Configure device-mqtt service to send Edge Video Analytics Microservice inference results into Edgex via MQTT
a. Copy the entire evam-mqtt-edgex folder into edgex-compose/compose-builder
directory.
b. Add this information into the add-device-mqtt.yml file in the edgex-compose/compose-builder
directory.
Snippet from add-device-mqtt.yml
services:\ndevice-mqtt:\n...\nenvironment:\nDEVICE_DEVICESDIR: /evam-mqtt-edgex/devices\nDEVICE_PROFILESDIR: /evam-mqtt-edgex/profiles\nMQTTBROKERINFO_INCOMINGTOPIC: \"incoming/data/#\"\nMQTTBROKERINFO_USETOPICLEVELS: \"true\"\n...\n... volumes:\n# example: - /home/github.com/edgexfoundry/edgex-compose/compose-builder/evam-mqtt-edgex:/evam-mqtt-edgex\n- <add-absolute-path-of-your-edgex-compose-builder-here-example-above>/evam-mqtt-edgex:/evam-mqtt-edgex\n
c. Add this information into the add-mqtt-broker-mosquitto.yml file in the edgex-compose/compose-builder
directory.
Snippet from add-mqtt-broker-mosquitto.yml
services:\nmqtt-broker:\n...\nports:\n...\n- \"59001:9001\"\n...\nvolumes:\n# example: - /home/github.com/edgexfoundry/edgex-compose/compose-builder/evam-mqtt-edgex:/evam-mqtt-edgex\n- <add-absolute-path-of-your-edgex-compose-builder-here>/evam-mqtt-edgex/mosquitto.conf:/mosquitto-no-auth.conf:ro
Note
Please note that both the services in this file need the absolute path to be inserted for their volumes.
Run the following command to start all the Edgex services.
Note
The ds-onvif-camera
parameter can be omitted if no Onvif cameras are present, or the ds-usb-camera
parameter can be omitted if no usb cameras are present.
make run no-secty ds-mqtt mqtt-broker ds-onvif-camera ds-usb-camera
Open cloned edgex-examples
repo and navigate to the edgex-examples/application-services/custom/camera-management
directory:
cd edgex-examples/application-services/custom/camera-management\n
Run this once to download edge-video-analytics into the edge-video-analytics sub-folder, download models, and patch pipelines
make install-edge-video-analytics\n
Note
This step is only required if you have Onvif cameras. Currently, this example app is limited to supporting only 1 username/password combination for all Onvif cameras.
Note
Please follow the instructions for the Edgex Onvif Camera device service in order to connect your Onvif cameras to EdgeX.
configuration.yamlenv varsModify the res/configuration.yaml file
InsecureSecrets:\nonvifauth:\nSecretName: onvifauth\nSecretData:\nusername: \"<username>\"\npassword: \"<password>\"\n
Export environment variable overrides
export WRITABLE_INSECURESECRETS_ONVIFAUTH_SECRETDATA_USERNAME=\"<username>\"\nexport WRITABLE_INSECURESECRETS_ONVIFAUTH_SECRETDATA_PASSWORD=\"<password>\"\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#32-optional-configure-usb-camera-rtsp-credentials","title":"3.2 (Optional) Configure USB Camera RTSP Credentials.","text":"Note
This step is only required if you have USB cameras.
Note
Please follow the instructions for the Edgex USB Camera device service in order to connect your USB cameras to EdgeX.
configuration.yamlenv varsModify the res/configuration.yaml file
InsecureSecrets:\nrtspauth:\nSecretName: rtspauth\nSecretData:\nusername: \"<username>\"\npassword: \"<password>\"\n
Export environment variable overrides
export WRITABLE_INSECURESECRETS_RTSPAUTH_SECRETDATA_USERNAME=\"<username>\"\nexport WRITABLE_INSECURESECRETS_RTSPAUTH_SECRETDATA_PASSWORD=\"<password>\"\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#33-configure-default-pipeline","title":"3.3 Configure Default Pipeline","text":"Initially, all new cameras added to the system will start the default analytics pipeline as defined in the configuration file below. The desired pipeline can be changed or the feature can be disabled by setting the DefaultPipelineName
and DefaultPipelineVersion
to empty strings.
Modify the res/configuration.yaml file with the name and version of the default pipeline to use when a new device is added to the system.
Note
These values can be left empty to disable the feature.
AppCustom:\nDefaultPipelineName: object_detection # Name of the default pipeline used when a new device is added to the system; can be left blank to disable feature\nDefaultPipelineVersion: person # Version of the default pipeline used when a new device is added to the system; can be left blank to disable feature\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#34-build-and-run","title":"3.4 Build and run","text":"Make sure you are at the root of this example app
cd edgex-examples/application-services/custom/camera-management\n
Build the docker image
make docker\n
Start the docker compose services in the background for both EVAM and Camera Management App
docker compose up -d\n
Note
If you would like to view the logs for these services, you can use docker compose logs -f
. To stop the services, use docker compose down
.
Note
The port for EVAM result streams has been changed from 8554 to 8555 to avoid conflicts with the device-usb-camera service.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#using-the-app","title":"Using the App","text":"Visit http://localhost:59750 to access the app.
Figure 1: Homepage for the Camera Management app
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#camera-position","title":"Camera Position","text":"You can control the position of supported cameras using ptz commands.
This section outlines how to start an analytics pipeline for inferencing on a specific camera stream.
Select a camera out of the drop down list of connected cameras.
Select a video stream out of the drop down list of connected cameras.
Select a analytics pipeline out of the drop down list of connected cameras.
Click the Start Pipeline
button.
Once the pipeline is running, you can view the pipeline and its status.
Expand a pipeline to see its status. This includes important information such as elapsed time, latency, frames per second, and elapsed time.
In the terminal where you started the app, once the pipeline is started, this log message will pop up.
level=INFO ts=2022-07-11T22:26:11.581149638Z app=app-camera-management source=evam.go:115 msg=\"View inference results at 'rtsp://<SYSTEM_IP_ADDRESS>:8555/<device name>'\"\n
Use the URI from the log to view the camera footage with analytics overlayed.
ffplay 'rtsp://<SYSTEM_IP_ADDRESS>:8555/<device name>'\n
Example Output:
Figure 2: analytics stream with overlay
If you want to stop the stream, press the red square:
Figure 3: the red square to shut down the pipeline"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#api-log","title":"API Log","text":"
The API log shows the status of the 5 most recent calls and commands that the management has made. This includes important information from the responses, including camera information or error messages.
Expand a log item to see the response
Good response: Bad response:
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#inference-events","title":"Inference Events","text":"To view the inference events in a json format, click the Stream Events
button.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#inference-results-in-edgex","title":"Inference results in Edgex","text":"
To view inference results in Edgex, open Edgex UI http://localhost:4000, click on the DataCenter
tab and view data streaming under Event Data Stream
by clicking on the Start
button.
A custom app service can be used to analyze this inference data and take action based on the analysis.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#video-example","title":"Video Example","text":"A brief video demonstration of building and using the device service:
Warning
This video was created with a previous release. Some new features may not be depicted in this video, and there might be some extra steps needed to configure the service.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#additional-development","title":"Additional Development","text":"
Warning
The following steps are only useful for developers who wish to make modifications to the code and the Web-UI.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#development-and-testing-of-ui","title":"Development and Testing of UI","text":""},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#1-build-the-production-web-ui","title":"1. Build the production web-ui","text":"This builds the web ui into the web-ui/dist
folder, which is what is served by the app service on port 59750.
make web-ui\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#2-serve-the-web-ui-in-hot-reload-mode","title":"2. Serve the Web-UI in hot-reload mode","text":"This will serve the web ui in hot reload mode on port 4200 which will recompile and update anytime you make changes to a file. It is useful for rapidly testing changes to the UI.
make serve-ui\n
Open your browser to http://localhost:4200
"},{"location":"general/ContainerNames/","title":"EdgeX Container Names","text":"The following table provides the list of the default EdgeX Docker image names to the Docker container name and Docker Compose names.
CoreSupportingApplication & AnalyticsDeviceSecurityMiscellaneous Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/core-data edgex-core-data edgex-core-data core-data edgexfoundry/core-metadata edgex-core-metadata edgex-core-metadata core-metadata edgexfoundry/core-command edgex-core-command edgex-core-command core-command edgexfoundry/core-common-config-bootstrapper edgex-core-common-config-bootstrapper edgex-core-common-config-bootstrapper core-common-config-bootstrapper Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/support-notifications edgex-support-notifications edgex-support-notifications support-notifications edgexfoundry/support-scheduler edgex-support-scheduler edgex-support-scheduler support-scheduler Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/app-rfid-llrp-inventory edgex-app-rfid-llrp-inventory edgex-app-rfid-llrp-inventory app-rfid-llrp-inventory edgexfoundry/app-service-configurable edgex-app-rules-engine edgex-app-rules-engine app-rules-engine edgexfoundry/app-service-configurable edgex-app-http-export edgex-app-http-export app-http-export edgexfoundry/app-service-configurable edgex-app-mqtt-export edgex-app-mqtt-export app-mqtt-export edgexfoundry/app-service-configurable edgex-app-metrics-influxdb edgex-app-metrics-influxdb app-metrics-influxdb edgexfoundry/app-service-configurable edgex-app-sample edgex-app-sample app-sample edgexfoundry/app-service-configurable edgex-app-external-mqtt-trigger edgex-app-external-mqtt-trigger app-external-mqtt-trigger emqx/kuiper edgex-kuiper edgex-kuiper rulesengine Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/device-virtual edgex-device-virtual edgex-device-virtual device-virtual edgexfoundry/device-mqtt edgex-device-mqtt edgex-device-mqtt device-mqtt edgexfoundry/device-rest edgex-device-rest edgex-device-rest device-rest edgexfoundry/device-modbus edgex-device-modbus edgex-device-modbus device-modbus edgexfoundry/device-snmp edgex-device-snmp edgex-device-snmp device-snmp edgexfoundry/device-bacnet edgex-device-bacnet edgex-device-bacnet device-bacnet edgexfoundry/device-onvif-camera edgex-device-onvif-camera edgex-device-onvif-camera device-onvif-camera edgexfoundry/device-usb-camera edgex-device-usb-camera edgex-device-usb-camera device-usb-camera edgexfoundry/device-coap edgex-device-coap edgex-device-coap device-coap Docker image name Docker container name Docker network hostname Docker Compose service name vault edgex-vault edgex-vault vault nginx edgex-nginx edgex-nginx nginx edgexfoundry/security-proxy-auth edgex-proxy-auth edgex-proxy-auth security-proxy-auth edgexfoundry/security-proxy-setup edgex-security-proxy-setup edgex-security-proxy-setup security-proxy-setup edgexfoundry/security-secretstore-setup edgex-security-secretstore-setup edgex-security-secretstore-setup security-secretstore-setup edgexfoundry/security-bootstrapper edgex-security-bootstrapper edgex-security-bootstrapper security-bootstrapper Docker image name Docker container name Docker network hostname Docker Compose service name consul edgex-core-consul edgex-core-consul consul redis edgex-redis edgex-redis database"},{"location":"general/Definitions/","title":"Definitions","text":"The following glossary provides terms used in EdgeX Foundry. 
The definitions are based on how EdgeX and its community use the terms rather than any strict technical or industry definition.
"},{"location":"general/Definitions/#actuate","title":"Actuate","text":"To cause a machine or device to operate. In EdgeX terms, to command a device or sensor under management of EdgeX to do something (example: stop a motor) or to reconfigure itself (example: set a thermostat's cooling point).
"},{"location":"general/Definitions/#brownfield-and-greenfield","title":"Brownfield and Greenfield","text":"Brownfield refers to older legacy equipment (nodes, devices, sensors) in an edge/IoT deployment, which typically uses older protocols. Greenfield refers to, typically, new equipment with modern protocols.
"},{"location":"general/Definitions/#cbor","title":"CBOR","text":"An acronym for \"concise binary object representation.\" A binary data serialization format used by EdgeX to transport binary sensed data (like an image). The user can also choose to send all data via CBOR for efficiency purposes, but at the expense of having EdgeX convert the CBOR into another format whenever the data needs to be understood and inspected or to persist the data.
"},{"location":"general/Definitions/#containerized","title":"Containerized","text":"EdgeX micro services and infrastructure (i.e. databases, registry, etc.) are built as executable programs, put into Docker images, and made available via Docker Hub (and Nexus repository for nightly builds). A service (or infrastructure element) that is available in Docker Hub (or Nexus) is said to be containerized. Docker images can be quickly downloaded and new Docker containers created from the images.
"},{"location":"general/Definitions/#contributordeveloper","title":"Contributor/Developer","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort.
"},{"location":"general/Definitions/#created-time-stamp","title":"Created time stamp","text":"The Created time stamp is the time the data was created in the database and is unchangeable. The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data was sent to EdgeX Foundry and the database.
Usually, the Origin and Created time stamps are the same, or very close to being the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different.
If persistence is disabled in core-data, the time stamp will default to 0.
"},{"location":"general/Definitions/#device","title":"Device","text":"In EdgeX parlance, \"device\" is used to refer to a sensor, actuator, or IoT \"thing\". A sensor generally collects information from the physical world - like a temperature or vibration sensor. Actuators are machines that can be told to do something. Actuators move or otherwise control a mechanism or system - like a value on a pump. While there may be some technical differences, for the purposes of EdgeX documentation, device will refer to a sensor, actuator or \"thing\".
"},{"location":"general/Definitions/#edge-analytics","title":"Edge Analytics","text":"The terms edge or local analytics (the terms are used interchangeably and have the same meaning in this context) for the purposes of edge computing (and EdgeX), refers to an \u201canalytics\u201d service is that: - Receives and interprets the EdgeX sensor data to some degree; some analytics services are more sophisticated and able to provide more insights than others - Make determinations on what actions and actuations need to occur based on the insights it has achieved, thereby driving actuation requests to EdgeX associated devices or other services (like notifications)
The analytics service could be some simple logic built into an app service, a rules engine package, or an agent of some artificial intelligence/machine learning system. From an EdgeX perspective, actionable intelligence generation is all the same: edge analytics means seeing the edge data and being able to make requests to act on what is seen. While EdgeX provides a rules engine service as its reference implementation of local analytics, app services and their data preparation capability allow sensor data to be streamed to any analytics package.
Because of EdgeX\u2019s micro service architecture and distributed nature, the analytics service would not necessarily have to run local to the devices / sensors. In other words, it would not have to run at the edge. App services could deliver the edge data to analytics living in the cloud. However, in these scenarios, the insight intelligence would not be considered local or edge in context. Because of latency concerns, data security and privacy needs, intermittent connectivity of edge systems, and other reasons, it is often vital for edge platforms to retain an analytic capability at the edge or local.
"},{"location":"general/Definitions/#gateway","title":"Gateway","text":"An IoT gateway is a compute platform at the farthest ends of an edge or IoT network. It is the host or \u201cbox\u201d to which physical sensors and devices connect and that is, in turn, connected to the networks (wired or wirelessly) of the information technology realm.
IoT or edge gateways are compute platforms that connect \u201cthings\u201d (sensors and devices) to IT networks and systems.
"},{"location":"general/Definitions/#micro-service","title":"Micro service","text":"In a micro service architecture, each component has its own process. This is in contrast to a monolithic architecture in which all components of the application run in the same process.
Benefits of micro service architectures include: - Allow any one service to be replaced and upgraded more easily - Allow services to be programmed using different programming languages and underlying technical solutions (use the best technology for each specific service) - Ex: services written in C can communicate and work with services written in Go - This allows organizations building solutions to maximize available developer resources and some legacy code - Allow services to be distributed across host compute platforms - allowing better utilization of available compute resources - Allow for more scalable solutions by adding copies of services when needed
"},{"location":"general/Definitions/#origin-time-stamp","title":"Origin time stamp","text":"The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data is sent to EdgeX Foundry and the database. The Created time stamp is the time the data was created in the database.
Usually, the Origin and Created time stamps are the same or very close to the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different.
"},{"location":"general/Definitions/#reference-implementation","title":"Reference Implementation","text":"Default and example implementation(s) offered by the EdgeX community. Other implementations may be offered by 3rd parties or for specialization.
"},{"location":"general/Definitions/#resource","title":"Resource","text":"A piece of information or data available from a sensor or \"thing\". For example, a thermostat would have temperature and humidity resources. A resource has a name (ResourceName) to identify it (\"temperature\" or \"humidity\" in this example) and a value (the sensed data - like 72 degrees). A resource may also have additional properties or attributes associated with it. The data type of the value (e.g., integer, float, string, etc.) would be an example of a resource property.
"},{"location":"general/Definitions/#rules-engine","title":"Rules Engine","text":"Rules engines are important to the IoT edge system.
A rules engine is a software system that is connected to a collection of data (either a database or a data stream). The rules engine examines and monitors various elements of the data, and then triggers some action based on the results of that monitoring.
A rules engine is a collection of \"If-Then\" conditional statements. The \"If\" informs the rules engine what data to look at and what ranges or values of data must match in order to trigger the \"Then\" part of the statement, which then informs the rules engine what action to take or what external resource to call on, when the data is a match to the \"If\" statement.
Most rules engines can be dynamically programmed, meaning that new \"If-Then\" statements, or rules, can be provided while the engine is running. The rules are often defined by some type of rule language with simple syntax to enable non-developers to provide the new rules.
Rules engines are one of the simplest forms of \"edge analytics\" provided in IoT systems. Rules engines enable data picked up by IoT sensors to be monitored and acted upon (actuated). Typically, the actuation is accomplished on another IoT device or sensor. For example, a temperature sensor in an equipment enclosure may be monitored by a rules engine to detect when the temperature is getting too warm (or too cold) for safe or optimum operation of the equipment. The rules engine, upon detecting temperatures outside of the acceptable range, shuts off the equipment in the enclosure.
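To make the \"If-Then\" idea concrete, here is a minimal, hypothetical sketch in Go (not any particular rules engine product or EdgeX API) of a single rule that watches a temperature reading and requests an actuation when the reading leaves the acceptable range:
package main\n\nimport \"fmt\"\n\n// rule is a hypothetical \"If-Then\" pair: a condition over a reading and an action to take.\ntype rule struct {\ncondition func(value float64) bool\naction    func()\n}\n\nfunc main() {\n// If the enclosure temperature rises above 40 degrees C, then shut off the equipment.\noverTemp := rule{\ncondition: func(value float64) bool { return value > 40.0 },\naction:    func() { fmt.Println(\"actuation request: shut off equipment in enclosure\") },\n}\n\n// Evaluate the rule against a stream of (simulated) sensor readings.\nfor _, reading := range []float64{35.2, 38.9, 41.7} {\nif overTemp.condition(reading) {\noverTemp.action()\n}\n}\n}\n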
"},{"location":"general/Definitions/#software-development-kit","title":"Software Development Kit","text":"In EdgeX, a software development kit (or SDK) is a library or module to be incorporated into a new micro service. It provides a lot of the boilerplate code and scaffolding associated with the type of service being created. The SDK allows the developer to focus on the details of the service functionality and not have to worry about the mundane tasks associated with EdgeX services.
"},{"location":"general/Definitions/#south-and-north-side","title":"South and North Side","text":"South Side: All IoT objects, within the physical realm, and the edge of the network that communicates directly with those devices, sensors, actuators, and other IoT objects, and collects the data from them, is known collectively as the \"south side.\"
North Side: The cloud (or enterprise system) where data is collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the cloud, is referred to as the \"north side\" of the network.
EdgeX enables data to be sent \"north,\" \"south,\" or laterally as needed and as directed.
"},{"location":"general/Definitions/#snappy-ubuntu-core-snaps","title":"\"Snappy\" / Ubuntu Core & Snaps","text":"A Linux-based Operating System provided by Ubuntu - formally called Ubuntu Core but often referred to as \"Snappy\". The packages are called 'snaps' and the tool for using them 'snapd', and works for phone, cloud, internet of things, and desktop computers. The \"Snap\" packages are self-contained and have no dependency on external stores. \"Snaps\" can be used to create command line tools, background services, and desktop applications.
"},{"location":"general/Definitions/#user","title":"User","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\".
"},{"location":"general/EdgeX_CN/","title":"Is EdgeX Foundry Cloud Native?","text":"This is a question we get in the EdgeX Community quite often; along with other related or extended questions like:
As a simple (perhaps over simplified) answer to these questions, EdgeX was designed to run in/on minimal platforms (\"edge platforms\") with little compute, memory and network connectivity. Cloud native applications are, for the most part, designed to run in resource rich enterprise / cloud environments. Limited resources and other considerations greatly impact the design and operation of edge applications.
Before answering these questions in more detail, it's important to understand the definition of cloud native systems. Where did \"cloud native\" come from and what is its purpose? How do all these other questions relate and what are people really asking?
"},{"location":"general/EdgeX_CN/#defining-cloud-native","title":"Defining Cloud Native","text":""},{"location":"general/EdgeX_CN/#origins","title":"Origins","text":"The origins of cloud native computing are right there in the name. Cloud native originated in the realm of cloud computing. Cloud native communities like to say their approach was \"born in the cloud.\" Cloud native computing and architectures emerged from organizations learning how to build and run applications in the cloud. Specifically, how to build and run applications that could scale (up and down) easily, remain functioning in the face of inevitable failures (resiliency), and could operate in the dynamic (or elastic) and distributed resource environments that exist in public, private or even hybrid clouds.
The origins of cloud native computing obviously come from the emergence of cloud technology, but many point specifically to 2015 and the creation of the Cloud Native Computing Foundation (launched by Google, IBM, Intel, VMWare and others with ties in the Cloud industry) as the event that started to galvanize cloud native concepts and steer the direction of Kubernetes (an important and typical ingredient in cloud native systems - see more below) used in cloud native applications.
"},{"location":"general/EdgeX_CN/#defining","title":"Defining","text":"So the origins are in cloud computing, but what exactly is cloud native computing? While debatable, most cloud native computing experts would agree that cloud native computing is about building and running applications in the cloud using methodologies, techniques and technologies that help applications be resilient, easy to manage, and easy to observe. \"Resilient, manageable, and observable\" are the mantra of cloud native experts. Why? Because applications that are resilient, manageable and observable make it easier for developers to make \"high impact\" code changes, at frequent rates, and with predicable impacts and minimal work. Simply put, the cloud native approach allows people to rapidly grow (iterate?) on an application and deploy it easily and with few or no outages.
"},{"location":"general/EdgeX_CN/#ingredients","title":"Ingredients","text":"How is this accomplished? The list of technologies and techniques of cloud native applications include:
Again, the list above is not official (and debatable on some of its points), but the product of the cloud native approach using these technologies creates, say cloud native proponents, applications that exist in the cloud that are:
You might be thinking - \"Wow! With all that goodness, why shouldn't all software applications be manufactured using the cloud native approach?\" Indeed, many of the principles of cloud native computing are now applied to all sorts of software development. Cloud native computing has expanded beyond the cloud. Additional methodologies (e.g. 12 factor apps), tools (e.g. Prometheus) and techniques (e.g., service discovery and service mesh mechanisms) have emerged to refine (some might say improve) the cloud native approach. Most, if not all, of what is labelled as cloud native computing technology can and has been used in general software development and deployment environments that don't operate in the cloud.
That includes use in edge or IoT computing.
There are, however, important differences between the edge and cloud. They are on opposite ends of the computing spectrum. These natural differences require, in many cases, that edge / IoT applications be constructed and run a little differently.
Note
The continuum of edge computing is vast. One often needs to define \"edge\" before making too many generalizations. Running MCUs and PLCs in a factory is at one end of the edge spectrum, versus a rather large and powerful ruggedized server in a retail store, versus a rack of servers at the base of a cell phone tower at the other end of the spectrum - yet all these qualify as \"edge computing\". In this light, as EdgeX Foundry was generally built for the more resource constrained, farther reaches of the edge (although it can be used in larger edge environments), this reference explores how cloud native computing applies under some of the lowest common denominator environments of the edge/IoT space.
So while it would be great if cloud native computing could be directly and wholly applied to the edge and IoT space - and by association then EdgeX Foundry - the constraints of the edge / IoT environment often allow only some of the cloud native computing approach (tools, technology, etc.) to be applied. This reference attempts to explain where cloud native computing principles have been applied to EdgeX, and where (and why) some challenges exist. It also identifies where future work and improvements in EdgeX (and the edge) and products from CNCF may help bring EdgeX more in line with cloud native computing.
"},{"location":"general/EdgeX_CN/#edge-native","title":"Edge Native","text":"The EdgeX community likes to think of EdgeX as \"Edge Native\". Born at the edge and adhering to some well established needs of the edge and IoT environments. Edge Native shares many of the principals of Cloud Native, but there are differences and one cannot (should not) blanketly try to apply cloud native to edge native realms just as the reverse (applying edge native to cloud native realms) would also be wrong.
"},{"location":"general/EdgeX_CN/#edgex-and-cloud-native-computing","title":"EdgeX and Cloud Native Computing","text":"While EdgeX is not cloud native, it has adopted quite a bit of cloud native principals and technologies. The lists below discuss where EdgeX does, does partially, and does not apply cloud native.
"},{"location":"general/EdgeX_CN/#incorporated-cloud-native-ingredients-in-edgex","title":"Incorporated Cloud Native Ingredients In EdgeX","text":"Micro Services
EdgeX has fully embraced micro services. From the beginning of the project, micro services offered a means to provide an edge/IoT application platform based on loosely coupled capabilities with well defined APIs. A micro service architecture allows the adopter to pick and choose which services are important to their use case and drop the others (critical in a resource constrained environment). It allows EdgeX services to be more easily improved upon and replaced (often by 3rd parties and commercially driven implementers) as better solutions emerge over time. It allows services to be written in alternate programming languages or using technologies best suited to the job. Micro services are especially beneficial where flexibility is a driving force, as it is in cloud and edge computing.
APIs
Each EdgeX micro service has a well defined API set. This API set is what allows replacement services to be created and inserted with ease. It allows for applications on top of EdgeX to be more easily created. Over the course of its existence, this API set has seen only one major revision (and most of that revision was based on the inclusion of standard communication elements such as correlation ids, pagination, and standard error messaging versus a change to the functional APIs). This speaks to how well the APIs are performing in the face of EdgeX requirements. Furthermore, the REST API definitions are even serving as the foundation for EdgeX service communication in other protocols (such as message oriented middleware). This is not unique as cloud native computing systems are also starting to embrace the use of service communications in alternate protocols as well as REST.
CI/CD
Through the efforts of some very talented, experienced and dedicated devops community members, EdgeX has enjoyed world class continuous integration/continuous delivery (CI/CD) since day one of the project. The EdgeX devops team has provided the project with automated builds, tests, and creation of project artifacts (like containers and snaps) that run with each pull request, nightly (to check the day's work), or on a regular schedule (such as performance checks monthly to ensure the platform remains within expected parameters as it is developed). As shown in cloud native environments, well developed CI/CD pipelines make sure EdgeX is able to \"make 'high impact' code changes, at frequent rates, and with predictable impacts and minimal work.\"
"},{"location":"general/EdgeX_CN/#sometimes-incorporated-cloud-native-ingredients-in-edgex","title":"Sometimes Incorporated Cloud Native Ingredients in EdgeX","text":"The following elements of cloud native are often, but not always applied in EdgeX.
Containers
EdgeX supports (even embraces) containers, but does not require their use. The EdgeX community produces both Docker containers and snaps (Ubuntu Linux software packages) with each release - along with Docker Compose and Helm Charts for orchestration and deployment assistance. Containers provide a convenient mechanism to package up a micro service with all of its dependencies, configuration, etc. They are a convenient software unit that makes deploying, orchestrating and monitoring the services of an application easier. However, there are environments where EdgeX runs that do not support container (or snap or other containerized) runtimes. Resource constraints (memory, storage, CPU, etc.), environmental situations (such as hardware architecture or OS), legacy infrastructure (old hardware or OS) and security constraints are just some of the reasons why EdgeX supports but does not dictate the use of containers. Further, and perhaps most importantly, EdgeX often provides the middleware between operational technology (OT) - like physical equipment and sensors - and information technology (IT). In the world of OT, there are physical connections and hardware specific touch points that need to be accommodated that make using a container in that instance very difficult. It's not uncommon to see EdgeX adopters apply a hybrid approach whereby some of its services are containerized while other services are running \"bare metal\" or outside of any containerization runtime.
Agile
EdgeX has not adopted the Agile Manifesto, but the project does operate on Agile principles. The community formally releases twice a year, but development of the product is ongoing constantly and any change (new feature, bug fix, refactor, etc.) is tested and integrated into the product continuously and immediately (through the CI/CD process mentioned above). Formal releases are more stakes in the ground with regard to higher-level stability and agreed upon timelines for significant features. The community has adopted a philosophy of \"crawl, walk, run\" to grow new features that support a requirements base - but with an understanding (even an expectation) that requirements will change and/or be more fully understood as the feature evolves and gets used. While face-to-face meetings between community members are difficult given the global nature of an open source project, regular and frequent communications between the community developers/architects in and about the code are favored above lots of formal and comprehensive document exchange. Developers are free to use the tools and processes that suit them best so long as the resulting code fulfils requirements and satisfies the CI/CD process.
Distributed
EdgeX is a micro service architecture. Services communicate with each other via REST or message bus and that communication can occur across nodes (aka machines, hosts, etc.). Services have even been built to wait and continue to attempt to communicate with a dependent service - allowing for some resiliency. As such, EdgeX is, at its core, distributable. It was designed such that the services could operate largely independently and on top of whatever limited resources are available at the edge. As an example deployment, an EdgeX device service could run on a Raspberry Pi or smaller compute platform that is directly connected by GPIO to a physical sensor, while the core services are run on an edge gateway, and the application and analytic services (rules engine) run on an edge server. This would allow each service to make the most of the resources available to the solution. Having said that, there are some complexities around real world distributed solutions that adopters would still need to solve depending on their use case and environment. For example, while services can communicate across a distributed set of nodes, the communications between EdgeX services are not secure by default (as would be provided via something like a cloud native service mesh). Adopters would need to provide for their own means to secure all traffic between services in most production environments. Service discovery is not fully implemented. EdgeX services do register with a service registry (Consul) but the services do not use that registry to locate other services. If a service changed location, other services would need to have their configuration changed in order to know and use the service at its new location. Finally, latency is a real concern in edge systems. In addition to service to service communications, most services use stores of information (Redis for data, Vault for secrets, Consul for configuration) which could also be distributed. These are referred to as backing services in cloud native terminology. Even if the communications were secure, if these stores or other services are all distributed, then the additional latency to constantly communicate with services and stores may not be conducive to the edge use case it supports. Each \"hop\" on a network of distributed services has a cost, and that cost adds up when building solutions that operate and manage physical edge capability.
"},{"location":"general/EdgeX_CN/#cloud-native-ingredients-not-in-edgex-and-why","title":"Cloud Native Ingredients Not In EdgeX (and why)","text":"Kubernetes
EdgeX provides example Helm Charts to assist adopters that want to run EdgeX in a Kubernetes environment. However, EdgeX was not designed to fully operate in a multi-cluster environment and take advantage of a full K8s environment. Our example Helm Charts, for example, allow a single instance of each EdgeX service to be deployed/orchestrated and monitored, but it would not allow K8s to fully manage and scale EdgeX services. Why? First and foremost Kubernetes is large compared to the resource constraints of some edge platforms. While smaller Kubernetes environments are being developed for the edge (see Futures below), a whole host of challenges such as resource constraints, environment, infrastructure, etc. (as mentioned under Containers above) may not allow K8s to operate at the edge. Kubernetes is, for the most part, about the ability to load balance, distribute traffic, and scale (up or down) workloads so that an application remains stable. But on an edge platform, where would Kubernetes find the resources to balance and distribute and scale? Because edge nodes are static and often times physically connected to the sensors they collect data from, there is not the means to grow and/or shift the workloads. Portions of EdgeX might be able to scale up or down (those not physically tied to an edge sensor), but the platform as a whole is often rooted to the physical world it is connected to.
There are benefits (and challenges) to the use of Kubernetes that must be considered - whether used at the edge or in the enterprise.
Some of the Benefits of Kubernetes - It provides a \"central pane of glass\" for placing workloads at the edge, monitoring them, and upgrading them, more easily than a native, snap-based, or Docker-based deployment. - It allows people to more easily deploy workloads that span from the cloud to the edge by using familiar tools that allow users to place their workloads in a more appropriate place. - Kubernetes is often chosen over Docker alone for container orchestration, with lots of commercially supported Kubernetes distributions for doing so. - Despite the fact that edge resources are not elastic, Kubernetes can make better scheduling decisions in a complex edge environment, where computational accelerators may be available on some nodes and not others, and Kubernetes can help place those workloads where they will run most efficiently.
Some of the Challenges of Using Kubernetes at the Edge - Edge resources are not elastic - Some devices are physically connected to nodes using non-routable or non-Internet protocols, which reduces the value of the Kubernetes scheduler - Storage is a sticking point - unless there is enough infrastructure at the edge to make storage highly available, separation of the storage from the workload mathematically reduces availability (i.e., 0.9 x 0.9 = 0.81!) - Available network bandwidth and latency can be a concern: a Kubernetes cluster generates a lot of background network and CPU activity.
Serverless Functions
EdgeX is not built on a serverless execution model. Unlike the compute and infrastructure resources of the cloud (which can almost be thought of as infinitely available and scaled up or down as needed), edge compute resources and infrastructure are not scaled up or down based on demand. An edge gateway, running on a light pole of a smart city for example, is not dynamic. The gateway must be provisioned based on the expected highest demand of that platform. The workload on the edge gateway must operate within those resource constraints. EdgeX is designed to operate in some of the smallest of these static, resource constrained environments.
Cloud
Interestingly, we have been asked if EdgeX can run in the cloud. Indeed, some services (such as application services or analytics packages like the rules engine) could run in the cloud (most of the services are platform agnostic), but EdgeX was designed to serve as the middleware between the edge and the cloud. At the lowest level, EdgeX services are meant to connect the physical edge (IoT sensors and devices of the OT world) to IT worlds. EdgeX connects things that don't always speak TCP/IP based IT protocols. EdgeX is meant to explore data at the edge in order to reduce latency of communication (making decisions closer to where the decision is turned into action) with the physical edge and reduce the amount of data that needs to be back-hauled to the world of IT (reducing the transportation and storage of unimportant edge data). Even if physical sensors or devices are able to connect and talk to the cloud directly (perhaps because they have Wifi or 5G capability allowing them to connect via TCP/IP), the latency needs and cost to transport all the data directly to the cloud is typically prohibitive.
Note
There are some edge use cases where a sensor-to-cloud architecture is warranted. Where the sensor speaks well known IT protocols (TCP/IP REST, MQTT, etc.), the edge data collection rates are small, and there is no need to make quick decisions at the edge, a simple sensor to cloud architecture makes sense and would likely negate the need for EdgeX in that situation.
"},{"location":"general/EdgeX_CN/#other-cloud-native-aspects","title":"Other Cloud Native Aspects","text":"Here are some other aspects or thoughts associated to the cloud native approach (directly or by loose association) and how they apply to EdgeX.
OS is separate
As highly abstracted, containerized applications, cloud native apps do not have a dependency on any specific operating system or individual machine. EdgeX is, for the most part, platform agnostic and able to run on any hardware, OS or connect to any type of sensor or cloud system (whether using EdgeX containers or running on bare metal). However, there are some sensors/devices that require OS or hardware specific drivers or protocol support. These specific services (typically device services) are OS dependent.
High Availability
While not strictly a cloud native principle, cloud native container apps are typically said to provide high availability (HA) - avoiding downtime (scheduled or unscheduled), often by taking advantage of cloud native infrastructure like Kubernetes to keep multiple instances of a service running when HA is paramount. EdgeX does not offer HA out of the box. Services are built to be resilient (for example, recovering from anticipated errors or waiting for dependent services to come up or return when they are not detected), but they are not guaranteed to be HA. When EdgeX services are run in some environments (snaps for example) the environment may detect service issues and launch a new instance of the service to prevent downtime, but these are features of the underlying runtime environments and not of EdgeX services directly. HA often requires a certain amount of redundancy; that is, keeping multiple instances of a service running (or at the ready) and using something like Kubernetes to route traffic appropriately given the condition of a service. EdgeX does not have this infrastructure built in, and even if it did, it would have difficulty since some services are again tied to physical sensors/devices. If a device service connected to a Modbus device, for example, was to go down, then a backup/redundant service would be of little use without re-provisioning the sensor or device to the backup device service. In order to provide true HA uptime with an edge solution that includes EdgeX, one would need to scale out, not up. That is, one would need to set up redundant hardware (sensors, gateway, etc.) with the edge application (EdgeX in this instance) connected to its copy of the sensors and devices and each transmitting back to the IT enterprise such that the enterprise could compare and detect when one of the copies was likely having issues.
Would EdgeX ever explore building more HA capability into its services (or even some of its services)? This is unlikely in the near term for the following reasons:
Benefitting from Elastic Infrastructure
Cloud native applications take advantage of shared infrastructure (hardware, software, etc.) provided by the cloud platform in an \"elastic manner\" - that is, expanding or shrinking their use of infrastructure based on need (and not really availability, which can be considered near infinite). As previously mentioned, edge platforms rarely, if ever, provide this type of infrastructure. Therefore, EdgeX is not built to benefit from it. If an EdgeX service was to begin to receive more and more hits on its APIs, the service would eventually fail. There is no EdgeX-provided capability to scale out additional copies of the service.
12 factor app
EdgeX and its services are not 12 factor apps. EdgeX does try to abide by many of the twelve factors (one codebase, declared and isolated dependencies, external config, isolated and configurable backing services, separate build, release and run stages, etc.). But some of the 12 factors, such as concurrency (scale out via the process model), are not possible with each EdgeX service as already mentioned above.
Observable
Perhaps one of the greatest contributions of the CNCF community to cloud native computing is providing all sorts of tools and technologies to observe and analyze cloud native applications in the cloud. Tools like Prometheus make monitoring cloud native containers and their resource utilization a breeze. EdgeX does not come with native observability capabilities. When using EdgeX containers, tools like Prometheus can be used for observability and analytics to monitor EdgeX services. Likewise, on some platforms and OS, there are ingredients (like Linux process status or system monitor for snaps) that can be used to help facilitate some level of monitoring. But these are not provided by EdgeX, usually require additional work by an adopter, and may not provide the level of inspection detail required. EdgeX is, with the Kamakura release, starting to provide more system level data (versus sensor data), metrics and events via message bus that an adopter can subscribe to in order to do more observing/analyzing of the EdgeX services. This, however, is raw data; some additional tooling will be required on top of it to make sense of the data and provide either human or machine monitoring.
"},{"location":"general/EdgeX_CN/#the-future-of-cloud-native-and-edgex","title":"The Future of Cloud Native and EdgeX","text":"As cloud native computing technology and principals expands to more levels of our software realms and as the edge begins to become more indistinguishable from any other part of our computing network, it is inevitable that EdgeX will become more cloud native like. Or perhaps put more precisely, cloud native and edge native are tending toward each other. Edge computing environments are becoming less resource constrained in many places. The CNCF is looking to bring cloud native technology and tools (like Kubernetes) to the edge. Additionally, there are places where EdgeX improvements can help to bridge the cloud native | edge native divide.
Kubernetes Support
As lighter weight Kubernetes infrastructure becomes available (e.g. K3s, KubeEdge, Minikube, etc. - see a comparison for context) and is improved upon, and/or as more edge computing environments get more resources, one of the chief cloud native technologies - that is Kubernetes - or its close cousin will emerge to better facilitate deployment, orchestration, and monitoring (observability) of container based workloads at the edge. EdgeX must be prepared to support and embrace it as it has containers and snaps - yet still recognize that the lowest common denominator of edge platforms may only support \"bare metal\" (only OS and not hypervisor or container infrastructure) type deployments for the foreseeable future.
Better Use of the Service Registry
EdgeX services can and should use the service registry to locate dependent services. This will allow services to be more easily distributed and even allow for use of load balancing and redundant services in some cases.
Secure Service-to-Service Communications
Where warranted, the inclusion of secure communication between services and potentially the inclusion of an optional service mesh will allow for more easily distributed services.
"},{"location":"general/PlatformRequirements/","title":"Platform Requirements","text":"EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended:
MemoryStorageOperating SystemsMemory: minimum of 1 GB. When considering memory for your EdgeX platform, consider your use of the database - Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see list below). Those strategies have continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual).
Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32GB of storage is minimally recommended to start.
EdgeX Foundry has been run successfully on many systems, including, but not limited to, the following systems:
Info
EdgeX Foundry runs on various distributions and / or versions of Linux, Unix, MacOS, Windows, etc. However, the community only supports the platform on amd64 (x86-64) and arm64 architectures.
EdgeX Foundry releases pre-built artifacts as Docker images and Snaps. Please refer to Getting Started for details.
EdgeX can run on armhf architecture, but that requires users to build their own executables from source. EdgeX does not officially support armhf.
Each EdgeX micro service requires configuration (i.e. - a repository of initialization and operating values). The configuration is initially provided by a YAML file but a service can utilize the centralized configuration management provided by EdgeX for its configuration.
See the Configuration and Registry documentation for more details about initialization of services and the use of the configuration service.
Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX.
Please refer to the general Common Configuration documentation for configuration properties common to all services. Find service specific configuration references in the tabs below.
CoreSupportingApplication & AnalyticsDeviceSecurity Service Name Configuration Reference core-data Core Data Configuration core-metadata Core Metadata Configuration core-command Core Command Configuration Service Name Configuration Reference support-notifications Support Notifications Configuration support-scheduler Support Scheduler Configuration Services Name Configuration Reference app-service General Application Service Configuration app-service-configurable Configurable Application Service Configuration eKuiper rules engine/eKuiper Basic eKuiper Configuration Services Name Configuration Reference device-service General Device Service Configuration device-virtual Virtual Device Service Configuration Services Name Configuration Reference API Gateway API Gateway Configuration Add-on Services Configuring Add-on Service"},{"location":"general/ServicePorts/","title":"Default Service Ports","text":"The following tables (organized by type of service) capture the default service ports. These default ports are also used in the EdgeX provided service routes defined in the Kong API Gateway for access control.
CoreSupportingApplicationDeviceSecurityMiscellaneous Services Name Port Definition core-data 59880 core-metadata 59881 core-command 59882 redis 6379 consul 8500 Services Name Port Definition support-notifications 59860 support-scheduler 59861 rules engine / eKuiper 59720 system management agent (deprecated) 58890 Services Name Port Definition app-sample 59700 app-service-rules 59701 app-push-to-core 59702 app-mqtt-export 59703 app-http-export 59704 app-functional-tests 59705 app-external-mqtt-trigger 59706 app-metrics-influxdb 59707 app-rfid-llrp-inventory 59711 Services Name Port Definition device-virtual 59900 device-modbus 59901 device-bacnet 59980 device-mqtt 59982 device-usb-camera 59983 device-onvif-camera 59984 device-camera 59985 device-rest 59986 device-coap 59988 device-rfid-llrp 59989 device-grove 59992 device-snmp 59993 device-gpio 59910 Services Name Port Definition vault 8200 nginx 8000, 8443 security-spire-server 59840 security-spiffe-token-provider 59841 security-proxy-auth 59842 Services Name Port Definition ui 4000 Modbus simulator 1502 MQTT broker 1883"},{"location":"getting-started/","title":"Getting Started","text":"EdgeX Foundry is operating system and architecture agnostic. The community releases artifacts for common architectures. However, it is possible to build the components for other platforms. See the platform requirements reference page for details.
To get started you need to get EdgeX Foundry either as a User or as a Developer/Contributor.
"},{"location":"getting-started/#user","title":"User","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". You will want to follow the Getting Started as a User guide which takes you through the process of deploying the latest EdgeX releases.
For demo purposes and to run EdgeX on your machine in just a few minutes, please refer to the Quick Start guide.
"},{"location":"getting-started/#developer-and-contributor","title":"Developer and Contributor","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. You will want to follow the Getting Started for Developers guide.
"},{"location":"getting-started/#hybrid","title":"Hybrid","text":"See Getting Started Hybrid if you are developing or working on a particular micro service, but want to run the other micro services via Docker Containers. When working on something like an analytics service (as a developer or contributor) you may not wish to download, build and run all the EdgeX code - you only want to work with the code of your service. Your new service may still need to communicate with other services while you test your new service. Unless you want to get and build all the services, developers will often get and run the containers for the other EdgeX micro services and run only their service natively in a development environment. The EdgeX community refers to this as \"Hybrid\" development.
"},{"location":"getting-started/#device-service-developer","title":"Device Service Developer","text":"As a developer, if you intend to connect IoT objects (device, sensor or other \"thing\") that are not currently connected to EdgeX Foundry, you may also want to obtain the Device Service Software Development Kit (DS SDK) and create new device services. The DS SDK creates all the scaffolding code for a new EdgeX Foundry device service; allowing you to focus on the details of interfacing with the device in its native protocol. See Getting Started with Device SDK for help on using the DS SDK to create a new device service. Learn more about Device Services and the Device Service SDK at Device Services.
"},{"location":"getting-started/#application-service-developer","title":"Application Service Developer","text":"As a developer, if you intend to get EdgeX sensor data to external systems (be that an enterprise application, on-prem server or Cloud platform like Azure IoT Hub, AWS IoT, Google Cloud IOT, etc.), you will likely want to obtain the Application Functions SDK (App Func SDK) and create new application services. The App Func SDK creates all the scaffolding code for a new EdgeX Foundry application service; allowing you to focus on the details of data transformation, filtering, and otherwise prepare the sensor data for the external endpoint. Learn more about Application Services and the Application Functions SDK at Application Services.
"},{"location":"getting-started/#versioning","title":"Versioning","text":"Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.
"},{"location":"getting-started/#long-term-support","title":"Long Term Support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.
"},{"location":"getting-started/ApplicationFunctionsSDK/","title":"Getting Started","text":""},{"location":"getting-started/ApplicationFunctionsSDK/#the-application-functions-sdk","title":"The Application Functions SDK","text":"The SDK is built around the idea of a \"Functions Pipeline\". A functions pipeline is a collection of various functions that process the data in the order that you've specified. The functions pipeline is executed by the specified trigger in the configuration.yaml
. The first function in the pipeline is called with the event that triggered the pipeline (ex. dtos.Event
). Each successive call in the pipeline is called with the return result of the previous function. Let's take a look at a simple example that creates a pipeline to filter particular device ids and subsequently transform the data to XML:
package main\n\nimport (\n\"errors\"\n\"fmt\"\n\"os\"\n\n\"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg\"\n\"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\"\n\"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/transforms\"\n)\n\nconst (\nserviceKey = \"app-simple-filter-xml\"\n)\n\nfunc main() {\n// turn off secure mode for examples. Not recommended for production\n_ = os.Setenv(\"EDGEX_SECURITY_SECRET_STORE\", \"false\")\n\n// 1) First thing to do is to create an new instance of an EdgeX Application Service.\nservice, ok := pkg.NewAppService(serviceKey)\nif !ok {\nos.Exit(-1)\n}\n\n// Leverage the built in logging service in EdgeX\nlc := service.LoggingClient()\n\n// 2) shows how to access the application's specific configuration settings.\ndeviceNames, err := service.GetAppSettingStrings(\"DeviceNames\")\nif err != nil {\nlc.Error(err.Error())\nos.Exit(-1)\n}\n\nlc.Info(fmt.Sprintf(\"Filtering for devices %v\", deviceNames))\n\n// 3) This is our pipeline configuration, the collection of functions to\n// execute every time an event is triggered.\nif err := service.SetDefaultFunctionsPipeline(\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\ntransforms.NewConversion().TransformToXML\n); err != nil {\nlc.Errorf(\"SetDefaultFunctionsPipeline returned error: %s\", err.Error())\nos.Exit(-1)\n}\n\n// 4) Lastly, we'll go ahead and tell the SDK to \"start\" and begin listening for events\n// to trigger the pipeline.\nerr = service.Run()\nif err != nil {\nlc.Errorf(\"Run returned error: %s\", err.Error())\nos.Exit(-1)\n}\n\n// Do any required cleanup here\n\nos.Exit(0)\n}\n
The above example is meant to merely demonstrate the structure of your application. Notice that the output of the last function is not available anywhere inside this application. You must provide a function in order to work with the data from the previous function. Let's go ahead and add the following function that prints the output to the console.
func printXMLToConsole(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\n// Leverage the built in logging service in EdgeX\nlc := ctx.LoggingClient()\n\nif data == nil {\nreturn false, errors.New(\"printXMLToConsole: No data received\")\n}\n\nxml, ok := data.(string)\nif !ok {\nreturn false, errors.New(\"printXMLToConsole: Data received is not the expected 'string' type\")\n}\n\nprintln(xml)\nreturn true, nil\n}\n
After placing the above function in your code, the next step is to modify the pipeline to call this function: if err := service.SetDefaultFunctionsPipeline(\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\ntransforms.NewConversion().TransformToXML,\nprintXMLToConsole //notice this is not a function call, but simply a function pointer. \n); err != nil {\n...\n}\n
Set the Trigger type to http
in configuration file found here: res/configuration.yaml [Trigger]\nType=\"http\"\n
Using PostMan or curl send the following JSON to localhost:<port>/api/v3/trigger
{\n\"requestId\": \"82eb2e26-0f24-48ba-ae4c-de9dac3fb9bc\",\n\"apiVersion\" : \"v3\",\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"deviceName\": \"Random-Float-Device\",\n\"profileName\": \"Random-Float-Device\",\n\"sourceName\" : \"Float32\",\n\"origin\": 1540855006456,\n\"id\": \"94eb2e26-0f24-5555-2222-de9dac3fb228\",\n\"readings\": [\n{\n\"apiVersion\" : \"v3\",\n\"resourceName\": \"Float32\",\n\"profileName\": \"Random-Float-Device\",\n\"deviceName\": \"Random-Float-Device\",\n\"value\": \"76677\",\n\"origin\": 1540855006469,\n\"ValueType\": \"Float32\",\n\"id\": \"82eb2e36-0f24-48aa-ae4c-de9dac3fb920\"\n}\n]\n}\n}\n
After making the above modifications, you should now see data printing out to the console in XML when an event is triggered.
Note
You can find this complete example \"Simple Filter XML\" and more examples located in the examples section.
Up until this point, the pipeline has been triggered by an event over HTTP and the data at the end of that pipeline lands in the last function specified. In the example, data ends up printed to the console. Perhaps we'd like to send the data back to where it came from. In the case of an HTTP trigger, this would be the HTTP response. In the case of EdgeX MessageBus, this could be a new topic to send the data back to the MessageBus for other applications that wish to receive it. To do this, simply call ctx.SetResponseData(data []byte)
passing in the data you wish to \"respond\" with. In the above printXMLToConsole(...)
function, replace println(xml)
with ctx.SetResponseData([]byte(xml))
. You should now see the response in your postman window when testing the pipeline.
These instructions are for C Developers and Contributors to get, run and otherwise work with C-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements.
If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User)
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#what-you-need-for-c-development","title":"What You Need For C Development","text":"Many of EdgeX device services are built in C. In the future, other services could be built in C. In additional to the hardware and software listed in the Developers guide, to build EdgeX C services, you will need the following:
You can install these on Debian 11 (Bullseye) by running:
sudo apt-get install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev libpaho-mqtt-dev uuid-dev libhiredis-dev\n
Some of these supporting packages have dependencies of their own, which will be automatically installed when using package managers such as APT, DNF etc. libpaho-mqtt-dev
is not included in Ubuntu prior to Groovy (20.10). IOTech provides a package for Focal (20.04 LTS) which may be installed as follows:
sudo curl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public -o /etc/apt/trusted.gpg.d/iotech-public.asc\nsudo echo \"deb https://iotech.jfrog.io/iotech/debian-release $(lsb_release -cs) main\" | tee -a /etc/apt/sources.list.d/iotech.list\nsudo apt-get update\nsudo apt-get install libpaho-mqtt\n
CMake is required to build the SDKs. Version 3 or better is required. You can install CMake on Debian by running:
sudo apt-get install cmake\n
Check that your C development environment includes the following:
From EdgeX version 3.0, the C utilities used by the SDK must be installed as a pre-requisite package, rather than being downloaded and built with the SDK itself as in previous versions. Note that if re-using an old build tree, the src/c/iot
and include/iot
directories must be removed as these will be outdated.
All commands shown are to be run as the root user.
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#debian-and-ubuntu","title":"Debian and Ubuntu","text":"Management of package signing keys is changed in newer versions. For Debian 11 and Ubuntu 22.04:
apt-get install lsb-release apt-transport-https curl gnupg\ncurl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public | gpg --dearmor -o /usr/share/keyrings/iotech.gpg\necho \"deb [signed-by=/usr/share/keyrings/iotech.gpg] https://iotech.jfrog.io/iotech/debian-release $(lsb_release -cs) main\" | tee -a /etc/apt/sources.list.d/iotech.list\napt-get update\napt-get install iotech-iot-1.5-dev\n
For earlier versions:
apt-get install lsb-release apt-transport-https curl gnupg\ncurl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public | apt-key add -\necho \"deb https://iotech.jfrog.io/iotech/debian-release $(lsb_release -cs) main\" | tee -a /etc/apt/sources.list.d/iotech.list\napt-get update\napt-get install iotech-iot-1.5-dev\n
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#alpine","title":"Alpine","text":"wget https://iotech.jfrog.io/artifactory/api/security/keypair/public/repositories/alpine-release -O /etc/apk/keys/alpine.dev.rsa.pub\necho \"https://iotech.jfrog.io/artifactory/alpine-release/v3.16/main\" >> /etc/apk/repositories\napk update\napk add iotech-iot-1.5-dev\n
Note: If not using Alpine 3.16, replace v3.16 in the above commands with the correct version.
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#next-steps","title":"Next Steps","text":"To explore how to create and build EdgeX device services in C, head to the Device Services, C SDK guide.
"},{"location":"getting-started/Ch-GettingStartedDTOValidation/","title":"DTO Validation","text":"The go-mod-core-contracts leverage the go-playground/validator for DTO validation as it provides common validation function and customization mechanism.
"},{"location":"getting-started/Ch-GettingStartedDTOValidation/#tag-usage","title":"Tag usage","text":"EdgeX verifies the struct fields by using go-playground/validator validation tags or custom validation tags, for example:
type Device struct {\n DBTimestamp `json:\",inline\"`\n Id string `json:\"id,omitempty\" validate:\"omitempty,uuid\"`\n Name string `json:\"name\" validate:\"required,edgex-dto-none-empty-string,edgex-dto-rfc3986-unreserved-chars\"`\n Description string `json:\"description,omitempty\"`\n AdminState string `json:\"adminState\" validate:\"oneof='LOCKED' 'UNLOCKED'\"`\n OperatingState string `json:\"operatingState\" validate:\"oneof='UP' 'DOWN' 'UNKNOWN'\"`\n ...\n}\n
The device name field carries the required, edgex-dto-none-empty-string and edgex-dto-rfc3986-unreserved-chars validations shown in the struct tags above. You can find more validations in the go-playground/validator and EdgeX custom validations in the go-mod-core-contracts.
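As a minimal standalone sketch (not taken from go-mod-core-contracts) of how go-playground/validator enforces tags like these, assuming the v10 module; the EdgeX custom tags such as edgex-dto-none-empty-string are registered separately by go-mod-core-contracts:
package main\n\nimport (\n    \"fmt\"\n\n    \"github.com/go-playground/validator/v10\"\n)\n\ntype device struct {\n    Id   string `validate:\"omitempty,uuid\"`\n    Name string `validate:\"required\"`\n}\n\nfunc main() {\n    validate := validator.New()\n    // Name is required and Id is not a UUID, so err reports both offending fields and tags.\n    err := validate.Struct(device{Id: \"not-a-uuid\"})\n    fmt.Println(err)\n}\n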
"},{"location":"getting-started/Ch-GettingStartedDTOValidation/#character-restriction","title":"Character restriction","text":"The EdgeX uses the custom validation edgex-dto-rfc3986-unreserved-chars to prevent the user inputting the reserved characters.
This validation allows for only the following characters:
EdgeX 3.0
In EdgeX 3.0, the character restriction was reduced for the command name and resource name because some protocols may use /
or .
in the name. By using URL escaping for the API, device command name and resource name allow various characters. For example, the user can define the command name line-a/test:value
and use it with URL escaping as /api/v3/device/name/Modbus-TCP-Device/line-a%2Ftest%3Avalue
.
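As a quick illustration (not part of the EdgeX docs) of producing the escaped form shown above, Go's net/url package can be used; url.QueryEscape happens to escape both / and : to the percent-encoded form expected in the device command URL:
package main\n\nimport (\n    \"fmt\"\n    \"net/url\"\n)\n\nfunc main() {\n    // Escape a command name containing reserved characters before placing it in the URL path.\n    fmt.Println(url.QueryEscape(\"line-a/test:value\")) // line-a%2Ftest%3Avalue\n}\n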
These instructions are for Developers and Contributors to get and run EdgeX Foundry. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User.
EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. EdgeX consists of a collection of reference implementation services and SDK tools. The micro services and SDKs are written in Go or C. These documentation pages provide a developer with the information and instructions to get and run EdgeX Foundry in development mode - that is running natively outside of containers and with the intent of adding to or changing the existing code base.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#what-you-need","title":"What You Need","text":""},{"location":"getting-started/Ch-GettingStartedDevelopers/#hardware","title":"Hardware","text":"EdgeX Foundry is an operating system (OS) and hardware (HW)-agnostic edge software platform. See the reference page for platform requirements. These provide guidance on a minimal platform to run the EdgeX platform. However, as a developer, you may find that additional memory, disk space, and improved CPU are essential to building and debugging.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#software","title":"Software","text":"Developers need to install the following software to get, run and develop EdgeX Foundry micro services:
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#git","title":"Git","text":"Use this free and open source version control (SVC) system to download (and upload) the EdgeX Foundry source code from the project's GitHub repositories. See https://git-scm.com/downloads for download and install instructions. Alternative tools (Easy Git for example) could be used, but this document assumes use of git and leaves how to use alternative SVC tools to the reader.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#redis","title":"Redis","text":"By default, EdgeX Foundry uses Redis (version 5 starting with the Geneva release) as the persistence mechanism for sensor data as well as metadata about the devices/sensors that are connected. See Redis Documentation for download and installation instructions.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#docker-optional","title":"Docker (Optional)","text":"If you intend to create Docker images for your updated or newly created EdgeX services, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides you educational information.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#additional-programming-tools-and-next-steps","title":"Additional Programming Tools and Next Steps","text":"Depending on which part of EdgeX you work on, you need to install one or more programming languages (Go, C, etc.) and associated tooling. These tools are covered under the documentation specific to each type of development.
Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#long-term-support","title":"Long Term Support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/","title":"Getting Started using Docker","text":""},{"location":"getting-started/Ch-GettingStartedDockerUsers/#introduction","title":"Introduction","text":"These instructions are for users to get and run EdgeX Foundry using the latest stable Docker images.
If you wish to get the latest builds of EdgeX Docker images (prior to releases), then see the EdgeX Nexus Repository guide.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#get-run-edgex-foundry","title":"Get & Run EdgeX Foundry","text":""},{"location":"getting-started/Ch-GettingStartedDockerUsers/#install-docker-docker-compose","title":"Install Docker & Docker Compose","text":"To run Dockerized EdgeX, you need to install Docker first. See https://docs.docker.com/engine/install/ to learn how to install Docker. If you are new to Docker, the same web site provides you educational information. The following short video is also very informative https://www.youtube.com/watch?time_continue=3&v=VhabrYF1nms
Use Docker Compose to orchestrate the fetch (or pull), install, and start the EdgeX micro service containers. Also use Docker Compose to stop the micro service containers. See: https://docs.docker.com/compose/ to learn more about Docker Compose and https://docs.docker.com/compose/install/linux/ to install it.
You do not need to be an expert with Docker (or Docker Compose) to get and run EdgeX. This guide provides the steps to get EdgeX running in your environment. Some knowledge of Docker and Docker Compose is nice to have, but not required. Basic Docker and Docker Compose commands provided here enable you to run, update, and diagnose issues within EdgeX.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#select-a-edgex-foundry-compose-file","title":"Select a EdgeX Foundry Compose File","text":"After installing Docker and Docker Compose, you need a EdgeX Docker Compose file. EdgeX Foundry has over a dozen micro services, each deployed in its own Docker container. This file is a manifest of all the EdgeX Foundry micro services to run. The Docker Compose file provides details about how to run each of the services. Specifically, a Docker Compose file is a manifest file, which lists:
The EdgeX development team provides Docker Compose files for each release. Visit the project's GitHub and find the edgex-compose repository. This repository holds all of the EdgeX Docker Compose files for each of the EdgeX releases/versions. The Compose files for each release are found in separate branches. Click on the main
button to see all the branches.
The edgex-compose repository contains branches for each release. Select the release branch to locate the Docker Compose files for that release.
Locate the branch containing the EdgeX Docker Compose file for the version of EdgeX you want to run.
Note
The main
branch contains the Docker Compose files that use artifacts created from the latest code submitted by contributors (from the night builds). Most end users should avoid using these Docker Compose files. They are work-in-progress. Users should use the Docker Compose files for the latest version of EdgeX.
In each edgex-compose branch, you will find several Docker Compose files (all with a .yml extension). The name of the file will suggest the type of EdgeX instance the Compose file will help set up. The table below provides a list of the Docker Compose filenames for the main
version. Find the Docker Compose file that matches:
Once you have selected the release branch of edgex-compose you want to use, download it using your favorite tool. The examples below use wget to fetch the non-secure Docker Compose file (shown here from the main branch).
x86: wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty.yml -O docker-compose.yml\n
ARM64: wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty-arm64.yml -O docker-compose.yml\n
Note
The commands above fetch the Docker Compose to a file named 'docker-compose.yml' in the current directory. Docker Compose commands look for a file named 'docker-compose.yml' by default. You can use an alternate file name but then must specify that file name when issuing Docker Compose commands. See Compose reference documentation for help.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#generate-a-custom-docker-compose-file","title":"Generate a custom Docker Compose file","text":"The Docker Compose files in the ireland
branch contain the standard set of EdgeX services configured to use Redis
message bus and include only the Virtual and REST device services. If you need to have different device services running or use MQTT
for the message bus, you need a modified version of one of the standard Docker Compose files. You could manually add the device services to one of the existing EdgeX Compose files or, use the EdgeX Compose Builder tool to generate a new custom Compose file that contains the services you would like included. When you use Compose Builder, you don't have to worry about adding all the necessary ports, variables, etc. as the tool will generate the service elements in the file for you. The Compose Builder tool was added with the Hanoi release. You will find the Compose Builder tool in each of the release branches since Hanoi
under the compose-builder folder of those branches. You will also find a compose-builder folder on the main
branch for creating custom Compose files for the nightly builds.
Do the following to use this tool to generate a custom Compose file:
git clone https://github.com/edgexfoundry/edgex-compose.git\n
2. Change directories to the clone and checkout the appropriate release branch. Checkout of the Kamakura release branch is shown here. cd edgex-compose/\ngit checkout kamakura\n
3. Change directories to the compose-builder folder and then use the make gen <options>
command to generate your custom compose file. The generated Docker Compose file is named docker-compose.yaml
. Here are some examples: cd compose-builder/\nmake gen ds-mqtt mqtt-broker\n - Generates a secure Compose file configured to use MQTT for the message bus, then adds the MQTT broker and the Device MQTT services. \n\nmake gen no-secty ds-modbus \n - Generates a non-secure Compose file with just the Device Modbus device service.\n\nmake gen no-secty arm64 ds-grove \n - Generates a non-secure Compose file for ARM64 with just the Device Grove device service.\n
See the README document in the compose-builder directory for details on all the available options. The Compose Builder is different per release, so make sure to consult the README in the appropriate release branch. See Ireland's Compose Builder README for details on the latest release Compose Builder options for make gen
.
Note
The generated Docker Compose file may require additional customizations for your specific needs, such as environment override(s) to set the appropriate Host IP address, etc.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#run-edgex-foundry","title":"Run EdgeX Foundry","text":"Now that you have the EdgeX Docker Compose file, you are ready to run EdgeX. Follow these steps to get the container images and start EdgeX!
In a command terminal, change directories to the location of your docker-compose.yml. Run the following command in the terminal to pull (fetch) and then start the EdgeX containers.
docker-compose up -d\n
Warning
If you are using Docker Compose Version 2, please replace docker-compose
with docker compose
before proceeding. This change should be applied to all the docker-compose
in this tutorial. See: https://www.docker.com/blog/announcing-compose-v2-general-availability/ for more information.
Info
If you wish, you can fetch the images first and then run them. This allows you to make sure the EdgeX images you need are all available before trying to run.
docker-compose pull\ndocker-compose up -d\n
Note
The -d option indicates you want Docker Compose to run the EdgeX containers in detached mode - that is to run the containers in the background. Without -d, the containers will all start in the terminal and in order to use the terminal further you have to stop the containers.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#verify-edgex-foundry-running","title":"Verify EdgeX Foundry Running","text":"In the same terminal, run the process status command shown below to confirm that all the containers downloaded and started.
docker-compose ps\n
If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. If you are using a custom Compose file, your containers list may vary. Also note that some \"setup\" containers are designed to start and then exit after configuring your EdgeX instance.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#checking-the-status-of-edgex-foundry","title":"Checking the Status of EdgeX Foundry","text":"In addition to the process status of the EdgeX containers, there are a number of other tools to check on the health and status of your EdgeX instance.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#edgex-foundry-container-logs","title":"EdgeX Foundry Container Logs","text":"Use the command below to see the log of any service.
# see the logs of a service\ndocker-compose logs -f [compose-service-name]\n# example - core data\ndocker-compose logs -f data\n
See EdgeX Container Names for a list of the EdgeX Docker Compose service names.
A check of an EdgeX service log usually indicates if the service is running normally or has errors.
When you are done reviewing the content of the log, select Control-c to stop the output to your terminal.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#ping-check","title":"Ping Check","text":"Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. After the EdgeX micro service containers are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that is available.
http://localhost:[service port]/api/v3/ping\n
See EdgeX Default Service Ports for a list of the EdgeX default service ports.
\"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#consul-registry-check","title":"Consul Registry Check","text":"EdgeX uses the open source Consul project as its registry service. All EdgeX micro services are expected to register with Consul as they start. Going to Consul's dashboard UI enables you to see which services are up. Find the Consul UI at http://localhost:8500/ui.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/","title":"Getting Started - Go Developers","text":""},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#introduction","title":"Introduction","text":"These instructions are for Go Lang Developers and Contributors to get, run and otherwise work with Go-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements.
If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#what-you-need-for-go-development","title":"What You Need For Go Development","text":"In additional to the hardware and software listed in the Developers guide, you will need the following to work with the EdgeX Go-based micro services.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#go","title":"Go","text":"The open sourced micro services of EdgeX Foundry are written in Go 1.16. See https://golang.org/dl/ for download and installation instructions. Newer versions of Go are available and may work, but the project has not built and tested to these newer versions of the language. Older versions of Go, especially 1.10 or older, are likely to cause issues (EdgeX now uses Go Modules which were introduced with Go Lang 1.11).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-essentials","title":"Build Essentials","text":"In order to compile and build some elements of EdgeX, Gnu C compiler, utilities (like make), and associated librarires need to be installed. Some IDEs may already come with these tools. Some OS environments may already come with these tools. Others environments may require you install them. For Ubuntu environments, you can install a convenience package called Build Essentials.
Note
If you are installing Build Essentials, note that there is a build-essential package for each Ubuntu release. Search for 'build-essential' associated to your Ubuntu version via Ubuntu Packages Search.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#ide-optional","title":"IDE (Optional)","text":"There are many tool options for writing and editing Go Lang code. You could use a simple text editor. For more convenience, you may choose to use an integrated development environment (IDE). The list below highlights IDEs used by some of the EdgeX community (without any project endorsement).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#goland","title":"GoLand","text":"GoLand is a popular, although subscription-fee based, Go specific IDE. Learn how to purchase and download Go Land here: https://www.jetbrains.com/go/.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#visual-studio-code","title":"Visual Studio Code","text":"Visual Studio Code is a free, open source IDE developed by Microsoft. Find and download Visual Studio Code here: https://code.visualstudio.com/.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#atom","title":"Atom","text":"Atom is also a free, open source IDE used with many languages. Find and download Atom here: https://ide.atom.io/.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#get-the-code","title":"Get the code","text":"This part of the documentation assumes you wish to get and work with the key EdgeX services. This includes but is not limited to Core, Supporting, some security, and system management services. To work with other Go-based security services, device services, application services, SDKs, user interface, or other service you may need to pull in other EdgeX repository code. See other getting started guides for working with other Go-based services. As you will see below, you do not need to explicitly pull in dependency modules (whether EdgeX or 3rd party provided). Dependencies will automatically be pulled through the building process.
To work with the key services, you will need to download the source code from the EdgeX Go repository. The EdgeX Go-based micro services are all available in a single GitHub repository download. Once the code is pulled, the Go micro services are built and packaged as platform dependent executables. If Docker is installed, the executable can also be containerized for end user deployment/use.
To download the EdgeX Go code, first change directories to the location where you want to download the code (to edgex in the image below). Then use your git tool and request to clone this repository with the following command:
git clone https://github.com/edgexfoundry/edgex-go.git\n
Note
If you plan to contribute code back to the EdgeX project (as a Contributor), you are going to want to fork the repositories you plan to work with and then pull your fork versus the EdgeX repositories directly. This documentation does not address the process and procedures for working with an EdgeX fork, committing changes and submitting contribution pull requests (PRs). See some of the links below in the EdgeX Wiki for help on how to fork and contribute EdgeX code.
Furthermore, this pulls and works with the latest code from the main
branch. The main
branch contains code that is \"work in progress\" for the upcoming release. If you want to work with a specific release, checkout code from the specific release branch or tag(e.g. v2.0.0
, hanoi
, v1.3.11
, etc.)
To build the Go Lang services found in edgex-go, first change directories to the root of the edgex-go code
cd edgex-go\n
Second, use the community provided Makefile to build all the services in a single call make build\n
Info
The first time EdgeX builds, it will take longer than other builds as it has to download all dependencies. Depending on the size of your host machine, an initial build can take several minutes. Make sure the build completes and has no errors. If it does build, you should find new service executables in each of the service folders under the service directories found in the /edgex-go/cmd folder.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-foundry","title":"Run EdgeX Foundry","text":""},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-the-database","title":"Run the Database","text":"Several of the EdgeX Foundry micro services use a database. This includes core-data, core-metadata, support-scheduler, among others. Therefore, when working with EdgeX Foundry its a good idea to have the database up and running as a general rule. See the Redis Quick Start Guide for how to run Redis in a Linux environment (or find similar documentation for other environments).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-services","title":"Run EdgeX Services","text":"With the services built, and the database up and running, you can now run each of the services. In this example, the services will run without security services turned on. If you wish to run with security, you will need to clone, build and run the security services.
In order to turn security off, first set the EDGEX_SECURITY_SECRET_STORE
environment variable to false with an export call.
Simply call
export EDGEX_SECURITY_SECRET_STORE=false\n
Next, move to the cmd
folder and then change folders to the service folder for the service you want to run. Start the executable (with default configuration) that is in that folder. For example, to start Core Metadata, enter the cmd/core-metadata folder and start core-metadata.
cd cmd/core-metadata/\n./core-metadata &\n
Note
When running the services from the command line, you will usually want to start the service with the &
character after the command. This makes the command run in the background. If you do not run the service in the background, then you will need to leave the service running in the terminal and open another terminal to start the other services.
This will start the EdgeX go service and leave it running in the background until you kill it. The log entries from the service will still display in the terminal. Watch the log entries for any ERROR indicators.
Info
To kill a service there are several options, but an easy means is to use pkill with the service name.
pkill core-metadata\n
Start as many services as you need in order to carry out your development, testing, etc. As an absolute minimal set, you will typically need to run core-metadata, core-data, core-command and a device service. Selection of the device service will depend on which physical sensor or device you want to use (or use the virtual device to simulate a sensor). Here are the set of commands to launch core-data and core-command (in addition to core-metadata above)
cd ../core-data/\n./core-data &\ncd ../core-command/\n./core-command &\n
Tip
You can run some services via Docker containers while working on specific services in Go. See Working in a Hybrid Environment for more details.
While the EdgeX services are running you can make EdgeX API calls to localhost
.
Info
No sensor data will flow yet as this just gets the key services up and running. To get sensor data flowing into EdgeX, you will need to get, build and run an EdgeX device service in a similar fashion. The community provides a virtual device service to test and experiment with (https://github.com/edgexfoundry/device-virtual-go).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#verify-edgex-is-working","title":"Verify EdgeX is Working","text":"Each EdgeX micro service has a built-in respond to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. After the EdgeX micro services are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that is available.
http://localhost:[port]/api/v3/ping\n
See EdgeX Default Service Ports for a list of the EdgeX default service ports.
\"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. The example above shows the ping of core-data.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#next-steps","title":"Next Steps","text":"Application services and some device services are also built in Go. To explore how to create and build EdgeX application and devices services in Go, head to SDK documentation covering these EdgeX elements.
IDEs offer many code editing conveniences. Go Land was specifically built to edit and work with Go code. So if you are doing any significant code work with the EdgeX Go micro services, you will likely find it convenient to edit, build, run, test, etc. from GoLand or other IDE.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#import-edgex","title":"Import EdgeX","text":"To bring in the EdgeX repository code into Go Land, use the File \u2192 Open... menu option in Go Land to open the Open File or Project Window.
In the \"Open File or Project\" popup, select the location of the folder containing your cloned edgex-go repo.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#open-the-terminal","title":"Open the Terminal","text":"From the View menu in Go Land, select the Terminal menu option. This will open a command terminal from which you can issue commands to install the dependencies, build the micro services, run the micro services, etc.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-the-edgex-micro-services","title":"Build the EdgeX Micro Services","text":"Run \"make build\" in the Terminal view (as shown below) to build the services. This can take a few minutes to build all the services.
Just as when running make build from the command line in a terminal, the micro service executables that get built in Go Land's terminal will be created in each of the service folders under the service directories found in the /edgex-go/cmd folder.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex","title":"Run EdgeX","text":"With all the micro services built, you can now run EdgeX services. You may first want to make sure the database is running. Then, set any environment variables, change directories to the /cmd and service subfolder, and run the service right from the the terminal (same as in Run EdgeX Services).
You can now call on the service APIs to make sure they are running correctly. Namely, call on http://localhost:\\[service port\\]/api/v3/ping
to see each service respond to the simplest of requests.
In some cases, as a developer or contributor, you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run all the micro services. There is an alternative approach! You can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a \"hybrid\" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from Docker containers. This page outlines how to work in a hybrid development environment.
As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-and-run-the-edgex-docker-containers","title":"Get and Run the EdgeX Docker Containers","text":"Since we plan to work with the virtual device service in this example, you don't need or want to run the virtual device service. You will run all the other services via Docker Compose.
Based on the instructions found in the Getting Started using Docker, locate and download the appropriate Docker Compose file for your development environment. Next, issue the following commands to start the EdgeX containers and then stop the virtual device service (which is the service you are working on in this example).
docker-compose up -d \ndocker-compose stop device-virtual\n
Run the EdgeX containers and then stop the service container that you are going to work on - in this case the virtual device service container.
Note
These notes assume you are working with the EdgeX Minnesota or later release. It also assumes you have downloaded the appropriate Docker Compose file and have named it docker-compose.yml
so you don't have to specify the file name each time you run a Docker Compose command. Some versions of EdgeX may require other or additional containers to run.
Tip
You can also use the EdgeX Compose Builder tool to create a custom Docker Compose file with just the services you want. See the Compose Builder documentation on and checkout the Compose Builder tool in GitHub.
Run the command below to confirm that all the containers have started and that the virtual device container is no longer running.
docker-compose ps\n
With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. However, the practice could be applied to any service.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-the-service-code","title":"Get the service code","text":"Per Getting Started Go Developers, pull the micro service code you want to work on from GitHub. In this example, we use the latest released tag for device-virtual-go as the micro service that is going to be worked on. The main branch is the development branch for the next release. The latest release tag should always be used so you are worked with the most recent stable code. The release tags can be found here. Release tags are those tags to do not have -dev
in the name.
git clone --branch <latest-release-tag> https://github.com/edgexfoundry/device-virtual-go.git\n
"},{"location":"getting-started/Ch-GettingStartedHybrid/#build-the-service-code","title":"Build the service code","text":"At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service.
cd device-virtual-go/\nmake build\n
Clone the service from Github, make your code changes and then build the service locally.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#run-the-service-code-natively","title":"Run the service code natively.","text":"The executable created by the make build
command is found in the cmd folder of the service. Change folders to the location of the executable. Set any environment variables needed depending on your EdgeX setup. In this example, we did not start the security elements so we need to set EDGEX_SECURITY_SECRET_STORE
to false
in order to turn off security. Finally, run the service right from a terminal.
cd cmd\nexport EDGEX_SECURITY_SECRET_STORE=false\n./device-virtual -cp -d -o\n
Note
The -cp
flag tells the service to use the Configuration Provider. This is required so that the service can pull the common configuration. The -d
flag tells the service to run in developer mode (aka hybrid mode) so that any Host
names in configuration for dependent services are automatically changed from their Docker network names to localhost
allowing the service to find the dependent services. The -o
flag tells the service to overwrite the configuration from the local file into the Config Provider (only needed when the service was previously run in Docker).
EdgeX 3.0
Common configuration is new in EdgeX 3.0. EdgeX services now have a reduced local configuration file that only contains the services' private configuration. All other configuration settings are now in the common configuration. See the Service Configuration section for more details.
Change folders to the service's cmd/ folder, set env vars, and then execute the service executable in the cmd folder.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#check-the-results","title":"Check the results","text":"At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Because Core Metadata callbacks do not work in the hybrid environment, the virtual device service will not receive the Add Device callbacks on the initial run after creating them in Core Metadata. The simple work around for this issue is to stop (Ctrl-c
from the terminal) and restart the virtual device service (again with ./device-virtual -cp -d
execution).
The virtual device service log after stopping and restarting.
Give the virtual device a few seconds or so to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling on the Core Data API that checks the count of events in Core Data.
http://localhost:59880/api/v3/event/count\n
For this example, you can check that the virtual device service is sending data into Core Data by checking the event count.
Note
If you choose, you can also import the service into GoLand and then code and run the service from GoLand. Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/","title":"C SDK","text":"In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#install-dependencies","title":"Install dependencies","text":"See the Getting Started - C Developers guide to install the necessary tools and infrastructure needed to develop a C service.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#get-the-edgex-device-sdk-for-c","title":"Get the EdgeX Device SDK for C","text":"The next step is to download and build the EdgeX device service SDK for C.
First, clone the device-sdk-c from Github:
git clone -b v3.0.1 https://github.com/edgexfoundry/device-sdk-c.git\ncd ./device-sdk-c\n
Note
The clone command above has you pull v3.0.1 of the C SDK which is the version compatible with the Minnesota release.
Then, build the device-sdk-c:
make\n
For this guide, you use the example template provided by the C SDK as a starting point for a new device service. You modify the device service to generate random integer values.
Begin by copying the template example source into a new directory named example-device-c
:
mkdir -p ../example-device-c/res/profiles\nmkdir -p ../example-device-c/res/devices\ncp ./src/c/examples/template.c ../example-device-c\ncd ../example-device-c\n
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#build-your-device-service","title":"Build your Device Service","text":"Now you are ready to build your new device service using the C SDK you compiled in an earlier step.
Tell the compiler where to find the C SDK files:
export CSDK_DIR=../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-3.0.1\n
Note
The exact path to your compiled CSDK_DIR may differ depending on the tagged version number on the SDK. The version of the SDK can be found in the VERSION file located in the ./device-sdk-c/VERSION file. In the example above, the Minnesota release of 3.0.1 is used.
Now build your device service executable:
gcc -I$CSDK_DIR/include -I/opt/iotech/iot/1.5/include -L$CSDK_DIR/lib -L/opt/iotech/iot/1.5/lib -o device-example-c template.c -lcsdk -liot\n
If everything is working properly, a device-example-c
executable will be created in the directory.
Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your template.c
method template_get_handler. Replace the following code:
for (uint32_t i = 0; i < nreadings; i++)\n{\n/* Log the attributes for each requested resource */\niot_log_debug (driver->lc, \" Requested reading %u:\", i);\ndump_attributes (driver->lc, requests[i].resource->attrs);\n/* Fill in a result regardless */\nreadings[i].value = iot_data_alloc_string (\"Template result\", IOT_DATA_REF);\n}\nreturn true;\n
with this code:
for (uint32_t i = 0; i < nreadings; i++)\n{\nconst char *rdtype = iot_data_string_map_get_string (requests[i].resource->attrs, \"type\");\nif (rdtype)\n{\nif (strcmp (rdtype, \"random\") == 0)\n{\n/* Set the reading as a random value between 0 and 100 */\nreadings[i].value = iot_data_alloc_i32 (rand() % 100);\n}\nelse\n{\n*exception = iot_data_alloc_string (\"Unknown sensor type requested\", IOT_DATA_REF);\nreturn false;\n}\n}\nelse\n{\n*exception = iot_data_alloc_string (\"Unable to read value, no \\\"type\\\" attribute given\", IOT_DATA_REF);\nreturn false;\n}\n}\nreturn true;\n
Here the reading value is set to a random signed integer. Various iot_data_alloc_
functions are defined in the iot/data.h
header allowing readings of different types to be generated.
A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it.
Follow these steps to create a device profile for the simple random number generating device service.
Explore the files in the device-sdk-c/src/c/examples/res/profiles folder. Note the example TemplateProfile.json device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources
in the file represent properties of a device (properties like SensorOne, SensorTwo and Switch).
A pre-created device profile for the random number device is provided in this documentation. This is supplied in the alternative file format .yaml. Download random-generator.yaml and save the file to the ./res/profiles
folder.
Open the random-generator.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber
. Note how the association of a type to the deviceResource. In this case, the device profile informs EdgeX that RandomNumber
will be a Int32. In real world IoT situations, this deviceResource list could be extensive and filled with many deviceResources all different types of data.
Device Service accepts pre-defined devices to be added to EdgeX during device service startup.
Follow these steps to create a pre-defined device for the simple random number generating device service.
A pre-created device for the random number device is provided in this documentation. Download random-generator-devices.json and save the file to the ./res/devices
folder.
Open the random-generator-devices.json file in a text editor. Note how the file contents represent an actual device with its properties (properties like Name, ProfileName, AutoEvents). In this example, the device described has a profileName: RandNum-Device
. In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile
Now update the configuration for the new device service. This documentation provides a new configuration.yaml file. This configuration file: - changes the port the service operates on so as not to conflict with other device services
Download configuration.yaml and save the file to the ./res folder.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#custom-structured-configuration","title":"Custom Structured Configuration","text":"C Device Services support structured custom configuration as part of the [Driver]
section in the configuration.yaml file.
View the main
function of template.c
. The confparams
variable is initialized with default values for three test parameters. These values may be overridden by entries in the configuration file or by environment variables in the usual way. The resulting configuration is passed to the init
function when the service starts.
Configuration parameters X
, Y/Z
and Writable/Q
correspond to configuration file entries as follows:
[Writable]\n [Writable.Driver]\n Q = \"foo\"\n\n[Driver]\n X = \"bar\"\n [Driver.Y]\n Z = \"baz\"\n
Entries in the writable section can be changed dynamically if using the registry; the reconfigure
callback will be invoked with the new configuration when changes are made.
In addition to strings, configuration entries may be integer, float or boolean typed. Use the different iot_data_alloc_
functions when setting up the defaults as appropriate.
Now you have your new device service, modified to return a random number, a device profile that will tell EdgeX how to read that random number, as well as a configuration file that will let your device service register itself and its device profile with EdgeX, and begin taking readings every 10 seconds.
Rebuild your Device Service to reflect the changes that you have made:
gcc -I$CSDK_DIR/include -I/opt/iotech/iot/1.5/include -L$CSDK_DIR/lib -L/opt/iotech/iot/1.5/lib -o device-example-c template.c -lcsdk -liot\n
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#run-your-device-service","title":"Run your Device Service","text":"Allow your newly created Device Service, which was formed out of the Device Service C SDK, to create sensor mimicking data which it then sends to EdgeX.
Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call:
docker compose -f docker-compose-no-secty.yml up -d\n
Back in your custom device service directory, tell your device service where to find the libcsdk.so
and libiot.so
:
export LD_LIBRARY_PATH=$CSDK_DIR/lib:/opt/iotech/iot/1.5/lib\n
Run your device service:
./device-example-c\n
You should now see your device service having its /Random command called every 10 seconds. You can verify that it is sending data into EdgeX by watching the logs of the edgex-core-data
service:
docker logs -f edgex-core-data\n
Which would print an event record every time your device service is called.
You can manually generate an event using curl to query the device service directly:
curl 0:59999/api/v3/device/name/RandNum-Device01/RandomNumber\n
Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX:
http://localhost:59880/api/v3/event/device/name/RandNum-Device01?limit=100
This request asks core data to provide the last 100 events/readings associated with the RandNum-Device01 device.
In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some SDK framework and work necessary to complete a device service without actually having a device to talk to.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#install-dependencies","title":"Install dependencies","text":"See the Getting Started - Go Developers guide to install the necessary tools and infrastructure needed to develop a GoLang service.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#get-the-edgex-device-sdk-for-go","title":"Get the EdgeX Device SDK for Go","text":"Follow these steps to create a folder on your file system, download the Device SDK, and get the GoLang device service SDK on your system.
Create a collection of nested folders, ~/edgexfoundry
on your file system. This folder will hold your new Device Service. In Linux, create a directory with a single mkdir command
mkdir -p ~/edgexfoundry\n
In a terminal window, change directories to the folder just created and pull down the SDK in Go with the commands as shown.
cd ~/edgexfoundry\ngit clone --depth 1 --branch v2.0.0 https://github.com/edgexfoundry/device-sdk-go.git\n
Note
The clone command above has you pull v2.0.0 of the Go SDK which is the version associated to Ireland. There are later releases of EdgeX, and it is always a good idea to pull and use the latest version associated with the major version of EdgeX you are using. You may want to check for the latest released version by going to https://github.com/edgexfoundry/device-sdk-go and look for the latest release.
Create a folder that will hold the new device service. The name of the folder is also the name you want to give your new device service. Standard practice in EdgeX is to prefix the name of a device service with device-
. In this example, the name 'device-simple' is used.
mkdir -p ~/edgexfoundry/device-simple\n
Copy the example code from device-sdk-go to device-simple:
cd ~/edgexfoundry\ncp -rf ./device-sdk-go/example/* ./device-simple/\n
Copy Makefile to device-simple:
cp ./device-sdk-go/Makefile ./device-simple\n
cp ./device-sdk-go/version.go ./device-simple/\n
After completing these steps, your device-simple folder should look like the listing below.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#start-a-new-device-service","title":"Start a new Device Service","text":"With the device service application structure in place, time now to program the service to act like a sensor data fetching service.
Change folders to the device-simple directory.
cd ~/edgexfoundry/device-simple\n
Open main.go file in the cmd/device-simple folder with your favorite text editor. Modify the import statements. Replace github.com/edgexfoundry/device-sdk-go/v2/example/driver
with github.com/edgexfoundry/device-simple/driver
in the import statements. Also replace github.com/edgexfoundry/device-sdk-go/v2
with github.com/edgexfoundry/device-simple
. Save the file when you have finished editing.
Open Makefile found in the base folder (~/edgexfoundry/device-simple) in your favorite text editor and make the following changes
Replace:
MICROSERVICES=example/cmd/device-simple/device-simple\n
with:
MICROSERVICES=cmd/device-simple/device-simple\n
Change:
GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-sdk-go/v2.Version=$(VERSION)\"\n
to refer to the new service with:
GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-simple.Version=$(VERSION)\"\n
Change:
example/cmd/device-simple/device-simple:\ngo mod tidy\n$(GOCGO) build $(GOFLAGS) -o $@ ./example/cmd/device-simple\n
to:
cmd/device-simple/device-simple:\ngo mod tidy\n$(GOCGO) build $(GOFLAGS) -o $@ ./cmd/device-simple\n
Save the file.
Enter the following command to create the initial module definition and write it to the go.mod file:
GO111MODULE=on go mod init github.com/edgexfoundry/device-simple\n
Use an editor to open and edit the go.mod file created in ~/edgexfoundry/device-simple. Add the code highlighted below to the bottom of the file. This code indicates which version of the device service SDK and the associated EdgeX contracts module to use.
require (\ngithub.com/edgexfoundry/device-sdk-go/v2 v2.0.0\ngithub.com/edgexfoundry/go-mod-core-contracts/v2 v2.0.0\n)\n
Note
You should always check the go.mod file in the latest released version SDK for the correct versions of the Go SDK and go-mod-contracts to use in your go.mod.
To ensure that the code you have moved and updated still works, build the device service. In a terminal window, make sure you are still in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command:
make build\n
If there are no errors, your service is ready for you to add custom code to generate data values as if there was a sensor attached.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#customize-your-device-service","title":"Customize your Device Service","text":"The device service you are creating isn't going to talk to a real device. Instead, it is going to generate a random number where the service would ordinarily make a call to get sensor data from the actual device.
Locate the simpledriver.go file in the /driver folder and open it with your favorite editor.
In the import() area at the top of the file, add \"math/rand\" under \"time\".
Locate the HandleReadCommands() function in this same file (simpledriver.go). Find the following lines of code in this file (around line 139):
if reqs[0].DeviceResourceName == \"SwitchButton\" {\ncv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeBool, s.switchButton) res[0] = cv\n}\n
Add the conditional (if-else) code in front of the above conditional:
if reqs[0].DeviceResourceName == \"randomnumber\" {\ncv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeInt32, int32(rand.Intn(100)))\nres[0] = cv\n} else\n
The first line of code checks that the current request is for a resource called \"RandomNumber\". The second line of code generates an integer (between 0 and 100) and uses that as the value the device service sends to EdgeX -- mimicking the collection of data from a real device. It is here that the device service would normally capture some sensor reading from a device and send the data to EdgeX. The HandleReadCommands is where you'd need to do some customization work to talk to the device, get the latest sensor values and send them into EdgeX.
Save the simpledriver.go file
A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the the device and how to get it.
Follow these steps to create a device profile for the simple random number generating device service.
Explore the files in the cmd/device-simple/res/profiles folder. Note the example Simple-Driver.yaml device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources
in the file represent properties of a device (properties like SwitchButton, X, Y and Z rotation).
A pre-created device profile for the random number device is provided in this documentation. Download random-generator.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/profiles
folder.
Open the random-generator.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber
. Note how the association of a type to the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be a INT32. In real world IoT situations, this deviceResource list could be extensive. Rather than a single deviceResource, you might find this section filled with many deviceResources and each deviceResource associated to a different type.
Device Service accepts pre-defined devices to be added to EdgeX during device service startup.
Follow these steps to create a pre-defined device for the simple random number generating device service.
Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.yaml that is already in this folder. Open the file with your favorite editor and explore its contents. Note how DeviceList
in the file represent an actual device with its properties (properties like Name, ProfileName, AutoEvents).
A pre-created device for the random number device is provided in this documentation. Download random-generator-devices.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices
folder.
Open the random-generator-devices.yaml file in a text editor. In this example, the device described has a ProfileName: RandNum-Device
. In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile
Go Device Services provide /api/v3/validate/device
API to validate device's ProtocolProperties. This feature allows Device Services whose protocol has strict rule to validate their devices before adding them into EdgeX.
Go SDK provides DeviceValidator
interface:
// DeviceValidator is a low-level device-specific interface implemented\n// by device services that validate device's protocol properties.\ntype DeviceValidator interface {\n// ValidateDevice triggers device's protocol properties validation, returns error\n// if validation failed and the incoming device will not be added into EdgeX.\nValidateDevice(device models.Device) error\n}\n
By implementing DeviceValidator
interface whenever a device is added or updated, ValidateDevice
function will be called to validate incoming device's ProtocolProperties and reject the request if validation failed.
Now update the configuration for the new device service. This documentation provides a new configuration.yaml file. This configuration file:
Download configuration.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res
folder (overwrite the existing configuration file). Change the host address of the device service to your system's IP address.
Warning
In the configuration.yaml, change the host address (around line 14) to the IP address of the system host. This allows core metadata to callback to your new device service when a new device is created. Because the rest of EdgeX, to include core metadata, will be running in Docker, the IP address of the host system on the Docker network must be provided to allow metadata in Docker to call out from Docker to the new device service running on your host system.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#custom-structured-configuration","title":"Custom Structured Configuration","text":"Go Device Services can now define their own custom structured configuration section in the configuration.yaml
file. Any additional sections in the configuration file are ignored by the SDK when it parses the file for the SDK defined sections.
This feature allows a Device Service to define and watch it's own structured section in the service's configuration file.
The SDK
API provides the following APIs to enable structured custom configuration:
LoadCustomConfig(config UpdatableConfig, sectionName string) error
Loads the service's custom configuration from local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if service is using the Configuration Provider. The UpdateFromRaw
interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.
ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback.
See the Device MQTT Service for an example of using the new Structured Custom Configuration capability.
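A minimal sketch of the pattern, assuming a hypothetical AppCustom section and the v3 Go Device SDK interfaces (the struct and field names here are illustrative, not defined by this document):

package driver

import "github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"

// CustomConfig models a hypothetical AppCustom section in configuration.yaml.
type CustomConfig struct {
	Greeting string
	Writable WritableCustomConfig
}

// WritableCustomConfig holds the values that may change at runtime.
type WritableCustomConfig struct {
	DiscoverSleepDurationSecs int
}

// UpdateFromRaw lets the SDK copy values loaded from the Configuration Provider into this struct.
func (c *CustomConfig) UpdateFromRaw(rawConfig interface{}) bool {
	updated, ok := rawConfig.(*CustomConfig)
	if !ok {
		return false
	}
	*c = *updated
	return true
}

func loadCustomConfig(sdk interfaces.DeviceServiceSDK, cfg *CustomConfig) error {
	if err := sdk.LoadCustomConfig(cfg, "AppCustom"); err != nil {
		return err
	}
	// Watch only the writable sub-section; the callback applies changes pushed at runtime.
	return sdk.ListenForCustomConfigChanges(&cfg.Writable, "AppCustom/Writable",
		func(raw interface{}) {
			if updated, ok := raw.(*WritableCustomConfig); ok {
				cfg.Writable = *updated
			}
		})
}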
The following built-in device service metrics are collected by the Device SDK
See Device Service Configuration Properties for detail on configuring device service metrics
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#custom","title":"Custom","text":"The Custom Device Service Metrics capability allows for device service developers to define, collect and report their own service metrics beyond the common built-in service metrics supplied by the Device SDK.
The following are the steps to collect and report service metrics:
Determine the metric type that needs to be collected
counter - Track the integer count of something
gauge - Track the integer value of something
gaugeFloat64 - Track the float64 value of something
timer - Track the time it takes to accomplish a task
histogram - Track the integer value variance of something
Create an instance of the metric type from github.com/rcrowley/go-metrics
myCounter = gometrics.NewCounter()
myGauge = gometrics.NewGauge()
myGaugeFloat64 = gometrics.NewGaugeFloat64()
myTimer = gometrics.NewTimer()
myHistogram = gometrics.NewHistogram(gometrics.NewUniformSample(<reservoir size>))
Determine if there are any tags to report along with your metric. Not common so nil
is typically passed for the tags map[string]string
parameter in the next step.
Register your metric(s) with the MetricsManager from the sdk
reference. See Device SDK API for more details:
service.MetricsManager().Register(\"MyCounterName\", myCounter, nil)
Collect the metric
myCounter.Inc(someIntvalue)
myCounter.Dec(someIntvalue)
myGauge.Update(someIntvalue)
myGaugeFloat64.Update(someFloatvalue)
myTimer.Update(someDuration)
myTimer.Time(func() { /* do something */ })
myTimer.UpdateSince(someTimeValue)
myHistogram.Update(someIntvalue)
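Putting the steps above together, a minimal sketch (the metric name EventsSent is illustrative, and service is assumed to be the SDK's DeviceServiceSDK reference):

package driver

import (
	gometrics "github.com/rcrowley/go-metrics"

	"github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"
)

// newEventsSentCounter creates a counter and registers it with the SDK's MetricsManager.
// The name used here must also be listed under Writable.Telemetry.Metrics to be reported.
func newEventsSentCounter(service interfaces.DeviceServiceSDK) (gometrics.Counter, error) {
	eventsSent := gometrics.NewCounter()
	if err := service.MetricsManager().Register("EventsSent", eventsSent, nil); err != nil {
		return nil, err
	}
	return eventsSent, nil
}

// Later, wherever the driver publishes an event:
//   eventsSent.Inc(1)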
Configure reporting of the service's metrics. See Writable.Telemetry
configuration details in the Common Configuration section for more detail.
Example - Service Telemetry Configuration
Writable:
  Telemetry:
    Interval: "30s"
    Metrics: # All service's metric names must be present in this list.
      MyCounterName: true
      MyGaugeName: true
      MyGaugeFloat64Name: true
      MyTimerName: true
      MyHistogram: true
    Tags: # Contains the service level tags to be attached to all the service's metrics
      Gateway: "my-iot-gateway" # Tag must be added here or via Consul. An Env Override can only change an existing value, not add new ones.
Note
The metric names used in the above configuration (to enable or disable reporting of a metric) must match the metric name used when the metric is registered. A partial "starts with" match is acceptable, i.e. the registered metric name may start with the configured name above.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#retrieving-secrets","title":"Retrieving Secrets","text":"The Go Device SDK provides the SecretProvider.GetSecret()
API to retrieve the Device Services secrets. See the Device MQTT Service for an example of using the SecretProvider.GetSecret()
API. Note that this code implements a retry loop allowing time for the secret(s) to be push into the service's SecretStore
via the /secret endpoint. See Storing Secrets section for more details.
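A rough sketch of such a retry loop, assuming the v3 Go Device SDK interfaces; the secret name "credentials", its keys and the timing values are illustrative only:

package driver

import (
	"fmt"
	"time"

	"github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"
)

// getCredentials polls the Secret Store for a while, giving an operator time to
// push the secret via the /secret endpoint before giving up.
func getCredentials(service interfaces.DeviceServiceSDK) (map[string]string, error) {
	timeout := time.After(2 * time.Minute)
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for {
		secrets, err := service.SecretProvider().GetSecret("credentials", "username", "password")
		if err == nil {
			return secrets, nil
		}
		select {
		case <-timeout:
			return nil, fmt.Errorf("timed out waiting for secret 'credentials': %w", err)
		case <-ticker.C:
			// secret not available yet, try again
		}
	}
}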
Just as you did in the Build your Device Service step above, build the device-simple service, which creates the executable program that is your device service. In a terminal window, make sure you are in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command:
cd ~/edgexfoundry/device-simple
make build
If there are no errors, your service is created and put in the ~/edgexfoundry/device-simple/cmd/device-simple
folder. Look for the device-simple
executable in the folder.
Allow the newly created device service, which was formed out of the Device Service Go SDK, to create sensor-mimicking data that it then sends to EdgeX:
Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call (we're using non-security EdgeX in this example):
docker compose -f docker-compose-no-secty.yml up -d
In a terminal window, change directories to the device-simple's cmd/device-simple folder and run the new device-simple service.
cd ~/edgexfoundry/device-simple/cmd/device-simple
./device-simple -cp -d
This starts the service and immediately displays log entries in the terminal.
EdgeX 3.0
In EdgeX 3.0, services must be provided with a flag indicating where the new common configuration can be found. In most cases this will be -cp/--configProvider
specifying to use the Configuration Provider for configuration. Alternatively the -cc/--commonConfig
flag can be used to specify a file that contains the common configuration. In addition, when running in hybrid mode the -d/--dev
flag tells the service that it is running in hybrid mode and to override the Host
names for dependencies with localhost
. See Command Line Options for more details.
Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX:
http://localhost:59880/api/v3/event/device/name/RandNum-Device01
This request asks core data to provide the events associated with the RandNum-Device01 device.
The EdgeX device service software development kits (SDKs) help developers create new device connectors for EdgeX. An SDK provides the common scaffolding that each device service needs. This allows developers to create new device/sensor connectors more quickly.
The EdgeX community already provides many device services. However, there is no way the community can provide for every protocol and every sensor. Even if the EdgeX community provided a device service for every protocol, your use case, sensor, or security infrastructure might require customization. Thus, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity.
EdgeX provides two SDKs to help developers create new device services. Most of EdgeX is written in Go and C. Thus, there's a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, the community may offer alternate language SDKs.
The SDKs are libraries that get incorporated into a new micro service. They make writing a new device service much easier. By importing the SDK library into your new device service project, developers are left to focus on the code that is specific to communicating with the device via its protocol.
The code in the SDK handles the other details, such as:
- initialization of the device service
- getting the service configured
- sending sensor data to core data
- managing communications with core metadata
- and much more.
The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX. For example, it makes sure the service registers with the EdgeX registry service when it starts.
Use the GoLang SDK
Use the C SDK
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/","title":"Getting Started using Snaps","text":""},{"location":"getting-started/Ch-GettingStartedSnapUsers/#introduction","title":"Introduction","text":"Snaps are application packages that are easy to install and update while being secure, cross\u2010platform and self-contained. Snaps can be installed on any Linux distribution with snap support.
Quick Start
Spinning up EdgeX with snaps is extremely easy. For demonstration purposes, let's install the platform, along with the virtual device service and EdgeX UI.
1) Install the platform snap, Device Virtual and EdgeX UI:
snap install edgexfoundry edgex-device-virtual edgex-ui\n
This installs the latest stable version of the snaps. The installation section provides more explanations.
2) Disable security in each of the installed snaps:
snap set edgexfoundry security=false
snap set edgex-device-virtual config.edgex-security-secret-store=false
snap set edgex-ui config.edgex-security-secret-store=false
Beware that this leaves the services at risk! We do it here only to simplify the quick start. Refer to disabling security for details.
3) Start the services:
# start Core and Support services in the platform snap
sudo snap start edgexfoundry.consul edgexfoundry.redis \
edgexfoundry.core-common-config-bootstrapper \
edgexfoundry.core-data edgexfoundry.core-metadata edgexfoundry.core-command \
edgexfoundry.support-scheduler edgexfoundry.support-notifications

# start Device Virtual
snap start edgex-device-virtual

# start EdgeX UI
snap start edgex-ui
You should now be able to access the UI using a browser at http://localhost:4000
To run the services with security, skip step 2 and refer to platform snap for starting all platform services and adding an API Gateway user to generate a JWT. The JWT is needed to access the secured EdgeX UI.
The following sub-sections provide generic instructions for installation, configuration, and managing services using snaps.
For the list of EdgeX snaps and specific instructions, please refer to the EdgeX Snaps section.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#installation","title":"Installation","text":"When using the snap CLI, the installation is possible by simply executing:
snap install <snap>\n
This is similar to setting --channel=latest/stable
or shorthand --stable
and will install the latest stable release of a snap. In this case, latest/stable
is the channel, composed of latest
track and stable
risk level.
To install a specific version with long term support (e.g. 2.1), or to install a beta or development release, refer to the store page for the snap, choose install, and then pick the desired channel. The store page also provides instructions for installation on different Linux distributions as well as the list of supported CPU architectures.
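For instance, to install from a long-term-support track (the track name here is only an example; check the snap's store page for the tracks that actually exist):

sudo snap install edgexfoundry --channel=2.1/stable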
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#configuration","title":"Configuration","text":"EdgeX snaps are packaged with default service configuration files. In certain cases, few configuration fields are overridden within the snap for snap-specific deployment requirements.
There are a few ways to configure snapped services. In simple cases, it should be sufficient to modify the default config files before starting the services for the first time and use config overrides to change supported settings afterwards. Please refer below to learn about the different configuration methods.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-files","title":"Config files","text":"The default configuration files are typically placed at /var/snap/<snap>/current/config
. Upon a successful startup of an EdgeX service, the server configuration file (typically named configuration.yaml
) is uploaded to the Registry by default. After that, the local server configuration file will no longer be read and any modifications will not be applied. At this point, the configurations can only be changed via the Registry or by setting environment variables. Refer to config registry or config overrides for details.
For device services, the Device and Device Profile files are submitted to Core Metadata upon initial startup. Refer to the documentation of Device Services for details.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-registry","title":"Config registry","text":"The configurations that are uploaded to the Registry (i.e. Consul by default) can be modified using Consul's UI or kv REST API. The Registry is a Core services, part of the Platform Snap.
Changes to configurations in Registry are loaded by the service at startup. If the service has already started, a restart is required to load new configurations. Configurations that are in the writable section get loaded not only at startup, but also during the runtime. In other words, changes to the writable configurations are loaded automatically without a restart.
Please refer to Common Configuration and Configuration and Registry Providers for more information.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-provider-snap","title":"Config provider snap","text":"Most EdgeX snaps have a content interface which allows another snap to seed it with configuration files. This is useful for replacing all the configuration files in a service snap via a config provider snap without manual user interaction. This should not to be confused with the EdgeX Config Provider.
A config provider snap could be a standalone package with all the necessary configurations for multiple snaps. It will expose one or more interface slots to allow connections from consumer plugs. The config provider snap can be released to the store just like any other snap. Upon a connection between provider and consumer snaps, the packaged config files get mounted inside the consumer snap, to be used by services.
Please refer to edgex-config-provider, for an example.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-overrides","title":"Config overrides","text":"EdgeX snap options schemeSince EdgeX v2.2, the snaps use the following scheme for the snap configuration options:
apps.<app>.<type>.<key>\n
where:
<app> is the name of the app (service, executable)
<type> is the type of option with respect to the app
<key> is the key for the option. It could contain a path to set a value inside an object, e.g. x.y=z sets {"x": {"y": "z"}}.
We call these app options because of the apps.<app>
prefix which is used to apply configurations to specific services. This prefix can be dropped to apply the configuration globally to all apps within a snap!
This scheme is used for config overrides (described in this section) as well as autostart described in managing services, among others.
To know more about snap configuration in general, refer here.
The EdgeX services allow overriding server configurations using environment variables. Moreover, the services read EdgeX Common Environment Variables that override configurations which are hardcoded in source code or set as command-line options.
The EdgeX snaps provide a mechanism that reads the stored key-value options and internally exports environment variables to specific services and apps.
The snap options for setting environment variables use the following format:
apps.<app>.config.<env-var>: setting an app-specific value (e.g. apps.core-data.config.service-port=1000)
config.<env-var>: setting a global value (e.g. config.service-host=localhost or config.writable-loglevel=DEBUG)
where:
<app> is the name of the app (service, executable)
<env-var> is a lowercase, dash-separated mapping of the uppercase, underscore-separated environment variable name (e.g. X_Y -> x-y). The reason for such a mapping is that uppercase and underscore characters are not supported as config keys for snaps.
Mapping examples:
| Snap config key | Environment Variable | Service configuration YAML |
|---|---|---|
| service-port | SERVICE_PORT | Service: Port: |
| clients-core-data-host | CLIENTS_CORE_DATA_HOST | Clients: core-data: Host: |
| edgex-startup-duration | EDGEX_STARTUP_DURATION | - |
| edgex-add-secretstore-tokens | EDGEX_ADD_SECRETSTORE_TOKENS | - |
Example
To change the service port of the core-data
service on edgexfoundry
snap to 8080:
snap set edgexfoundry apps.core-data.config.service-port=8080\n
This would internally export SERVICE_PORT=8080
to core-data
service.
Note
The services load the set configuration on startup. If a service has already started, a restart will be necessary to load the configurations.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#examples","title":"Examples","text":""},{"location":"getting-started/Ch-GettingStartedSnapUsers/#disabling-security","title":"Disabling security","text":"Warning
Disabling security is NOT recommended, unless for demonstration purposes, or when there are other means to secure the services.
The platform snap does NOT allow the security to be re-enabled. The only way to re-enable it is to re-install the snap.
Disabling security involves a few steps:
The platform snap, which includes all the reference security components, provides a convenience option to help disable security:
sudo snap set edgexfoundry security=false\n
The above command results in stopping everything (if active), disabling the security components (by setting their autostart options to false), as well as setting EDGEX_SECURITY_SECRET_STORE=false
internally so that the included core/support services stop using the Secret Store. Now, to start the platform without security components, either start the non-security services selectively:
sudo snap start edgexfoundry.consul edgexfoundry.redis \
edgexfoundry.core-common-config-bootstrapper \
edgexfoundry.core-data edgexfoundry.core-metadata edgexfoundry.core-command \
edgexfoundry.support-scheduler edgexfoundry.support-notifications
or set the autostart option globally:
sudo snap set edgexfoundry autostart=true\n
After disabling the security on the platform, the external services should be similarly configured by setting EDGEX_SECURITY_SECRET_STORE=false
so that they don't attempt to initialize the security.
Example
To disable security for the edgex-ui snap:
snap set edgex-ui config.edgex-security-secret-store=false
snap restart edgex-ui
Note
All snapped services except for the API Gateway are restricted by default to listening on localhost (127.0.0.1). On the platform snap, the API Gateway proxies external requests to internal services. Since disabling security on the platform snap disables the API Gateway, the service endpoints will no longer be accessible from other systems. They will still be accessible on the local machine and reachable by other local services.
If you need to make an insecure service accessible remotely, set the bind address of the service to the IP address of that networking interface on the local machine. If you trust all your interfaces and want the services to accept connections from all, set it to 0.0.0.0
.
By default, core-data
listens on 127.0.0.1:59880
:
$ sudo lsof -nPi :59880\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\ncore-data 30944 root 12u IPv4 198726 0t0 TCP 127.0.0.1:59880 (LISTEN)\n
To set the bind address of core-data
in the platform snap to 0.0.0.0
:
snap set edgexfoundry apps.core-data.config.service-serverbindaddr=\"0.0.0.0\"\n
Now, core data is listening on all interfaces (*:59880
):
$ sudo lsof -nPi :59880\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\ncore-data 30548 root 12u IPv6 185059 0t0 TCP *:59880 (LISTEN)\n
To set it for all services inside the platform snap:
snap set edgexfoundry config.service-serverbindaddr=\"0.0.0.0\"\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#using-mqtt-message-bus","title":"Using MQTT message bus","text":"The default message bus for EdgeX services is Redis Pub/Sub. If you prefer to use MQTT instead of Redis, change the message bus configurations using snap options.
Example
To switch to an insecure MQTT message bus for all core services (inside the platform snap) and the Device Virtual using snap options, set the following:
snap set edgexfoundry config.messagequeue-protocol="mqtt" \
config.messagequeue-port=1883 \
config.messagequeue-type="mqtt" \
config.messagequeue-authmode="none"

snap set edgex-device-virtual config.messagequeue-protocol="mqtt" \
config.messagequeue-port=1883 \
config.messagequeue-type="mqtt" \
config.messagequeue-authmode="none"
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#disabling-registry-and-config-provider","title":"Disabling registry and config provider","text":"Consul is the default Registry and Config Provider in EdgeX. To disable both, it would be sufficient to disable Consul and configure the services not to use Registry and Config Provider.
Example
To disable Consul and configure all services (inside the platform snap) not to use Registry and Config provider using snap options, set the following:
snap set edgexfoundry apps.consul.autostart=false
snap set edgexfoundry config.edgex-use-registry=false
snap set edgexfoundry config.edgex-configuration-provider=none
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#managing-services","title":"Managing services","text":"The services of a snap can be started/stopped/restarted using the snap CLI. When starting/stopping, you can additionally set them to enable/disable which configures whether or not the service should also start on boot.
To list the services and check their status:
snap services <snap>\n
To start and optionally enable services:
# all services
snap start --enable <snap>

# one service
snap start --enable <snap>.<app>
Similarly, a service can be stopped and optionally disabled using snap stop --disable
.
Note
The service autostart overrides the status and startup setting of the services. In other words, if autostart is set to true/false, it will apply that setting every time the snap is re-configured, e.g. when executing snap set|unset
.
To restart services, e.g. to load the configurations:
# all services
snap restart <snap>

# one service
snap restart <snap>.<app>
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#service-autostart","title":"Service autostart","text":"The EdgeX snaps provide a mechanism to change the default startup of services (e.g. enabled instead of disabled).
The EdgeX snaps allow this change via snap options, following the scheme below:
apps.<app>.autostart=true|false: changing the default startup of one app
autostart=true|false: changing the default startup of all apps
where <app> is the name of the app which can run as a service.
Disable the autostart of support-scheduler on the platform snap:
snap set edgexfoundry apps.support-scheduler.autostart=false\n
Enable the autostart of all Device USB Camera services:
snap set edgex-device-usb-camera autostart=true
The autostart options are also useful for changing the startup behavior when seeding the snap from a Gadget on Ubuntu Core.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#debugging","title":"Debugging","text":"The service logs can be queried using the snap log
command.
For example, to query 100 lines and follow:
# all services
snap logs -n=100 -f <snap>

# one service
snap logs -n=100 -f <snap>.<app>
Check snap logs --help
for details. To query not only the service logs, but also the snap logs (incl. hook apps such as install and configure), use journalctl
:
sudo journalctl -n 100 -f | grep <snap>\n
Info
The verbosity of service logs is INFO by default. This can be changed by overriding the log level using the WRITABLE_LOGLEVEL
environment variable using snap config overrides apps.<app>.config.writable-loglevel
or globally as config.writable-loglevel
.
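For example, to raise core-data's log verbosity in the platform snap using this override, and then restart the service to apply it:

sudo snap set edgexfoundry apps.core-data.config.writable-loglevel=DEBUG
sudo snap restart edgexfoundry.core-data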
The following snaps are maintained by the EdgeX working groups:
To find all EdgeX snaps on the public Snap Store, search by keyword.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#platform-snap","title":"Platform Snap","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The main platform snap, simply called edgexfoundry
contains all reference core and security services along with support-scheduler and support-notifications.
Upon installation, the services are stopped and disabled. They can be started altogether or selectively; see managing services. For example, to start all the services, run:
sudo snap start edgexfoundry\n
For the configuration of services, refer to configuration. Read below for other deployment-related instructions about this snap.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#adding-api-gateway-users","title":"Adding API Gateway users","text":"The API gateway will pass any request that authenticates using a signed identity token from the EdgeX secret store.
The baseline implementation in EdgeX 3.0 uses Vault identity and the 'userpass' authentication engine to create users, though EdgeX adopters are free to add their own Vault identities using authentication methods of their choice. To add a new user locally, use the snapped secrets-config
utility.
To get the usage help:
edgexfoundry.secrets-config proxy adduser -h\n
You may also refer to the secrets-config proxy documentation.
Creating an example user
Use secrets-config
to add an example
user (note: always specify --useRootToken
for the snap deployment of EdgeX):
sudo edgexfoundry.secrets-config proxy adduser --user example --useRootToken \
| jq --raw-output '.password' \
> password.txt
On success, the above command writes the system-generated password for example
user to password.txt
. If the "adduser" command is run multiple times, each run will overwrite the password from the previous run with a new random password.
Generating a JWT token (ID Token) for the example user
Some additional work is required to generate a JWT that is usable for API gateway authentication.
username=example
password=$(cat password.txt)

vault_token=$(curl --silent --show-err "http://localhost:8200/v1/auth/userpass/login/${username}" --data "{\"password\":\"${password}\"}" \
| jq --raw-output '.auth.client_token')

curl --silent --show-err -H "Authorization: Bearer ${vault_token}" "http://localhost:8200/v1/identity/oidc/token/${username}" \
| jq --raw-output '.data.token' \
> id-token.txt
The ID Token gets written to id-token.txt
. Once you have the token, you can access the services via the API Gateway (the vault token can be discarded). To obtain a new JWT token once the current one is expired, repeat the above snippet of code.
Calling an API on behalf of example user
curl --insecure https://localhost:8443/core-data/api/v3/ping -H \"Authorization: Bearer $(cat id-token.txt)\"\n
Output: {\"apiVersion\" : \"v3\",\"timestamp\":\"Mon May 15 16:45:55 CEST 2023\",\"serviceName\":\"core-data\"}
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#accessing-consul","title":"Accessing Consul","text":"Consul API and UI can be accessed using the consul token (Secret ID). For the snap, token is the value of SecretID
typically placed in a JSON file at /var/snap/edgexfoundry/current/secrets/consul-acl-token/mgmt_token.json
.
Example
To get the token:
sudo cat /var/snap/edgexfoundry/current/secrets/consul-acl-token/mgmt_token.json \
| jq -r '.SecretID' \
> consul-token.txt
The output gets written to consul-token.txt
. Try it out locally:
curl --silent --show-err http://localhost:8500/v1/kv/edgex/v3/core-data/Service/Port -H \"X-Consul-Token:$(cat consul-token.txt)\"\n
Through the API Gateway, we need to pass both the Consul token and the Secret Store token obtained in the Adding API Gateway users examples:
curl --insecure --silent --show-err https://localhost:8443/consul/v1/kv/edgex/v3/core-data/Service/Port -H \"X-Consul-Token:$(cat consul-token.txt)\" -H \"Authorization: Bearer $(cat id-token.txt)\"\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#changing-tls-certificates","title":"Changing TLS certificates","text":"The API Gateway setup generates a self-signed certificate with a short expiration by default.
The JWT authentication token that is consumed by the proxy is sensitive and it is important that measures are taken to ensure that clients do not disclose the JWT to unauthorized parties. For this reason, the default certificate and key should be replaced with a certificate and key that is trusted by connecting clients.
The certificate and key can be replaced locally. They are located at:
/var/snap/edgexfoundry/current/nginx/nginx.crt
/var/snap/edgexfoundry/current/nginx/nginx.key
Changes to the files should be followed by reloading Nginx: sudo snap restart --reload edgexfoundry.nginx
Alternatively, the certificate and key can be replaced using the snapped secrets-config
application. To get the usage help:
edgexfoundry.secrets-config proxy tls -h\n
Refer to the secrets-config proxy documentation.
Example
Given the following files created outside the scope of this document:
server.crt - user-provided certificate (replacing the default)
server.key - user-provided private key (replacing the default)
ca.crt - Certificate Authority certificate (that signed server.crt, directly or indirectly)
For example, to generate a CA and issue a certificate valid for 30 days:
# Generate the Certificate Authority (CA) Private Key
openssl ecparam -name prime256v1 -genkey -noout -out ca.key
# Generate the Certificate Authority Certificate
openssl req -new -x509 -sha256 -key ca.key -out ca.crt -subj "/CN=getting-started-ca"
# Generate the Server Certificate Private Key
openssl ecparam -name prime256v1 -genkey -noout -out server.key
# Generate the Server Certificate Signing Request
openssl req -new -sha256 -key server.key -out server.csr -subj "/CN=localhost"
# Generate the Server Certificate
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 30 -sha256
Perform the following steps:
Copy server.crt
and server.key
to the snap
sudo cp server.crt server.key /var/snap/edgexfoundry/common/\n
We do this to allow temporary access to the files by the confined application. Instead of temporarily adding the files to the snap, the files can be read directly from the root user's home (/root
) or a removable media, after granting the home or removable-media permissions.
Add new certificate files:
sudo edgexfoundry.secrets-config proxy tls \
--targetFolder /var/snap/edgexfoundry/current/nginx \
--inCert /var/snap/edgexfoundry/common/server.crt \
--inKey /var/snap/edgexfoundry/common/server.key
Reload Nginx:
sudo snap restart --reload edgexfoundry.nginx\n
Try it out:
curl --verbose --cacert ca.crt https://localhost:8443/core-data/api/v3/ping\n
The output should include a message indicating that the request is unauthorized. This means that TLS is set up correctly, but the request is missing the required authentication. See Adding API Gateway users. In the TLS information printed by curl, look for the server certificate's issuer and make sure it matches your CA. For example, issuer: CN=getting-started-ca
.
The --cacert
can be omitted if the CA is available in root certificates (e.g. CA-signed or pre-installed CA certificate).
The services inside standalone snaps (e.g. device, app snaps) automatically receive a Secret Store token when:
The edgex-secretstore-token
content interface provides the mechanism to automatically supply tokens to connected snaps.
Execute the following command to check the status of connections:
sudo snap connections edgexfoundry\n
To manually connect the edgexfoundry's plug to a standalone snap's slot:
snap connect edgexfoundry:edgex-secretstore-token <snap>:edgex-secretstore-token\n
Note that the token has a limited expiry time of 1h by default. The connection and service startup should happen within the validity period.
To better understand the snap connections, read the interface management documentation.
Extend the default Secret Store token TTL
The TOKENFILEPROVIDER_DEFAULTTOKENTTL environment variable can be set to override the default time to live (TTL) of the Secret Store tokens. This is useful when the microservice consumers of the tokens are expected to start after a delay that is longer than the default TTL.
This can be achieved in the snap by setting the equivalent tokenfileprovider-defaulttokenttl
config option:
sudo snap set edgexfoundry app-options=true
sudo snap set edgexfoundry apps.security-secretstore-setup.config.tokenfileprovider-defaulttokenttl=72h

# Re-start the oneshot setup service to re-generate tokens:
sudo snap start edgexfoundry.security-secretstore-setup
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#edgex-ui","title":"EdgeX UI","text":"| Installation | Managing Services | Debugging | Source |
For usage instructions, please refer to the Graphical User Interface (GUI) guide.
The service is not started by default. Please refer to configuration and managing services.
Once started, the UI will be reachable locally and by default at: http://localhost:4000
A valid JWT token is required to access the UI; follow Adding API Gateway users steps to generate a token. In development environments, the UI access control can be disabled as described in disabling security.
To enable all the functionalities of the UI, the following services should be running:
For example, to start/install the support services:
sudo snap start edgexfoundry.support-scheduler
sudo snap start edgexfoundry.support-notifications
sudo snap install edgex-ekuiper
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#edgex-ekuiper","title":"EdgeX eKuiper","text":"| Installation | Managing Services | Debugging | Source |
For the documentation of the standalone EdgeX eKuiper snap, visit the README.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#app-service-configurable","title":"App Service Configurable","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-app-service-configurable/current/config/\n\u2514\u2500\u2500 res\n \u251c\u2500\u2500 external-mqtt-trigger\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 functional-tests\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 http-export\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 metrics-influxdb\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 mqtt-export\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 push-to-core\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u2514\u2500\u2500 rules-engine\n \u2514\u2500\u2500 configuration.yaml\n
Filtering devices using snap options
App Service Configurable provides various event filtering options. For example, to filter by the device names Random-Integer-Device
and Random-Binary-Device
using snap options:
snap set edgex-app-service-configurable config.writable-pipeline-executionorder="FilterByDeviceName, SetResponseData"
snap set edgex-app-service-configurable config.writable-pipeline-functions-filterbydevicename-parameters-devicenames="Random-Integer-Device, Random-Binary-Device"
snap set edgex-app-service-configurable config.writable-pipeline-functions-filterbydevicename-parameters-filterout=true
Please refer to App Service Configurable guide for detailed usage instructions.
Profile
Before you can start the service, you must select one of available profiles, using snap options.
For example, to set mqtt-export
profile using the snap CLI:
sudo snap set edgex-app-service-configurable profile=mqtt-export\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#app-rfid-llrp-inventory","title":"App RFID LLRP Inventory","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-app-rfid-llrp-inventory/current/config/\n\u2514\u2500\u2500 app-rfid-llrp-inventory\n \u2514\u2500\u2500 res\n \u2514\u2500\u2500 configuration.yaml\n
Aliases
The aliases need to be provided for the service to work. See Setting the Aliases.
For the snap, this can either be by:
configuration.yaml
file with the correct aliases, before startup| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-gpio/current/config\n\u2514\u2500\u2500 device-gpio\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 device.custom.gpio.yaml\n \u2514\u2500\u2500 profiles\n \u2514\u2500\u2500 device.custom.gpio.yaml\n
GPIO Access
This snap is strictly confined, which means that access to interfaces is subject to various security measures.
On a Linux distribution without snap confinement for GPIO (e.g. Raspberry Pi OS 11), the snap may be able to access the GPIO directly, without any snap interface and manual connections.
On Linux distributions with snap confinement for GPIO such as Ubuntu Core, the GPIO access is possible via the gpio interface, provided by a gadget snap. The official Raspberry Pi Ubuntu Core image includes that gadget. It is NOT possible to use this snap on Linux distributions that have the GPIO confinement but not the interface (e.g. Ubuntu Server 20.04), unless for development purposes.
In development environments, it is possible to install the snap in dev mode (using --devmode
flag which disables security confinement and automatic upgrades) to allow direct GPIO access.
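For example (development environments only, since --devmode disables confinement):

sudo snap install edgex-device-gpio --devmode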
The gpio
interface provides slots for each GPIO channel. The slots can be listed using:
$ sudo snap interface gpio\nname: gpio\nsummary: allows access to specific GPIO pin\nplugs:\n - edgex-device-gpio\nslots:\n - pi:bcm-gpio-0\n - pi:bcm-gpio-1\n - pi:bcm-gpio-10\n ...\n
The slots are not connected automatically. For example, to connect GPIO-17:
$ sudo snap connect edgex-device-gpio:gpio pi:bcm-gpio-17\n
Check the list of connections:
$ sudo snap connections\nInterface Plug Slot Notes\ngpio edgex-device-gpio:gpio pi:bcm-gpio-17 manual\n\u2026\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-modbus","title":"Device Modbus","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-modbus/current/config/\n\u2514\u2500\u2500 device-modbus\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 modbus.test.devices.yaml\n \u2514\u2500\u2500 profiles\n \u2514\u2500\u2500 modbus.test.device.profile.yml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-mqtt","title":"Device MQTT","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-mqtt/current/config/\n\u2514\u2500\u2500 device-mqtt\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 mqtt.test.device.yaml\n \u2514\u2500\u2500 profiles\n \u2514\u2500\u2500 mqtt.test.device.profile.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-rest","title":"Device REST","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-rest/current/config/\n\u2514\u2500\u2500 device-rest\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 sample-devices.yaml\n \u2514\u2500\u2500 profiles\n \u251c\u2500\u2500 sample-image-device.yaml\n \u251c\u2500\u2500 sample-json-device.yaml\n \u2514\u2500\u2500 sample-numeric-device.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-rfid-llrp","title":"Device RFID LLRP","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-rfid-llrp/current/config/\n\u2514\u2500\u2500 device-rfid-llrp\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u251c\u2500\u2500 profiles\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 llrp.device.profile.yaml\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 llrp.impinj.profile.yaml\n \u2514\u2500\u2500 provision_watchers\n \u251c\u2500\u2500 impinj.provision.watcher.yaml\n \u2514\u2500\u2500 llrp.provision.watcher.yaml\n
Subnet setup
The DiscoverySubnets
setting needs to be provided before a device discovery can occur. This can be done in a number of ways:
Using snap set
to set your local subnet information. Example:
sudo snap set edgex-device-rfid-llrp apps.device-rfid-llrp.config.app-custom.discovery-subnets="192.168.10.0/24"

curl -X POST http://localhost:59989/api/v3/discovery
Using a config-provider-snap to set device configuration
Using the auto-configure
command.
This command finds all local network interfaces which are online and non-virtual and sets the value of DiscoverySubnets
in Consul. When running with security enabled, it requires a Consul token, so it needs to be run as follows:
# get Consul ACL token
CONSUL_TOKEN=$(sudo cat /var/snap/edgexfoundry/current/secrets/consul-acl-token/bootstrap_token.json | jq ".SecretID" | tr -d '"')
echo $CONSUL_TOKEN

# start the device service and connect the interfaces required for network interface discovery
sudo snap start edgex-device-rfid-llrp.device-rfid-llrp
sudo snap connect edgex-device-rfid-llrp:network-control
sudo snap connect edgex-device-rfid-llrp:network-observe

# run the network interface discovery, providing the Consul token
edgex-device-rfid-llrp.auto-configure $CONSUL_TOKEN
Device SNMP
| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-snmp/current/config/\n\u2514\u2500\u2500 device-snmp\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 device.snmp.trendnet.TPE082WS.yaml\n \u2514\u2500\u2500 profiles\n \u251c\u2500\u2500 device.snmp.patlite.yaml\n \u251c\u2500\u2500 device.snmp.switch.dell.N1108P-ON.yaml\n \u2514\u2500\u2500 device.snmp.trendnet.TPE082WS.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-usb-camera","title":"Device USB Camera","text":"| Installation | Configuration | Managing Services | Debugging | Source |
This snap includes two services:
The services are not started by default. Please refer to configuration and managing services.
The snap uses the camera interface to access local USB camera devices. The interface management document describes how Snap interfaces are used to control the access to resources.
The default configuration files are installed at:
/var/snap/edgex-device-usb-camera/current/config\n\u251c\u2500\u2500 device-usb-camera\n\u2502 \u2514\u2500\u2500 res\n\u2502 \u251c\u2500\u2500 configuration.yaml\n\u2502 \u251c\u2500\u2500 devices\n\u2502 \u2502 \u251c\u2500\u2500 general.usb.camera.yaml.example\n\u2502 \u2502 \u2514\u2500\u2500 hp.w200.yaml.example\n\u2502 \u251c\u2500\u2500 profiles\n\u2502 \u2502 \u251c\u2500\u2500 general.usb.camera.yaml\n\u2502 \u2502 \u251c\u2500\u2500 hp.w200.yaml.example\n\u2502 \u2502 \u2514\u2500\u2500 jinpei.general.yaml.example\n\u2502 \u2514\u2500\u2500 provision_watchers\n\u2502 \u2514\u2500\u2500 generic.provision.watcher.yaml\n\u2514\u2500\u2500 rtsp-simple-server\n \u2514\u2500\u2500 config.yml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-virtual","title":"Device Virtual","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-virtual/current/config\n\u2514\u2500\u2500 device-virtual\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 devices.yaml\n \u2514\u2500\u2500 profiles\n \u251c\u2500\u2500 device.virtual.binary.yaml\n \u251c\u2500\u2500 device.virtual.bool.yaml\n \u251c\u2500\u2500 device.virtual.float.yaml\n \u251c\u2500\u2500 device.virtual.int.yaml\n \u2514\u2500\u2500 device.virtual.uint.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-onvif-camera","title":"Device ONVIF Camera","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-onvif-camera/current/config\n\u2514\u2500\u2500 device-onvif-camera\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 camera.yaml.example\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 control-plane-device.yaml\n \u251c\u2500\u2500 profiles\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 camera.yaml\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 control-plane.profile.yaml\n \u2514\u2500\u2500 provision_watchers\n \u2514\u2500\u2500 generic.provision.watcher.yaml\n
"},{"location":"getting-started/Ch-GettingStartedUsers/","title":"Getting Started as a User","text":"This section provides instructions for Users to get EdgeX up and running. If you are a Developer, you should read Getting Started as a Developer.
EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability.
You can download EdgeX micro service source code and build your own micro services. However, if you do not have a need to change or add to EdgeX, then you do not need to download source code. Instead, you can download and run the pre-built EdgeX micro service artifacts.
The EdgeX community builds and creates Docker images as well as Snap packages with each release. The community also provides the latest unstable builds (prior to releases).
Please continue by referring to:
Released EdgeX Docker container images are available from Docker Hub. Please refer to the Getting Started using Docker for instructions related to stable releases.
In some cases, it may be necessary to get your EdgeX container images from the Nexus repository. The Linux Foundation manages the Nexus repository for the project.
Warning
Containers used from Nexus are considered \"work in progress\". There is no guarantee that these containers will function properly or function properly with other containers from the current release.
Nexus contains the EdgeX project staging and development container images. In other words, Nexus contains work-in-progress or pre-release images. These pre-release/work-in-progress Docker images are built nightly and made available at the following Nexus location:
nexus3.edgexfoundry.org:10004\n
"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#rationale-to-use-nexus-images","title":"Rationale To Use Nexus Images","text":"Reasons you might want to use container images from Nexus include:
A set of Docker Compose files have been created to allow you to get and use the latest EdgeX service images from Nexus. Find these Nexus \"Nightly Build\" Compose files in the main
branch of the edgex-compose
repository in GitHub. The EdgeX development team provides these Docker Compose files. As with the EdgeX release Compose files, you will find several different Docker Compose files that allow you to get the type of EdgeX instance set up based on:
Warning
The \"Nightly Build\" images are provided as-is and may not always function properly or with other EdgeX services. Use with caution and typically only if you are a developer/contributor to EdgeX. These images represent the latest development work and may not have been thoroughly tested or integrated.
"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-nexus-images","title":"Using Nexus Images","text":"The operations to pull the images and run the Nexus Repository containers are the same as when using EdgeX images from Docker Hub (see Getting Started using Docker).
To get container images from the Nexus Repository, in a command terminal, change directories to the location of your downloaded Nexus Docker Compose yaml. Rename the file to docker-compose.yml. Then run the following command in the terminal to pull (fetch) and then start the EdgeX Nexus-image containers.
docker compose up -d\n
"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-a-single-nexus-image","title":"Using a Single Nexus Image","text":"In some cases, you may only need to use a single image from Nexus while other EdgeX services are created from the Docker Hub images. In this case, you can simply replace the image location for the selected image in your original Docker Compose file. The address of Nexus is nexus3.edgexfoundry.org at port 10004. So, if you wished to use the EdgeX core data image from Nexus, you would replace the name and location of the core data image edgexfoundry/core-data:2.0.0
with nexus3.edgexfoundry.org:10004/core-data:latest
in the Compose file.
Note
The example above replaces the Ireland core data service from Docker Hub with the latest core data image in Nexus.
"},{"location":"getting-started/native/Ch-BuildRunNative/","title":"Native Build and Run","text":"There are instances, in both development as well as production, where you need to run EdgeX \"natively.\" That is, you want to run EdgeX on the native operating system / hardware outside of any emulation, container platform, Docker, Docker Compose, Snaps, etc.. Per PC Magazine, running natively
\"is to execute software written for the computer's natural, basic mode of operation; for example, a program written for Windows running under Windows. Contrast with running a program under some type of emulation or simulation\".
The following guides will assist you in building and running EdgeX natively.
Alert
Please note that the rest of the EdgeX documentation, outside of these native build and run guides, focuses on running EdgeX in Docker containers or EdgeX snaps. Using containers or snaps are usually the easiest and preferred way to run EdgeX - especially when you are not a developer and not familiar with operating system commands, compiling code, building program artifacts, and running programs in an operating system.
Therefore, these native build and run guides do not contain every aspect or option for running EdgeX in native environments. They are meant as a quick start for more seasoned developers or administrators comfortable with running a system by setting up build tools/environments, pulling source code, building from source and running the program outputs (executable artifacts) of the build without the benefits and ease that container platforms and similar technology bring.
Warning
These build and run guides offer some assistance to seasoned developers or administrators to help build and run EdgeX in environments not always supported by the project. EdgeX was built to be platform independent. As such, we believe most of EdgeX can run on almost any environment (on any hardware architecture and almost any operating system). However, there are elements of the EdgeX platform that will not run on all operating systems. For example, Redis will not run on Windows OS natively and some device services are only capable of running on Linux distributions or ARM64 platforms.
Existence of these guides does not imply current or future support. Use of these guides should be used with care and with an understanding that they are the community's best effort to provide advanced developers with the means to begin their own custom EdgeX development.
"},{"location":"getting-started/native/Ch-BuildRunNative/#guides","title":"Guides","text":"Warning
This build and run guide offers some assistance to seasoned developers or administrators to help build and run EdgeX on Linux OS with ARM 32 hardware natively (not using Docker and not running with snaps). Running on ARM 32 is not supported by the project. EdgeX was built to be platform independent. As such, we believe most of EdgeX can run on almost any environment (on any hardware architecture and almost any operating system).
Existence of this guide does not imply current or future support. Use of this guide should be used with care and with an understanding that it is the community's best effort to provide advanced developers with the means to begin their own custom EdgeX development and execution on Linux distributions running on ARM 32 hardware.
This build and run guide shows you how to get, compile/build, execute and test EdgeX (including the core and supporting services, the configurable application service, eKuiper rules engine and a virtual device service) in Linux on ARM 32 hardware. Specifically, this guide was done using a Raspberry Pi 3 running Raspberry Pi OS - version 5.15. For the most part, the guide should assist in building and running EdgeX in almost any Linux distribution on almost any ARM 32 hardware, but some instructions will vary based on the nuances of the underlying distribution.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#environment","title":"Environment","text":"Building and running EdgeX on Linux natively will require you have:
sudo
or root accessThe following software is assumed to already be installed and available on the host platform. Follow the referenced guides if you need to install or setup this software. Please note, the commands to check for the required software documented below are correct, but the actual results of the check may vary per OS distribution and version.
Go Lang, version 1.17 or later as of the Kamakura release
How to check for existence and version on your machine
GCC Build Essentials (for C++)
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Consul, version 1.10 or later as of the Kamakura release
How to check for existence and version on your machine
Redis,version 6.2 or later as of the Kamakura release
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Git
How to check for existence and version on your machine
In this guide, you will be building and running EdgeX in \"non-secure\" mode. That is, you will be building and running the EdgeX platform without the security services and security configuration. An environmental variable, EDGEX_SECURITY_SECRET_STORE
, is set to indicate whether the EdgeX services are expected to initialize and use the secure secret store. By default, this variable is set to true
. Prior to building and running EdgeX, set this environment variable to false.
export EDGEX_SECURITY_SECRET_STORE=false
This can be done in the terminal from which you build and run EdgeX or you can set it in your user's profile to make an environment persist across terminal sessions. See How to Set Environment Variables in Linux for assistance.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#download-edgex-source","title":"Download EdgeX Source","text":"In order to build and run EdgeX micro services, you will first need to get the source code for the platform. Using git, clone the EdgeX repositories with the following commands:
Tip
You may wish to create a new folder and then issue these git commands from that folder so that all EdgeX code is neatly stored in a single folder.
git clone https://github.com/edgexfoundry/edgex-go.git\ngit clone https://github.com/edgexfoundry/device-virtual-go.git\ngit clone https://github.com/edgexfoundry/app-service-configurable.git\ngit clone https://github.com/lf-edge/ekuiper.git\ngit clone https://github.com/edgexfoundry/edgex-ui-go.git\n
Note that a new folder, named for the repository, gets created containing source code with each of the git clones above.
Warning
These git clone operations pull from the main branch of the EdgeX repositories. This is the current working branch in EdgeX development. See the git clone documentation for how to clone a specific named release branch or version tag.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-edgex-services","title":"Build EdgeX Services","text":"With the source code, you can now build the EdgeX services, GUI, as well as eKuiper rules engine.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-core-and-supporting-services","title":"Build Core and Supporting Services","text":"Most of the services are in the edgex-go
folder. This folder contains the code for the core and supporting services. A single command in this repository will build several of the services.
Enter the edgex-go
folder and issue the make build
command as shown below.
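As a minimal sketch of those commands (assuming the repositories were cloned into your current working directory):
cd edgex-go
make build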
Warning
Depending on the amount of memory your system has, building the services in edgex-go
can take several minutes (on a Raspberry Pi 3, the edgex-go services can take as much as 30-45 minutes to build, and a device service takes about 10-15 minutes).
Note
Building the services in the edgex-go folder will also build some services (such as the security services) that are not used in this guide, but issuing a single command is the easiest way to build the needed services without building them one by one.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-the-virtual-device-service","title":"Build the Virtual Device Service","text":"The virtual device service simulates devices/sensors sending data to EdgeX as if it was a \"thing\". This guide uses the virtual device service to exemplify how other devices services can be built and run.
Enter the device-virtual-go
folder and issue the make build
command as shown below.
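Again as a sketch (the built executable lands in the cmd subfolder, which is used later in this guide when starting the service):
cd device-virtual-go
make build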
The configurable application service helps prepare device/sensor data for enterprise or cloud systems. It also prepares data for use by the rules engine, eKuiper.
Enter the app-service-configurable
folder and issue the make build
command as shown below.
eKuiper, a sister Linux Foundation project under the LF Edge umbrella, is the reference implementation rules engine for EdgeX.
Enter the ekuiper
folder and issue the make build_with_edgex
command as shown below.
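A sketch of the eKuiper build step (run from the folder into which the repositories were cloned):
cd ekuiper
make build_with_edgex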
Note
eKuiper also provides pre-built binaries that can be downloaded and used without the need to build from source.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-the-gui","title":"Build the GUI","text":"EdgeX provides a graphical user interface for exploring a single instance of the EdgeX platform. The GUI makes it easier to work with EdgeX and see sample data coming from sensors. It provides a means to check that EdgeX is working correctly, monitor EdgeX and even make some configuration changes.
Enter the edgex-ui-go
folder and issue the make build
command as shown below.
Provided everything built correctly and without issue, you can now start your EdgeX services one at a time. First make sure Redis Server is running. If Redis is not running, start it before the other services. If it is running, you can start each of the EdgeX services in order as listed below.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-consul","title":"Start Consul","text":"Start Consul Agent with the following command.
nohup consul agent -ui -bootstrap -server -client 0.0.0.0 -data-dir=tmp/consul &\n
The nohup
is used to execute the command and ignore all SIGHUP (hangup) signals. The &
says to execute the process in the background. Both nohup
and &
will be used to run each of the services so that the same terminal can be used and the output will be directed to local nohup.out log files.
If Consul is running correctly, you should be able to reach the Consul UI through a browser at http://(host address):8500
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-core-metadata","title":"Start Core Metadata","text":"Each of core and supporting EdgeX services are located in edgex-go/cmd
under a subfolder by the service name. In the first case, core-metadate is located in edgex-go/cmd/core-metadata
. Change directories to the core-metadata service subfolder and then run the executable found in the subfolder with -cp
and -registry
command line options as shown below.
cd edgex-go/cmd/core-metadata/
nohup ./core-metadata -cp=consul.http://localhost:8500 -registry &
The -cp=consul.http://localhost:8500
command line parameter tells core-metadata to use Consul and where to find Consul running. The -registry
command line parameter tells core-metadata to use (and register with) the registry service. Both of these command line parameters will be used when launching all EdgeX services.
In a similar fashion, enter each of the other core and supporting service folders in edgex-go/cmd
and launch the services.
cd ../core-data
nohup ./core-data -cp=consul.http://localhost:8500 -registry &
cd ../core-command
nohup ./core-command -cp=consul.http://localhost:8500 -registry &
cd ../support-notifications/
nohup ./support-notifications -cp=consul.http://localhost:8500 -registry &
cd ../support-scheduler/
nohup ./support-scheduler -cp=consul.http://localhost:8500 -registry &
Tip
If you still have the Consul UI up, you should see each of the EdgeX core and supporting services listed in Consul's Services
page with green check marks next to them suggesting they are running.
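If you prefer the command line over the Consul UI, one way to list the registered services is Consul's standard catalog API (a sketch; the registry must be reachable on the default port 8500):
curl http://localhost:8500/v1/catalog/services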
The configurable application service is located in the root of app-service-configurable
folder.
The configurable application service is started in a similar way as the other EdgeX services. The configurable application service is going to be used to route data to the rules engine. Therefore, an additional command line parameter (p
) is added to its launch command to tell the app service to use the rules engine configuration and profile.
nohup ./app-service-configurable -cp=consul.http://localhost:8500 -registry -p=rules-engine &\n
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-the-virtual-device-service","title":"Start the Virtual Device Service","text":"The virtual device service is also started in similar way as the other EdgeX services. The virtual device service manufactures data as if it was to come from a sensor and sends that data into the rest of EdgeX. By default, the virtual device service will generate random numbers (integers, unsigned integers, floats), booleans and even binary data as simulated sensor data. The virtual device service is located in the device-virtual-go/cmd
folder.
Change directories to the virtual device service's cmd
folder and then launch the service with the command shown below.
nohup ./device-virtual -cp=consul.http://localhost:8500 -registry &\n
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-the-gui","title":"Start the GUI","text":"The EdgeX graphical user interface (GUI) provides an easy to use visual tool to monitor data passing through EdgeX services. It also provides some capability to change an EdgeX instance's configuration or metadata. The EdgeX GUI is located in the edgex-ui-go/cmd/edgex-ui-server
folder.
Change directories to the GUI's cmd/edgex-ui-server
folder and then launch the GUI with the command shown below.
nohup ./edgex-ui-server &\n
If the GUI is running correctly, you should be able to reach the GUI through a browser at http://(host address):4000. It may take a few seconds for the GUI to initialize once you hit the URL.
Note
Some elements of the GUI will not work as you do not have all available EdgeX services running. Notably, the System Management service and its executor are not running so the System view of the GUI will display an error. By default, the System Management service and its executor operate by checking on the other services memory, CPU, etc. via Docker Stats. In this case, since you are not running Docker containers, the System Management service would not function.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-ekuiper","title":"Start eKuiper","text":"eKuiper is the reference implementation rules engine that is typically run with EdgeX by default. It is a lightweight, easy to use rules engine. Rules can be established using SQL. It is a sister project under the LF Edge umbrella project.
eKuiper's executable (called kuiperd
) is located in the ekuiper/_build/kuiper-*version*-linux-arm/bin
folder. Note that the location is in a _build
folder subfolder created when you built eKuiper. The subfolder is named for the eKuiper version, OS, architecture.
Change directories to the ekuiper/_build/kuiper-*version*-linux-arm/bin
folder.
As a third-party component, eKuiper can be set up to work with many streams of data from various systems or engines. It must be told where it will receive data and how to handle/treat the incoming data. Therefore, before launching eKuiper, execute the following exports of environment variables to tell eKuiper where to receive data coming from the EdgeX configurable application service (via the EdgeX message bus).
export CONNECTION__EDGEX__REDISMSGBUS__PORT=6379
export CONNECTION__EDGEX__REDISMSGBUS__PROTOCOL=redis
export CONNECTION__EDGEX__REDISMSGBUS__SERVER=localhost
export CONNECTION__EDGEX__REDISMSGBUS__TYPE=redis
export EDGEX__DEFAULT__PORT=6379
export EDGEX__DEFAULT__PROTOCOL=redis
export EDGEX__DEFAULT__SERVER=localhost
export EDGEX__DEFAULT__TOPIC=rules-events
export EDGEX__DEFAULT__TYPE=redis
export KUIPER__BASIC__CONSOLELOG="true"
export KUIPER__BASIC__RESTPORT=59720
Setting these environment variables must be done in the same terminal from which you plan to execute the eKuiper server.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#run-ekuiper","title":"Run eKuiper","text":"From the ekuiper/_build/kuiper-*version*-linux-arm
folder, and with the environmental variables set, launch eKuiper's server with the command shown below.
nohup ./bin/kuiperd &\n
Warning
There is both a kuiper
and a kuiperd
executable in the bin
folder. Make sure you are running kuiperd
.
If eKuiper is running correctly, the RuleEngine tab in the EdgeX GUI should offer the ability to define eKuiper Streams and Rules as shown below.
If eKuiper is not running correctly, or if the environment variables were set incorrectly, then you will see an error screen like the one shown below.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#test-and-explore-edgex","title":"Test and Explore EdgeX","text":"With EdgeX up and running (inclusive of Consul, Redis, and eKuiper), you can try these quick tests to see that EdgeX is running correctly.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#see-sensor-data-flowing-through-edgex","title":"See sensor data flowing through EdgeX","text":"You have already been using Consul and the EdgeX GUI to check on some items of EdgeX in this tutorial. You can use the EdgeX GUI to further check that sensor data is flowing through the system.
In a browser, go to http://(host address):4000. Remember, it may take a few seconds for the GUI to initialize once you hit the URL. Once the GUI displays, find and click on the DataCenter
link on the left hand navigation bar (highlighted below).
The DataCenter
display allows you to see the EdgeX event/readings as they are persisted by the core data service to Redis. Simply press the >Start
button to see the \"stream\" of simulated sensor data that was generated by the virtual device service and sent to EdgeX. The simulated data may take a second or two to start to display in the EventDataStream
area of the GUI.
Press the Pause
button to stop this display of data. Notice that you can see the EdgeX Events (and associated Readings) or just the Readings with the two tabs on this DataCenter
display.
Each EdgeX micro service has a REST API associated with it. You can use curl or a browser to test that the service is up using its ping
API. Below are curl commands to \"ping\" both core data and core metadata.
curl http://localhost:59880/api/v3/ping
curl http://localhost:59881/api/v3/ping
Each service should respond with JSON data to indicate it is able to respond to requests. Below is an example response from the core metadata \"Ping\" request.
{\"apiVersion\":\"v2\",\"timestamp\":\"Thu May 12 23:25:04 UTC 2022\",\"serviceName\":\"core-metadata\"}\n
See the service port reference page for a list of service ports to check the ping
API of other services.
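For example, the other core and supporting services can be pinged the same way; the ports below are assumptions based on the default EdgeX port assignments and may differ if you changed configuration:
curl http://localhost:59882/api/v3/ping   # core-command
curl http://localhost:59860/api/v3/ping   # support-notifications (default port assumed)
curl http://localhost:59861/api/v3/ping   # support-scheduler (default port assumed)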
As an added test, use curl to get the count of the number of events persisted by core data with the command below (you can also use a browser with the URL to get the same).
curl http://localhost:59880/api/v3/event/count\n
The response will indicate a \"count\" of events stored (in this case 6270).
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":6270}\n
Info
The full set of APIs for each service can be found in SwaggerHub. You can use the documentation to test other APIs as well.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#set-up-an-ekuiper-stream-and-rule","title":"Set up an eKuiper Stream and Rule","text":"While eKuiper is running, it is currently sitting idle since it has no rules on which to watch for data and execute commands. Set up a simple eKuiper rule to log any sensor data it sees. Use the GUI tool to establish the eKuiper stream
and rule
. Learn about Streams and Rules in the eKuiper documentation.
In the GUI, click on the Rules Engine
link in the navigation bar on the left. Then, click on the Add
button on the Stream tab. Allow the default EdgeX stream to be created by hitting the Submit
button.
Next, click on the Rules
tab on the Rules Engine
page. Then click on the Add
button on the Rules
tab in order to create a new eKuiper rule. In the form that appears, enter any name for the rule (TestRule
is used below) in the Name field. Enter SELECT * FROM EdgeXStream
in the RuleSQL field and add a log
action - all as shown below in the form. Hit the Submit
button when you have your rule established.
With the stream and rule defined, you have asked eKuiper to fire a log entry each time it sees a new EdgeX event/reading come through it. In the future, you could have eKuiper look for particular events/readings (e.g., thermostat readings above a specified temperature) produced by a particular sensor in order to issue commands to some device. But for now, you can check the eKuiper log to see that the rule engine is working and publishing a message to the log with each event/reading.
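If you would rather not use the GUI, roughly the same stream and rule can be created directly against eKuiper's REST API, which listens on port 59720 per the KUIPER__BASIC__RESTPORT setting above. This is only a sketch; the exact stream definition syntax can vary by eKuiper version:
curl -X POST http://localhost:59720/streams \
  -d '{"sql":"create stream EdgeXStream () WITH (FORMAT=\"JSON\", TYPE=\"edgex\")"}'
curl -X POST http://localhost:59720/rules \
  -d '{"id":"TestRule","sql":"SELECT * FROM EdgeXStream","actions":[{"log":{}}]}'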
In the ekuiper/_build/kuiper-*version*-linux-arm/log
folder, you will find a stream.log
file.
If you use Linux tail
, you can see that the eKuiper rules engine is firing a log entry for each virtual device service record that flows through EdgeX. Issue the following command to see the log entries occur in real time:
tail -f stream.log\n
Info
Seeing the eKuiper rules engine write a log entry to a file for each EdgeX event/reading that comes through confirms that the entire EdgeX system is working properly.
With the nohup
command on each service, the log file contents are redirected to a file (nohup.out
) in the directory where you started each service. If you find that a service does not appear to be running, or if it is running but not working correctly, check the nohup.out
file for any errors or issues. In the example below, the core data's nohup.out
log file is explored.
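For example, a couple of quick ways to inspect core data's log from the terminal (paths assume the folder layout used in this guide):
tail -n 100 edgex-go/cmd/core-data/nohup.out
grep -i error edgex-go/cmd/core-data/nohup.out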
This build and run guide shows you how to get, compile/build, execute and test EdgeX (including the core and supporting services, the configurable application service, eKuiper rules engine and a virtual device service) in Linux on x86 or x86_64 hardware. Specifically, this guide was done using Ubuntu 20.04. For the most part, the guide should assist in building and running EdgeX in almost any Linux distribution, but some instructions will vary based on the nuances of the underlying distribution.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#environment","title":"Environment","text":"Building and running EdgeX on Linux natively will require you have:
sudo
accessThe following software is assumed to already be installed and available on the host platform. Follow the referenced guides if you need to install or setup this software.
Go Lang, version 1.17 or later as of the Kamakura release
How to check for existence and version on your machine
GCC Build Essentials (for C++)
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Consul, version 1.10 or later as of the Kamakura release
How to check for existence and version on your machine
Redis, version 6.2 or later as of the Kamakura release
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Git
How to check for existence and version on your machine
In this guide, you will be building and running EdgeX in "non-secure" mode. That is, you will be building and running the EdgeX platform without the security services and security configuration. An environment variable, EDGEX_SECURITY_SECRET_STORE
, is set to indicate whether the EdgeX services are expected to initialize and use the secure secret store. By default, this variable is set to true
. Prior to building and running EdgeX, set this environment variable to false.
export EDGEX_SECURITY_SECRET_STORE=false
This can be done in the terminal from which you build and run EdgeX or you can set it in your user's profile to make an environment persist across terminal sessions. See How to Set Environment Variables in Linux for assistance.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#download-edgex-source","title":"Download EdgeX Source","text":"In order to build and run EdgeX micro services, you will first need to get the source code for the platform. Using git, clone the EdgeX repositories with the following commands:
Tip
You may wish to create a new folder and then issue these git commands from that folder so that all EdgeX code is neatly stored in a single folder.
git clone https://github.com/edgexfoundry/edgex-go.git
git clone https://github.com/edgexfoundry/device-virtual-go.git
git clone https://github.com/edgexfoundry/app-service-configurable.git
git clone https://github.com/lf-edge/ekuiper.git
git clone https://github.com/edgexfoundry/edgex-ui-go.git
Note that a new folder, named for the repository, gets created containing source code with each of the git clones above.
Warning
These git clone operations pull from the main branch of the EdgeX repositories. This is the current working branch in EdgeX development. See the git clone documentation for how to clone a specific named release branch or version tag.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-edgex-services","title":"Build EdgeX Services","text":"With the source code, you can now build the EdgeX services, GUI, as well as eKuiper rules engine.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-core-and-supporting-services","title":"Build Core and Supporting Services","text":"Most of the services are in the edgex-go
folder. This folder contains the code for the core and supporting services. A single command in this repository will build several of the services.
Enter the edgex-go
folder and issue the make build
command as shown below.
Warning
Depending on the amount of memory your system has, building the services in edgex-go
can take several minutes.
Note
Building the services in the edgex-go folder will also build some services (such as the security services) that are not used in this guide, but issuing a single command is the easiest way to build the needed services without building them one by one.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-the-virtual-device-service","title":"Build the Virtual Device Service","text":"The virtual device service simulates devices/sensors sending data to EdgeX as if it was a \"thing\". This guide uses the virtual device service to exemplify how other devices services can be built and run.
Enter the device-virtual-go
folder and issue the make build
command as shown below.
The configurable application service helps prepare device/sensor data for enterprise or cloud systems. It also prepares data for use by the rules engine, eKuiper.
Enter the app-service-configurable
folder and issue the make build
command as shown below.
eKuiper, a sister Linux Foundation project under the LF Edge umbrella, is the reference implementation rules engine for EdgeX.
Enter the ekuiper
folder and issue the make build_with_edgex
command as shown below.
Note
eKuiper also provides pre-built binaries that can be downloaded and used without the need to build from source.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-the-gui","title":"Build the GUI","text":"EdgeX provides a graphical user interface for exploring a single instance of the EdgeX platform. The GUI makes it easier to work with EdgeX and see sample data coming from sensors. It provides a means to check that EdgeX is working correctly, monitor EdgeX and even make some configuration changes.
Enter the edgex-ui-go
folder and issue the make build
command as shown below.
Provided everything built correctly and without issue, you can now start your EdgeX services one at a time. First make sure Redis Server is running. If Redis is not running, start it before the other services. If it is running, you can start each of the EdgeX services in order as listed below.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-consul","title":"Start Consul","text":"Start Consul Agent with the following command.
nohup consul agent -ui -bootstrap -server -client 0.0.0.0 -data-dir=tmp/consul &\n
The nohup
is used to execute the command and ignore all SIGHUP (hangup) signals. The &
says to execute the process in the background. Both nohup
and &
will be used to run each of the services so that the same terminal can be used and the output will be directed to local nohup.out log files.
If Consul is running correctly, you should be able to reach the Consul UI through a browser at http://(host address):8500
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-core-metadata","title":"Start Core Metadata","text":"Each of core and supporting EdgeX services are located in edgex-go/cmd
under a subfolder by the service name. In the first case, core-metadate is located in edgex-go/cmd/core-metadata
. Change directories to the core-metadata service subfolder and then run the executable found in the subfolder with -cp
and -registry
command line options as shown below.
cd edgex-go/cmd/core-metadata/
nohup ./core-metadata -cp=consul.http://localhost:8500 -registry &
The -cp=consul.http://localhost:8500
command line parameter tells core-metadata to use Consul and where to find Consul running. The -registry
command line parameter tells core-metadata to use (and register with) the registry service. Both of these command line parameters will be used when launching all EdgeX services.
In a similar fashion, enter each of the other core and supporting service folders in edgex-go/cmd
and launch the services.
cd ../core-data
nohup ./core-data -cp=consul.http://localhost:8500 -registry &
cd ../core-command
nohup ./core-command -cp=consul.http://localhost:8500 -registry &
cd ../support-notifications/
nohup ./support-notifications -cp=consul.http://localhost:8500 -registry &
cd ../support-scheduler/
nohup ./support-scheduler -cp=consul.http://localhost:8500 -registry &
Tip
If you still have the Consul UI up, you should see each of the EdgeX core and supporting services listed in Consul's Services
page with green check marks next to them suggesting they are running.
The configurable application service is located in the root of app-service-configurable
folder.
The configurable application service is started in a similar way as the other EdgeX services. The configurable application service is going to be used to route data to the rules engine. Therefore, an additional command line parameter (p
) is added to its launch command to tell the app service to use the rules engine configuration and profile.
nohup ./app-service-configurable -cp=consul.http://localhost:8500 -registry -p=rules-engine &\n
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-the-virtual-device-service","title":"Start the Virtual Device Service","text":"The virtual device service is also started in similar way as the other EdgeX services. The virtual device service manufactures data as if it was to come from a sensor and sends that data into the rest of EdgeX. By default, the virtual device service will generate random numbers (integers, unsigned integers, floats), booleans and even binary data as simulated sensor data. The virtual device service is located in the device-virtual-go/cmd
folder.
Change directories to the virtual device service's cmd
folder and then launch the service with the command shown below.
nohup ./device-virtual -cp=consul.http://localhost:8500 -registry &\n
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-the-gui","title":"Start the GUI","text":"The EdgeX graphical user interface (GUI) provides an easy to use visual tool to monitor data passing through EdgeX services. It also provides some capability to change an EdgeX instance's configuration or metadata. The EdgeX GUI is located in the edgex-ui-go/cmd/edgex-ui-server
folder.
Change directories to the GUI's cmd/edgex-ui-server
folder and then launch the GUI with the command shown below.
nohup ./edgex-ui-server &\n
If the GUI is running correctly, you should be able to reach the GUI through a browser at http://(host address):4000. It may take a few seconds for the GUI to initialize once you hit the URL.
Note
Some elements of the GUI will not work as you do not have all available EdgeX services running. Notably, the System Management service and its executor are not running so the System view of the GUI will display an error. By default, the System Management service and its executor operate by checking on the other services memory, CPU, etc. via Docker Stats. In this case, since you are not running Docker containers, the System Management service would not function.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-ekuiper","title":"Start eKuiper","text":"eKuiper is the reference implementation rules engine that is typically run with EdgeX by default. It is a lightweight, easy to use rules engine. Rules can be established using SQL. It is a sister project under the LF Edge umbrella project.
eKuiper's executable (called kuiperd
) is located in the ekuiper/_build/kuiper-*version*-linux-amd64/bin
folder. Note that the location is in a _build
folder subfolder created when you built eKuiper. The subfolder is named for the eKuiper version, OS, architecture.
Change directories to the ekuiper/_build/kuiper-*version*-linux-amd64/bin
folder.
As a third-party component, eKuiper can be set up to work with many streams of data from various systems or engines. It must be told where it will receive data and how to handle/treat the incoming data. Therefore, before launching eKuiper, execute the following exports of environment variables to tell eKuiper where to receive data coming from the EdgeX configurable application service (via the EdgeX message bus).
export CONNECTION__EDGEX__REDISMSGBUS__PORT=6379
export CONNECTION__EDGEX__REDISMSGBUS__PROTOCOL=redis
export CONNECTION__EDGEX__REDISMSGBUS__SERVER=localhost
export CONNECTION__EDGEX__REDISMSGBUS__TYPE=redis
export EDGEX__DEFAULT__PORT=6379
export EDGEX__DEFAULT__PROTOCOL=redis
export EDGEX__DEFAULT__SERVER=localhost
export EDGEX__DEFAULT__TOPIC=rules-events
export EDGEX__DEFAULT__TYPE=redis
export KUIPER__BASIC__CONSOLELOG="true"
export KUIPER__BASIC__RESTPORT=59720
Setting these environment variables must be done in the same terminal from which you plan to execute the eKuiper server.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#run-ekuiper","title":"Run eKuiper","text":"From the ekuiper/_build/kuiper-*version*-linux-amd64
folder, and with the environmental variables set, launch eKuiper's server with the command shown below.
nohup ./bin/kuiperd &\n
Warning
There is both a kuiper
and a kuiperd
executable in the bin
folder. Make sure you are running kuiperd
.
If eKuiper is running correctly, the RuleEngine tab in the EdgeX GUI should offer the ability to define eKuiper Streams and Rules as shown below.
If eKuiper is not running correctly, or if the environment variables were set incorrectly, then you will see an error screen like the one shown below.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#test-and-explore-edgex","title":"Test and Explore EdgeX","text":"With EdgeX up and running (inclusive of Consul, Redis, and eKuiper), you can try these quick tests to see that EdgeX is running correctly.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#see-sensor-data-flowing-through-edgex","title":"See sensor data flowing through EdgeX","text":"You have already been using Consul and the EdgeX GUI to check on some items of EdgeX in this tutorial. You can use the EdgeX GUI to further check that sensor data is flowing through the system.
In a browser, go to http://(host address):4000. Remember, it may take a few seconds for the GUI to initialize once you hit the URL. Once the GUI displays, find and click on the DataCenter
link on the left hand navigation bar (highlighted below).
The DataCenter
display allows you to see the EdgeX event/readings as they are persisted by the core data service to Redis. Simply press the >Start
button to see the \"stream\" of simulated sensor data that was generated by the virtual device service and sent to EdgeX. The simulated data may take a second or two to start to display in the EventDataStream
area of the GUI.
Press the Pause
button to stop this display of data. Notice that you can see the EdgeX Events (and associated Readings) or just the Readings with the two tabs on this DataCenter
display.
Each EdgeX micro service has a REST API associated with it. You can use curl or a browser to test that the service is up using its ping
API. Below are curl commands to \"ping\" both core data and core metadata.
curl http://localhost:59880/api/v3/ping
curl http://localhost:59881/api/v3/ping
Each service should respond with JSON data to indicate it is able to respond to requests. Below is an example response from the core metadata \"Ping\" request.
{\"apiVersion\":\"v2\",\"timestamp\":\"Thu May 12 23:25:04 UTC 2022\",\"serviceName\":\"core-metadata\"}\n
See the service port reference page for a list of service ports to check the ping
API of other services.
As an added test, use curl to get the count of the number of events persisted by core data with the command below (you can also use a browser with the URL to get the same).
curl http://localhost:59880/api/v3/event/count\n
The response will indicate a \"count\" of events stored (in this case 6270).
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":6270}\n
Info
The full set of APIs for each service can be found in SwaggerHub. You can use the documentation to test other APIs as well.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#set-up-an-ekuiper-stream-and-rule","title":"Set up an eKuiper Stream and Rule","text":"While eKuiper is running, it is currently sitting idle since it has no rules on which to watch for data and execute commands. Set up a simple eKuiper rule to log any sensor data it sees. Use the GUI tool to establish the eKuiper stream
and rule
. Learn about Streams and Rules in the eKuiper documentation.
In the GUI, click on the Rules Engine
link in the navigation bar on the left. Then, click on the Add
button on the Stream tab. Allow the default EdgeX stream to be created by hitting the Submit
button.
Next, click on the Rules
tab on the Rules Engine
page. Then click on the Add
button on the Rules
tab in order to create a new eKuiper rule. In the form that appears, enter any name for the rule (TestRule
is used below) in the Name field. Enter SELECT * FROM EdgeXStream
in the RuleSQL field and add a log
action - all as shown below in the form. Hit the Submit
button when you have your rule established.
With the stream and rule defined, you have asked eKuiper to fire a log entry each time it sees a new EdgeX event/reading come through it. In the future, you could have eKuiper look for particular events/readings (e.g., thermostat readings above a specified temperature) produced by a particular sensor in order to issue commands to some device. But for now, you can check the eKuiper log to see that the rule engine is working and publishing a message to the log with each event/reading.
In the ekuiper/_build/kuiper-*version*-linux-amd64/log
folder, you will find a stream.log
file.
If you use Linux tail
, you can see that the eKuiper rules engine is firing a log entry for each virtual device service record that flows through EdgeX. Issue the following command to see the log entries occur in real time:
tail -f stream.log\n
Info
Seeing the eKuiper rules engine write a log entry to a file for each EdgeX event/reading that comes through confirms that the entire EdgeX system is working properly.
With the nohup
command on each service, the log file contents are redirected to a file (nohup.out
) in the directory where you started each service. If you find that a service does not appear to be running, or if it is running but not working correctly, check the nohup.out
file for any errors or issues. In the example below, the core data's nohup.out
log file is explored.
Warning
This build and run guide offers some assistance to seasoned developers or administrators to help build and run EdgeX on Windows natively (not using Docker and not running on Windows Subsystem for Linux ) but running natively on Windows is not supported by the project. EdgeX was built to be platform independent. As such, we believe most of EdgeX can run on almost any environment (on any hardware architecture and almost any operating system). However, there are elements of the EdgeX platform that will not run natively on Windows. Specifically, Redis, Kong and eKuiper will not run on Windows natively. Additionally, there are a number of device services that will not work on native Windows. In these instances, developers will need to find workarounds for services or run them outside of Windows and access them across the network.
Existence of this guide does not imply current or future support. The guide should be used with care and with the understanding that it represents the community's best effort to give advanced developers a means to begin their own custom EdgeX development and execution on Windows.
This build and run guide shows you how to get, compile/build, execute and test EdgeX (including the core and supporting services, the configurable application service, and a virtual device service) on Windows x86_64 hardware. Specifically, this guide was done using Windows 11. It is believed that this same guide works for Windows 10.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#environment","title":"Environment","text":"Building and running EdgeX on Windows natively will require you have:
The following software is assumed to already be installed and available on the host platform. Follow the referenced guides if you need to install or set up this software.
Go Lang, version 1.17 or later as of the Kamakura release
How to check for existence and version on your machine
Consul, version 1.10 or later as of the Kamakura release
How to check for existence and version on your machine
Git for Windows version 2.10 (that provides a BASH emulation to run Git from the command line)
How to check for existence and version on your machine
You may also need GCC (for C++, depending on whether services you are creating have or require C/C++ elements) and Make. These can be provided via a variety of tools/packages in Windows. Some options include use of:
Redis will not run on Windows, but it is required in order to run EdgeX. Your Windows platform must be able to connect to a Redis instance on another platform via TCP/IP on port 6379 (by default). Use Redis version 6.2 or later as of the Kamakura release. As an example, see How to install and configure Redis on Ubuntu 20.04.
Because EdgeX on your Windows platform will access Redis on another host, Redis must be configured to accept traffic from other machines; you'll need to allow access from other addresses (see Open Redis port for remote connections). Additionally, you will need to configure EdgeX to use a username/password to access Redis, or set Redis into unprotected mode (see Turn off 'protected-mode' in Redis). A rough sketch of these Redis-side changes follows.
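The sketch below assumes a Debian/Ubuntu-style install on the Redis host with its configuration at /etc/redis/redis.conf; file locations and service names vary by distribution, and opening Redis this way should only be done on a trusted network:
sudo sed -i 's/^bind 127.0.0.1.*/bind 0.0.0.0/' /etc/redis/redis.conf
sudo sed -i 's/^protected-mode yes/protected-mode no/' /etc/redis/redis.conf
sudo systemctl restart redis-server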
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#prepare-your-environment","title":"Prepare your environment","text":"Info
As you have installed Git for Windows, you will notice that all commands are executed from the Git BASH emulator. This is the easiest way to build and run EdgeX on Windows. You will also find that the instructions closely parallel build and run operations in Linux or other OS. When referring to the \"terminal\" window throughout these instructions, this means use the Git BASH emulator window.
In this guide, you will be building and running EdgeX in "non-secure" mode. That is, you will be building and running the EdgeX platform without the security services and security configuration. An environment variable, EDGEX_SECURITY_SECRET_STORE
, is set to indicate whether the EdgeX services are expected to initialize and use the secure secret store. By default, this variable is set to true
. Prior to building and running EdgeX, set this environment variable to false. You can do this in each terminal window you open by executing the following command:
export EDGEX_SECURITY_SECRET_STORE=false
This can be done in the Git BASH (aka terminal) window from which you will eventually build and run EdgeX.
If you prefer, you can also set a Windows Environment Variable. Open the System Properties
Window, then click on the Environmental Variables
button to add a new variable.
In the Environment Variables Window that comes up, click on the New...
button under the System variables section. Enter EDGEX_SECURITY_SECRET_STORE
in the Variable Name
field and false
in the Variable value
field of the New System Variable
popup. Click OK
to close the System Properties
and Environment Variables
windows.
Now, each time you open a terminal window, the EDGEX_SECURITY_SECRET_STORE
will already be set to false
for you without having to execute the export command above.
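Alternatively, the same persistent variable can be created from a Windows Command Prompt or PowerShell with the setx command (a sketch; setx affects new terminal windows, not ones already open):
setx EDGEX_SECURITY_SECRET_STORE false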
In order to build and run EdgeX micro services, you will first need to get the source code for the platform. Using git, clone the EdgeX repositories with the following commands:
Tip
You may wish to create a new folder and then issue these git commands from that folder so that all EdgeX code is neatly stored in a single folder.
git clone https://github.com/edgexfoundry/edgex-go.git
git clone https://github.com/edgexfoundry/device-virtual-go.git
git clone https://github.com/edgexfoundry/app-service-configurable.git
git clone https://github.com/edgexfoundry/edgex-ui-go.git
Note that a new folder, named for the repository, gets created containing source code with each of the git clones above.
Note
eKuiper will not run on Windows natively. As with Redis, if you want to use eKuiper, you will need to run eKuiper outside of Windows and communicate via TCP/IP on a connected network.
Warning
These git clone operations pull from the main branch of the EdgeX repositories. This is the current working branch in EdgeX development. See the git clone documentation for how to clone a specific named release branch or version tag.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#build-edgex-services","title":"Build EdgeX Services","text":"With the source code, you can now build the EdgeX services and the GUI.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#build-core-and-supporting-services","title":"Build Core and Supporting Services","text":"Most of the services are in the edgex-go
folder. This folder contains the code for the core and supporting services. A single command in this repository will build several of the services.
Enter the edgex-go
folder and issue the make build
command as shown below.
Note
Building the services in the edgex-go folder will also build some services (such as the security services) that are not used in this guide, but issuing a single command is the easiest way to build the needed services without building them one by one.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#build-the-virtual-device-service","title":"Build the Virtual Device Service","text":"The virtual device service simulates devices/sensors sending data to EdgeX as if it was a \"thing\". This guide uses the virtual device service to exemplify how other devices services can be built and run.
Enter the device-virtual-go
folder and issue the make build
command as shown below.
The configurable application service helps prepare device/sensor data for enterprise or cloud systems. It also prepares data for use by the rules engine, eKuiper.
Enter the app-service-configurable
folder and issue the make build
command as shown below.
EdgeX provides a graphical user interface for exploring a single instance of the EdgeX platform. The GUI makes it easier to work with EdgeX and see sample data coming from sensors. It provides a means to check that EdgeX is working correctly, monitor EdgeX and even make some configuration changes.
Enter the edgex-ui-go
folder and issue the make build
command as shown below.
Provided everything built correctly and without issue, you can now start your EdgeX services one at a time. First make sure Redis Server is running on its host machine and is accessible via TCP/IP (assuming default port of 6379). If Redis is not running, start it before the other services. If it is running, you can start each of the EdgeX services in order as listed below.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#point-services-to-redis","title":"Point Services to Redis","text":"Because Redis is not running on your Windows machine, the configuration of all the services need to be changed to point the services to Redis on the different host when they start.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#modify-the-configuration-of-edgex-core-and-supporting-services","title":"Modify the Configuration of EdgeX Core and Supporting Services","text":"Each of core and supporting EdgeX services are located in edgex-go\\cmd
under a subfolder by the service name. In the first case, core-metadate is located in edgex-go\\cmd\\core-metadata
. Core-metadata's configuration is located in a configuration.yaml
file in edgex-go\\cmd\\core-metadata\\res
. Use your favorite editor to open the configuration file and locate the Database
section in that file (about 1/2 the way down the configuration listings). Change the host address from localhost
to the IP address of your Redis hosting machine (changed to 10.0.0.75 in the example below).
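As a sketch, the edited Database section of configuration.yaml would look roughly like the following (the exact field names and neighboring settings can vary by EdgeX release):
Database:
  Host: 10.0.0.75
  Port: 6379
  Type: redisdb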
Modify the host location for Redis in the Database
section of configuration.yaml
files for notifications (edgex-go\\cmd\\support-notifications\\res
) and scheduler (edgex-go\\cmd\\support-scheduler\\res
) services in the same way.
In core-data, you need to modify two host settings. You need to change the location for Redis in the Database
section as well as the host location for Redis in the MessageQueue
section of configuration.yaml
. The latter setting is for accessing the Redis Pub/Sub message bus.
The Configurable App Service uses both the Redis database and message bus like core-data does. Locate the configuration.yaml
file in app-service-configurable\\res\\rules-engine
folder. Open the file with an editor and change the Host in the Database
, Trigger.EdgexMessageBus.SubscribeHost
, and Trigger.EdgexMessageBus.PublishHost
sections from localhost
to the IP address of your Redis hosting machine.
The Virtual Device Service uses the Redis message bus like core-data does. Locate the configuration.yaml
file in device-virtual-go\\cmd\\res
folder. Open the file with an editor and change the Redis MessageQueue
host address from localhost
to the IP address of your Redis hosting machine.
Wherever you installed Consul, start Consul Agent with the following command.
consul agent -ui -bootstrap -server -data-dir=tmp/consul &\n
If Consul is running correctly, you should be able to reach the Consul UI through a browser at http://localhost:8500 on your Windows machine.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#start-core-metadata","title":"Start Core Metadata","text":"Each of core and supporting EdgeX services are located in edgex-go\\cmd
under a subfolder by the service name. In the first case, core-metadate is located in edgex-go\\cmd\\core-metadata
. In a Git BASH terminal, change directories to the core-metadata service subfolder and then run the executable found in the subfolder with -cp
and -registry
command line options as shown below.
cd edgex-go/cmd/core-metadata/
nohup ./core-metadata -cp=consul.http://localhost:8500 -registry &
The nohup
is used to execute the command and ignore all SIGHUP (hangup) signals. The &
says to execute the process in the background. Both nohup
and &
will be used to run each of the services so that the same terminal can be used and the output will be directed to local nohup.out log files.
The -cp=consul.http://localhost:8500
command line parameter tells core-metadata to use Consul and where to find Consul running. The -registry
command line parameter tells core-metadata to use (and register with) the registry service. Both of these command line parameters will be used when launching all EdgeX services.
In a similar fashion, enter each of the other core and supporting service folders in edgex-go\\cmd
and launch the services.
cd ../core-data
nohup ./core-data -cp=consul.http://localhost:8500 -registry &
cd ../core-command
nohup ./core-command -cp=consul.http://localhost:8500 -registry &
cd ../support-notifications/
nohup ./support-notifications -cp=consul.http://localhost:8500 -registry &
cd ../support-scheduler/
nohup ./support-scheduler -cp=consul.http://localhost:8500 -registry &
Tip
If you still have the Consul UI up, you should see each of the EdgeX core and supporting services listed in Consul's Services
page with green check marks next to them suggesting they are running.
The configurable application service is located in the root of app-service-configurable
folder.
The configurable application service is started in a similar way as the other EdgeX services. The configurable application service is going to be used to route data to the rules engine. Therefore, an additional command line parameter (p
) is added to its launch command to tell the app service to use the rules engine configuration and profile.
nohup ./app-service-configurable -cp=consul.http://localhost:8500 -registry -p=rules-engine &\n
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#start-the-virtual-device-service","title":"Start the Virtual Device Service","text":"The virtual device service is also started in similar way as the other EdgeX services. The virtual device service manufactures data as if it was to come from a sensor and sends that data into the rest of EdgeX. By default, the virtual device service will generate random numbers (integers, unsigned integers, floats), booleans and even binary data as simulated sensor data. The virtual device service is located in the device-virtual-go\\cmd
folder.
Change directories to the virtual device service's cmd
folder and then launch the service with the command shown below.
nohup ./device-virtual -cp=consul.http://localhost:8500 -registry &\n
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#start-the-gui","title":"Start the GUI","text":"The EdgeX graphical user interface (GUI) provides an easy to use visual tool to monitor data passing through EdgeX services. It also provides some capability to change an EdgeX instance's configuration or metadata. The EdgeX GUI is located in the edgex-ui-go\\cmd\\edgex-ui-server
folder.
Change directories to the GUI's cmd\\edgex-ui-server
folder and then launch the GUI with the command shown below.
nohup ./edgex-ui-server &\n
If the GUI is running correctly, you should be able to reach the GUI through a Window's browser at http://localhost:4000. It may take a few seconds for the GUI to initialize once you hit the URL.
Note
Some elements of the GUI will not work as you do not have all available EdgeX services running. Notably, the System Management service and its executor are not running so the System view of the GUI will display an error. By default, the System Management service and its executor operate by checking on the other services memory, CPU, etc. via Docker Stats. In this case, since you are not running Docker containers, the System Management service would not function. Also, as eKuiper does not run on Windows, any Rules Engine functionality will not work either.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#test-and-explore-edgex","title":"Test and Explore EdgeX","text":"With EdgeX up and running (inclusive of Consul, and with Redis running on a separate host), you can try these quick tests to see that EdgeX is running correctly.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#see-sensor-data-flowing-through-edgex","title":"See sensor data flowing through EdgeX","text":"You have already been using Consul and the EdgeX GUI to check on some items of EdgeX in this tutorial. You can use the EdgeX GUI to further check that sensor data is flowing through the system.
In your Window's browser, go to http://localhost:4000. Remember, it may take a few seconds for the GUI to initialize once you hit the URL. Once the GUI displays, find and click on the DataCenter
link on the left hand navigation bar (highlighted below).
The DataCenter
display allows you to see the EdgeX event/readings as they are persisted by the core data service to Redis. Simply press the >Start
button to see the \"stream\" of simulated sensor data that was generated by the virtual device service and sent to EdgeX. The simulated data may take a second or two to start to display in the EventDataStream
area of the GUI.
Press the Pause
button to stop this display of data. Notice that you can see the EdgeX Events (and associated Readings) or just the Readings with the two tabs on this DataCenter
display.
Each EdgeX micro service has a REST API associated with it. You can use curl or a browser to test that the service is up using its ping
API. Below are curl commands to \"ping\" both core data and core metadata.
curl http://localhost:59880/api/v3/ping
curl http://localhost:59881/api/v3/ping
Each service should respond with JSON data to indicate it is able to respond to requests. Below is an example response from the core metadata \"Ping\" request.
{\"apiVersion\":\"v2\",\"timestamp\":\"Thu May 12 23:25:04 UTC 2022\",\"serviceName\":\"core-metadata\"}\n
See the service port reference page for a list of service ports to check the ping
API of other services.
As an added test, use curl to get the count of the number of events persisted by core data with the command below (you can also use a browser with the URL to get the same).
curl http://localhost:59880/api/v3/event/count\n
The response will indicate a \"count\" of events stored (in this case 6270).
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":6270}\n
Info
The full set of APIs for each service can be found in SwaggerHub. You can use the documentation to test other APIs as well.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#debugging-and-troubleshooting","title":"Debugging and Troubleshooting","text":"With the nohup
command on each service, the log file contents are redirected to a file (nohup.out
) in the directory where you started each service. If you find that a service does not appear to be running, or if it is running but not working correctly, check the nohup.out
file for any errors or issues. In the example below, the core data's nohup.out
log file is explored.
This guide will get EdgeX up and running on your machine in as little as 5 minutes using pre-built Docker containers. We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible.
For a quick start with Snaps, refer to Getting Started with Snaps.
When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started using Docker or Getting Started as a Developer guides.
"},{"location":"getting-started/quick-start/#setup-docker","title":"Setup Docker","text":"Install the following:
Info
The version of EdgeX used in the following examples is main
.
Once you have Docker and Docker Compose installed, you need to:
docker-compose
fileThis can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures).
x86ARMcurl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty.yml -o docker-compose.yml; docker compose up -d\n
curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty-arm64.yml -o docker-compose.yml; docker compose up -d\n
Verify that the EdgeX containers have started:
docker compose ps \n
If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above."},{"location":"getting-started/quick-start/#connected-devices","title":"Connected Devices","text":"EdgeX Foundry provides a Virtual device service which is useful for testing and development. It simulates a number of devices, each randomly generating data of various types and within configurable parameters. For example, the Random-Integer-Device will generate random integers.
The Virtual Device (also known as Device Virtual) service is already a service pulled and running as part of the default EdgeX configuration.
You can verify that Virtual Device readings are already being sent by querying the EdgeX core data service for the event records sent for Random-Integer-Device:
curl http://localhost:59880/api/v3/event/device/name/Random-Integer-Device\n
Verify the virtual device service is operating correctly by requesting the last event records received by core data for the Random-Integer-Device. Note
By default, the maximum number of events returned will be 20 (the default limit). You can pass a limit
parameter to get more or less event records.
curl http://localhost:59880/api/v3/event/device/name/Random-Integer-Device?limit=50\n
"},{"location":"getting-started/quick-start/#controlling-the-device","title":"Controlling the Device","text":"Reading data from devices is only part of what EdgeX is capable of. You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it.
When our Virtual Device service registered the device Random-Integer-Device
, it used a profile to also define commands that allow you to tell the service not to generate random integers, but to always return a value you set.
You won't call commands on devices directly, instead you use the EdgeX Foundry Command Service to do that. The first step is to check what commands are available to call by asking the Command service about your device:
curl http://localhost:59882/api/v3/device/name/Random-Integer-Device\n
This will return a lot of JSON, because there are a number of commands you can call on this device, but the commands we're going to use in this guide are Int16
(the comand to get the current integer 16 value) and WriteInt16Value
(the command to disable the generation of the random integer 16 number and specify the integer value to return). Look for the Int16
and WriteInt16Value
commands like those shown in the JSON as below:
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"deviceCoreCommand\": {\n\"deviceName\": \"Random-Integer-Device\",\n\"profileName\": \"Random-Integer-Device\",\n\"coreCommands\": [\n{\n\"name\": \"WriteInt16Value\",\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Integer-Device/WriteInt16Value\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Int16\",\n\"valueType\": \"Int16\"\n},\n{\n\"resourceName\": \"EnableRandomization_Int16\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"Int16\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Integer-Device/Int16\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Int16\",\n\"valueType\": \"Int16\"\n}\n]\n}\n...\n\n]\n}\n}\n
You'll notice that the commands have get
or set
(or both) options. A get call will return a random number (integer 16), and is what is being called automatically to send data into the rest of EdgeX (specifically core data). You can also call get manually using the URL provided (with no additional parameters needed): curl http://localhost:59882/api/v3/device/name/Random-Integer-Device/Int16\n
Warning
Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker. Docker recognizes the internal hostname edgex-core-command, but when calling the service from outside of Docker, you have to use localhost to reach it.
This command will return a JSON result that looks like this:
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"id\": \"6d829637-730c-4b70-9208-dc179070003f\",\n\"deviceName\": \"Random-Integer-Device\",\n\"profileName\": \"Random-Integer-Device\",\n\"sourceName\": \"Int16\",\n\"origin\": 1625605672073875500,\n\"readings\": [\n{\n\"id\": \"545b7add-683b-4745-84f1-d859f3d839e0\",\n\"origin\": 1625605672073875500,\n\"deviceName\": \"Random-Integer-Device\",\n\"resourceName\": \"Int16\",\n\"profileName\": \"Random-Integer-Device\",\n\"valueType\": \"Int16\",\n\"binaryValue\": null,\n\"mediaType\": \"\",\n\"value\": \"-8146\"\n}\n]\n}\n}\n
A GET call to the Random-Integer-Device's Int16 operation through the command service returns the next random value produced by the device in JSON format.
The default range for this reading is -32,768 to 32,767. In the example above, a value of -8146
was returned as the reading value. With the service set up to randomly return values, the value returned will be different each time the Int16
command is sent. However, we can use the WriteInt16Value
command to disable random values from being returned and instead specify a value to return. Use the curl command below to call the set command to disable random values and return the value 42
each time.
curl -X PUT -d '{\"Int16\":\"42\", \"EnableRandomization_Int16\":\"false\"}' http://localhost:59882/api/v3/device/name/Random-Integer-Device/WriteInt16Value\n
Warning
Again, also notice that localhost replaces edgex-core-command.
If successful, the service will confirm your setting of the value to be returned with a 200
status code.
A call to the device's SET command through the command service will return the API version and a status code (200 for success).
Now every time we call get on the Int16
command, the returned value will be 42
.
A GET call to the Random-Integer-Device's Int16 operation after setting the Int16 value to 42 and disabling randomization will always return a value of 42.
"},{"location":"getting-started/quick-start/#exporting-data","title":"Exporting Data","text":"EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via HiveMQ provided MQTT browser client.
First add the following application service to your docker-compose.yml file right after the 'app-service-rules' service (the first service in the file). Spacing is important in YAML, so make sure to copy and paste it correctly.
app-service-mqtt:\ncontainer_name: edgex-app-mqtt\ndepends_on:\n- consul\n- data\nenvironment:\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nEDGEX_PROFILE: mqtt-export\nEDGEX_SECURITY_SECRET_STORE: \"false\"\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nSERVICE_HOST: edgex-app-mqtt\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\nWRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS: tcp://broker.mqttdashboard.com:1883\nWRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_TOPIC: EdgeXEvents\nhostname: edgex-app-mqtt\nimage: edgexfoundry/app-service-configurable:2.0.0\nnetworks:\nedgex-network: {}\nports:\n- 127.0.0.1:59702:59702/tcp\nread_only: true\nsecurity_opt:\n- no-new-privileges:true\nuser: 2002:2001\n
Note
This adds the application service configurable to your EdgeX system. The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at tcp://broker.mqttdashboard.com:1883
. You will be publishing to the EdgeXEvents topic.
For convenience, see documentation on the EdgeX Compose Builder to create custom Docker Compose files.
Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service.
docker compose up -d\n
You can connect to this broker with any MQTT client to watch the sent data. HiveMQ provides a web-based client that you can use. Use a browser to go to the client's URL. Once there, hit the Connect button to connect to the HiveMQ public broker. Using the HiveMQ provided client tool, connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to.
Then, use the Subscriptions area to subscribe to the \"EdgeXEvents\" topic.
You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service.
You will begin seeing your random number readings appear in the Messages area on the screen.
Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen.
"},{"location":"getting-started/quick-start/#next-steps","title":"Next Steps","text":"Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX.
It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. From there you can take the Walkthrough to learn how the micro services work together to control devices and read data from them as you just did.
"},{"location":"getting-started/tools/Ch-GUI/","title":"Graphical User Interface (GUI)","text":"EdgeX's graphical user interface (GUI) is provided for demonstration and development use to manage and monitor a single instance of EdgeX Foundry.
"},{"location":"getting-started/tools/Ch-GUI/#setup","title":"Setup","text":"You can quickly run the GUI in a Docker container or as a Snap. You can also download, build and run the GUI natively on your host.
"},{"location":"getting-started/tools/Ch-GUI/#docker-compose","title":"Docker Compose","text":"The EdgeX GUI is now incorporated into all the secure and non-sure Docker Compose files provided by the project. Locate and download the Docker Compose file that best suits your needs from https://github.com/edgexfoundry/edgex-compose. For example, in the Jakarta branch of edgex-compose
the *-with-app-sample*
compose files include the Sample App Service allowing the configurable pipeline to be manipulated from the UI. See the four Docker Compose files that include the Sample App Service circled below.
Note
The GUI can now be used in secure mode as well as non-secure mode.
See the Getting Started using Docker guide for help on how to find, download and use a Docker Compose file to run EdgeX - in this case with the Sample App Service.
"},{"location":"getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token","title":"Secure mode with API Gateway token","text":"When first running the UI in secure mode, you will be prompted to enter a token.
Follow the How to get access token? link to view the documentation on how to get an API Gateway access token. Once you enter the token, the UI will have access to the EdgeX services via the API Gateway.
Note
The UI is no longer restricted to access from localhost
. It can now be accessed from any IP address that can access the host system. This is allowed because the UI is secured via API Gateway token when running in secure mode.
The latest stable version of the snap can be installed using:
$ sudo snap install edgex-ui\n
A specific release of the snap can be installed from a dedicated channel. For example, to install the 2.1 (Jakarta) release:
$ sudo snap install edgex-ui --channel=2.1\n
The latest development version of the edgex-ui snap can be installed using:
$ sudo snap install edgex-ui --edge\n
"},{"location":"getting-started/tools/Ch-GUI/#generate-token-for-entering-ui-secure-mode","title":"Generate token for entering UI secure mode","text":"A JWT access token is required to access the UI securely through the API Gateway. To do so:
$ openssl ecparam -genkey -name prime256v1 -noout -out private.pem\n$ openssl ec -in private.pem -pubout -out public.pem\n
$ sudo snap set edgexfoundry env.security-proxy.user=user01,USER_ID,ES256\n$ sudo snap set edgexfoundry env.security-proxy.public-key=\"$(cat public.pem)\"\n
$ edgexfoundry.secrets-config proxy jwt --algorithm ES256 \\\n--private_key private.pem --id USER_ID --expiration=1h\n
This output is the JWT token for UI login in secure mode. Please keep the token in a safe place for future re-use as the same token cannot be regenerated or recovered from EdgeX's secret-config CLI. The token is required each time you reopen the web page.
"},{"location":"getting-started/tools/Ch-GUI/#using-the-edgex-ui-snap","title":"Using the edgex-ui snap","text":"Open your browser http://localhost:4000
Please log in to EdgeX with the JWT token we generated above.
For more details please refer to edgex-ui Snap
"},{"location":"getting-started/tools/Ch-GUI/#native","title":"Native","text":"If you are running EdgeX natively (outside of Docker Compose or a Snap), you will find instructions on how to build and run the GUI on your platform in the GUI repository README
"},{"location":"getting-started/tools/Ch-GUI/#general","title":"General","text":""},{"location":"getting-started/tools/Ch-GUI/#gui-address","title":"GUI Address","text":"Once the GUI is up and running, simply visit port 4000 on the GUI's host machine (ex: http://localhost:4000) to enter the GUI Dashboard (see below). The GUI does not require any login.
"},{"location":"getting-started/tools/Ch-GUI/#menu-bar","title":"Menu Bar","text":"The left side of the Dashboard holds a menu bar that allows you access to the GUI functionality. The \"hamburger\" icon on the menu bar allows you to shrink or expand the menu bar to icons vs icons and menu bar labels.
"},{"location":"getting-started/tools/Ch-GUI/#mobile-device-ready","title":"Mobile Device Ready","text":"
The EdgeX GUI can be used/displayed on a mobile device via the mobile device's browser if the GUI address is accessible to the device. The display may be skewed in order to fit the device screen. For example, the Dashboard menu will often change to icons over the expanded labeled menu bar when shown on a mobile device.
"},{"location":"getting-started/tools/Ch-GUI/#capability","title":"Capability","text":"The GUI allows you to
The Dashboard page (the main page of the GUI) presents you with a set of clickable \"tiles\" that provide a quick view of the status of your EdgeX instance. That is, it provides some quick data points about the EdgeX instance and what the GUI is tracking. Specifically, the tiles in the Dashboard show you:
If for some reason the GUI has an issue or difficulty getting the information it needs to display a tile in the Dashboard when it is displayed, a popup will be displayed over the screen indicating the issue. In the example below, the support scheduling service was down and the GUI Dashboard was unable to access the scheduler service.
In this way, the Dashboard provides a quick and easy way to see whether the EdgeX instance is nominal or has underlying issues.
You can click on each of the tiles in the Dashboard. Doing so provides more details about each. More precisely, clicking on a tile takes you to another part of the GUI where the details of that item can be found. For example, clicking on the Device Profiles tile takes you to the Metadata page and the Device Profile tab (covered below)
"},{"location":"getting-started/tools/Ch-GUI/#config","title":"Config","text":"The configuration of each service is made available for each service by clicking on the Config
icon for any service from the System Service List. The configuration is displayed in JSON form and is read only. If running Consul, use the Consul Web UI to make changes to the configuration.
From the System Service List, you can request to stop, start or restart any of the listed services with the operation buttons in the far right column.
Warning
There is no confirmation popup or warning on these requests. When you push a stop, start, restart button, the request is immediately made to the system management service for that operation.
The state of the service will change when these operations are invoked. When a service is stopped, the metric and config information for the service will be unavailable.
After starting (or restarting) a service, you may need to hit the Refresh
button on the page to get the state and metric/config icons to change.
The Metadata page (available from the Metadata menu option) provides three tabs to be able to see and manage the basic elements of metadata: device services, device profiles and devices.
"},{"location":"getting-started/tools/Ch-GUI/#device-service-tab","title":"Device Service Tab","text":"The Device Service tab displays the device services known to EdgeX (as device services registered in core metadata). Device services cannot be added or removed through the GUI, but information about the existing device services (i.e., port, admin state) and several actions on the existing device services can be accomplished on this tab.
First note that for each device service listed, the number of associated devices is displayed. If you click on the Associated Devices
button, it will take you to the Device tab to be able to get more information about or work with any of the associated devices.
The Settings
button on each device service allows you to change the description or the admin state of the device service.
Alert
Please note that you must hit the Save
button after making any changes to the Device Service Settings. If you don't and move away from the page, your changes will be lost.
The Device Tab on the Metadata page offers you details about all the sensors/devices known to your EdgeX instance. Buttons at the top of the tab allow you to add, remove or edit a device (or collection of devices when deleting and using the selector checkbox in the device list).
On the row of each device listed, links take you to the appropriate tabs to see the associated device profile or device service for the device.
Icons on the row of each device listed cause editable areas to expand at the bottom of the tab to execute a device command or see/modify the device's AutoEvents.
The command execution display allows you to select the specific device resource or device command (from the Command Name List
), and execute or try
either a GET or SET command (depending on what the associated device profile for the device says is allowed). The response will be displayed in the ResponseRaw
area after the try
button is pushed.
The Add
button on the Device List tab will take you to the Add Device Wizard
. This nice utility will assist you, entry screen by entry screen, in getting a new device setup in EdgeX. Specifically, it has you (in order):
Once all the information in the Add Device Wizard
screens is entered, the Submit
button at the end of the wizard causes your new device to be created in core metadata with all appropriate associations.
The Device Profile Tab on the Metadata page displays the device profiles known to EdgeX and allows you to add new profiles or edit/remove existing profiles.
The AssociatedDevice
button on each row of the Device Profile List will take you to the Device tab and show you the list of devices currently associated to the device profile.
Warning
When deleting a profile, the system will display an error if devices are still associated with the profile.
"},{"location":"getting-started/tools/Ch-GUI/#data-center-seeing-eventreading-data","title":"Data Center (Seeing Event/Reading Data)","text":"From the Data Center option on the GUI's menu bar you can see the stream of Event/Readings coming from the device services into core data. The event/reading data will be displayed in JSON form.
There are two tabs on the Data Stream page, both with Start
and Pause
buttons:
Hit the Start
button on either tab to see the event or reading data displayed in the stream pane (events are shown in the example below). Push the Pause
button to stop the display of event or reading data.
Warning
In actuality, the event and reading data is pulled from core data via REST call every three (3) seconds - so it is not a live stream display but a poll of data. Furthermore, if EdgeX is setup to have device services send data directly to application services via message bus and core data is not running or if core data is configured to have persistence turned off, there will be no data in core data to pull and so there will be no events or readings to see.
"},{"location":"getting-started/tools/Ch-GUI/#scheduler-intervalinterval-list","title":"Scheduler (Interval/Interval List)","text":"Interval and Interval Actions, which help define task management schedules in EdgeX, are managed via the Scheduler page from selecting Scheduler off the menu bar.
Again, as with many of the EdgeX GUI pages, there are two tabs on the Scheduler page:
When updating or adding an Interval, you must provide a name and an interval duration string, which takes an unsigned integer plus a unit of measure that must be one of \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\" or \"h\", representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Optionally, provide start/end dates and an indication that the interval runs only once (and thereby ignores the duration).
"},{"location":"getting-started/tools/Ch-GUI/#interval-action-list","title":"Interval Action List","text":"Interval Actions define what happens when the Interval kicks off. Interval Actions can define REST, MQTT or Email actions that take place when an Interval timer hits. The GUI provides the means to edit or create any of these actions. Note that an Interval Action must be associated to an already defined Interval.
"},{"location":"getting-started/tools/Ch-GUI/#notifications","title":"Notifications","text":"Notifications are messages from EdgeX to external systems about something that has happened in EdgeX - for example that a new device has been created. Currently, notifications can be sent by email or REST call.
The Notification Center page, available from the Notifications menu option, allows you to see new (not processed), processed or escalated (notifications that have failed to be sent within its resend limit) notifications. By default, the new notifications are displayed, but if you click on the Advanced >>
link on the page (see below), you can select which type of notifications to display.
The Subscriptions tab on the Notification Center page allows you to add, update or remove subscriptions to notifications. Subscribers are registered receivers of notifications - either via email or REST.
When adding (or editing) a subscription, you must provide a name, category, label, receiver, and either an email address or REST endpoint. A template is provided to specify either the email or REST endpoint configuration data needed for the subscription.
"},{"location":"getting-started/tools/Ch-GUI/#ruleengine","title":"RuleEngine","text":"The Rule Engine page, from the RuleEngine menu option, provides the means to define streams and rules for the integrated eKuiper rules engine.
Via the Stream tab, streams are defined by JSON. All that is really required is a stream name (EdgeXStream in the example below).
The Rules tab allows eKuiper rules to be added, removed or updated/edited as well as started, stopped or restarted. When adding or editing a rule, you must provide a name, the rule SQL and action. The action can be one of the following (some requiring extra parameters):
See the eKuiper documentation for more information on how to define rules.
Alert
Once a rule is created, it is started by default. Return to the Rules tab on the RulesEngine page to stop a new rule.
When creating or editing the rule, if the stream referenced in the rule is not already defined, the GUI will present an error when trying to submit the rule.
"},{"location":"getting-started/tools/Ch-GUI/#appservice","title":"AppService","text":"In the AppService page, you can configure existing configurable application services. The list of available configurable app services is determined by the UI automatically (based on a query for available app services from the registry service).
"},{"location":"getting-started/tools/Ch-GUI/#configurable","title":"Configurable","text":"When the application service is a configurable app service and is known to the GUI, the Configurable
button on the App Service List allows you to change the triggers, functions, secrets and other configuration associated to the configurable app service.
There are four tabs in the Configurable Setting editor:
Note
When the Trigger is changed, the service must be restarted for the change to take effect.
"},{"location":"getting-started/tools/Ch-GUI/#why-demo-and-developer-use-only","title":"Why Demo and Developer Use Only","text":"The GUI is meant as a developer tool or to be used in EdgeX demonstration situations. It is not yet designed for production settings. There are several reasons for this restriction.
The EdgeX community is exploring efforts to make the GUI available in secure mode in a future release.
"},{"location":"microservices/application/AdvancedTopics/","title":"Advanced Topics","text":"The following items discuss topics that are a bit beyond the basic use cases of the Application Functions SDK when interacting with EdgeX.
"},{"location":"microservices/application/AdvancedTopics/#configurable-functions-pipeline","title":"Configurable Functions Pipeline","text":"This SDK provides the capability to define the functions pipeline via configuration rather than code by using the app-service-configurable application service. See the App Service Configurable section for more details.
"},{"location":"microservices/application/AdvancedTopics/#custom-rest-endpoints","title":"Custom REST Endpoints","text":"It is not uncommon to require your own custom REST endpoints when building an Application Service. Rather than spin up your own webserver inside of your app (alongside the already existing running webserver), we've exposed a method that allows you add your own routes to the existing webserver. A few routes are reserved and cannot be used:
To add your own route, use the AddCustomRoute()
API provided on the ApplicationService
interface.
Example - Add Custom REST route
myhandler := func(c echo.Context) error {\nservice.LoggingClient().Info(\"TEST\") c.Response().WriteHeader(http.StatusOK)\nc.Response().Write([]byte(\"hello\")) } service := pkg.NewAppService(serviceKey) service.AddCustomRoute(\"/myroute\", service.Authenticated, myHandler, \"GET\")
Under the hood, this simply adds the provided route, handler, and method to the gorilla mux.Router
used in the SDK. For more information on gorilla mux
you can check out the github repo here. You can access the interfaces.ApplicationService
API for resources such as the logging client by pulling it from the context as shown above -- this is useful for when your routes might not be defined in your main.go
where you have access to the interfaces.ApplicationService
instance.
The target type is the object type of the incoming data that is sent to the first function in the function pipeline. By default this is an EdgeX dtos.Event
since typical usage is receiving Events
from the EdgeX MessageBus.
There are scenarios where the incoming data is not an EdgeX Event
. One example scenario is two application services are chained via the EdgeX MessageBus. The output of the first service is inference data from analyzing the original Event
data, and published back to the EdgeX MessageBus. The second service needs to be able to let the SDK know the target type of the input data it is expecting.
For usages where the incoming data is not events
, the TargetType
of the expected incoming data can be set when the ApplicationService
instance is created using the NewAppServiceWithTargetType()
factory function.
Example - Set and use custom Target Type
type Person struct { FirstName string `json:\"first_name\"` LastName string `json:\"last_name\"` } service := pkg.NewAppServiceWithTargetType(serviceKey, &Person{})
TargetType
must be set to a pointer to an instance of your target type such as &Person{}
. The first function in your function pipeline will be passed an instance of your target type, not a pointer to it. In the example above, the first function in the pipeline would start something like:
func MyPersonFunction(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) { ctx.LoggingClient().Debug(\"MyPersonFunction executing\")\n\nif data == nil {\nreturn false, errors.New(\"no data received to MyPersonFunction\")\n}\n\nperson, ok := data.(Person)\nif !ok {\nreturn false, errors.New(\"MyPersonFunction type received is not a Person\")\n}\n\n// ....\n
The SDK supports un-marshaling JSON or CBOR encoded data into an instance of the target type. If your incoming data is not JSON or CBOR encoded, you then need to set the TargetType
to &[]byte
.
If the target type is set to &[]byte
the incoming data will not be un-marshaled. The content type, if set, will be set on the interfaces.AppFunctionContext
and can be access via the InputContentType()
API. Your first function will be responsible for decoding the data or not.
See the Common Command Line Options for the set of command line options common to all EdgeX services. The following command line options are specific to Application Services.
"},{"location":"microservices/application/AdvancedTopics/#skip-version-check","title":"Skip Version Check","text":"-s/--skipVersionCheck
Indicates the service should skip the Core Service's version compatibility check.
"},{"location":"microservices/application/AdvancedTopics/#service-key","title":"Service Key","text":"-sk/--serviceKey
Sets the service key that is used with Registry, Configuration Provider and security services. The default service key is set by the application service. If the name provided contains the placeholder text <profile>
, this text will be replaced with the name of the profile used. If profile is not set, the <profile>
text is simply removed
Can be overridden with EDGEX_SERVICE_KEY environment variable.
"},{"location":"microservices/application/AdvancedTopics/#environment-variables","title":"Environment Variables","text":"See the Common Environment Variables section for the list of environment variables common to all EdgeX Services. The remaining in this section are specific to Application Services.
"},{"location":"microservices/application/AdvancedTopics/#edgex_service_key","title":"EDGEX_SERVICE_KEY","text":"This environment variable overrides the -sk/--serviceKey
command-line option and the default set by the application service.
Note
If the name provided contains the text <profile>
, this text will be replaced with the name of the profile used.
Example - Service Key
EDGEX_SERVICE_KEY: app-<profile>-mycloud
profile: http-export
then service key will be app-http-export-mycloud
Applications can specify custom configuration in the service's configuration file in two ways.
"},{"location":"microservices/application/AdvancedTopics/#application-settings","title":"Application Settings","text":"The first simple way is to add items to the ApplicationSetting
section. This is a map of string key/value pairs, i.e. map[string]string
. Use for simple string values or comma separated list of string values. The ApplicationService
API provides the follow access APIs for this configuration section:
ApplicationSettings() map[string]string
GetAppSetting(setting string) (string, error)
setting
valueGetAppSettingStrings(setting string) ([]string, error)
setting
value. The Entry is assumed to be a comma separated list of strings.The second is the more complex Structured Custom Configuration
which allows the Application Service to define and watch it's own structured section in the service's configuration file.
The ApplicationService
API provides the follow APIs to enable structured custom configuration:
LoadCustomConfig(config UpdatableConfig, sectionName string) error
UpdateFromRaw
interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
See the Application Service Template for an example of using the new Structured Custom Configuration capability.
The Store and Forward capability allows for export functions to persist data on failure and for the export of the data to be retried at a later time.
Note
The order the data exported via this retry mechanism is not guaranteed to be the same order in which the data was initial received from Core Data
"},{"location":"microservices/application/AdvancedTopics/#configuration","title":"Configuration","text":"Writable.StoreAndForward
allows enabling, setting the interval between retries and the max number of retries. If running with Configuration Provider, these setting can be changed on the fly via Consul without having to restart the service.
Example - Store and Forward configuration
Writable:\nStoreAndForward:\nEnabled: false\nRetryInterval: \"5m\"\nMaxRetryCount: 10\n
Note
RetryInterval should be at least 1 second (eg. '1s') or greater. If a value less than 1 second is specified, 1 second will be used. Endless retries will occur when MaxRetryCount is set to 0. If MaxRetryCount is set to less than 0, a default of 1 retry will be used.
Database configuration section describes which database type to use and the information required to connect to the database. This section is required if Store and Forward is enabled. It is optional if not using Redis
for the EdgeX MessageBus which is now the default.
Example - Database configuration
Database:\nType: \"redisdb\"\nHost: \"localhost\"\nPort: 6379\nTimeout: \"5s\"\n
"},{"location":"microservices/application/AdvancedTopics/#how-it-works","title":"How it works","text":"When an export function encounters an error sending data it can call SetRetryData(payload []byte)
on the AppFunctionContext
. This will store the data for later retry. If the Application Service is stopped and then restarted while stored data hasn't been successfully exported, the export retry will resume once the service is up and running again.
Note
It is important that export functions return an error and stop pipeline execution after the call to SetRetryData
. See HTTPPost function in SDK as an example
When the RetryInterval
expires, the function pipeline will be re-executed starting with the export function that saved the data. The saved data will be passed to the export function which can then attempt to resend the data.
Note
The export function will receive the data as it was stored, so it is important that any transformation of the data occur in functions prior to the export function. The export function should only export the data that it receives.
One of three out comes can occur after the export retried has completed.
Export retry was successful
In this case, the stored data is removed from the database and the execution of the pipeline functions after the export function, if any, continues.
Export retry fails and retry count has not been
exceeded
In this case, the stored data is updated in the database with the incremented retry count
Export retry fails and retry count has been
exceeded
In this case, the stored data is removed from the database and never retried again.
Note
Changing Writable.Pipeline.ExecutionOrder will invalidate all currently stored data and result in it all being removed from the database on the next retry. This is because the position of the export function can no longer be guaranteed and no way to ensure it is properly executed on the retry.
"},{"location":"microservices/application/AdvancedTopics/#custom-storage","title":"Custom Storage","text":"The default backing store is redis. Custom implementations of the StoreClient
interface can be provided if redis does not meet your requirements.
type StoreClient interface {\n// Store persists a stored object to the data store and returns the assigned UUID.\nStore(o StoredObject) (id string, err error)\n\n// RetrieveFromStore gets an object from the data store.\nRetrieveFromStore(appServiceKey string) (objects []StoredObject, err error)\n\n// Update replaces the data currently in the store with the provided data.\nUpdate(o StoredObject) error\n\n// RemoveFromStore removes an object from the data store.\nRemoveFromStore(o StoredObject) error\n\n// Disconnect ends the connection.\nDisconnect() error\n}\n
A factory function to create these clients can then be registered with your service by calling RegisterCustomStoreFactory service.RegisterCustomStoreFactory(\"jetstream\", func(cfg interfaces.DatabaseInfo, cred config.Credentials) (interfaces.StoreClient, error) {\nconn, err := nats.Connect(fmt.Sprintf(\"nats://%s:%d\", cfg.Host, cfg.Port))\n\nif err != nil {\nreturn nil, err\n}\n\njs, err := conn.JetStream()\n\nif err != nil {\nreturn nil, err\n}\n\nkv, err := js.KeyValue(serviceKey)\n\nif err != nil {\nkv, err = js.CreateKeyValue(&nats.KeyValueConfig{Bucket: serviceKey})\n}\n\nreturn &JetstreamStore{\nconn: conn,\nserviceKey: serviceKey,\nkv: kv,\n}, err\n})\n
and configured using the registered name in the Database
section:
Example - Database configuration
Database:\nType: \"jetstream\"\nHost: \"broker\"\nPort: 4222\nTimeout: \"5s\"\n
"},{"location":"microservices/application/AdvancedTopics/#secrets","title":"Secrets","text":""},{"location":"microservices/application/AdvancedTopics/#configuration_1","title":"Configuration","text":"All instances of App Services running in secure mode require a SecretStore to be configured. With the use of Redis Pub/Sub
as the default EdgeX MessageBus all App Services need the redisdb
known secret added to their SecretStore so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details.
Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It now has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
"},{"location":"microservices/application/AdvancedTopics/#storing-secrets","title":"Storing Secrets","text":""},{"location":"microservices/application/AdvancedTopics/#secure-mode","title":"Secure Mode","text":"When running an application service in secure mode, secrets can be stored in the service's secure SecretStore by making an HTTP POST
call to the /api/v3/secret
API route in the application service. The secret data POSTed is stored and retrieved from the service's secure SecretStore . Once a secret is stored, only the service that added the secret will be able to retrieve it. For secret retrieval see Getting Secrets section below.
Example - JSON message body
{\n\"secretName\" : \"MySecret\",\n\"secretData\" : [\n{\n\"key\" : \"MySecretKey\",\n\"value\" : \"MySecretValue\"\n}\n]\n}\n
Note
SecretName specifies the location of the secret within the service's SecretStore.
"},{"location":"microservices/application/AdvancedTopics/#insecure-mode","title":"Insecure Mode","text":"When running in insecure mode, the secrets are stored and retrieved from the Writable.InsecureSecrets section of the service's configuration file. Insecure secrets and their paths can be configured as below.
Example - InsecureSecrets Configuration
Writable:\nInsecureSecrets: AWS:\nSecretName: \"aws\"\nSecretsData:\nusername: \"aws-user\"\npassword: \"aws-pw\"\nDB:\nSecretName: \"redisdb\"\nSecretsData:\nusername: \"\"\npassword: \"\"\n
"},{"location":"microservices/application/AdvancedTopics/#getting-secrets","title":"Getting Secrets","text":"Application Services can retrieve their secrets from their SecretStore using the interfaces.ApplicationService.SecretProvider.GetSecret() API or from the interfaces.AppFunctionContext.SecretProvider.GetSecret() API
When in secure mode, the secrets are retrieved from the service secure SecretStore.
When running in insecure mode, the secrets are retrieved from the Writable.InsecureSecrets
configuration.
The background publisher API has been deprecated. Any applications using it should migrate replacements available on the ApplicationService
or AppFunctionContext
APIs:
Application Services using the MessageBus trigger can request a background publisher using the AddBackgroundPublisher API in the SDK. This method takes an int representing the background channel's capacity as the only parameter and returns a reference to a BackgroundPublisher. This reference can then be used by background processes to publish to the configured MessageBus output. A custom topic can be provided to use instead of the configured message bus output as well.
Example - Background Publisher
func runJob (service interfaces.ApplicationService, done chan struct{}){\nticker := time.NewTicker(1 * time.Minute)\n\n//initialize background publisher with a channel capacity of 10 and a custom topic\npublisher, err := service.AddBackgroundPublisherWithTopic(10, \"custom-topic\")\n\nif err != nil {\n// do something\n}\n\ngo func(pub interfaces.BackgroundPublisher) {\nfor {\nselect {\ncase <-ticker.C:\nmsg := myDataService.GetMessage()\npayload, err := json.Marshal(message)\n\nif err != nil {\n//do something\n}\n\nctx := svc.BuildContext(uuid.NewString(), common.ContentTypeJSON)\n\n// modify context as needed\n\nerr = pub.Publish(payload, ctx)\n\nif err != nil {\n//do something\n}\ncase <-j.done:\nticker.Stop()\nreturn\n}\n}\n}(publisher)\n}\n\nfunc main() {\nservice := pkg.NewAppService(serviceKey)\n\ndone := make(chan struct{})\ndefer close(done)\n\n//pass publisher to your background job\nrunJob(service, done)\n\nservice.SetDefaultFunctionsPipeline(\nAll,\nMy,\nFunctions,\n)\n\nservice.Run()\n\nos.Exit(0)\n}
"},{"location":"microservices/application/AdvancedTopics/#stopping-the-service","title":"Stopping the Service","text":"Application Services will listen for SIGTERM / SIGINT signals from the OS and stop the function pipeline in response. The pipeline can also be exited programmatically by calling sdk.Stop()
on the running ApplicationService
instance. This can be useful for cases where you want to stop a service in response to a runtime condition, e.g. receiving a \"poison pill\" message through its trigger.
When messages are received via the EdgeX MessageBus or External MQTT triggers, the topic that the data was received on is seeded into the new Context Storage on the AppFunctionContext
with the key receivedtopic
. This make the Received Topic
available to all functions in the pipeline. The SDK provides the interfaces.RECEIVEDTOPIC
constant for this key. See the Context Storage section for more details on extracting values.
The Pipeline Per Topics
feature allows for multiple function pipelines to be defined. Each will execute only when one of the specified pipeline topics matches the received topic. The pipeline topics can have wildcards (+
and #
) allowing the topic to match a variety of received topics. Each pipeline has its own set of functions (transforms) that are executed on the received message. If the #
wildcard is used by itself for a pipeline topic, it will match all received topics and the specified functions pipeline will execute on every message received.
Note
The Pipeline Per Topics
feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank
, so the pipeline's topics must contain a single topic set to the #
wildcard so that all messages received are processed by the pipeline.
Example pipeline topics with wildcards
\"#\" - Matches all messages received\n\"edegex/events/#\" - Matches all messages received with the based topic `edegex/events/`\n\"edegex/events/core/#\" - Matches all messages received just from Core Data\n\"edegex/events/device/#\" - Matches all messages received just from Device services\n\"edegex/events/+/my-profile/#\" - Matches all messages received from Core Data or Device services for `my-profile`\n\"edegex/events/+/+/my-device/#\" - Matches all messages received from Core Data or Device services for `my-device`\n\"edegex/events/+/+/+/my-source\" - Matches all messages received from Core Data or Device services for `my-source`\n
Refer to the Filter By Topics section for details on the structure of the received topic.
All pipeline function capabilities such as Store and Forward, Batching, etc. can be used with one or more of the multiple function pipelines. Store and Forward uses the Pipeline's ID to find and restart the pipeline on retries.
Example - Adding multiple function pipelines
This example adds two pipelines. One to process data from the Random-Float-Device
device and one to process data from the Int32
and Int64
sources.
sample := functions.NewSample()\nerr = service.AddFunctionsPipelineForTopics(\n\"Floats-Pipeline\", []string{\"edgex/events/+/+/Random-Float-Device/#\"}, transforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\n...\nreturn -1\n}\n\nerr = app.service.AddFunctionsPipelineForTopics(\n\"Int32-Pipleine\", []string{\"edgex/events/+/+/+/Int32\", \"edgex/events/+/+/+/Int64\"},\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\n...\nreturn -1\n}\n
"},{"location":"microservices/application/AdvancedTopics/#built-in-application-service-metrics","title":"Built-in Application Service Metrics","text":"All application services have the following built-in metrics:
MessagesReceived
- This is a counter metric that counts the number of messages received by the application service. Includes invalid messages.
InvalidMessagesReceived
- (NEW) This is a counter metric that counts the number of invalid messages received by the application service.
HttpExportSize
- (NEW) This is a histogram metric that collects the size of data exported via the built-in HTTP Export pipeline function. The metric data is not currently tagged due to breaking changes required to tag the data with the destination endpoint. This will be addressed in a future EdgeX 3.0 release.
MqttExportSize
- (NEW) This is a histogram metric that collects the size of data exported via the built-in MQTT Export pipeline function. The metric data is tagged with the specific broker address and topic.
PipelineMessagesProcessed
- This is a counter metric that counts the number of messages processed by the individual function pipelines defined by the application service. The metric data is tagged with the specific function pipeline ID the count is for.
PipelineProcessingErrors
- (NEW) This is a counter metric that counts the number of errors returned by the individual function pipelines defined by the application service. The metric data is tagged with the specific function pipeline ID the count is for.
PipelineMessageProcessingTime
- This is a timer metric that tracks the amount of time taken to process messages by the individual function pipelines defined by the application service. The metric data is tagged with the specific function pipeline ID the timer is for.
Note
The time tracked for this metric is only for the function pipeline processing time. The overhead of receiving the messages and handing them to the appropriate function pipelines is not included. Accounting for this overhead may be added as another timer metric in a future release.
Reporting of these built-in metrics is disabled by default in the Writable.Telemetry
configuration section. See Writable.Telemetry
configuration details in the Application Service Configuration section for complete detail on this section. If the configuration for these built-in metrics are missing, then the reporting of the metrics will be disabled.
Example - Service Telemetry Configuration with all built-in metrics enabled for reporting
Writable:\nTelemetry:\nInterval: \"30s\"\nMetrics:\nMessagesReceived: true\nInvalidMessagesReceived: true\nPipelineMessagesProcessed: true PipelineMessageProcessingTime: true\nPipelineProcessingErrors: true HttpExportSize: true MqttExportSize: true Tags: # Contains the service level tags to be attached to all the service's metrics\nGateway: \"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only change existing value, not added new ones.\n
"},{"location":"microservices/application/AdvancedTopics/#custom-application-service-metrics","title":"Custom Application Service Metrics","text":"The Custom Application Service Metrics capability allows for custom application services to define, collect and report their own custom service metrics.
The following are the steps to collect and report custom service metrics:
Determine the metric type that needs to be collected
counter
- Track the integer count of somethinggauge
- Track the integer value of something gaugeFloat64
- Track the float64 value of something timer
- Track the time it takes to accomplish a taskhistogram
- Track the integer value variance of somethingCreate instance of the metric type from github.com/rcrowley/go-metrics
myCounter = gometrics.NewCounter()
myGauge = gometrics.NewGauge()
myGaugeFloat64 = gometrics.NewGaugeFloat64()
myTimer = gometrics.NewTime()
myHistogram = gometrics.NewHistogram(gometrics.NewUniformSample(<reservoir size))
Determine if there are any tags to report along with your metric. Not common so nil
is typically passed for the tags map[strings]string
parameter in the next step.
Register your metric(s) with the MetricsManager from the service
or pipeline function context
reference. See Application Service API and App Function Context API for more details:
service.MetricsManager().Register(\"MyCounterName\", myCounter, nil)
ctx.MetricsManager().Register(\"MyCounterName\", myCounter, nil)
Collect the metric
myCounter.Inc(someIntvalue)
myCounter.Dec(someIntvalue)
myGauge.Update(someIntvalue)
myGaugeFloat64.Update(someFloatvalue)
myTimer.Update(someDuration)
myTimer.Time(func { do sometime})
myTimer.UpdateSince(someTimeValue)
myHistogram.Update(someIntvalue)
Configure reporting of the service's metrics. See Writable.Telemetry
configuration details in the Application Service Configuration section for more detail.
Example - Service Telemetry Configuration
Writable:\nTelemetry:\nInterval: \"30s\"\nMetrics:\nMyCounterName: true\nMyGaugeName: true\nMyGaugeFloat64Name: true\nMyTimerName: true\nMyHistogram: true\nTags: # Contains the service level tags to be attached to all the service's metrics\nGateway: \"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only change existing value, not added new ones.\n
Note
The metric names used in the above configuration (to enable or disable reporting of a metric) must match the metric name used when the metric is registered. A partial match of starts with is acceptable, i.e. the metric name registered starts with the above configured name.
The context parameter passed to each function/transform provides operations and data associated with each execution of the pipeline.
Let's take a look at its API:
type AppFunctionContext interface {\nCorrelationID() string\nInputContentType() string\nSetResponseData(data []byte)\nResponseData() []byte\nSetResponseContentType(string)\nResponseContentType() string\nSetRetryData(data []byte)\nSecretProvider() interfaces.SecretProvider\nLoggingClient() logger.LoggingClient\nEventClient() interfaces.EventClient\nCommandClient() interfaces.CommandClient\nNotificationClient() interfaces.NotificationClient\nSubscriptionClient() interfaces.SubscriptionClient\nDeviceServiceClient() interfaces.DeviceServiceClient\nDeviceProfileClient() interfaces.DeviceProfileClient\nDeviceClient() interfaces.DeviceClient\nMetricsManager() bootstrapInterfaces.MetricsManager\nGetDeviceResource(profileName string, resourceName string) (dtos.DeviceResource, error)\nAddValue(key string, value string)\nRemoveValue(key string)\nGetValue(key string) (string, bool)\nGetAllValues() map[string]string\nApplyValues(format string) (string, error)\nPipelineId() string\nPublish(data any) error\nPublishWithTopic(topic string, data any) error\nClone() AppFunctionContext\n}\n
"},{"location":"microservices/application/AppFunctionContextAPI/#response-data","title":"Response Data","text":""},{"location":"microservices/application/AppFunctionContextAPI/#setresponsedata","title":"SetResponseData","text":"SetResponseData(data []byte)
This API sets the response data that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#responsedata","title":"ResponseData","text":"ResponseData()
This API returns the data that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#setresponsecontenttype","title":"SetResponseContentType","text":"SetResponseContentType(string)
This API sets the content type that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#responsecontenttype","title":"ResponseContentType","text":"ResponseContentType()
This API returns the content type that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#clients","title":"Clients","text":""},{"location":"microservices/application/AppFunctionContextAPI/#loggingclient","title":"LoggingClient","text":"LoggingClient() logger.LoggingClient
Returns a LoggingClient
to leverage logging libraries/service utilized throughout the EdgeX framework. The SDK has initialized everything so it can be used to log Trace
, Debug
, Warn
, Info
, and Error
messages as appropriate.
Example - LoggingClient
ctx.LoggingClient().Info(\"Hello World\")\nc.LoggingClient().Errorf(\"Some error occurred: %w\", err)\n
"},{"location":"microservices/application/AppFunctionContextAPI/#eventclient","title":"EventClient","text":"EventClient() interfaces.EventClient
Returns an EventClient
to leverage Core Data's Event
API. See interface definition for more details. This client is useful for querying events. Note if Core Data is not specified in the Clients configuration, this will return nil.
CommandClient() interfaces.CommandClient
Returns a CommandClient
to leverage Core Command's Command
API. See interface definition for more details. Useful for sending commands to devices. Note if Core Command is not specified in the Clients configuration, this will return nil.
NotificationClient() interfaces.NotificationClient
Returns a NotificationClient
to leverage Support Notifications' Notifications
API. See interface definition for more details. Useful for sending notifications. Note if Support Notifications is not specified in the Clients configuration, this will return nil.
SubscriptionClient() interfaces.SubscriptionClient
Returns a SubscriptionClient
to leverage Support Notifications' Subscription
API. See interface definition for more details. Useful for creating notification subscriptions. Note if Support Notifications is not specified in the Clients configuration, this will return nil.
DeviceServiceClient() interfaces.DeviceServiceClient
Returns a DeviceServiceClient
to leverage Core Metadata's DeviceService
API. See interface definition for more details. Useful for querying information about Device Services. Note if Core Metadata is not specified in the Clients configuration, this will return nil.
DeviceProfileClient() interfaces.DeviceProfileClient
Returns a DeviceProfileClient
to leverage Core Metadata's DeviceProfile
API. See interface definition for more details. Useful for querying information about Device Profiles and is used by the GetDeviceResource
helper function below. Note if Core Metadata is not specified in the Clients configuration, this will return nil.
DeviceClient() interfaces.DeviceClient
Returns a DeviceClient
to leverage Core Metadata's Device
API. See interface definition for more details. Useful for querying information about Devices. Note if Core Metadata is not specified in the Clients configuration, this will return nil.
Each of the clients above is only initialized if the Clients section of the configuration contains an entry for the service associated with the Client API. If it isn't in the configuration the client will be nil
. Your code must check for nil
to avoid panic in case it is missing from the configuration. Only add the clients to your configuration that your Application Service will actually be using. All application services need Core-Data
for version compatibility check done on start-up. The following is an example Clients
section of a configuration.yaml with all supported clients specified:
Example - Client Configuration Section
Clients:\ncore-data:\nProtocol: http\nHost: localhost\nPort: 59880\n\ncore-command:\nProtocol: http\nHost: localhost\nPort: 59882\n\nsupport-notifications:\nProtocol: http\nHost: localhost\nPort: 59860\n
Note
Core Metadata client is required and provided by the App Services Common Configuration, so it is not included in the above example.
"},{"location":"microservices/application/AppFunctionContextAPI/#context-storage","title":"Context Storage","text":"The context API exposes a map-like interface that can be used to store custom data specific to a given pipeline execution. This data is persisted for retry if needed. Currently only strings are supported, and keys are treated as case-insensitive.
There following values are seeded into the Context Storage when an Event is received:
interfaces.PROFILENAME
)interfaces.DEVICENAME
)interfaces.SOURCENAME
)interfaces.RECEIVEDTOPIC
)Note
Received Topic only available when the message was received from the Edgex MessageBus or External MQTT triggers.
Storage can be accessed using the following methods:
"},{"location":"microservices/application/AppFunctionContextAPI/#addvalue","title":"AddValue","text":"AddValue(key string, value string)
This API stores a value for access within a pipeline execution
"},{"location":"microservices/application/AppFunctionContextAPI/#removevalue","title":"RemoveValue","text":"RemoveValue(key string)
This API deletes a value stored in the context at the given key
"},{"location":"microservices/application/AppFunctionContextAPI/#getvalue","title":"GetValue","text":"GetValue(key string) (string, bool)
This API attempts to retrieve a value stored in the context at the given key
"},{"location":"microservices/application/AppFunctionContextAPI/#getallvalues","title":"GetAllValues","text":"GetAllValues() map[string]string
This API returns a read-only copy of all data stored in the context
"},{"location":"microservices/application/AppFunctionContextAPI/#applyvalues","title":"ApplyValues","text":"ApplyValues(format string) (string, error)
This API will replace placeholders of the form {context-key-name}
with the value found in the context at context-key-name
. Note that key matching is case insensitive. An error will be returned if any placeholders in the provided string do NOT have a corresponding entry in the context storage map.
SecretProvider() interfaces.SecretProvider
This API returns reference to the SecretProvider instance. See Secret Provider API section for more details.
"},{"location":"microservices/application/AppFunctionContextAPI/#miscellaneous","title":"Miscellaneous","text":""},{"location":"microservices/application/AppFunctionContextAPI/#clone","title":"Clone()","text":"Clone() AppFunctionContext
This method returns a copy of the context that can be mutated independently where appropriate. This can be useful when running operations that take AppFunctionContext in parallel.
"},{"location":"microservices/application/AppFunctionContextAPI/#correlationid","title":"CorrelationID()","text":"CorrelationID() string
This API returns the ID used to track the EdgeX event through the entire EdgeX framework.
"},{"location":"microservices/application/AppFunctionContextAPI/#pipelineid","title":"PipelineId","text":"PipelineId() string
This API returns the ID of the pipeline currently executing. Useful when logging messages from pipeline functions so the message contains the ID of the pipeline that executed the pipeline function.
"},{"location":"microservices/application/AppFunctionContextAPI/#inputcontenttype","title":"InputContentType()","text":"InputContentType() string
This API returns the content type of the data that initiated the pipeline execution. Only useful when the TargetType for the pipeline is []byte, otherwise the data will be the type specified by TargetType.
"},{"location":"microservices/application/AppFunctionContextAPI/#getdeviceresource","title":"GetDeviceResource()","text":"GetDeviceResource(profileName string, resourceName string) (dtos.DeviceResource, error)
This API retrieves the DeviceResource for the given profile / resource name. Results are cached to minimize HTTP traffic to core-metadata.
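As a sketch, a pipeline function might look up a resource's value type before converting a reading (the function, profile and resource names are illustrative):
func (s *sample) lookupValueType(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\nprofileName, _ := ctx.GetValue(interfaces.PROFILENAME)\ndeviceResource, err := ctx.GetDeviceResource(profileName, \"Temperature\")\nif err != nil {\nreturn false, fmt.Errorf(\"unable to find device resource: %w\", err)\n}\nctx.LoggingClient().Debugf(\"Temperature value type is %s\", deviceResource.Properties.ValueType)\nreturn true, data\n}\n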
"},{"location":"microservices/application/AppFunctionContextAPI/#setretrydata","title":"SetRetryData()","text":"SetRetryData(data []byte)
This method can be used to store data for later retry. This is useful when creating a custom export function that needs to retry on failure. The payload data will be stored for later retry based on Store and Forward
configuration. When the retry is triggered, the function pipeline will be re-executed starting with the function that called this API. That function will be passed the stored data, so it is important that all transformations occur in functions prior to the export function. The Context
will also be restored to the state when the function called this API. See Store and Forward for more details.
Note
Store and Forward
must be enabled when calling this API, otherwise the data is ignored.
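A custom export function might use it as sketched below (the export target and the send helper are illustrative, not part of the SDK):
func (s *sample) exportToMyService(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\npayload, ok := data.([]byte)\nif !ok {\nreturn false, fmt.Errorf(\"exportToMyService: expected []byte, got %T\", data)\n}\n\nif err := s.send(payload); err != nil {\n// Save the payload so Store and Forward can retry this function later\nctx.SetRetryData(payload)\nctx.LoggingClient().Errorf(\"export failed, data saved for retry: %v\", err)\nreturn false, nil\n}\nreturn true, nil\n}\n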
MetricsManager() bootstrapInterfaces.MetricsManager
This API returns the Metrics Manager used to register counter, gauge, gaugeFloat64 or timer metric types from github.com/rcrowley/go-metrics
myCounterMetricName := \"MyCounter\"\nmyCounter := gometrics.NewCounter()\nmyTags := map[string]string{\"Tag1\":\"Value1\"}\nctx.MetricsManager().Register(myCounterMetricName, myCounter, myTags)
"},{"location":"microservices/application/AppFunctionContextAPI/#publish","title":"Publish","text":"Publish(data any) error
This API pushes data to the EdgeX MessageBus using configured topic and returns an error if the EdgeX MessageBus is disabled in configuration
"},{"location":"microservices/application/AppFunctionContextAPI/#publishwithtopic","title":"PublishWithTopic","text":"PublishWithTopic(topic string, data any) error
This API pushes data to the EdgeX MessageBus using a given topic and returns an error if the EdgeX MessageBus is disabled in configuration
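As a sketch, a pipeline function could forward its result to an additional topic (the topic name is illustrative):
// inside a custom pipeline function\nif err := ctx.PublishWithTopic(\"custom/alerts\", data); err != nil {\nctx.LoggingClient().Errorf(\"unable to publish to the EdgeX MessageBus: %v\", err)\nreturn false, err\n}\n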
"},{"location":"microservices/application/ApplicationFunctionsSDK/","title":"App Functions SDK Overview","text":"Welcome the App Functions SDK for EdgeX. This SDK is meant to provide all the plumbing necessary for developers to get started in processing/transforming/exporting data out of EdgeX.
If you're new to the SDK - check out the Getting Started guide.
If you're already familiar - check out the various sections about the SDK:
Section Description Application Service API Provides a list of all available APIs on the interface used to build Application Services App Function Context API Provides a list of all available APIs on the context interface that is available inside of a pipeline function Pipeline Function Error Handling Describes how to properly handle pipeline execution failures Built-In Pipeline Functions Provides a list of the available pipeline functions/transforms in the SDK Advanced Topics Learn about other ways to leverage the SDK beyond basic use cases. The App Functions SDK implements a small REST API which can be seen here.
"},{"location":"microservices/application/ApplicationServiceAPI/","title":"Application Service API","text":"The ApplicationService
API is the central API for creating an EdgeX Application Service.
The new ApplicationService
API is as follows:
type AppFunction = func(appCxt AppFunctionContext, data interface{}) (bool, interface{})\n\ntype FunctionPipeline struct {\nId string\nTransforms []AppFunction\nTopic string\nHash string\n}\n\ntype ApplicationService interface {\nApplicationSettings() map[string]string\nGetAppSetting(setting string) (string, error)\nGetAppSettingStrings(setting string) ([]string, error)\nLoadCustomConfig(config UpdatableConfig, sectionName string) error\nListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error\nSetDefaultFunctionsPipeline(transforms ...AppFunction) error\nAddFunctionsPipelineForTopics(id string, topics []string, transforms ...AppFunction) error\nLoadConfigurableFunctionPipelines() (map[string]FunctionPipeline, error)\nRemoveAllFunctionPipelines()\nRun() error\nStop()\nSecretProvider() interfaces.SecretProvider\nLoggingClient() logger.LoggingClient\nEventClient() interfaces.EventClient\nCommandClient() interfaces.CommandClient\nNotificationClient() interfaces.NotificationClient\nSubscriptionClient() interfaces.SubscriptionClient\nDeviceServiceClient() interfaces.DeviceServiceClient\nDeviceProfileClient() interfaces.DeviceProfileClient\nDeviceClient() interfaces.DeviceClient\nRegistryClient() registry.Client\nMetricsManager() bootstrapInterfaces.MetricsManager\nAddBackgroundPublisher(capacity int) (BackgroundPublisher, error)\nAddBackgroundPublisherWithTopic(capacity int, topic string) (BackgroundPublisher, error)\nBuildContext(correlationId string, contentType string) AppFunctionContext\nAddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error\nAddCustomRoute(route string, authentication Authentication, handler echo.HandlerFunc, methods ...string) error\nAppContext() context.Context\nRequestTimeout() time.Duration\nRegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error\nRegisterCustomStoreFactory(name string, factory func(cfg DatabaseInfo, cred config.Credentials) (StoreClient, error)) error\nPublish(data any) error\nPublishWithTopic(topic string, data any) error\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#factory-functions","title":"Factory Functions","text":"The App Functions SDK provides two factory functions for creating an ApplicationService
NewAppService(serviceKey string) (interfaces.ApplicationService, bool)
This factory function returns an interfaces.ApplicationService
using the default Target Type of dtos.Event
and initializes the service. The second bool
return parameter will be true
if successfully initialized, otherwise it will be false
when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1)
if false
is returned.
Example - NewAppService
const serviceKey = \"app-myservice\"\n...\n\nservice, ok := pkg.NewAppService(serviceKey)\nif !ok {\nos.Exit(-1)\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#newappservicewithtargettype","title":"NewAppServiceWithTargetType","text":"NewAppServiceWithTargetType(serviceKey string, targetType interface{}) (interfaces.ApplicationService, bool)
This factory function returns an interfaces.ApplicationService
using the passed in Target Type and initializes the service. The second bool
return parameter will be true
if successfully initialized, otherwise it will be false
when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1)
if false
is returned.
See the Target Type advanced topic for more details.
Example - NewAppServiceWithTargetType
const serviceKey = \"app-myservice\"\n...\n\nservice, ok := pkg.NewAppServiceWithTargetType(serviceKey, &[]byte{})\nif !ok {\nos.Exit(-1)\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#custom-configuration-apis","title":"Custom Configuration APIs","text":"The following ApplicationService
APIs allow your service to access its custom configuration from the configuration file and/or Configuration Provider. See the Custom Configuration advanced topic for more details.
ApplicationSettings() map[string]string
This API returns the complete key/value map of custom settings
Example - ApplicationSettings
ApplicationSettings:\nGreeting: \"Hello World\"\n
appSettings := service.ApplicationSettings()\ngreeting := appSettings[\"Greeting\"]\nservice.LoggingClient().Info(greeting)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#getappsetting","title":"GetAppSetting","text":"GetAppSetting(setting string) (string, error)
This API is a convenience API that returns a single setting from the [ApplicationSetting]
section of the service configuration. An error is returned if the specified setting is not found.
Example - GetAppSetting
ApplicationSettings:\nGreeting: \"Hello World\"\n
greeting, err := service.GetAppSetting(\"Greeting\")\nif err != nil {\n...\n}\nservice.LoggingClient().Info(greeting)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#getappsettingstrings","title":"GetAppSettingStrings","text":"GetAppSettingStrings(setting string) ([]string, error)
This API is a convenience API that parses the string value for the specified custom application setting as a comma separated list. It returns the list of strings. An error is returned if the specified setting is not found.
Example - GetAppSettingStrings
ApplicationSettings:\nGreetings: \"Hello World, Welcome World, Hi World\"\n
greetings, err := service.GetAppSettingStrings(\"Greetings\")\nif err != nil {\n...\n}\nfor _, greeting := range greetings {\nservice.LoggingClient().Info(greeting)\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#loadcustomconfig","title":"LoadCustomConfig","text":"LoadCustomConfig(config UpdatableConfig, sectionName string) error
This API loads the service's Structured Custom Configuration from local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration if service is using the Configuration Provider. The UpdateFromRaw
API (UpdatableConfig
interface) will be called on the custom configuration when the configuration is loaded from the Configuration Provider. The custom config must implement the UpdatableConfig
interface.
Example - LoadCustomConfig
AppCustom: # Can be any name you choose\nResourceNames: \"Boolean, Int32, Uint32, Float32, Binary\"\nSomeValue: 123\nSomeService:\nHost: \"localhost\"\nPort: 9080\nProtocol: \"http\"\n
type ServiceConfig struct {\nAppCustom AppCustomConfig\n}\n\ntype AppCustomConfig struct {\nResourceNames string\nSomeValue int\nSomeService HostInfo\n}\n\nfunc (c *ServiceConfig) UpdateFromRaw(rawConfig interface{}) bool {\nconfiguration, ok := rawConfig.(*ServiceConfig)\nif !ok {\nreturn false //errors.New(\"unable to cast raw config to type 'ServiceConfig'\")\n}\n\n*c = *configuration\n\nreturn true\n}\n\n...\n\nserviceConfig := &ServiceConfig{}\nerr := service.LoadCustomConfig(serviceConfig, \"AppCustom\")\nif err != nil {\n...\n}\n
See the App Service Template for a complete example of using Structured Custom Configuration.
"},{"location":"microservices/application/ApplicationServiceAPI/#listenforcustomconfigchanges","title":"ListenForCustomConfigChanges","text":"ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
This API starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider the provided changedCallback
function is called with the updated section of configuration. The service must then implement the code to copy the updates into its copy of the configuration and respond to the updates if needed.
Example - ListenForCustomConfigChanges
AppCustom: # Can be any name you choose\nResourceNames: \"Boolean, Int32, Uint32, Float32, Binary\"\nSomeValue: 123\nSomeService:\nHost: \"localhost\"\nPort: 9080\nProtocol: \"http\"\n
...\n\nerr := service.ListenForCustomConfigChanges(&serviceConfig.AppCustom, \"AppCustom\", ProcessConfigUpdates)\nif err != nil {\nlogger.Errorf(\"unable to watch custom writable configuration: %s\", err.Error())\n}\n\n...\n\nfunc (app *myApp) ProcessConfigUpdates(rawWritableConfig interface{}) {\nupdated, ok := rawWritableConfig.(*config.AppCustomConfig)\nif !ok {\n...\nreturn\n}\n\nprevious := app.serviceConfig.AppCustom\napp.serviceConfig.AppCustom = *updated\n\nif reflect.DeepEqual(previous, *updated) {\nlogger.Info(\"No changes detected\")\nreturn\n}\n\nif previous.SomeValue != updated.SomeValue {\nlogger.Infof(\"AppCustom.SomeValue changed to: %d\", updated.SomeValue)\n}\nif previous.ResourceNames != updated.ResourceNames {\nlogger.Infof(\"AppCustom.ResourceNames changed to: %s\", updated.ResourceNames)\n}\nif !reflect.DeepEqual(previous.SomeService, updated.SomeService) {\nlogger.Infof(\"AppCustom.SomeService changed to: %v\", updated.SomeService)\n}\n}\n
See the App Service Template for a complete example of using Structured Custom Configuration.
"},{"location":"microservices/application/ApplicationServiceAPI/#function-pipeline-apis","title":"Function Pipeline APIs","text":"The following ApplicationService
APIs allow your service to set the Functions Pipeline and to start and stop it.
type AppFunction = func(appCxt AppFunctionContext, data interface{}) (bool, interface{})
This type defines the signature that all pipeline functions must implement.
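A minimal custom pipeline function that satisfies this signature might look like the following sketch (the type and function names are illustrative):
type sample struct{}\n\nfunc (s *sample) LogEventCount(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\nevent, ok := data.(dtos.Event)\nif !ok {\nreturn false, fmt.Errorf(\"LogEventCount: type received is not a dtos.Event\")\n}\nctx.LoggingClient().Infof(\"event %s has %d readings\", event.Id, len(event.Readings))\nreturn true, event // pass the event on to the next function in the pipeline\n}\n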
"},{"location":"microservices/application/ApplicationServiceAPI/#functionpipeline","title":"FunctionPipeline","text":"This type defines the struct that contains the metadata for a functions pipeline instance.
type FunctionPipeline struct {\nId string\nTransforms []AppFunction\nTopic string\nHash string\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#setdefaultfunctionspipeline","title":"SetDefaultFunctionsPipeline","text":"SetDefaultFunctionsPipeline(transforms ...AppFunction) error
This API sets the default functions pipeline with the specified list of Application Functions. This pipeline is executed for all messages received from the configured trigger. Note that the functions are executed in the order provided in the list. An error is returned if the list is empty.
Example - SetDefaultFunctionsPipeline
sample := functions.NewSample()\nerr = service.SetDefaultFunctionsPipeline(\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\napp.lc.Errorf(\"SetDefaultFunctionsPipeline returned error: %s\", err.Error())\nreturn -1\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#addfunctionspipelinefortopics","title":"AddFunctionsPipelineForTopics","text":"AddFunctionsPipelineForTopics(id string, topics []string, transforms ...AppFunction) error
This API adds a functions pipeline with the specified unique ID and list of functions (transforms) to be executed when the received topic matches one of the specified pipeline topics. See the Pipeline Per Topic section for more details.
Example - AddFunctionsPipelineForTopics
sample := functions.NewSample()\nerr = service.AddFunctionsPipelineForTopics(\"Floats-Pipeline\", []string{\"edgex/events/+/+/Random-Float-Device/#\"},\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\n...\nreturn -1\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#loadconfigurablefunctionpipelines","title":"LoadConfigurableFunctionPipelines","text":"LoadConfigurableFunctionPipelines() (map[string]FunctionPipeline, error)
This API loads the function pipelines (default and per topic) from configuration. An error is returned if the configuration is not valid, i.e. missing required function parameters, invalid function name, etc.
Note
This API is only useful if the pipeline is always defined in configuration, as is the case with App Service Configurable.
Example - LoadConfigurableFunctionPipelines
configuredPipelines, err := service.LoadConfigurableFunctionPipelines()\nif err != nil {\n...\nos.Exit(-1)\n}\n\n...\n\nfor _, pipeline := range configuredPipelines {\nswitch pipeline.Id {\ncase interfaces.DefaultPipelineId:\nif err = service.SetDefaultFunctionsPipeline(pipeline.Transforms...); err != nil {\n...\nos.Exit(-1)\n}\ndefault:\nif err = service.AddFunctionsPipelineForTopics(pipeline.Id, []string{pipeline.Topic}, pipeline.Transforms...); err != nil {\n...\nos.Exit(-1)\n}\n}\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#removeallfunctionpipelines","title":"RemoveAllFunctionPipelines","text":"RemoveAllFunctionPipelines()
This API removes all existing functions pipelines previously added via SetDefaultFunctionsPipeline
, AddFunctionsPipelineForTopics
or LoadConfigurableFunctionPipelines
Run() error
This API starts the configured trigger to allow the Functions Pipeline to execute when the trigger receives data. The internal webserver is also started. This is a long-running API which does not return until the service is stopped or Stop() is called. An error is returned if the trigger cannot be created or initialized or if the internal webserver encounters an error.
Example - Run
if err := service.Run(); err != nil {\nlogger.Errorf(\"Run returned error: %s\", err.Error())\nos.Exit(-1)\n}\n\n// Do any required cleanup here, if needed\n\nos.Exit(0)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#stop","title":"Stop","text":"Stop()
This API stops the configured trigger so that the functions pipeline no longer executes. The internal webserver continues to accept requests. See Stopping the Service advanced topic for more details
Example - Stop
service.Stop()\n...\n
"},{"location":"microservices/application/ApplicationServiceAPI/#secrets-apis","title":"Secrets APIs","text":"The following ApplicationService
APIs allow your service to retrieve and store secrets from/to the service's SecretStore. See the Secrets advanced topic for more details about using secrets.
SecretProvider() interfaces.SecretProvider
This API returns reference to the SecretProvider instance. See Secret Provider API section for more details.
"},{"location":"microservices/application/ApplicationServiceAPI/#client-apis","title":"Client APIs","text":"The following ApplicationService
APIs allow your service to access the various EdgeX clients and their APIs.
LoggingClient() logger.LoggingClient
This API returns the LoggingClient instance which the service uses to log messages. See the LoggingClient interface for more details.
Example - LoggingClient
service.LoggingClient().Info(\"Hello World\")\nservice.LoggingClient().Errorf(\"Some error occurred: %w\", err)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#registryclient","title":"RegistryClient","text":"RegistryClient() registry.Client
This API returns the Registry Client. Note the registry must be enabled, otherwise this will return nil. See the Registry Client interface for more details. Useful if the service needs to add additional health checks or needs to get the endpoint of another registered service.
"},{"location":"microservices/application/ApplicationServiceAPI/#eventclient","title":"EventClient","text":"EventClient() interfaces.EventClient
This API returns the Event Client. Note if Core Data is not specified in the Clients configuration, this will return nil. See the Event Client interface for more details. Useful for adding, deleting or querying Events.
"},{"location":"microservices/application/ApplicationServiceAPI/#commandclient","title":"CommandClient","text":"CommandClient() interfaces.CommandClient
This API returns the Command Client. Note if Core Command is not specified in the Clients configuration, this will return nil. See the Command Client interface for more details. Useful for issuing commands to devices.
"},{"location":"microservices/application/ApplicationServiceAPI/#notificationclient","title":"NotificationClient","text":"NotificationClient() interfaces.NotificationClient
This API returns the Notification Client. Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Notification Client interface for more details. Useful for sending notifications.
"},{"location":"microservices/application/ApplicationServiceAPI/#subscriptionclient","title":"SubscriptionClient","text":"SubscriptionClient() interfaces.SubscriptionClient
This API returns the Subscription client. Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Subscription Client interface for more details. Useful for creating notification subscriptions.
"},{"location":"microservices/application/ApplicationServiceAPI/#deviceserviceclient","title":"DeviceServiceClient","text":"DeviceServiceClient() interfaces.DeviceServiceClient
This API returns the Device Service Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Service Client interface for more details. Useful for querying information about a Device Service.
"},{"location":"microservices/application/ApplicationServiceAPI/#deviceprofileclient","title":"DeviceProfileClient","text":"DeviceProfileClient() interfaces.DeviceProfileClient
This API returns the Device Profile Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Profile Client interface for more details. Useful for querying information about a Device Profile such as Device Resource details.
"},{"location":"microservices/application/ApplicationServiceAPI/#deviceclient","title":"DeviceClient","text":"DeviceClient() interfaces.DeviceClient
This API returns the Device Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Client interface for more details. Useful for querying list of devices for a specific Device Service or Device Profile.
"},{"location":"microservices/application/ApplicationServiceAPI/#background-publisher-apis","title":"Background Publisher APIs","text":"The following ApplicationService
APIs allow Application Services to have background publishers. See the Background Publishing advanced topic for more details and example.
AddBackgroundPublisher(capacity int) (BackgroundPublisher, error)
This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the Edgex MessageBus.
"},{"location":"microservices/application/ApplicationServiceAPI/#addbackgroundpublisherwithtopic-deprecated","title":"AddBackgroundPublisherWithTopic DEPRECATED","text":"AddBackgroundPublisherWithTopic(capacity int, topic string) (BackgroundPublisher, error)
This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the Edgex MessageBus on the specified topic.
"},{"location":"microservices/application/ApplicationServiceAPI/#buildcontext","title":"BuildContext","text":"BuildContext(correlationId string, contentType string) AppFunctionContext
This API allows external callers that may need a context (e.g. background publishers) to easily create one.
"},{"location":"microservices/application/ApplicationServiceAPI/#other-apis","title":"Other APIs","text":""},{"location":"microservices/application/ApplicationServiceAPI/#addroute-deprecated","title":"AddRoute (Deprecated)","text":"AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error
This API is deprecated in favor of AddCustomRoute()
which has an explicit parameter to indicate whether the route should require authentication.
AddCustomRoute(route string, authentication Authentication, handler echo.HandlerFunc, methods ...string) error
This API adds a custom REST route to the application service's internal webserver. If the route is marked authenticated, it will require an EdgeX JWT when security is enabled. A reference to the ApplicationService is added to the context that is passed to the handler, which can be retrieved using the AppService
key. See Custom REST Endpoints advanced topic for more details and example.
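A hedged sketch of adding an authenticated route (the path and handler are illustrative; this assumes the SDK's interfaces.Authenticated constant for the Authentication parameter):
err := service.AddCustomRoute(\"/myhealth\", interfaces.Authenticated, func(c echo.Context) error {\nreturn c.String(http.StatusOK, \"OK\")\n}, http.MethodGet)\nif err != nil {\nservice.LoggingClient().Errorf(\"unable to add custom route: %v\", err)\n}\n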
AppContext() context.Context
This API returns the application service context used to detect cancelled context when the service is terminating. Used by custom app service to appropriately exit any long running functions.
"},{"location":"microservices/application/ApplicationServiceAPI/#requesttimeout","title":"RequestTimeout","text":"RequestTimeout() time.Duration
This API returns the parsed value for the Service.RequestTimeout
configuration setting. The setting is parsed on start-up so that any error is caught then.
Example - RequestTimeout
Service:\n...\nRequestTimeout: \"60s\"\n...\n
timeout := service.RequestTimeout()\n
"},{"location":"microservices/application/ApplicationServiceAPI/#registercustomtriggerfactory","title":"RegisterCustomTriggerFactory","text":"RegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error
This API registers a trigger factory for a custom trigger to be used. See the Custom Triggers section for more details and example.
"},{"location":"microservices/application/ApplicationServiceAPI/#registercustomstorefactory","title":"RegisterCustomStoreFactory","text":"RegisterCustomStoreFactory(name string, factory func(cfg DatabaseInfo, cred config.Credentials) (StoreClient, error)) error
This API registers a factory to construct a custom store client for the store & forward loop.
"},{"location":"microservices/application/ApplicationServiceAPI/#metricsmanager","title":"MetricsManager","text":"MetricsManager() bootstrapInterfaces.MetricsManager
This API returns the Metrics Manager used to register counter, gauge, gaugeFloat64 or timer metric types from github.com/rcrowley/go-metrics
myCounterMetricName := \"MyCounter\"\nmyCounter := gometrics.NewCounter()\nmyTags := map[string]string{\"Tag1\":\"Value1\"}\napp.service.MetricsManager().Register(myCounterMetricName, myCounter, myTags)
"},{"location":"microservices/application/ApplicationServiceAPI/#publish","title":"Publish","text":"Publish(data any) error
This API pushes data to the EdgeX MessageBus using configured topic and returns an error if the EdgeX MessageBus is disabled in configuration
"},{"location":"microservices/application/ApplicationServiceAPI/#publishwithtopic","title":"PublishWithTopic","text":"PublishWithTopic(topic string, data any) error
This API pushes data to the EdgeX MessageBus using a given topic and returns an error if the EdgeX MessageBus is disabled in configuration
"},{"location":"microservices/application/ApplicationServices/","title":"Application Services Overview","text":"Application Services are a means to get data from EdgeX Foundry to be processed at the edge and/or sent to external systems (be it analytics package, enterprise or on-prem application, cloud systems like Azure IoT, AWS IoT, or Google IoT Core, etc.). Application Services provide the means for data to be prepared (transformed, enriched, filtered, etc.) and groomed (formatted, compressed, encrypted, etc.) before being sent to an endpoint of choice or published back to other Application Service to consume. The export endpoints supported out of the box today include HTTP and MQTT endpoints, but custom endpoints can be implemented along side the existing functionality.
Application Services are based on the idea of a \"Functions Pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event/reading messages) in the order that you've specified. Triggers seed the first function in the pipeline with the data received by the Application Service. A trigger is something like a message landing in a watched message queue. The most commonly used Trigger is the MessageBus Trigger. See the Triggers section for more details
An Application Functions Software Development Kit (or App Functions SDK
) is available to help create Application Services. Currently the only SDK supported language is Golang, with the intention that community developed and supported SDKs may come in the future for other languages. The SDK is available as a Golang module to remain operating system (OS) agnostic and to comply with the latest EdgeX guidelines on dependency management.
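A minimal sketch of the imports a custom service typically needs when using the SDK as a Go module (paths shown are for the 3.x module line):
import (\n\"github.com/edgexfoundry/app-functions-sdk-go/v3/pkg\"\n\"github.com/edgexfoundry/app-functions-sdk-go/v3/pkg/interfaces\"\n)\n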
Any application built on top of the Application Functions SDK is considered an App Service. This SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a pipeline.
"},{"location":"microservices/application/ApplicationServices/#standard-functions","title":"Standard Functions","text":"As mentioned, an Application Service is a function pipeline. The SDK provides some standard functions that can be used in a functions pipeline. In the future, additional functions will be provided \"standard\" or in other words provided with the SDK. Additionally, developers can implement their own custom functions and add those to their Application Service functions pipeline.
One of the most common use cases for working with data that comes from the MessageBus is to filter data down to what is relevant for a given application and to format it. To help facilitate this, six primary functions are included in the SDK.
FilterByProfileName
function which will remove events that do or do not match the configured ProfileNames
and execution of the pipeline will cease if no event remains after filtering. FilterByDeviceName
function which will remove events that do or do not match the configured DeviceNames
and execution of the pipeline will cease if no event remains after filtering. FilterBySourceName
function which will remove events that do or do not match the configured SourceNames
and execution of the pipeline will cease if no event remains after filtering. A SourceName
is the name of the source (command or resource) that the Event was created from. FilterByResourceName
which exhibits the same behavior as FilterByDeviceName
except filtering the event's Readings
on ResourceName
instead of DeviceName
. Execution of the pipeline will cease if no readings remain after filtering. XMLTransform
or JSONTransform
.Typically, after filtering and transforming the data as needed, exporting is the last step in a pipeline to ship the data where it needs to go. There are three primary functions included in the SDK to help facilitate this. The first are theHTTPPost/HTTPPut
functions that will POST/PUT the provided data to a specified endpoint, and the third is an MQTTSecretSend()
function that will publish the provided data to an MQTT Broker as specified in the configuration.
See Built-in Functions section for full list of SDK supplied functions
Note
The App SDK provides much more functionality than just filtering, formatting and exporting. The above simple example is provided to demonstrate how the functions pipeline works. With the ability to write your custom pipeline functions, your custom application services can do what ever your use case demands.
There are three primary triggers that have been included in the SDK that initiate the start of the function pipeline. First is the HTTP Trigger via a POST to the endpoint /api/v3/trigger
with the EdgeX Event data as the body. Second is the EdgeX MessageBus Trigger with connection details as specified in the configuration and the third is the External MQTT Trigger with connection details as specified in the configuration. See the Triggers section for the full list of available Triggers
Finally, data may be sent back to the Trigger response by calling .SetResponseData()
on the context. If the trigger is HTTP, then it will be an HTTP Response. If the trigger is EdgeX MessageBus, then it will be published to the configured host and publish topic. If the trigger is External MQTT, then it will be published to the configured publish topic.
All pipeline functions define a type and a factory function which is used to initialize an instance of the type with the required options. The instances returned by these factory functions give access to their appropriate pipeline function pointers when setting up the function pipeline.
Example
NewFilterFor([] {\"Device1\", \"Device2\"}).FilterByDeviceName\n
"},{"location":"microservices/application/BuiltIn/#batching","title":"Batching","text":"Included in the SDK is an in-memory batch function that will hold on to your data before continuing the pipeline. There are three functions provided for batching each with their own strategy.
Factory Method Description NewBatchByTime(timeInterval string) This function returns aBatchConfig
instance with time being the strategy that is used for determining when to release the batched data and continue the pipeline. timeInterval
is the duration to wait (i.e. 10s
). The time begins after the first piece of data is received. If no data has been received no data will be sent forward. NewBatchByCount(batchThreshold int) This function returns a BatchConfig
instance with count being the strategy that is used for determining when to release the batched data and continue the pipeline. batchThreshold
is how many events to hold on to (i.e. 25
). The count begins after the first piece of data is received and once the threshold is met, the batched data will continue forward and the counter will be reset. NewBatchByTimeAndCount(timeInterval string, batchThreshold int) This function returns a BatchConfig
instance with a combination of both time and count being the strategy that is used for determining when to release the batched data and continue the pipeline. Whichever occurs first will trigger the data to continue and be reset. Examples
NewBatchByTime(\"10s\").Batch\nNewBatchByCount(10).Batch\nNewBatchByTimeAndCount(\"30s\", 10).Batch\n
Property Description IsEventData The IsEventData
flag, when true, lets this function know that the data being batched is Events
and to un-marshal the data a []Event
prior to returning the batched data. MergeOnSend The MergeOnSend
flag, when true, will merge the [][]byte
data to a single[]byte
prior to sending the data to the next function in the pipeline. Batch with IsEventData
flag set to true.
batch := NewBatchByTimeAndCount(\"30s\", 10)\nbatch.IsEventData = true\n...\nbatch.Batch\n
Batch with MergeOnSend
flag set to true.
batch := NewBatchByTimeAndCount(\"30s\", 10)\nbatch.MergeOnSend = true\n...\nbatch.Batch\n
"},{"location":"microservices/application/BuiltIn/#batch","title":"Batch","text":"Batch
- This pipeline function will apply the selected strategy in your pipeline. By default the batched data returned by this function is [][]byte
. This is because this function doesn't need to know the type of the individual items batched. It simply marshals the items to JSON if the data isn't already a []byte
.
Warning
Keep memory usage in mind as you determine the thresholds for both time and count. The larger they are, the more memory is required, which could lead to performance issues.
"},{"location":"microservices/application/BuiltIn/#compression","title":"Compression","text":"There are two compression types included in the SDK that can be added to your pipeline. These transforms return a []byte
.
Compression
instance that is used to access the compression functions."},{"location":"microservices/application/BuiltIn/#gzip","title":"GZIP","text":"CompressWithGZIP
- This pipeline function receives either a string
,[]byte
, or json.Marshaler
type, GZIP compresses the data, converts result to base64 encoded string, which is returned as a []byte
to the pipeline.
Example
NewCompression().CompressWithGZIP\n
"},{"location":"microservices/application/BuiltIn/#zlib","title":"ZLIB","text":"CompressWithZLIB
- This pipeline function receives either a string
,[]byte
, or json.Marshaler
type, ZLIB compresses the data, converts result to base64 encoded string, which is returned as a []byte
to the pipeline.
Example
NewCompression().CompressWithZLIB\n
"},{"location":"microservices/application/BuiltIn/#conversion","title":"Conversion","text":"There are two conversions included in the SDK that can be added to your pipeline. These transforms return a string
.
Conversion
instance that is used to access the conversion functions."},{"location":"microservices/application/BuiltIn/#json","title":"JSON","text":"TransformToJSON
- This pipeline function receives an dtos.Event
type and converts it to JSON format and returns the JSON string to the pipeline.
Example
NewConversion().TransformToJSON\n
"},{"location":"microservices/application/BuiltIn/#xml","title":"XML","text":"TransformToXML
- This pipeline function receives an dtos.Event
type, converts it to XML format and returns the XML string to the pipeline.
Example
NewConversion().TransformToXML\n
"},{"location":"microservices/application/BuiltIn/#event","title":"Event","text":"This enables the ability to wrap data into an Event/Reading
Factory Method Description NewEventWrapperSimpleReading(profileName string, deviceName string, resourceName string, valueType string) This factory function returns anEventWrapper
instance configured to push a Simple
reading. TheEventWrapper
instance returned is used to access core data functions. NewEventWrapperBinaryReading(profileName string, deviceName string, resourceName string, mediaType string) This factory function returns an EventWrapper
instance configured to push a Binary
reading. The EventWrapper
instance returned is used to access core data functions. NewEventWrapperObjectReading(profileName string, deviceName string, resourceName string) This factory function returns an EventWrapper
instance configured to push an Object
reading. The EventWrapper
instance returned is used to access core data functions."},{"location":"microservices/application/BuiltIn/#wrap-into-event","title":"Wrap Into Event","text":"WrapIntoEvent
- This pipeline function provides the ability to Wrap data in an Event/Reading. The data passed into this function from the pipeline is wrapped in an EdgeX Event with the Event and Reading metadata specified from the factory function options. The function returns the new EdgeX Event with ID populated.
Example
NewEventWrapperSimpleReading(\"my-profile\", \"my-device\", \"my-resource\", \"string\").Wrap\n
"},{"location":"microservices/application/BuiltIn/#data-protection","title":"Data Protection","text":"There are two transforms included in the SDK that can be added to your pipeline for data protection.
"},{"location":"microservices/application/BuiltIn/#aesprotection","title":"AESProtection","text":"Factory Method Description NewAESProtection(secretName string, secretValueKey string) This function returns aEncryption
instance initialized with the passed in secretName
and secretValueKey
It requires a 64-byte key from secrets which is split in half, the first half used for encryption, the second for generating the signature.
Encrypt
: This pipeline function receives either a string
, []byte
, or json.Marshaller
type and encrypts it using AES256 encryption, signs it with a SHA512 hash and returns a []byte
to the pipeline of the following form:
Example
transforms.NewAESProtection(secretName, secretValueKey).Encrypt(ctx, data)\n
Note
The Algorithm
used with app-service-configurable configuration to access this transform is AES256
Reading data protected with this function is a multi step process:
Signing Hash Validation
def hash(cipher_hex, key):\n # Extract the 32 bytes of the Hash signature from the end of the cipher_hex\n extract_hash = cipher_hex[-64:]\n\n # last 32 bytes of the 64 byte key used by the encrypt function (2 hex digits per byte)\n private_key = key[-64:]\n # IV & ciphertext\n content = cipher_hex[:-64]\n\n hash_text = hmac.new(key=bytes.fromhex(private_key), msg=(bytes.fromhex(content) + bytearray(8)), digestmod='SHA512')\n\n # Calculated tag is only the the first 32 bytes of the resulting SHA512\n calculated_hash = hash_text.hexdigest()[:64]\n\n if extract_hash == calculated_hash:\n return \"true\"\n else:\n return \"false\", extract_hash, calculated_hash\n
If the signing hash can be validated, the message is OK to decrypt
Payload Decryption
def decrypt(cipher_hex, key):\n # first 32 bytes of the 64 byte key used by the encrypt function (2 hex digits per byte)\n private_key = bytes.fromhex(key[:64])\n\n # Extract the cipher text (remaining bytes in the middle)\n cipher_text = cipher_hex[32:]\n cipher_text = bytes.fromhex(cipher_text[:-64])\n\n # Extract the 16 bytes of initial vector from the beginning of the data\n iv = bytes.fromhex(cipher_hex[:32])\n\n # Decrypt\n cipher = AES.new(private_key, AES.MODE_CBC, iv)\n\n plain_pad = cipher.decrypt(cipher_text)\n unpadded = Padding.unpad(plain_pad, AES.block_size)\n\n return unpadded.decode('utf-8')\n
"},{"location":"microservices/application/BuiltIn/#export","title":"Export","text":"There are two export functions included in the SDK that can be added to your pipeline.
"},{"location":"microservices/application/BuiltIn/#http-export","title":"HTTP Export","text":"Factory Method Description NewHTTPSender(url string, mimeType string, persistOnError bool) This factory function returns aHTTPSender
instance initialized with the passed in url, mime type and persistOnError values. NewHTTPSenderWithSecretHeader(url string, mimeType string, persistOnError bool, headerName string, secretName string, secretValueKey string) This factory function returns a HTTPSender
instance similar to the above function however will set up the HTTPSender
to add a header to the HTTP request using the headerName
for the field name and the secretName
and secretValueKey
to pull the header field value from the Secret Store. NewHTTPSenderWithOptions(options HTTPSenderOptions) This factory function returns a HTTPSender
using the passed in options
to configure it. // HTTPSenderOptions contains all options available to the sender\ntype HTTPSenderOptions struct {\n// URL of destination\nURL string\n// MimeType to send to destination\nMimeType string\n// PersistOnError enables use of store & forward loop if true\nPersistOnError bool\n// HTTPHeaderName to use for passing configured secret\nHTTPHeaderName string\n// SecretName to search for configured secret\nSecretName string\n// SecretValueKey is the key for configured secret data\nSecretValueKey string\n// URLFormatter specifies custom formatting behavior to be applied to configured URL.\n// If nothing specified, default behavior is to attempt to replace placeholders in the\n// form '{some-context-key}' with the values found in the context storage.\nURLFormatter StringValuesFormatter\n// ContinueOnSendError allows execution of subsequent chained senders after errors if true\nContinueOnSendError bool\n// ReturnInputData enables chaining multiple HTTP senders if true\nReturnInputData bool\n}\n
"},{"location":"microservices/application/BuiltIn/#http-post","title":"HTTP POST","text":"HTTPPost
- This pipeline function receives either a string
, []byte
, or json.Marshaler
type from the previous function in the pipeline and posts it to the configured endpoint and returns the HTTP response. If no previous function exists, then the event that triggered the pipeline, marshaled to json, will be used. If the post fails and persistOnError=true
and Store and Forward
is enabled, the data will be stored for later retry. See Store and Forward for more details. If ReturnInputData=true
the function will return the data that it received instead of the HTTP response. This allows the following function in the pipeline to be another HTTP Export which receives the same data but is configured to send to a different endpoint. When chaining for multiple HTTP Exports you need to decide how to handle errors. Do you want to stop execution of the pipeline or continue so that the next HTTP Export function can attempt to export to its endpoint. This is where ContinueOnSendError
comes in. If set to true
the error is logged and the function returns the received data for the next function to use. ContinueOnSendError=true
can only be used when ReturnInputData=true
and cannot be used when PersistOnError=true
.
Example
POST NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPost
PUT NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPut
POST with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPost
PUT with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPut
"},{"location":"microservices/application/BuiltIn/#http-put","title":"HTTP PUT","text":"HTTPPut
- This pipeline function operates the same as HTTPPost
but uses the PUT
method rather than POST
.
The configured URL is dynamically formatted prior to the POST/PUT request. The default formatter (used if URLFormatter
is nil) simply replaces any placeholder text, {key-name}
, in the configured URL with matching values from the new Context Storage
. An error will occur if a specified placeholder does not exist in the Context Storage
. See the Context Storage documentation for more details on seeded values and storing your own values.
The URLFormatter
option allows you to override the default formatter with your own custom URL formatting scheme.
Example
Export the Events to different endpoints based on their device name Url=\"http://myhost.com/edgex-events/{devicename}\"
Example
httpRequestHeaders := map[string]string{ \"Connection\": \"keep-alive\", \"From\": \"user@example.com\" }\nSetHttpRequestHeaders(httpRequestHeaders)
MQTTSecretSender
instance initialized with the options specified in the MQTTSecretConfig
and persistOnError
. NewMQTTSecretSenderWithTopicFormatter(mqttConfig MQTTSecretConfig, persistOnError bool, topicFormatter StringValuesFormatter) This factory function returns a MQTTSecretSender
instance initialized with the options specified in the MQTTSecretConfig
, persistOnError
and topicFormatter
. See Topic Formatting below for more details. type MQTTSecretConfig struct {\n// BrokerAddress should be set to the complete broker address i.e. mqtts://mosquitto:8883/mybroker\nBrokerAddress string\n// ClientId to connect with the broker with.\nClientId string\n// The name of the secret in secret provider to retrieve your secrets\nSecretName string\n// AutoReconnect indicated whether or not to retry connection if disconnected\nAutoReconnect bool\n// KeepAlive is the interval duration between client sending keepalive ping to broker\nKeepAlive string\n// ConnectTimeout is the duration for timing out on connecting to the broker\nConnectTimeout string\n// Topic that you wish to publish to\nTopic string\n// QoS for MQTT Connection\nQoS byte\n// Retain setting for MQTT Connection\nRetain bool\n// SkipCertVerify\nSkipCertVerify bool\n// AuthMode indicates what to use when connecting to the broker. \n// Options are \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\".\n// If a CA Cert exists in the SecretName data then it will be used for \n// all modes except \"none\". \nAuthMode string\n}\n
Secrets in the Secret Store may be located at any SecretName; however, they must have some or all of the following keys in the secret data:
username
- username to connect to the brokerpassword
- password used to connect to the brokerclientkey
- client private key in PEM formatclientcert
- client cert in PEM formatcacert
- ca cert in PEM formatThe AuthMode
setting you choose depends on what secret values above are used. For example, if \"none\" is specified as auth mode all keys will be ignored. Similarly, if AuthMode
is set to \"clientcert\" username and password will be ignored.
The configured Topic is dynamically formatted prior to publishing. The default formatter (used if topicFormatter
is nil) simply replaces any placeholder text, {key-name}
, in the configured Topic
with matching values from the new Context Storage
. An error will occur if a specified placeholder does not exist in the Context Storage
. See the Context Storage documentation for more details on seeded values and storing your own values.
The topicFormatter
option allows you to override the default formatter with your own custom topic formatting scheme.
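A sketch of configuring the MQTT export in code (the broker address, secret name and topic are illustrative; this assumes the sender exposes an MQTTSend pipeline function):
mqttConfig := transforms.MQTTSecretConfig{\nBrokerAddress: \"mqtts://broker.example:8883\",\nClientId: \"app-my-service\",\nSecretName: \"mqtt\",\nTopic: \"edgex/export/{devicename}\",\nQoS: 0,\nAutoReconnect: true,\nAuthMode: \"usernamepassword\",\n}\n\nsender := transforms.NewMQTTSecretSender(mqttConfig, false)\n// sender.MQTTSend is then added as the export step of the functions pipeline\n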
There are four basic types of filtering included in the SDK to add to your pipeline. There is also an option to Filter Out
specific items. These provided filter functions return a type of dtos.Event
. If filtering results in no remaining data, the pipeline execution for that pass is terminated. If no values are provided for filtering, then data flows through unfiltered.
Filter
instance initialized with the passed in filter values with FilterOut
set to false
. This Filter
instance is used to access the following filter functions that will operate using the specified filter values. NewFilterOut([]string filterValues) This factory function returns a Filter
instance initialized with the passed in filter values with FilterOut
set to true
. This Filter
instance is used to access the following filter functions that will operate using the specified filter values. type Filter struct {\n// Holds the values to be filtered\nFilterValues []string\n// Determines if items in FilterValues should be filtered out. If set to true all items found in the filter will be removed. If set to false all items found in the filter will be returned. If FilterValues is empty then all items will be returned.\nFilterOut bool\n}\n
Note
Either strings or regular expressions are accepted as filter values.
"},{"location":"microservices/application/BuiltIn/#by-profile-name","title":"By Profile Name","text":"FilterByProfileName
- This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified profile names.
Example
NewFilterFor([] {\"Profile1\", \"Profile2\"}).FilterByProfileName\n\nNewFilterFor([] {\"Profile[0-9]+\"}).FilterByProfileName\n
"},{"location":"microservices/application/BuiltIn/#by-device-name","title":"By Device Name","text":"FilterByDeviceName
- This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified device names.
Example
NewFilterFor([] {\"(Device)1, Device2\"}).FilterByDeviceName\n\nNewFilterFor([] {\"(Device)[0-9]+\"}).FilterByDeviceName\n
"},{"location":"microservices/application/BuiltIn/#by-source-name","title":"By Source Name","text":"FilterBySourceName
- This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified source names. Source name is either the resource name
or command name
responsible for the Event creation.
Example
NewFilterFor([] {\"Source1\", \"Source2\"}).FilterBySourceName\n\nNewFilterFor([] {\"Source[0-9]+\"}).FilterBySourceName\n
"},{"location":"microservices/application/BuiltIn/#by-resource-name","title":"By Resource Name","text":"FilterByResourceName
- This pipeline function will filter the Event's reading data down to Readings that either have (For) or don't have (Out) the specified resource names. If the result of filtering is zero Readings remaining, the function terminates pipeline execution.
Example
NewFilterFor([] {\"Resource1\", \"Resource2\"}).FilterByResourceName\n\nNewFilterFor([] {\"Resource[0-9]+\"}).FilterByResourceName\n
"},{"location":"microservices/application/BuiltIn/#json-logic","title":"JSON Logic","text":"Factory Method Description NewJSONLogic(rule string) This factory function returns a JSONLogic
instance initialized with the passed in JSON rule. The rule passed in should be a JSON string conforming to the specification here: http://jsonlogic.com/operations.html."},{"location":"microservices/application/BuiltIn/#evaluate","title":"Evaluate","text":"Evaluate
- This is the pipeline function that will be used in the pipeline to apply the JSON rule to data coming in on the pipeline. If the condition of your rule is met, then the pipeline will continue and the data will continue to flow to the next function in the pipeline. If the condition of your rule is NOT met, then pipeline execution stops.
Example
NewJSONLogic(\"{ \\\"in\\\" : [{ \\\"var\\\" : \\\"device\\\" }, \n [\\\"Random-Integer-Device\\\",\\\"Random-Float-Device\\\"] ] }\").Evaluate\n
Note
Only operations that return true or false are supported. See http://jsonlogic.com/operations.html# for the complete list of operations paying attention to return values. Any operator that returns manipulated data is currently not supported. For more advanced scenarios checkout LF Edge eKuiper.
Tip
Leverage http://jsonlogic.com/play.html to get your rule right before implementing in code. JSON can be a bit tricky to get right in code with all the escaped double quotes.
"},{"location":"microservices/application/BuiltIn/#response-data","title":"Response Data","text":"There is one response data function included in the SDK that can be added to your pipeline.
Factory Method Description NewResponseData() This factory function returns aResponseData
instance that is used to access the following pipeline function below."},{"location":"microservices/application/BuiltIn/#content-type","title":"Content Type","text":"ResponseContentType
- This property is used to set the content-type of the response.
Example
responseData := NewResponseData()\nresponseData.ResponseContentType = \"application/json\"\n
"},{"location":"microservices/application/BuiltIn/#set-response-data","title":"Set Response Data","text":"SetResponseData
- This pipeline function receives either a string
,[]byte
, or json.Marshaler
type from the previous function in the pipeline and sets it as the response data that the pipeline returns to the configured trigger. If configured to use theEdgeXMessageBus
trigger, the data will be published back to the EdgeX MessageBus as determined by the configuration. Similar, if configured to use theExternalMQTT
trigger, the data will be published back to the external MQTT Broker as determined by the configuration. If configured to use HTTP
trigger the data is returned as the HTTP response.
Note
Calling SetResponseData()
and SetResponseContentType()
from the Context API in a custom function can be used in place of adding this function to your pipeline.
There is one Tags transform included in the SDK that can be added to your pipeline.
Factory Method Description NewTags(tagsmap[string]interface{}
) Tags This factory function returns a Tags
instance initialized with the passed in collection of generic tag key/value pairs. This Tags
instance is used to access the following Tags function that will use the specified collection of tag key/value pairs. This allows for generic complex types for the Tag values."},{"location":"microservices/application/BuiltIn/#add-tags","title":"Add Tags","text":"AddTags
- This pipeline function receives an Edgex Event
type and adds the collection of specified tags to the Event's Tags
collection.
Example
var myTags = map[string]interface{}{\n\"MyValue\" : 123,\n\"GatewayId\": \"HoustonStore000123\",\n\"Coordinates\": map[string]float32 {\n\"Latitude\": 29.630771,\n\"Longitude\": -95.377603,\n},\n}\n\nNewGenericTags(myTags).AddTags\n
"},{"location":"microservices/application/BuiltIn/#metricsprocessor","title":"MetricsProcessor","text":"MetricsProcessor
contains configuration and functions for processing the new dtos.Metrics
type.
`MetricsProcessor
instance initialized with the passed in collection of additionalTags
(name/value pairs). This MetricsProcessor
instance is used to access the following functions that will process a dtos.Metric instance. The additionalTags
are added as metric tags to the processed data. An error will be returned if any of the additionalTags
have an invalid name. Currently must be non-blank."},{"location":"microservices/application/BuiltIn/#tolineprotocol","title":"ToLineProtocol","text":"ToLineProtocol
- This pipeline function will transform the received dtos.Metric
to a Line Protocol
formatted string. See https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/ for details on the Line Protocol
syntax.
Note
When ToLineProtocol
is the first function in the functions pipeline, the TargetType
for the service must be set to &dtos.Metric{}
. See Target Type section for details on setting the service's TargetType
. The Trigger configuration must also be set so SubscribeTopics=\"edgex/telemetry/#\"
in order to receive the dtos.Metric
data from other services. See the new App Service Configurable metrics-influxdb
profile for an example.
Example
mp, err := NewMetricsProcessor(map[string]string{\"MyTag\":\"MyTagValue\"})\nif err != nil {\n... handle error\n}\n...\nmp.ToLineProtocol\n
Warning
Any service using the MetricsProcessor
needs to disable its own Telemetry reporting to avoid circular data generation from processing. To do this, set the service's Writable.Telemetry
configuration to:
[Writable.Telemetry]\nInterval = \"0s\" # Don't report any metrics as that would be cyclic processing.\n
"},{"location":"microservices/application/ErrorHandling/","title":"Pipeline Function Error Handling","text":"Each transform returns a true
or false
as part of the return signature. This is called the continuePipeline
flag and indicates whether the SDK should continue calling successive transforms in the pipeline.
return false, nil
will stop the pipeline and stop processing the event. This is useful, for example, when filtering on values and nothing matches the criteria you've filtered on. return false, error
, will stop the pipeline as well and the SDK will log the error you have returned. return true, nil
tells the SDK to continue, and will call the next function in the pipeline with your result. The SDK will return control back to main when receiving a SIGTERM/SIGINT event to allow for custom clean up.
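As a minimal sketch of these rules, the custom pipeline function below (the name and the empty-payload check are only illustrative, assuming the v3 interfaces package) uses all three return combinations.
import (\n\"errors\"\n\n\"github.com/edgexfoundry/app-functions-sdk-go/v3/pkg/interfaces\"\n)\n\n// DropEmptyPayloads illustrates the continuePipeline flag semantics.\nfunc DropEmptyPayloads(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\nif data == nil {\n// Stop the pipeline and report the problem; the SDK logs the returned error.\nreturn false, errors.New(\"DropEmptyPayloads: no data received\")\n}\n\npayload, ok := data.([]byte)\nif !ok || len(payload) == 0 {\n// Nothing matches the criteria: stop the pipeline without an error.\nreturn false, nil\n}\n\n// Continue; the SDK passes payload to the next function in the pipeline.\nreturn true, payload\n}\n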
"},{"location":"microservices/application/GeneralAppServiceConfig/","title":"Application Service Configuration","text":"Similar to other EdgeX services, configuration is first determined by the configuration.yaml
file in the /res
folder. Once loaded, any environment overrides are applied. If -cp
is passed to the application on startup, the SDK will leverage the specified configuration provider (e.g. Consul) to push the configuration into the provider and monitor Writable
configuration from there. You will find the configuration under the edgex/appservices/2.0/
key in the provider (e.g. Consul). On restart the service will pull the configuration from the provider and apply any environment overrides.
This section describes the configuration elements that are unique to Application Services.
Please first refer to the general Configuration documentation for configuration properties common across all EdgeX services.
Note
*
indicates the configuration value can be changed on the fly if using a configuration provider (like Consul). **
indicates the configuration value can be changed but the service must be restarted.
The tabs below provide additional entries in the Writable section which are applicable to Application Services.
Writable.StoreAndForwardWritable.PipelineWritable.InsecureSecretsWritable.TelemetryThe section configures the Store and Forward capability. Please refer to Store and Forward documentation for more details.
Configuration Default Value Description Enabled false* Indicates whether the Store and Forward capability is enabled or disabled RetryInterval \"5m\"* Indicates the duration of time to wait before retrying to forward failed data (the Forward in Store and Forward) MaxRetryCount 10* Indicates the maximum number of retries for failed data. The failed data is removed after the maximum retries has been exceeded. A value of 0
indicates endless retries. This section configures the Configurable Function Pipeline which is used only by App Service Configurable. Please refer to the App Service Configurable section for more details.
This section defines Insecure Secrets that are used when running in non-secure mode, i.e. when Vault isn't available. This is a dynamic map of configuration, so it can be empty if no secrets are used or can have as many or as few user-defined secrets as needed. It simulates a Secret Store in non-secure mode. Below are a few examples that are needed if using the indicated capabilities.
Configuration Default Value Description --- This section defines a block of insecure secrets for some service-specific need SecretName <name>
Indicates the location in the simulated Secret Store where the secret resides. SecretData --- This section is the collection of secret data. key
value
Secret data key/value pairs Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics Service metrics that the application service collects. Boolean value indicates if reporting of the metric is enabled. Custom metrics are also included here for custom application services that define custom metrics Metrics.MessagesReceived false Enable/disable reporting of the built-in MessagesReceived metric Metrics.InvalidMessagesReceived false Enable/disable reporting of the built-in InvalidMessagesReceived metric Metrics.HttpExportSize false Enable/disable reporting of the built-in HttpExportSize metric Metrics.MqttExportSize false Enable/disable reporting of the built-in MqttExportSize metric Metrics.PipelineMessagesProcessed false Enable/disable reporting of the built-in PipelineMessagesProcessed metric Metrics.PipelineProcessingErrors false Enable/disable reporting of the built-in PipelineProcessingErrors metric Metrics.PipelineMessageProcessingTime false Enable/disable reporting of the built-in PipelineMessageProcessingTime metric Metrics.<CustomMetric>
false (Service Specific) Enable/disable reporting of custom application service's custom metric. See Custom Application Service Metrics for more detail Tags <empty>
List of arbitrary service level tags to be included with every metric that is reported, e.g. Gateway=\"my-iot-gateway\"
"},{"location":"microservices/application/GeneralAppServiceConfig/#not-writable","title":"Not Writable","text":"The tabs below provide additional configuration which are applicable to Application Services that require the service to be restarted after value(s) are changed.
HttpServerClientsTriggerTrigger ExternalMqttThis section contains the configuration for the internal Webserver. Only needed if configuring the Webserver for HTTPS
certificate data
to use for HTTPS HTTPSKeyName blank** Indicates the key name in the HTTPS secret data that contains the key data
to use for HTTPS This service specific section defines the connection information for the EdgeX Clients and is the same as that used by all EdgeX services, just which clients are needed differs. Please refer to the Note about Clients section for more details.
This section defines the Trigger
for incoming data. See the Triggers documentation for more details on the inner working of triggers.
Trigger
binding type. Valid values are edgex-messagebus
, external-mqtt
, http
, or <custom>
SubscribeTopics events/#** Topic(s) to subscribe to. This is a comma separated list of topics. Supports filtering by subscribe topics. Only set when using edgex-messagebus
or external-mqtt
. See EdgeXMessageBus Trigger for more details. PublishTopic blank** Indicates the topic in which to publish the function pipeline response data, if any. Supports dynamic topic placeholders. Only set when using edgex-messagebus
or external-mqtt
. See EdgeXMessageBus Trigger for more details. This section defines the external MQTT Broker connect information. Only used for external-mqtt
trigger binding type
Note
external-mqtt
is not the default Trigger type, so there are no default values for ExternalMqtt
settings beyond those that the Go compiler gives to the empty struct. Some of those default values are not valid and must be specified, i.e. Authmode
tcp://localhost:1883
ClientId blank** ClientId to connect to the broker with ConnectTimeout blank** Time duration indicating how long to wait before timing out broker connection, i.e \"30s\" AutoReconnect false** Indicates whether or not to retry connection if disconnected KeepAlive 0** Seconds between client ping when no active data flowing to avoid client being disconnected. Must be greater then 2 QOS 0** Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain false** Retain setting for MQTT Connection SkipCertVerify false** Indicates if the certificate verification should be skipped SecretPath blank** Name of the path in secret provider to retrieve your secrets. Must be non-blank. AuthMode blank** Indicates what to use when connecting to the broker. Must be one of \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". If a CA Cert exists in the SecretPath then it will be used for all modes except \"none\". RetryDuration 600 Indicates how long (in seconds) to wait timing out on the MQTT client creation RetryInterval 5 Indicates the time (in seconds) that will be waited between attempts to create MQTT client Note
Authmode=cacert
is only needed when client authentication (e.g. usernamepassword
) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert.
[ApplicationSettings]
- Is used for custom application settings and is accessed via the ApplicationSettings() API. The ApplicationSettings API returns a map[string]string
containing the contents of the ApplicationSettings section of the configuration.yaml
file.
ApplicationSettings:\nApplicationName: \"My Application Service\"\n
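As a minimal sketch (assuming service is the ApplicationService instance created in main and lc is its logging client obtained from service.LoggingClient()), the setting from the example above can be read like this; the ApplicationName key is simply the one used in the example.
settings := service.ApplicationSettings() // returns map[string]string, nil if the section is missing\nif settings == nil {\nlc.Error(\"ApplicationSettings section not found in configuration\")\nreturn\n}\n\nappName, ok := settings[\"ApplicationName\"]\nif !ok {\nlc.Error(\"ApplicationName setting not found\")\nreturn\n}\nlc.Infof(\"Running %s\", appName)\n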
Custom Application Services can now define their own custom structured configuration section in the configuration.yaml
file. Any additional sections in the configuration file are ignored by the SDK when it parses the file for the SDK defined sections. See the Custom Configuration section of the SDK documentation for more details.
There are two flavors of Application Services, which are configurable
and custom
. This section will describe how and when each flavor should be used.
The App Functions SDK
has a full suite of built-in features that are accessible via configuration when using the App Service Configurable
service. This service is built using the App Functions SDK
and uses configuration profiles to define separate distinct instances of the service. The service comes with a few built-in profiles for common use cases, but custom profiles can also be used. If your use case needs can be met with the built-in functionality then the App Service Configurable
service is right for you. See the App Service Configurable section for more details.
Custom Application Services are needed when use case needs cannot be met with just the built-in functionality. This is when you must develop your own custom Application Service using the App Functions SDK
. Typically this is triggered by the use case needing a custom Pipeline Function
. See the App Functions SDK section for all the details on the features your custom Application Service can take advantage of.
To help accelerate the creation of your custom Application Service, the App Functions SDK
contains a template for new custom Application Services. This template has TODO's in the code and a README that walk you through the creation of your new custom Application Service. See the template README for more details.
Triggers
are common to both Configurable
and Custom
Application Services. They are the next logical area to get familiar with. See the Triggers section for more details.
Finally, service configuration is very important to understand for both Configurable
and Custom
Application Services. The service configuration documentation is broken into two parts. First is the configuration that is common to all EdgeX services and the second is the configuration that is specific to Application Services. See the Common Configuration and Application Service Configuration sections for more details.
Triggers determine how the App Functions Pipeline begins execution. The trigger is determined by the [Trigger]
configuration section in the configuration.yaml
file.
Edgex 2.0
For Edgex 2.0 the [Binding]
configuration section has been renamed to [Trigger]
. The [MessageBus]
section has been renamed to EdgexMessageBus
and moved under the [Trigger]
section. The [MqttBroker]
section has been renamed to ExternalMqtt
and moved under the [Trigger]
section.
There are 4 types of Triggers
supported in the App Functions SDK which are discussed in this document
An EdgeX MessageBus trigger will execute the pipeline every time data is received from the configured Edgex MessageBus SubscribeTopics
. The EdgeX MessageBus is the central message bus internal to EdgeX and has a specific message envelope that wraps all data published to this message bus.
There currently are four implementations of the EdgeX MessageBus available to be used. Two of these are available out of the box: Redis Pub/Sub
(default) and MQTT
. Additionally NATS (both core and JetStream) options can be made available with the build flag mentioned above. The implementation type is selected via the [Trigger.EdgexMessageBus]
configuration described below.
Example Trigger Configuration
Trigger:\nType: \"edgex-messagebus\"\n
In the above example Type
is set to edgex-messagebus
trigger type so data will be received from the EdgeX MessageBus and may be Published to the EdgeX MessageBus, if configured.
The SubscribeTopics configuration specifies the comma separated list of topics the service will subscribe to.
Note
The default SubscribeTopics
configuration is set in the App Services Common Trigger Configuration.
The PublishTopic configuration specifies the topic published to when the ResponseData
is set via the ctx.SetResponseData([]byte outputData)
API. Nothing will be published if the PublishTopic is not set or the ResponseData
is never set.
Note
The default PublishTopic
configuration is set in the App Services Common Trigger Configuration.
See the EdgeX MessageBus section for complete details.
Edgex 3.0
For Edgex 3.0 the MessageBus configuration settings are set in the Common MessageBus Configuration.
"},{"location":"microservices/application/Triggers/#filter-by-topics","title":"Filter By Topics","text":"App services now have the capability to filter by EdgeX MessageBus topics rather than using Filter functions in the functions pipeline. Filtering by topic is more efficient since the App Service never receives the data off the MessageBus. Core Data and/or Device Services now publish to multi-level topics that include the profilename
, devicename
and sourcename
. Sources are the commandname
or resourcename
that generated the Event. The publish topics now look like this:
# From Core Data\nedgex/events/core/<device-service>/<profile-name>/<device-name>/<source-name>\n\n# From Device Services\nedgex/events/device/<device-service>/<profile-name>/<device-name>/<source-name>\n
This, combined with an App Service's capability to have multiple subscriptions, allows for multiple filters by subscriptions. The SubscribeTopics
setting takes a comma separated list of subscribe topics.
Here are a few examples of how to configure the SubscribeTopics
setting under the Trigger.EdgexMessageBus.SubscribeHost
section to filter by subscriptions using the profile
, device
and source
names from the SNMP Device Service file here:
Trigger:\nSubscribeTopics: \"events/#\"\n
Trigger:\nSubscribeTopics: \"events/+/+/trendnet/#\"\n
Trigger:\nSubscribeTopics: \"edgex/events/+/+/+/trendnet01/#\"\n
Trigger:\nSubscribeTopics: \"edgex/events/+/+/+/trendnet01/#, edgex/events/+/+/+/trendnet02/#\"\n
Trigger:\nSubscribeTopics: \"edgex/events/+/+/+/+/Uptime, edgex/events/+/+/+/+/MacAddress\"\n
"},{"location":"microservices/application/Triggers/#external-mqtt-trigger","title":"External MQTT Trigger","text":"An External MQTT trigger will execute the pipeline every time data is received from an external MQTT broker on the configured SubscribeTopics
.
Note
The data received from the external MQTT broker is not wrapped with any metadata known to EdgeX. The data is handled as JSON or CBOR. The data is assumed to be JSON unless the first byte in the data is not a {
or a [
, in which case it is then assumed to be CBOR.
Note
The data received, encoded as JSON or CBOR, must match the TargetType
defined by your application service. The default TargetType
is an Edgex Event
. See TargetType for more details.
Example Trigger Configuration
Trigger:\nType: \"external-mqtt\"\nSubscribeTopics: \"external/#\"\nPublishTopic: \"\"\n...\n
The Type
is set to external-mqtt
. To receive data from the external MQTT Broker you must set your SubscribeTopics
to the appropriate topic(s) that the external publisher is using. You may also designate a PublishTopic
if you wish to publish data back to the external MQTT Broker. The Context function ctx.SetResponseData([]byte outputData)
stores the data to send back to the external MQTT Broker on the topic specified by the PublishTopic
setting.
The PublishTopic
can have placeholders. See the Publish Topic Placeholders section below for more details.
The other piece of configuration required is the MQTT Broker connection settings:
Trigger:\n...\nExternalMqtt:\nUrl: \"tls://test.mosquitto.org:8884\"\nClientId: \"app-external-mqtt-trigger\"\nQos: 0\nKeepAlive: 10\nRetained: false\nAutoReconnect: true\nConnectTimeout: \"30s\"\nSkipCertVerify: true\nAuthMode: \"clientcert\"\nSecretName: \"external-mqtt\"\nRetryDuration: 600\nRetryInterval: 5\n
"},{"location":"microservices/application/Triggers/#http-trigger","title":"HTTP Trigger","text":"Designating an HTTP trigger will allow the pipeline to be triggered by a RESTful POST
call to http://[host]:[port]/api/v3/trigger/
.
Example Trigger Configuration
Trigger:\nType: \"http\"
The Type=
is set to http
. This will enable listening to the api/v3/trigger/
endpoint. No other configuration is required. The Context function ctx.SetResponseData([]byte outputData)
stores the data to send back as the response to the requestor that originally triggered the HTTP Request.
Note
The HTTP trigger uses the content-type
from the HTTP Header to determine if the data is JSON or CBOR encoded and the optional X-Correlation-ID
to set the correlation ID for the request.
Note
The data received, encoded as JSON or CBOR, must match the TargetType
defined by your application service. The default TargetType
is an Edgex Event
. See TargetType for more details.
It is also possible to define your own trigger and register it with the SDK. You configure the trigger by registering a factory function to build it, along with a name to use in the config file. These triggers can be registered with:
service.RegisterCustomTriggerFactory(\"my-trigger-name\", myFactoryFunc)
Note
You can NOT override trigger names built into the SDK (\"edgex-messagebus\", \"external-mqtt\", or \"http\") for a custom trigger.
The trigger factory function is bound to an instance of a trigger configuration struct that is provided by the SDK:
type TriggerConfig struct {\nLogger logger.LoggingClient\nContextBuilder TriggerContextBuilder\nMessageReceived TriggerMessageHandler\nConfigLoader TriggerConfigLoader\n}\n
This type carries a pointer to the internal edgex logger, along with three functions:
ContextBuilder
builds an interfaces.AppFunctionContext
from a message envelope you construct.MessageReceived
exposes a function that sends your message envelope and context to any pipelines configured in the EdgeX service. It also takes a function that will be run to process the response for each successful pipeline.Note
The context passed in to Received
will be cloned for each pipeline configured to run. If a nil context is passed a new one will be initialized from the message.
ConfigLoader
exposes a function that loads your custom config struct. By default this is done from the primary EdgeX configuration pipeline, and only loads root-level elements.If you need to override these functions it can be done in the factory function registered with the service.
The custom trigger constructed here will then need to implement the trigger interface so that the SDK can invoke it:
type Trigger interface {\nInitialize(wg *sync.WaitGroup, ctx context.Context, background <-chan BackgroundMessage) (bootstrap.Deferred, error)\n}\n\ntype BackgroundMessage interface {\nMessage() types.MessageEnvelope\nTopic() string\n}\n
This leaves a lot of flexibility for how you want the trigger to behave (for example you could write a trigger to watch for file changes, or run on a timer). Below is a sample implementation of a trigger that reads lines from os.Stdin and passes the captured string through the EdgeX function pipeline. In this case the target type for the service is set to &[]byte{}
.
type stdinTrigger struct{\ntc appsdk.TriggerConfig\n}\n\nfunc (t *stdinTrigger) Initialize(wg *sync.WaitGroup, ctx context.Context, _ <-chan interfaces.BackgroundMessage) (bootstrap.Deferred, error) {\nmsgs := make(chan []byte)\n\nreceiveMessage := true\n\nresponseHandler := func(ctx AppFunctionContext, pipeline *FunctionPipeline) {\n// do stuff\n}\n\ngo func() {\nfmt.Print(\"> \")\nrdr := bufio.NewReader(os.Stdin)\nfor receiveMessage {\ns, err := rdr.ReadString('\\n')\ns = strings.TrimRight(s, \"\\n\")\n\nif err != nil {\nt.tc.Logger.Error(err.Error())\ncontinue\n}\n\nmsgs <- []byte(s)\n}\n}()\n\ngo func() {\nfor receiveMessage {\nselect {\ncase <-ctx.Done():\nreceiveMessage = false\n\ncase m := <-msgs:\ngo func() {\nenv := types.MessageEnvelope{\nPayload: m,\n}\n\nctx := t.tc.ContextBuilder(env)\n\nerr := t.tc.MessageReceived(ctx, env, responseHandler)\n\nif err != nil {\nt.tc.Logger.Error(err.Error())\n}\n}()\n}\n}\n}()\n\nreturn cancel, nil\n}\n
This trigger can then be registered by calling:
appService.RegisterCustomTriggerFactory(\"custom-stdin\", func(config appsdk.TriggerConfig) (appsdk.Trigger, error) {\nreturn &stdinTrigger{\ntc: config,\n}, nil\n})\n
"},{"location":"microservices/application/Triggers/#type-configuration_3","title":"Type Configuration","text":"Example Trigger Configuration
Trigger:\nType: \"custom-stdin\"
Now the custom trigger is configured to be used rather than one of the built-in triggers.
A complete working example can be found here
"},{"location":"microservices/application/Triggers/#publish-topic-placeholders","title":"Publish Topic Placeholders","text":"Both the EdgeX MessageBus
and the External MQTT
triggers support the new Publish Topic Placeholders capability. The configured PublishTopic
for either of these triggers can contain placeholders for runtime replacements. The placeholders are replaced with values from the new Context Storage
whose keys match the placeholder name. Function pipelines can add values to the Context Storage
which can then be used as replacement values in the publish topic. If an EdgeX Event is received by the configured trigger, the Event's profilename
, devicename
and sourcename
as well as the received topic will be seeded into the Context Storage
. See the Context Storage documentation for more details.
The Publish Topic Placeholders format is a simple {<key-name>}
that can appear anywhere in the topic multiple times. An error will occur if a specified placeholder does not exist in the Context Storage
.
PublishTopic: \"data/{profilename}/{devicename}/{custom}\"\n
"},{"location":"microservices/application/Triggers/#received-topic","title":"Received Topic","text":"The topic the data was received on for EdgeX MessageBus
and the External MQTT
triggers is now stored in the new Context Storage
with the key receivedtopic
. This makes it available to pipeline functions via the Context Storage
.
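A minimal sketch of reading that key from a custom pipeline function (the function name is hypothetical; the literal key is the one noted above):
// LogReceivedTopic logs which topic the current message arrived on.\nfunc LogReceivedTopic(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\nif topic, found := ctx.GetValue(\"receivedtopic\"); found {\nctx.LoggingClient().Infof(\"data received on topic %s\", topic)\n}\nreturn true, data\n}\n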
The migration of any Application Service's configuration starts with migrating configuration common to all EdgeX services. See the V3 Migration of Common Configuration section for details including the change from TOML format to YAML format for the configuration file. The remainder of this section focuses on configuration specific to Application Services.
"},{"location":"microservices/application/V3Migration/#common-configuration-removed","title":"Common Configuration Removed","text":"Any configuration that is common to all EdgeX services or all EdgeX Application Services needs to be removed from custom application service's private configuration.
Note
With this change, any custom application service must be run with either the -cp/--configProvider
flag or the -cc/--commonConfig
flag in order for the service to receive the common configuration that has been removed from its private configuration. See Config Provider and Common Config sections for more details on these flags.
The EdgeX MessageBus configuration has been moved out of the Trigger configuration and most values are placed in the common configuration. The only values remaining in the application service's private configuration are:
Disabled
- Used to disable the use of the EdgeX MessageBus when not using metrics and not using edgex-messagebus
Trigger type. Value need to be present so that it can be overridden with environment variable.Optional.ClientId
- Unique name needed for when MQTT or NATS are used as the MessageBus implementation.Example Application Service specific MessageBus section for 3.0
MessageBus:\nDisabled: false # Set to true if not using metrics and not using `edgex-messagebus` Trigger type\nOptional:\nClientId: \"<service-key>\"\n
"},{"location":"microservices/application/V3Migration/#trigger","title":"Trigger","text":""},{"location":"microservices/application/V3Migration/#edgex-messagebus-changes","title":"edgex-messagebus changes","text":"As noted above the EdgeX MessageBus configuration has been removed from the Trigger configuration. In addition the SubscribeTopics
and PublishTopic
settings have been move to the top level of the Trigger configuration. Most application services can simply use the default trigger configuration from application service common configuration.
Example application service Trigger configuration - From Common Configuration
Trigger:\nType: \"edgex-messagebus\"\nSubscribeTopics: \"events/#\" # Base topic is prepended to this topic when using edgex-messagebus\n
Example local application service Trigger configuration - None
# Using default Trigger config from common config\n
Some application services may need to publish results back to the EdgeX MessageBus. In this case the PublishTopic
will remain in the service private configuration.
Example local application service Trigger configuration - PublishTopic
Trigger:\n# Default value for SubscribeTopics is aslo set in common config\nPublishTopic: \"<my-topic>\" # Base topic is prepended to this topic when using edgex-messagebus\n
Note
In EdgeX 3.0 Application services, the base topic in MessageBus common configuration is prepended to the configured SubscribeTopics
and PublishTopic
values. The default base topic is edgex
; thus, all topics start with edgex/
If the common Trigger configuration is what your service needs
If your service publishes back to the EdgeX MessageBus
PublishTopic
to top level in your Trigger configurationedgex/
prefix if usedIf your service uses filter by topic
SubscribeTopics
to top level in your Trigger configurationedgex/
prefix from each topic if used#
between levels with +
. See Multi-level topics and wildcards section for more detailsThe External MQTT trigger configuration remains under Trigger configuration, but the SubscribeTopics
and PublishTopic
setting have been moved to the top level of the Trigger configuration.
Example - External MQTT trigger configuration
Trigger:\nType: \"external-mqtt\"\nSubscribeTopics: \"external-request/#\"\nPublishTopic: \"\" # optional if publishing response back to the the External MQTT Broker\nExternalMqtt:\nUrl: \"tcp://broker.hivemq.com:1883\" # fully qualified URL to connect to the MQTT broker\nClientId: \"app-my-service\"\nConnectTimeout: \"30s\" AutoReconnect: true\nKeepAlive: 10 # Seconds (must be 2 or greater)\nQoS: 0 # Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once)\nRetain: true\nSkipCertVerify: false\nSecretName: \"mqtt-trigger\" AuthMode: \"none\"\n
"},{"location":"microservices/application/V3Migration/#external-mqtt-trigger-migration","title":"external-mqtt Trigger Migration","text":"SubscribeTopics
and PublishTopic
top level of the Trigger configurationThe HTTP trigger configuration has not changed for EdgeX 3.0
"},{"location":"microservices/application/V3Migration/#writable-pipeline","title":"Writable Pipeline","text":"See Pipeline Configuration section below for changes to the Writable Pipeline configuration
"},{"location":"microservices/application/V3Migration/#custom-application-service","title":"Custom Application Service","text":""},{"location":"microservices/application/V3Migration/#code","title":"Code","text":""},{"location":"microservices/application/V3Migration/#dependencies","title":"Dependencies","text":"You first need to update the go.mod
file to specify go 1.20
and the v3 versions of the App Functions SDK and any EdgeX go-mods directly used by your service.
Example go.mod for V3
module <your service>\n\ngo 1.20\n\nrequire (\ngithub.com/edgexfoundry/app-functions-sdk-go/v3 v3.0.0\ngithub.com/edgexfoundry/go-mod-core-contracts/v3 v3.0.0\n)\n
Once that is complete then the import statements for these dependencies must be updated to include the /v3
in the path.
Example import statements for V3
import (\n...\n\n\"github.com/edgexfoundry/app-functions-sdk-go/v3/pkg/interfaces\"\n\"github.com/edgexfoundry/go-mod-core-contracts/v3/dtos\"\n)\n
"},{"location":"microservices/application/V3Migration/#api-changes","title":"API Changes","text":""},{"location":"microservices/application/V3Migration/#applicationservice-api","title":"ApplicationService API","text":"The ApplicationService
API has the following changes:
SetFunctionsPipeline
has been removed. Use SetDefaultFunctionsPipeline
insteadMakeItRun
has been renamed to Run
MakeItStop
has been renamed to Stop
GetSecret
has been removed. Use SecretProvider().GetSecret
StoreSecret
has been removed. Use SecretProvider().StoreSecret
LoadConfigurablePipeline
has been removed. Use LoadConfigurableFunctionPipelines
CommandClient
Get
API's dsPushEvent
and dsReturnEvent
parameters changed to be type bool
See Application Service API section for completed details on this API, including some new capabilities.
"},{"location":"microservices/application/V3Migration/#appfunctioncontext-api","title":"AppFunctionContext API","text":"The AppFunctionContext
API has the following changes:
PushToCore
has been removed. Use WrapIntoEvent function and publishing to the EdgeX MessageBus instead. See Trigger.PublishTopic or Background Publisher sections for more details on publishing data back to the EdgeX MessageBus.GetSecret
has been removed. Use SecretProvider().GetSecret
StoreSecret
has been removed. Use SecretProvider().StoreSecret
SecretsLastUpdated
has been removed. Use SecretProvider().SecretsLastUpdated
CommandClient
Get
API's dsPushEvent
and dsReturnEvent
parameters changed to be type bool
NewAESProtection
signature has changes. secretName
parameter renamed tosecretValueKey
secretPath
parameter renamed to secretName
Encrypt
pipeline function now require a *AESProtection
for the receiverNewAESProtection
now returns a *AESProtection
Compression
pipeline functions now require a *Compression
for the receiverNewCompression
now returns a *Compression
Conversion
pipeline functions now require a *Conversion
for the receiverNewConversion
now returns a *Conversion
PushToCoreData
function has been removed. Use WrapIntoEvent function and publishing to the EdgeX MessageBus instead. See Trigger.PublishTopic or Background Publisher sections for more details on publishing data back to the EdgeX MessageBus.EncryptWithAES
function has been removed, use AESProtection.Encrypt
instead. See AES Protection for more detailsFilter
pipeline functions now requires a *Filter
for the receiverNewFilterFor
and NewFilterOut
now return a *Filter
NewHTTPSenderWithSecretHeader
signature has changedsecretName
parameter renamed tosecretValueKey
secretPath
parameter renamed to secretName
Evaluate
pipeline function now requires a *JSONLogic
for the receiverNewJSONLogic
now returns a *JSONLogic
MQTTSecretConfig
has changedSecretPath
field renamed to SecretName
SetResponseData
pipeline function now requires a *ResponseData
for the receiverNewResponseData
now returns a *ResponseData
NewGenericTags
has been removed and replaced with new version of NewTags
which takes map[string]interface{}
for the tags
parameter.NewTags
now returns a *Tags
PushToCore
profile has been removed. Use WrapIntoEvent function and publishing to the EdgeX MessageBus instead. See Trigger.PublishTopic or Background Publisher sections for more details on publishing data back to the EdgeX MessageBus.Custom profiles for App Service Configurable must be migrated in a similar fashion to the configuration for custom application services. All configuration that is common to all EdgeX services or all EdgeX Application Services needs to be removed from custom profiles. See Common Service Configuration section for details about configuration that is common to all Edgex services. See Application Service Configuration section for details about configuration that is common to all EdgeX Application Services. Use the App Service Configurable provided profiles as examples of what configuration is left after removing the common configuration.
"},{"location":"microservices/application/V3Migration/#pipeline-configuration","title":"Pipeline Configuration","text":"raw
, event
or metric
#
between level has be replaced with +
. See Multi-level topics and wildcards for more details.SecretName
renamed to be SecretValueKey
SecretPath
renamed to be SecretName
SecretName
renamed to be SecretValueKey
SecretPath
renamed to be SecretName
Environment variable overrides must be adjusted appropriately for the above changes. Remove any overrides that apply to common configuration.
"},{"location":"microservices/application/services/AppLLRPInventory/","title":"App RFID LLRP Inventory","text":""},{"location":"microservices/application/services/AppLLRPInventory/#introduction","title":"Introduction","text":"Edgex application service for processing raw LLRP tag reads, producing events [Arrived, Moved, Departed], configure and manage LLRP readers via commands
See README for details
"},{"location":"microservices/application/services/AppRecordReplay/","title":"App Record and Replay","text":""},{"location":"microservices/application/services/AppRecordReplay/#introduction","title":"Introduction","text":"This service is a developer testing tool which will record Events from the EdgeX MessageBus and replay them back to the EdgeX MessageBus at a later time. The value of this is a session with devices present can be recorded for later replay on a system which doesn't have the required devices. This allows for testing of services that receive and process the Events without requiring the devices to be present.
Note
The source device service must be running when data is imported since the devices and device profiles are captured as part of the recorded data will be added to the system during import.
"},{"location":"microservices/application/services/AppRecordReplay/#storage","title":"Storage","text":"Since this is targeted as a developer testing tool, the storage model is kept simple by using in-memory storage for the recorded data. This should be kept in mind when recording or importing a recoding on systems with limited resources.
"},{"location":"microservices/application/services/AppRecordReplay/#rest-api","title":"REST API","text":"Control of this service is accomplished via the following REST API.
"},{"location":"microservices/application/services/AppRecordReplay/#postman-collection","title":"Postman Collection","text":"A sample Postman collection can be found here.
Note
Use the Postman Send and Download
option for the Export recording - JSON
request so that the response can be saved to file. The Send and Download
option is on the Send
button.
Note
Postman automatically un-compresses the responses when requesting GZIB or ZLIB compression. Use the following curl command to save the compressed response to file.
curl localhost:59712/api/v3/data?compression=gzip -o test.gz\ncurl localhost:59712/api/v3/data?compression=zlib -o test.zlib\n
"},{"location":"microservices/application/services/AppServiceConfigurable/","title":"App Service Configurable","text":""},{"location":"microservices/application/services/AppServiceConfigurable/#introduction","title":"Introduction","text":"App-Service-Configurable is provided as an easy way to get started with processing data flowing through EdgeX. This service leverages the App Functions SDK and provides a way for developers to use configuration instead of having to compile standalone services to utilize built in functions in the SDK. Please refer to Available Configurable Pipeline Functions section below for full list of built-in functions that can be used in the configurable pipeline.
To get started with App Service Configurable, you'll want to start by determining which functions are required in your pipeline. Using a simple example, let's assume you wish to use the following functions from the SDK:
Once the functions have been identified, we'll go ahead and build out the configuration in the configuration.yaml
file under the [Writable.Pipeline]
section.
Example - Writable.Pipeline
Writable:\nPipeline:\nExecutionOrder: \"FilterByDeviceName, Transform, HTTPExport\"\nFunctions:\nFilterByDeviceName:\nParameters:\nFilterValues: \"Random-Float-Device, Random-Integer-Device\"\nTransform:\nParameters:\nType: \"xml\"\nHTTPExport:\nParameters:\nMethod: \"post\" MimeType: \"application/xml\" Url: \"http://my.api.net/edgexdata\"\n
The first line of note is ExecutionOrder: \"FilterByDeviceName, Transform, HTTPExport\"
. This specifies the order in which to execute your functions. Each function specified here must also be placed in the Functions:
section.
Next, each function and its required information is listed. Each function typically has associated Parameters that must be configured to properly execute the function as designated by Parameters:
under {FunctionName}
. Knowing which parameters are required for each function, can be referenced by taking a look at the Available Configurable Pipeline Functions section below.
Note
By default, the configuration provided is set to use EdgexMessageBus
as a trigger. This means you must have EdgeX Running with devices sending data in order to trigger the pipeline. You can also change the trigger to be HTTP. For more details on triggers, view the Triggers
documentation located in the Triggers section.
That's it! Now we can run/deploy this service and the functions pipeline will process the data with functions we've defined.
"},{"location":"microservices/application/services/AppServiceConfigurable/#pipeline-per-topics","title":"Pipeline Per Topics","text":"The above pipeline configuration in Introduction section is the preferred way if your use case only requires a single functions pipeline. For use cases that require multiple functions pipelines in order to process the data differently based on the profile
, device
or source
for the Event, there is the Pipeline Per Topics feature. This feature allows multiple pipelines to be configured in the [Writable.Pipeline.PerTopicPipelines]
section. This section is a map of pipelines. The map key must be unique , but isn't used so can be any value. Each pipleline is defined by the following configuration settings:
ExecutionOrder
in the above example in the Introduction sectionExample - Writable.Pipeline.PerTopicPipelines
In this example Events from the device Random-Float-Device
are transformed to JSON and then HTTP exported. At the same time, Events for the source Int8
are transformed to XML and then HTTP exported to same endpoint. Note the custom naming for TransformJson
and TransformXml
. This is taking advantage of the Multiple Instances of a Function described below.
Writable:\nPipeline:\nPerTopicPipelines:\nfloat:\nId: float-pipeline\nTopics: \"edgex/events/device/+/Random-Float-Device/#, edgex/events/device/+/Random-Integer-Device/#\"\nExecutionOrder: \"TransformJson, HTTPExport\"\nint8:\nId: int8-pipeline\nTopic: edgex/events/device/+/+/+/Int8\nExecutionOrder: \"TransformXml, HTTPExport\"\nFunctions:\nFilterByDeviceName:\nParameters:\nFilterValues: \"Random-Float-Device, Random-Integer-Device\"\nTransformJson:\nParameters:\nType: json\nTransformXml:\nParameters:\nType: xml\nHTTPExport:\nParameters:\nMethod: post\nMimeType: application/xml\nUrl: \"http://my.api.net/edgexdata\"\n
Note
The Pipeline Per Topics
feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank
, so the pipeline's topics must contain a single topic set to the #
wildcard so that all messages received are processed by the pipeline.
EdgeX services no longer have docker specific profiles. They now rely on environment variable overrides in the docker compose files for the docker specific differences.
Example - Environment settings required in the compose files for App Service Configurable
EDGEX_PROFILE : [target profile]\nSERVICE_HOST : [services network host name]\nEDGEX_SECURITY_SECRET_STORE: \"false\" # only need to disable as default is true\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\n
Example - Docker compose entry for App Service Configurable in no-secure compose file
app-rules-engine:\ncontainer_name: edgex-app-rules-engine\ndepends_on:\n- consul\n- data\nenvironment:\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nEDGEX_PROFILE: rules-engine\nEDGEX_SECURITY_SECRET_STORE: \"false\"\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nSERVICE_HOST: edgex-app-rules-engine\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\nhostname: edgex-app-rules-engine\nimage: edgexfoundry/app-service-configurable:2.0.0\nnetworks:\nedgex-network: {}\nports:\n- 127.0.0.1:59701:59701/tcp\nread_only: true\nsecurity_opt:\n- no-new-privileges:true\nuser: 2002:2001\n
Note
App Service Configurable is designed to be run multiple times each with different profiles. This is why in the above example the name edgex-app-rules-engine
is used for the instance running the rules-engine
profile.
App Service Configurable was designed to be deployed as multiple instances for different purposes. Since the function pipeline is specified in the configuration.yaml
file, we can use this as a way to run each instance with a different function pipeline. App Service Configurable does not have the standard default configuration at /res/configuration.yaml
. This default configuration has been moved to the sample
profile. This forces you to specify the profile for the configuration you would like to run. The profile is specified using the -p/--profile=[profilename]
command line option or the EDGEX_PROFILE=[profilename]
environment variable override. The profile name selected is used in the service key (app-[profile name]
) to make each instance unique, e.g. AppService-sample
when specifying sample
as the profile.
Note
If you need to run multiple instances with the same profile, e.g. http-export
, but configured differently, you will need to override the service key with a custom name for one or more of the services. This is done with the -sk/-serviceKey
command-line option or the EDGEX_SERVICE_KEY
environment variable. See the Command-line Options and Environment Overrides sections for more detail.
Note
Functions can be declared in a profile but not used in the pipeline ExecutionOrder
allowing them to be added to the pipeline ExecutionOrder
later at runtime if needed.
The following profiles and their purposes are provided with App Service Configurable.
"},{"location":"microservices/application/services/AppServiceConfigurable/#rules-engine","title":"rules-engine","text":"Profile used to push Event messages to the Rules Engine via the Redis Pub/Sub Message Bus. This is used in the default docker compose files for the app-rules-engine
service
One can optionally add Filter function via environment overrides
WRITABLE_PIPELINE_EXECUTIONORDER: \"FilterByDeviceName, HTTPExport\"
WRITABLE_PIPELINE_FUNCTIONS_FILTERBYDEVICENAME_PARAMETERS_DEVICENAMES: \"[comma separated list]\"
There are many optional functions and parameters provided in this profile. See the complete profile for more details
"},{"location":"microservices/application/services/AppServiceConfigurable/#http-export","title":"http-export","text":"Starter profile used for exporting data via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides
Required:
WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your URL]
There are many more optional functions and parameters provided in this profile. See the complete profile for more details.
Starter profile used for exporting telemetry data from other EdgeX services to InfluxDB via HTTP export. This profile configures the service to receive telemetry data from other services, transform it to Line Protocol syntax, batch the data and then export it to an InfluxDB service via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides.
Required:
WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your InfluxDB URL]
`WRITABLE_INSECURESECRETS_INFLUXDB_SECRETS_TOKEN
: [Your InfluxDB Token]
Example value: \"Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ==\"
If using secure mode, store the token in the service's secret store via POST to the service's /secret
endpoint
Example JSON to post to /secret endpoint
{\n\"apiVersion\":\"v2\",\n\"secretName\":\"influxdb\",\n\"secretData\":[\n{\n\"key\":\"Token\",\n\"value\":\"Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ==\"\n}]\n}\n
Optional Additional Tags:
WRITABLE_PIPELINE_FUNCTIONS_TOLINEPROTOCOL_PARAMETERS_TAGS: <your additional tags>
Optional Batching parameters (see Batch function for more details):
WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_MODE: <your batch mode>
\"bytimecount\"
\"bycount\"
, \"bytime\"
or `\"bytimecount\"```WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_BATCHTHRESHOLD: <your batch threshold count>
100
WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_TIMEINTERVAL: <your batch time interval>
\"60s\"
Starter profile used for exporting data via MQTT. Requires further configuration which can easily be accomplished using environment variable overrides
Required:
WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS: [Your Broker Address]
There are many optional functions and parameters provided in this profile. See the complete profile for more details
Sample profile with all available functions declared and a sample pipeline. Provided as a sample that can be copied and modified to create new custom profiles. See the complete profile for more details
"},{"location":"microservices/application/services/AppServiceConfigurable/#functional-tests","title":"functional-tests","text":"Profile used for the TAF functional testing
"},{"location":"microservices/application/services/AppServiceConfigurable/#external-mqtt-trigger","title":"external-mqtt-trigger","text":"Profile used for the TAF functional testing of external MQTT Trigger
"},{"location":"microservices/application/services/AppServiceConfigurable/#what-if-my-input-data-isnt-an-edgex-event","title":"What if my input data isn't an EdgeX Event ?","text":"The default TargetType
for data flowing into the functions pipeline is an EdgeX Event DTO. There are cases when this incoming data might not be an EdgeX Event DTO. There are two setting that configure the TargetType to non-Event data.
In these cases the Pipeline
can be configured using TargetType=\"raw\"
to set the TargetType
to be a byte array/slice, i.e. []byte
. The first function in the pipeline must then be one that can handle the []byte
data. The compression, encryption and export functions are examples of pipeline functions that will take input data that is []byte
.
Example - Configure the functions pipeline to compress, encrypt and then export the []byte
data via HTTP
Writable:\nPipeline:\nTargetType: \"raw\"\nExecutionOrder: \"Compress, Encrypt, HTTPExport\"\nFunctions:\nCompress:\nParameters:\nAlogrithm: \"gzip\"\nEncrypt:\nParameters:\nAlgorithm: \"aes256\" SecretName: \"aes\"\nSecretValueKey: \"key\"\nHTTPExport:\nParameters:\nMethod: \"post\"\nUrl: \"http://my.api.net/edgexdata\"\nMimeType: \"application/text\"\n
If along with this pipeline configuration, you also configured the Trigger
to be http
trigger, you could then send any data to the app-service-configurable' s /api/v3/trigger
endpoint and have it compressed, encrypted and sent to your configured URL above.
Example - HTTP Trigger configuration
Trigger:\nType: \"http\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#metric-targettype","title":"Metric TargetType","text":"This setting when set to true will cause the TargeType
to be &dtos.Metric{}
and is meant to be used in conjunction with the new ToLineProtocol
function. See ToLineProtocol section below for more details. In addition the Trigger
SubscribeTopics
must be set to \"edgex/telemetry/#\"
so that the function receives the metric data from the other services.
Example - Metric TargetType
Writable:\nPipeline:\nTargetType: \"metric\"\nExecutionOrder: \"ToLineProtocol, ...\"\n...\nFunctions:\nToLineProtocol:\nParameters:\nTags: \"\" # optional comma separated list of additional tags to add to the metric in to form \"tag:value,...\"\n...\nTrigger:\nSubscribeTopics: telemetry/#\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#multiple-instances-of-a-function","title":"Multiple Instances of a Function","text":"Now multiple instances of the same configurable pipeline function can be specified, configured differently and used together in the functions pipeline. Previously the function names specified in the [Writable.Pipeline.Functions]
section had to match a built-in configurable pipeline function name exactly. Now the names specified only need to start with a built-in configurable pipeline function name. See the HttpExport section below for an example.
Below are the functions that are available to use in the configurable pipeline function pipeline ([Writable.Pipeline]
) section of the configuration. The function names below can be added to the Writable.Pipeline.ExecutionOrder
setting (comma separated list) and must also be present or added to the [Writable.Pipeline.Functions]
section as {FunctionName}]
. The functions will also have the {FunctionName}.Parameters:
section where the function's parameters are configured. Please refer to the Introduction section above for an example.
Note
The Parameters
section for each function is a key/value map of string
values. So even tough the parameter is referred to as an Integer or Boolean, it has to be specified as a valid string representation, e.g. \"20\" or \"true\".
Please refer to the function's detailed documentation by clicking the function name below.
"},{"location":"microservices/application/services/AppServiceConfigurable/#addtags","title":"AddTags","text":"Parameters
tags
- String containing comma separated list of tag key/value pairs. The tag key/value pairs are colon separated
AddTags:\nParameters:\ntags: \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#batch","title":"Batch","text":"Parameters
Mode
- The batch mode to use. can be 'bycount', 'bytime' or 'bytimecount'BatchThreshold
- Number of items to batch before sending batched items to the next function in the pipeline. Used with 'bycount' and 'bytimecount' modesTimeInterval
- Amount of time to batch before sending batched items to the next function in the pipeline. Used with 'bytime' and 'bytimecount' modesIsEventData
- If true, specifies that the data being batched is Events
and to un-marshal the batched data to []Event
prior to returning the batched data. By default the batched data returned is [][]byte
MergeOnSend
- If true, specifies that the data being batched is to be merged to a single []byte
prior to returning the batched data. By default the batched data returned is [][]byte
Example
Batch:\nParameters:\nMode: \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\"\nBatchThreshold: \"30\"\nTimeInterval: \"60s\"\nIsEventData: \"false\"\nMergeOnSend: \"false\" or\nBatch:\nParameters:\nMode: \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\"\nBatchThreshold: \"30\"\nTimeInterval: \"60s\"\nIsEventData: \"true\"\nMergeOnSend: \"false\" or\nBatch:\nParameters:\nMode: \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\"\nBatchThreshold: \"30\"\nTimeInterval: \"60s\"\nIsEventData: \"false\"\nMergeOnSend: \"true\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#compress","title":"Compress","text":"Parameters
Algorithm
- Compression algorithm to use. Can be 'gzip' or 'zlib'Example
Compress:\nParameters:\nAlgorithm: \"gzip\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#encrypt","title":"Encrypt","text":"Parameters
Algorithm
- AES256SecretName
- (required for AES256) Name of the secret in the Secret Store
where the encryption key is located.SecretValueKey
- (required for AES256) Key of the secret data for the encryption key in the secret's data.Example
# Encrypt with key pulled from Secret Store\nEncrypt:\nParameters:\nAlgorithm: \"aes256\"\nSecretName: \"aes\"\nSecretValueKey: \"key\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbydevicename","title":"FilterByDeviceName","text":"Parameters
DeviceNames
- Comma separated list of device names or regular expressions for filteringFilterOut
- Boolean indicating if the data matching the device names should be filtered out or filtered for.Example
FilterByDeviceName:\nParameters:\nDeviceNames: \"Random-Float-Device,Random-Integer-Device\"\nFilterOut: \"false\"\nor\nFilterByDeviceName:\nParameters:\nDeviceNames: \"[a-zA-Z-]+(Integer-)[a-zA-Z-]+\"\nFilterOut: \"true\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbyprofilename","title":"FilterByProfileName","text":"Parameters
ProfileNames
- Comma separated list of profile names or regular expressions for filteringFilterOut
- Boolean indicating if the data matching the profile names should be filtered out or filtered for.Example
FilterByProfileName:\nParameters:\nProfileNames: \"Random-Float-Device, Random-Integer-Device\"\nFilterOut: \"false\"\nor\nFilterByProfileName:\nParameters:\nProfileNames: \"(Random-)[a-zA-Z-]+\"\nFilterOut: \"false\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbyresourcename","title":"FilterByResourceName","text":"Parameters
ResourceName
- Comma separated list of reading resource names or regular expressions for filteringFilterOut
- Boolean indicating if the readings matching the resource names should be filtered out or filtered for.Example
FilterByResourceName:\nParameters:\nResourceNames: \"Int8, Int64\"\nFilterOut: \"true\"\nor\nFilterByResourceName:\nParameters:\nDeviceNames: \"(Int)[0-9]+\"\nFilterOut: \"false\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbysourcename","title":"FilterBySourceName","text":"Parameters
SourceNames
- Comma separated list of source names or regular expressions for filtering. Source name is either the device command name or the resource name that created the EventFilterOut
- Boolean indicating if the data matching the device names should be filtered out or filtered for.Example
FilterBySourceName:\nParameters:\nSourceNames: \"Bool, BoolArray\"\nFilterOut: \"false\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#httpexport","title":"HTTPExport","text":"Parameters
Method
- HTTP Method to use. Can be post
or put
Url
- HTTP endpoint to POST/PUT the data.MimeType
- Optional mime type for the data. Defaults to application/json
if not set.PersistOnError
- Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\".ContinueOnSendError
- For chained multi destination exports, if true continues after send error so next export function executes.ReturnInputData
- For chained multi destination exports if true, passes the input data to next export function.HeaderName
- (Optional) Name of the header key to add to the HTTP headerSecretName
- (Optional) Name of the secret in the Secret Store
where the header value is stored.SecretValueKey
- (Optional) Key for the header value in the secret data.HttpRequestHeaders
- (Optional) HTTP Request header parameters in json format.Example
# Simple HTTP Export\nHTTPExport:\nParameters:\nMethod: \"post\"\nMimeType: \"application/xml\"\nUrl: \"http://my.api.net/edgexdata\"
# HTTP Export with multiple HTTP Request header Parameters\nHTTPExport:\nParameters:\nMethod: \"post\"\nMimeType: \"application/xml\"\nUrl: \"http://my.api.net/edgexdata\"\nHttpRequestHeaders: \"{\"Connection\": \"keep-alive\", \"From\": \"user@example.com\" }\"\n
# HTTP Export with secret header data pulled from the Secret Store\nHTTPExport:\nParameters:\nMethod: \"post\"\nMimeType: \"application/xml\"\nUrl: \"http://my.api.net/edgexdata\"\nHeaderName: \"MyApiKey\"\nSecretName: \"http\"\nSecretValueKey: \"apikey\"\n
# Http Export to multiple destinations\nWritable:\nPipeline:\nExecutionOrder: \"HTTPExport1, HTTPExport2\"\nFunctions:\nHTTPExport1:\nParameters:\nMethod: \"post\"\nMimeType: \"application/xml\"\nUrl: \"http://my.api1.net/edgexdata2\"\nContinueOnSendError: \"true\"\nReturnInputData: \"true\"\nHTTPExport2:\nParameters:\nMethod: \"put\"\nMimeType: \"application/xml\"\nUrl: \"http://my.api2.net/edgexdata2\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#jsonlogic","title":"JSONLogic","text":"Parameters
Rule
- The JSON formatted rule that will be executed on the data by JSONLogic Example
JSONLogic:\nParameters:\nRule: \"{ \\\"and\\\" : [{\\\"<\\\" : [{ \\\"var\\\" : \\\"temp\\\" }, 110 ]}, {\\\"==\\\" : [{ \\\"var\\\" : \\\"sensor.type\\\" }, \\\"temperature\\\" ]} ] }\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#mqttexport","title":"MQTTExport","text":"Parameters
BrokerAddress
- URL specifying the address of the MQTT BrokerTopic
- Topic to publish the dataClientId
- Id to use when connecting to the MQTT BrokerQos
- MQTT Quality of Service (QOS) setting to use (0, 1 or 2). Please refer here for more details on QOS valuesAutoReconnect
- Boolean specifying if reconnect should be automatic if connection to MQTT broker is lostRetain
- Boolean specifying if the MQTT Broker should save the last message published as the \u201cLast Good Message\u201d on that topic.SkipVerify
- Boolean indicating if the certificate verification should be skipped. PersistOnError
- Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\".AuthMode
- Mode of authentication to use when connecting to the MQTT Brokernone
- No authentication requiredusernamepassword
- Use username and password authentication. The Secret Store (Vault or InsecureSecrets) must contain the username
and password
secrets.clientcert
- Use Client Certificate authentication. The Secret Store (Vault or InsecureSecrets) must contain the clientkey
and clientcert
secrets.cacert
- Use CA Certificate authentication. The Secret Store (Vault or InsecureSecrets) must contain the cacert
secret.SecretName
- Name of the secret in the SecretStore where authentication secrets are stored.Note
Authmode=cacert
is only needed when client authentication (e.g. usernamepassword
) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert.
Example
# Simple MQTT Export\nMQTTExport:\nParameters:\nBrokerAddress: \"tcps://localhost:8883\"\nTopic: \"mytopic\"\nClientId: \"myclientid\"\n
# MQTT Export with auth credentials pulled from the Secret Store\nMQTTExport:\nParameters:\nBrokerAddress: \"tcps://my-broker-host.com:8883\"\nTopic: \"mytopic\"\nClientId: \"myclientid\"\nQos: \"2\"\nAutoReconnect: \"true\"\nRetain: \"true\"\nSkipVerify: \"false\"\nPersistOnError: \"true\"\nAuthMode: \"usernamepassword\"\nSecretName: \"mqtt\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#setresponsedata","title":"SetResponseData","text":"Parameters
ResponseContentType
- Used to specify content-type header for response - optionalExample
SetResponseData:\nParameters:\nResponseContentType: \"application/json\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#transform","title":"Transform","text":"Parameters
Type
- Type of transformation to perform. Can be 'xml' or 'json'Example
Transform:\nParameters:\nType: \"xml\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#tolineprotocol","title":"ToLineProtocol","text":"Parameters
Tags
- optional comma separated list of additional tags to add to the metric in the form \"tag:value,...\"Example
ToLineProtocol:\nParameters:\nTags: \"\" # optional comma separated list of additional tags to add to the metric in the form \"tag:value,...\"\n
Note
The new TargetType
setting must be set to \"metric\" when using this function. See the Metric TargetType section above for more details.
Parameters
ProfileName
- Profile name to use for the new EventDeviceName
- Device name to use for the new EventResourceName
- Resource name to use for the new Event'sSourceName
and Reading's ResourceName
ValueType
- Value type to use for the new Event Reading's valueMediaType
- Media type to use for the new Event Reading's value. Required when the value type is Binary
Example
WrapIntoEvent:\nParameters:\nProfileName: \"MyProfile\"\nDeviceName: \"MyDevice\"\nResourceName: \"SomeResource\"\nValueType: \"String\"\nMediaType: \"\" # Required only when ValueType=Binary\n
"},{"location":"microservices/application/services/AvailableAppServices/","title":"Available Application Services List","text":"The following table lists the available EdgeX Application Services:
Repository Status Comments Documentation app-service-configurable Active App Service which provides configurable function pipelines capability for built-in pipeline functions app-service-configurable docs app-rfid-llrp-inventory Active App Service which generates Inventory movement Events from raw LLRP events produced by device-rfid-llrp app-rfid-llrp-inventory docs app-record-replay Active App Service for Development/Testing with capability to Record and Replay EdgeX Events app-record-replay docs"},{"location":"microservices/configuration/CommonCommandLineOptions/","title":"Command Line Options","text":"This section describes the command line options that are common to all EdgeX services. Some services have additional command line options which are documented in the specific sections for those services.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#config-directory","title":"Config Directory","text":"-cd/--configDir
EdgeX 3.0
The -c/--confdir
command line option is replaced by -cd/--configDir
in EdgeX 3.0.
Specify local configuration directory. Default is ./res
, but will be ignored if Config File parameter refers to a URI beginning with http
or https
.
Can be overridden with EDGEX_CONFIG_DIR environment variable.
EdgeX 3.0
The EDGEX_CONF_DIR
environment variable is replaced by EDGEX_CONFIG_DIR
in EdgeX 3.0.
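Example - Specifying the configuration directory at the command line. This is a minimal sketch; the ./my-res directory name is illustrative and is assumed to contain the service's configuration.yaml.
./core-data --configDir=./my-res\n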
-cf/--configFile <name>
EdgeX 3.0
The -f/--file
command line option is replaced by -cf/--configFile
in EdgeX 3.0.
Indicates the name of the local configuration file or the URI of the private configuration. See the URI for Files section for more details. Default is configuration.yaml
.
Can be overridden with EDGEX_CONFIG_FILE environment variable.
EdgeX 3.1
Support for loading private configuration via URI is new in EdgeX 3.1.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#config-provider","title":"Config Provider","text":"-cp/ --configProvider
Indicates the service should use the Configuration Provider service at the specified URL. URL Format: {type}.{protocol}://{host}:{port}
. Default is consul.http://localhost:8500
Can be overridden with EDGEX_CONFIG_PROVIDER environment variable.
EdgeX 3.0
The EDGEX_CONFIGURATION_PROVIDER
environment variable is replaced by EDGEX_CONFIG_PROVIDER
in EdgeX 3.0.
-cc/ --commonConfig
EdgeX 3.0
The Common Config flag is new to EdgeX 3.0
Takes the location where the common configuration is loaded from - either a local file path or a URI when not using the Configuration Provider. See the URI for Files section for more details. Default is blank.
Can be overridden with EDGEX_COMMON_CONFIG environment variable.
EdgeX 3.1
Support for loading common configuration via URI is new in EdgeX 3.1.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#profile","title":"Profile","text":"-p/--profile <name>
Indicates configuration profile other than default. Default is no profile name resulting in using ./res/configuration.yaml
if -cf
and -cd
are not used.
Can be overridden with EDGEX_PROFILE environment variable.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#registry","title":"Registry","text":"-r/ --registry
Indicates service should use the Registry. Connection information is pulled from the [Registry]
configuration section.
Can be overridden with EDGEX_USE_REGISTRY environment variable.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#overwrite","title":"Overwrite","text":"-o/--overwrite
Overwrite configuration in provider with local configuration.
Use with caution
This will clobber existing settings in provider, which is problematic if those settings were intentionally edited by hand. Typically only used during development.
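Example - Forcing a service's configuration in the Configuration Provider to be overwritten from its local files. This is a minimal sketch and assumes the Configuration Provider is running at the default Consul location.
./core-data -cp=consul.http://localhost:8500 --overwrite\n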
"},{"location":"microservices/configuration/CommonCommandLineOptions/#remote-service-hosts","title":"Remote Service Hosts","text":"EdgeX 3.1
New in EdgeX 3.1
-rsh/--remoteServiceHosts <host names>
Warning
This command line option is intended to be used in non-secure EdgeX deployments that are run with in a secured network. See Remote Device Services in Secure Mode section for details of deploying remote EdgeX services in secure EdgeX deployments.
Sets the three host names required when running the service remotely so that it can connect to the core EdgeX services running on another system and also be connected to from those same core EdgeX services.
<host names>
must contain exactly the following three host names, in a comma separated string
Host name of local system where the service is running
Host name of the system where the core EdgeX services are running
Host name to bind to for the internal WebServer for hosting the REST API
This allows the service to be accessed from an external network. When running natively it can be set to the local system hostname/IP or 0.0.0.0
When running in Docker it must be set to localhost
or 0.0.0.0
and Docker port mapping must be used to expose the service to the external network.
Note
Each host name can be a known DNS host name or the IP address of the host
Example setting Remote Service Hosts
--remoteServiceHosts 172.26.113.174,172.26.113.150,0.0.0.0\nor\n-rsh 172.26.113.174,172.26.113.150,localhost\n
Can be overridden with EDGEX_REMOTE_SERVICE_HOSTS environment variable.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#developer-mode","title":"Developer Mode","text":"EdgeX 3.0
New in EdgeX 3.0
-d/--dev
Indicates the service should run in developer mode. This allows a service running from the command-line to properly communicate with other EdgeX services running in Docker (aka hybrid mode). This flag causes all Host
configuration values pulled from common configuration via the Configuration Provider to be overridden with the value \"localhost\".
Development Only
This flag should only be used for development purposes when running from command-line.
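Example - Running a locally built service in hybrid mode against EdgeX services in Docker. This is a minimal sketch; it assumes Consul's port 8500 is mapped to the host and uses core-data only as an illustration.
./core-data -cp=consul.http://localhost:8500 --registry --dev\n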
"},{"location":"microservices/configuration/CommonCommandLineOptions/#help","title":"Help","text":"-h/--help
Show the help message
"},{"location":"microservices/configuration/CommonConfiguration/","title":"Service Configuration","text":"The configuration for EdgeX services is broken into multiple layers. The layers are as follows:
Subsequent layers have higher precedence. As a result, the configuration values set in subsequent layers override those of underlying layers.
EdgeX 3.0
This layered configuration is new in EdgeX 3.0
"},{"location":"microservices/configuration/CommonConfiguration/#common-configuration","title":"Common Configuration","text":"EdgeX 3.0
Common configuration is new in Edgex 3.0
The common configuration is divided into 3 sections:
All Services - Configuration that is common to all EdgeX Services. See below for details.
App Services - Configuration that is common to just application services. See App Service Configuration section for more details.
Device Services - Configuration that is common to just device services. See Device Service Configuration section for more details.
When the Configuration Provider is used, the common configuration is seeded by the core-common-config-bootstrapper service, otherwise the common configuration comes from a file specified by the -cc/--commonConfig
command-line option.
Note
Common environment variable overrides set on the core-common-config-bootstrapper service are applied to the common configuration prior to seeding the values into the Configuration Provider. See Common Configuration Overrides section for more details.
"},{"location":"microservices/configuration/CommonConfiguration/#common-configuration-properties","title":"Common Configuration Properties","text":"The tables in each of the tabs below document configuration properties that are common to all services in the EdgeX Foundry platform.
Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It now has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
Edgex 3.0
In EdgeX 3.0, the MessageBus configuration is now common to all services. In addition, the internal MessageBus topic configuration has been replaced by internal constants. The new BaseTopicPrefix setting has been added to allow customization of all topics under a common base prefix. See the new common MessageBus section below.
WritableWritable.TelemetryServiceService.CORSConfigurationRegistryDatabaseMessageBusMessageQueue.Optional Property Default Value Description entries in the Writable section of the configuration can be changed on the fly while the service is running if the service is running with the-cp/--configProvider
flag LogLevel --- log entry severity level. (specific for each service) InsecureSecrets --- This section is a map of secrets which simulates the SecretStore for accessing secrets when running in non-secure mode. All services have a default entry for Redis DB credentials called redisdb
Note
LogLevel is included here for documentation purposes since all services have this setting. Since it should always be set at an individual service level it is not included in the new common configuration file and is present in all the individual service private configuration.
Property Default Value Description Interval 30s The interval in seconds at which to report the metrics currently being collected and enabled. Value of 0s disables reporting. Metrics Boolean map of service metrics that are being collected. The boolean flag for each indicates if the metric is enabled for reporting. i.e.EventsPersisted = true
. The metric name must match one defined by the service. Metrics.SecuritySecretsRequested false Enable/Disable reporting of number of secrets requested Metrics.SecuritySecretsStored false Enable/Disable reporting of number of secrets stored Metrics.SecurityConsulTokensRequested false Enable/Disable reporting of number of Consul token requested Metrics.SecurityConsulTokenDuration false Enable/Disable reporting of duration for obtaining Consul token Tags <Common Tags>
String map of arbitrary tags to be added to every metric that is reported for all services . i.e. Gateway=\"my-iot-gateway\"
. The tag names are arbitrary. Property Default Value Description HealthCheckInterval 10s The interval in seconds at which the service registry(Consul) will conduct a health check of this service. Host localhost Micro service host name Port --- Micro service port number (specific for each service) ServerBindAddr '' (empty string) The interface on which the service's REST server should listen. By default the server is to listen on the interface to which the Host
option resolves (leaving it blank). A value of 0.0.0.0
means listen on all available interfaces. App & Device services do not implement this setting. (specific for each service) StartupMsg --- Message logged when service completes bootstrap start-up MaxResultCount 1024* Read data limit per invocation. *Default value is for core/support services. Application and Device services do not implement this setting. MaxRequestSize 0 Defines the maximum size of http request body in kilobytes. 0 represents default to system max. RequestTimeout 5s Specifies a timeout duration for handling requests EnableNameFieldEscape false The name field escape could allow the system to use special or Chinese characters in the different name fields, including device, profile, and so on. If EnableNameFieldEscape is false, some special characters might cause system errors. If EnableNameFieldEscape is true, clients of the event or command message bus APIs have to escape the name to subscribe to the topics, for example, if the device name is test-device
, the escaped device name should be test%2Ddevice
, and the event topic is similar to edgex/events/device/device%2Dvirtual/test%2Dprofile/test%2Ddevice/test%2Dresource
. Property Default Value Description The settings of controling CORS http headers EnableCORS false Enable or disable CORS support. CORSAllowCredentials false The value of Access-Control-Allow-Credentials
http header. It appears only if the value is true
. CORSAllowedOrigin \"https://localhost\" The value of Access-Control-Allow-Origin
http header. CORSAllowedMethods \"GET, POST, PUT, PATCH, DELETE\" The value of Access-Control-Allow-Methods
http header. CORSAllowedHeaders \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\" The value of Access-Control-Allow-Headers
http header. CORSExposeHeaders \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\" The value of Access-Control-Expose-Headers
http header. CORSMaxAge 3600 The value of Access-Control-Max-Age
http header. To understand more details about these HTTP headers, please refer to MDN Web Docs, and refer to CORS enabling to learn more. Property Default Value Description configuration that govern how to connect to the registry to register for service registration Host localhost Registry host name Port 8500 Registry port number Type consul Registry implementation type Property Default Value Description configuration that govern database connectivity and the type of database to use. While not all services require DB connectivity, most do and so this has been included in the common configuration docs. Host localhost DB host name Port 6379 DB port number Name ---- Database or document store name (Specific to the service) Timeout 5s DB connection timeout Type redisdb DB type. Redis is the only supported DB Property Default Value Description Entries in the MessageBus section of the configuration allow for connecting to the internal MessageBus and define a common base topic prefix Protocol redis Indicates the connectivity protocol to use when connecting to the bus. Host localhost Indicates the host of the messaging broker, if applicable. Port 6379 Indicates the port to use when publishing a message. Type redis Indicates the type of messaging library to use. Currently this is Redis by default. Refer to the go-mod-messaging module for more information. AuthMode usernamepassword Auth Mode to connect to EdgeX MessageBus. SecretName redisdb Name of the secret in the Secret Store to find the MessageBus credentials. BaseTopicPrefix edgex Indicates the base topic prefix which is prepended to all internal MessageBus topics. Property Default Value Description Configuration and connection parameters for use with MQTT or NATS message bus - in place of Redis ClientId --- Client ID used to put messages on the bus (specific for each service) Qos '0' Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive '10' Period of time in seconds to keep the connection alive when there are no messages flowing (must be 2 or greater) Retained false Whether to retain messages AutoReconnect true Whether to reconnect to the message bus on connection loss ConnectTimeout 5 Message bus connection timeout in seconds SkipCertVerify false TLS configuration - Only used if Cert/Key file or Cert/Key PEMblock are specified Additional Default NATS Specific options Format nats Format of the actual message published. See NATs section of the MessageBus documentation. RetryOnFailedConnect true Retry on connection failure - expects a string representation of a boolean QueueGroup blank Specifies a queue group to distribute messages from a stream to a pool of worker services Durable blank Specifies a durable consumer should be used with the given name. Note that if a durable consumer with the specified name does not exist it will be considered ephemeral and deleted by the client on drain / unsubscribe (JetStream only) AutoProvision true Automatically provision NATS streams. (JetStream only) Deliver new Specifies delivery mode for subscriptions - options are \"new\", \"all\", \"last\" or \"lastpersubject\". See the NATS documentation for more detail (JetStream only) DefaultPubRetryAttempts 2 Number of times to attempt to retry on failed publish (JetStream only)"},{"location":"microservices/configuration/CommonConfiguration/#private-configuration","title":"Private Configuration","text":"Each EdgeX service has a private configuration with values specific to that service. 
Some of these values may override values found in the common configuration layers described above. This private configuration is initially found in the service's configuration.yaml
file.
When the Configuration Provider is used, the EdgeX services will self-seed their private configuration, with environment variable overrides applied, into the Configuration Provider on first start-up. On restarts, the services will pull their private configuration from the Configuration Provider and apply it over the common configuration previously loaded from the Configuration Provider.
When the Configuration Provider is not used the service's private configuration will be applied over the common configuration loaded via the -cc/--commonConfig
command-line option.
Note
The -cc/--commonConfig
option is not required when the Configuration Provider is not used. If it is not provided, the service's private configuration must be complete for its needs. A complete configuration will have the private configuration settings as well as the necessary common configuration settings. Some of the Security services that do not use the Configuration Provider operate in this manner since they do not have common configuration like other EdgeX services.
The service specific private values and additional settings can be found on the respective documentation page for each service here.
"},{"location":"microservices/configuration/CommonConfiguration/#writable-vs-readable-settings","title":"Writable vs Readable Settings","text":"Within each configuration layer, there are settings whose values can be edited via the Configuration Provider and change the behavior of the service while it is running. These writable settings are grouped under Writable
in each layer. Any configuration settings found in a common or private Writable
section may be changed and affect a service's behavior without a restart. Any modifications to the other settings (read-only configuration) require a restart of the service(s).
Note
Runtime changes to a common Writable setting will be ignored by services which have that setting overridden in a subsequent layer, i.e. app/device or private. This is to avoid changing values that have been explicitly overridden in a lower layer Writable section by changing the same setting in a higher layer Writable section. The setting value should be changed at the lowest layer in which it exists for a service.
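Example - Changing a Writable setting at runtime via the Configuration Provider's key/value HTTP API. This is a sketch; it assumes Consul as the provider and core-data's LogLevel as the target setting under the edgex/v3 root namespace.
curl -X PUT --data \"DEBUG\" http://localhost:8500/v1/kv/edgex/v3/core-data/Writable/LogLevel\n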
"},{"location":"microservices/configuration/CommonEnvironmentVariables/","title":"Environment Variables","text":"There are three types of environment variables used by all EdgeX services. They are standard, command-line overrides, and configuration overrides.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#standard-environment-variables","title":"Standard Environment Variables","text":"This section describes the standard environment variables common to all EdgeX services. Standard environment variables do not override any command line flag or service configuration. Some services may have additional standard environment variables which are documented in those service specific sections. See Notable Other Standard Environment Variables below for list of these additional standard environment variables.
Note
All standard environment variables have the EDGEX_
prefix
This environment variable indicates whether the service is expected to initialize the secure SecretStore which allows the service to access secrets from Vault. Defaults to true
if not set or not set to false
. When set to true
the EdgeX security services must be running. If running EdgeX in non-secure
mode you then want this explicitly set to false
.
Example - Using docker-compose to disable secure SecretStore
environment: EDGEX_SECURITY_SECRET_STORE: \"false\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_disable_jwt_validation","title":"EDGEX_DISABLE_JWT_VALIDATION","text":"This environment variable disables, at the microservice-level, validation of the Authorization
HTTP header of inbound REST API requests. (Microservice-level authentication was added in EdgeX 3.0.)
Normally, when EDGEX_SECURITY_SECRET_STORE
is unset or true
, EdgeX microservices authenticate inbound HTTP requests by parsing the Authorization
header, extracting a JWT bearer token, and validating it with the EdgeX secret store, returning an HTTP 401 error if token validation fails.
If for some reason it is not possible to pass a valid JWT to an EdgeX microservice -- for example, the eKuiper rule engine making an unauthenticated HTTP API call, or other legacy code -- it may be necessary to disable JWT validation in the receiving microservice.
Example - Using docker-compose environment variable to disable secure JWT validation
environment: EDGEX_DISABLE_JWT_VALIDATION: \"true\"\n
Regardless of the setting of this variable, the API gateway (and related security-proxy-auth microservice) will always validate the incoming JWT.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_startup_duration","title":"EDGEX_STARTUP_DURATION","text":"This environment variable sets the total duration in seconds allowed for the services to complete the bootstrap start-up. Default is 60 seconds.
Example - Using docker-compose to set start-up duration to 120 seconds
environment: EDGEX_STARTUP_DURATION: \"120\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_startup_interval","title":"EDGEX_STARTUP_INTERVAL","text":"This environment variable sets the retry interval in seconds for the services retrying a failed action during the bootstrap start-up. Default is 1 second.
Example - Using docker-compose to set start-up interval to 3 seconds
environment: EDGEX_STARTUP_INTERVAL: \"3\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#notable-other-standard-environment-variables","title":"Notable Other Standard Environment Variables","text":"This section covers other standard environment variables that are not common to all services.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_secretstore_tokens","title":"EDGEX_ADD_SECRETSTORE_TOKENS","text":"This environment variable tells the Secret Store Setup service which add-on services to generate SecretStore tokens for. See Configure Service's Secret Store section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_known_secrets","title":"EDGEX_ADD_KNOWN_SECRETS","text":"This environment variable tells the Secret Store Setup service which add-on services need which known secrets added to their Secret Stores. See Configure Known Secrets section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_registry_acl_roles","title":"EDGEX_ADD_REGISTRY_ACL_ROLES","text":"This environment variable tells the Consul service entry point script which add-on services need ACL roles created. See Configure ACL Role section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_proxy_route","title":"EDGEX_ADD_PROXY_ROUTE","text":"This environment variable tells the Proxy Setup Service which additional routes need to be added for add-on services. See Configure API Gateway Route section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_ikm_hook","title":"EDGEX_IKM_HOOK","text":"This environment variable tells the Secret Store Setup service the path to an executable that implements the IKM interface. See IKM HOOK section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#command-line-overrides","title":"Command-line Overrides","text":"This section describes the command-line overrides that are common to most services. These overrides allow the use of the specific command-line flag to be overridden each time a service starts up.
Note
All command-line overrides also have the EDGEX_
prefix.
This environment variable overrides the -cd/--configDir
command-line option.
Example - Using docker-compose to override the configuration folder name
environment: EDGEX_CONF_DIR: \"/my-config\"\n
EdgeX 3.0
The EDGEX_CONF_DIR
environment variable is replaced by EDGEX_CONFIG_DIR
in EdgeX 3.0.
This environment variable overrides the -cf/--configFile
command-line option.
Example - Using docker-compose to override the configuration file name used
environment: EDGEX_CONFIG_FILE: \"my-config.yaml\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_config_provider","title":"EDGEX_CONFIG_PROVIDER","text":"This environment variable overrides the -cp/--configProvider
command-line option.
Overriding with a value of none
disables the use of the Configuration Provider.
Note
All EdgeX service Docker images have this option set to -cp=consul.http://edgex-core-consul:8500
.
Example - Using docker-compose to override with different port number
environment: EDGEX_CONFIG_PROVIDER: \"consul.http://edgex-consul:9500\"\n\nor\n\nenvironment: EDGEX_CONFIG_PROVIDER: \"none\"\n
EdgeX 3.0
The EDGEX_CONFIGURATION_PROVIDER
environment variable is replaced by EDGEX_CONFIG_PROVIDER
in EdgeX 3.0.
This environment variable overrides the -cc/--commonConfig
command-line option.
Note
The Common Config can only be specified when not using the Configuration Provider.
Example - Override with a common configuration file at the command line
$ export EDGEX_COMMON_CONFIG=./my-common-configuration.yaml\n$ ./core-data\n
EdgeX 3.0
The EDGEX_COMMON_CONFIG
variable is new to EdgeX 3.0.
This environment variable overrides the -p/--profile
command-line option. When non-empty, the value is used in the path to the configuration file. i.e. /res/my-profile/configuation.yaml. This is useful when running multiple instances of a service such as App Service Configurable.
Example - Using docker-compose to override the profile to use
app-service-rules:\nimage: edgexfoundry/docker-app-service-configurable:2.0.0\nenvironment: EDGEX_PROFILE: \"rules-engine\"\n...\n
This sets the profile
so that the App Service Configurable uses the rules-engine
configuration profile which resides at /res/rules-engine/configuration.yaml
This environment variable overrides the -r/--registry
command-line option.
Note
All EdgeX service Docker images have this option set to --registry
.
Example - Using docker-compose to override use of the Registry
environment: EDGEX_USE_REGISTRY: \"false\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_remote_service_hosts","title":"EDGEX_REMOTE_SERVICE_HOSTS","text":"This environment variable overrides the -rsh/--remoteServiceHosts
command-line option.
Example - Using docker-compose to override Remote Service Hosts
environment: EDGEX_REMOTE_SERVICE_HOSTS: \"172.26.113.174,172.26.113.150,localhost\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#configuration-overrides","title":"Configuration Overrides","text":"EdgeX 3.0
New in EdgeX 3.0. When used, the Configuration Provider is the System of Record for all configuration. The environment variables for configuration overrides no longer have the highest precedence. However, environment variables for standard and command-line overrides still maintain their role and higher precedence.
Configuration Provider is the System of Record for all configurations
When using the Configuration Provider, it is the System of Record for all configurations. Environment variables are only applied when the configuration is first read from file. These overridden values are used to seed the services' configuration into the Configuration Provider. Once the Configuration Provider has been seeded, services always get their configuration from the Configuration Provider on start up. Any subsequent changes to configuration must be done via the Configuration Provider. Changing an environment variable override for configuration and restating the service will not impact the service's configuration. The services configuration must first be removed from the Configuration Provider for any new/updated environment variable override(s) to impact the service's configuration.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#service-configuration-overrides","title":"Service Configuration Overrides","text":"Any configuration setting from a service's configuration.yaml
file can be overridden by environment variables. The environment variable names have the following format:
<SECTION-NAME>_<KEY-NAME>\n<SECTION-NAME>_<SUB-SECTION-NAME>_<KEY-NAME>\n
Example - Environment Variable Overrides of Configuration
Service configuration YAML Environment variable Writable:LogLevel: \"INFO\"WRITABLE_LOGLEVEL=DEBUG Service:
Host: \"localhost\"SERVICE_HOST=edgex-core-data
Important
Private configuration overrides are only applied to configuration settings that exist in the service's private configuration file.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#secretstore-configuration-overrides","title":"SecretStore Configuration Overrides","text":"The environment variables overrides for SecretStore configuration follow the same rules as the regular configuration overrides. The following are the SecretStore fields that are commonly overridden.
Example SecretStore Configuration Override
Configuration Setting: SecretStore.Host\nEnvironment Variable Override: SECRETSTORE_HOST=edgex-vault
The complete list of SecretStore fields and defaults can be found in the file here. The defaults for the remaining fields typically do not need to be overridden, but may be overridden if needed using that same naming scheme as above.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#notable-configuration-overrides","title":"Notable Configuration Overrides","text":"This section describes configuration overrides that have special utility, such as enabling a debug capability or facilitating code development.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#tokenfileprovider_defaulttokenttl-security-secretstore-setup-service","title":"TOKENFILEPROVIDER_DEFAULTTOKENTTL (security-secretstore-setup service)","text":"This configuration override variable controls the TTL of the default SecretStore tokens that are created for EdgeX microservices by the Secret Store Setup service. This variable defaults to 1h
(one hour) if unspecified. It is often useful when developing a new microservice to set this value to a higher value, such as 12h
. This higher value will allow the secret store token to remain valid long enough for a developer to get a new microservice working and into a state where it can renew its own token. (All secret store tokens in EdgeX expire if not renewed periodically.)
The EdgeX registry and configuration service provides other EdgeX Foundry micro services with information about associated services within EdgeX Foundry (such as location and status) and configuration properties (i.e. - a repository of initialization and operating values). Today, EdgeX Foundry uses Consul by Hashicorp as its reference implementation configuration and registry providers. However, abstractions are in place so that these functions could be provided by an alternate implementation. In fact, registration and configuration could be provided by different services under the covers. For more, see the Configuration Provider and Registry Provider sections in this page.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration","title":"Configuration","text":"Please refer to the following EdgeX Foundry ADRs for details (and design decisions) behind the configuration in EdgeX
EdgeX 3.0
Common configuration in single location is new in Edgex 3.0
Many of EdgeX service's configuration settings are the same as all other services. These common configuration settings have been consolidated into a single common configuration location which is seeded by the core-common-config-bootstrapper service. This service seeds the configuration provider with the common configuration from its local file located in the cmd/res/configuration.yaml
. See the Common Configuration for list of all the common configuration settings.
Because EdgeX Foundry may be deployed and run in several different ways, it is important to understand how configuration is loaded and from where it is sourced. Referring to the cmd directory within the edgex-go repository, each service has its own folder. Inside each service's folder there is a res
directory (short for \"resource\"). There the configuration files in YAML format define each service's configuration. A service may support several different configuration profiles, such as a App Service Configurable does. In this case, the configuration file located directly in the res
directory should be considered the default configuration profile. Sub-directories will contain configurations appropriate to the respective profile.
As of the Geneva release, EdgeX recommends using environment variable overrides instead of creating profiles to override some subset of config values. App Service Configurable is an exception to this as this is how it defined unique instances using the same executable.
If you choose to use profiles as described above, the config profile can be indicated using one of the following command line flags:
--profile / -p
Taking the Core Data
and App Service Configurable
services as an examples:
./core-data
starts the service using the default profile found locally./app-service-configurable --profile=rules-engine
starts the service using the rules-engine
profile found locallyNote
Again, utilizing environment variables for configuration overrides is the recommended path. Config profiles, for the most part, are not used.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#seeding-configuration","title":"Seeding Configuration","text":"EdgeX 3.0
Seeding of the new separate common configuration is new in Edgex 3.0
When utilizing the centralized configuration management for the EdgeX Foundry microservices, it is necessary to seed the required configuration before starting the services. The new core-common-config-bootstrapper is responsible for seeding the common configuration that all services now depend on. Each service has the built-in capability to perform the seeding operation for its private configuration. A service will use its local configuration file to seeded into the configuration provider if such is being used.
In order for a service to seed/load the configuration to/from the configuration provider, use one of the following flags:
--configProvider / -cp
Again, taking the core-data
service as an example:
./core-data -cp=consul.http://localhost:8500
will start the service using configuration values found in the provider or seed them if they do not exist.
EdgeX 3.0
In EdgeX 3.0, the common environment variable overrides are applied to this common configuration prior to pushing the configuration into the configuration provider. This dramatically reduces the number of duplicate environment variable overrides in the Docker compose files.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration-structure","title":"Configuration Structure","text":"EdgeX 3.0
In EdgeX 3.0, the configuration is no longer organized into a hierarchical structure grouped by service types.
The root namespace separates EdgeX Foundry related configuration information from other applications that may be using the same configuration provider. Below the root is the configuration version and then all the individual services in a flat list. As an example, the nodes shown when one views the configuration provider might be as follows:
Example configuration structure
**edgex/v3** (root namespace)\n - app-* (app services)\n - core-* (core services which includes common config)\n - devices-* (device services)\n - security-* (security services)\n - support-* (support services)\n
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#versioning","title":"Versioning","text":"The version is now part of the root namespace , i.e. edgex/v3
An advantage of grouping all minor/patch versions under a major version involves end-user configuration changes that need to be persisted during an upgrade. A service on startup will not overwrite existing configuration when it runs unless explicitly told to do so via the --overwrite / -o
command line flag. Therefore, if a user leaves their configuration provider running during an EdgeX Foundry upgrade any customization will be left in place. Environment variable overrides such as those supplied in the docker-compose for a given release will always override existing content in the configuration provider.
You can supply and manage configuration in a centralized manner by utilizing the -cp/--configProvider
flag when starting a service. If the flag is provided and points to an application such as HashiCorp's Consul, the service will bootstrap its configuration into the provider, if it doesn't exist. If the configuration already exists, it will load the content from the given location, applying any environment variable overrides of which the service is aware. Integration with the configuration provider is handled through the go-mod-configuration module referenced by all services.
The registry refers to any platform you may use for service discovery. For the EdgeX Foundry reference implementation, the default provider for this responsibility is Consul. Integration with the registry is handled through the go-mod-registry module referenced by all services.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#introduction-to-registry","title":"Introduction to Registry","text":"The objective of the registry is to enable micro services to find and to communicate with each other. When each micro service starts up, it registers itself with the registry, and the registry continues checking its availability periodically via a specified health check endpoint. When one micro service needs to connect to another one, it connects to the registry to retrieve the available host name and port number of the target micro service and then invokes the target micro service. The following figure shows the basic flow.
Consul is the default registry implementation and provides native features for service registration, service discovery, and health checking. Please refer to the Consul official web site for more information:
https://www.consul.io
Physically, the \"registry\" and \"configuration\" management services are combined and running on the same Consul server node.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#web-user-interface","title":"Web User Interface","text":"A web user interface is also provided by Consul. Users can view the available service list and their health status through the web user interface. The web user interface is available at the /ui path on the same port as the HTTP API. By default this is http://localhost:8500/ui. For more detail, please see:
https://developer.hashicorp.com/consul/tutorials/certification-associate-tutorials/get-started-explore-the-ui
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#running-on-docker","title":"Running on Docker","text":"For ease of use to install and update, the microservices of EdgeX Foundry are published as Docker images onto Docker Hub and compose files that allow you to run EdgeX and dependent service such as Consul. These compose files can be found here in the edgex-compose repository. See the Getting Started using Docker for more details.
Once the EdgeX stack is running in docker verify Consul is running by going to http://localhost:8500/ui in your browser.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#running-on-local-machine","title":"Running on Local Machine","text":"To run Consul on the local machine, following these steps:
Execute the following command:
consul agent -data-dir \\${DATA_FOLDER} -ui -advertise 127.0.0.1 -server -bootstrap-expect 1\n\n# ${DATA_FOLDER} could be any folder to put the data files of Consul and it needs the read/write permission.\n
Verify the result: http://localhost:8500/ui
As stated in the top level V3 Migration guide, common configuration has been separated out from each service's private configuration. See the Service Configuration page for more details on the new Common Configuration.
There have also been changes to some sections of the common configuration in order to make them consistent and stream-lined for all EdgeX services
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#messagebus","title":"MessageBus","text":"In EdgeX 3.0 the EdgeX MessageBus configuration has been refactored and renamed to be MessageBus
. Prior to EdgeX 3.0, Core/Support Services and Device services had it as MessageQueue
and Applications Services had it as MessageBus
under the Trigger
configuration. Now all services have it as top level MessageBus
. In addition to the rename, the following fields have been add or removed:
false
. Set to true
by Application Services that don't need the EdgeX MessageBus for Trigger or Metrics. When set to false
this allows for Metrics to still be published to the EdgeX MessageBus when the Trigger is set to http
or external-mqtt
edgex
if not set.BaseTopicPrefix
BaseTopicPrefix
BaseTopicPrefix
PersistData
is set totrue
the Core Data will always subscribe to events from the EdgeX MessageBusIf your deployment has customized any of the EdgeX provided service's MessageBus
configuration, you will need to re-apply your customizations to the EdgeX 3.0 version of the service's MessageBus
configuration in the new separated out common configuration.
Example V3 MessageBus configuration - Common
MessageBus:\nProtocol: \"redis\"\nHost: \"localhost\"\nPort: 6379\nType: \"redis\"\nAuthMode: \"usernamepassword\" # required for redis MessageBus (secure or insecure).\nSecretName: \"redisdb\"\nBaseTopicPrefix: \"edgex\" # prepended to all topics as \"edgex/<additional topic levels>\nOptional:\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\nQos: \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive: \"10\" # Seconds (must be 2 or greater)\nRetained: \"false\"\nAutoReconnect: \"true\"\nConnectTimeout: \"5\" # Seconds\nSkipCertVerify: \"false\"\n# Additional Default NATS Specific options that need to be here to enable environment variable overrides of them\nFormat: \"nats\"\nRetryOnFailedConnect: \"true\"\nQueueGroup: \"\"\nDurable: \"\"\nAutoProvision: \"true\"\nDeliver: \"new\"\nDefaultPubRetryAttempts: \"2\"\nSubject: \"edgex/#\" # Required for NATS JetStream only for stream auto-provisioning\n
With the separation of Common Configuration, each service needs set the Optional.ClientId
in their private configuration to a unique value
Example V3 MessageBus configuration - Private
MessageBus:\nOptional:\nClientId: \"core-data\"\n
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#database","title":"Database","text":"In EdgeX 3.0 the database configuration for Core/Support services has changed from Databases map[string]bootstrapConfig.Database
to Database bootstrapConfig.Database
. This aligns it with the database configuration used by Application Services
Example V3 Database configuration
Database:\n Host: \"localhost\"\n Port: 6379\n Timeout: \"5s\"\n Type: \"redisdb\"\n
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#secretstore","title":"SecretStore","text":"In EdgeX 3.0 the SecretStore
settings have been remove from the service configuration and are now controlled via default values and environment variable overrides. The environment variable override names have not changed. See SecretStore Configuration Overrides section for more details.
If you have customized SecretStore
configuration, simply remove the SecretStore
section and use environment variable overrides to apply your customizations.
In EdgeX 3.0 some InsecureSecrets
configuration fields names have changed.
SecretName
SecretData
Example V3 InsecureSecrets configuration
InsecureSecrets:\nDB:\nSecretName: \"redisdb\"\nSecretData:\nusername: \"\"\npassword: \"\"\n
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#custom-insecuresecrets","title":"Custom InsecureSecrets","text":""},{"location":"microservices/configuration/V3MigrationCommonConfig/#in-file","title":"In File","text":"If you have customized InsecureSecrets
in the configuration file you will need to adjust the field names described above.
If you have used Environment Variable Overrides to customize InsecureSecrets
, the Environment Variable names will need to change to account for the new field names above.
Example V3 Environment Variable Overrides for InsecureSecrets
WRITABLE_INSECURESECRETS_<KEY>_SECRETNAME: mySecretName\nWRITABLE_INSECURESECRETS_<KEY>_SECRETDATA_<DATAKEY>: mySecretDataItem\n
"},{"location":"microservices/core/Ch-CoreServices/","title":"Core Services","text":"Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where the innate knowledge of \u201cthings\u201d connected, sensor data collected, and EdgeX configuration resides. Core consists of the following micro services:
The command micro service (often called the command and control micro service) enables the issuance of commands or actions to devices on behalf of:
The command micro service exposes the commands in a common, normalized way to simplify communications with the devices. There are two types of commands that can be sent to a device.
In most cases, GET commands are simple requests for the latest sensor reading from the device. Therefore, the request is often parameter-less (requiring no parameters or body in the request). SET commands require a request body where the body provides a key/value pair array of values used as parameters in the request (i.e. {\"additionalProp1\": \"string\", \"additionalProp2\": \"string\"}
).
The command micro service gets its knowledge about the devices from the metadata service. The command service always relays commands (GET or SET) to the devices through the device service. The command service never communicates directly to a device. Therefore, the command micro service is a proxy service for command or action requests from the north side of EdgeX (such as analytic or application services) to the protocol-specific device service and associated device.
While not currently part of its duties, the command service could provide a layer of protection around device. Additional security could be added that would not allow unwarranted interaction with the devices (via device service). The command service could also regulate the number of requests on a device do not overwhelm the device - perhaps even caching responses so as to avoid waking a device unless necessary.
"},{"location":"microservices/core/command/Ch-Command/#data-model","title":"Data Model","text":""},{"location":"microservices/core/command/Ch-Command/#data-dictionary","title":"Data Dictionary","text":"DeviceProfileDeviceCoreCommandCoreCommandCoreCommandParameters Property Description Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources deviceResource collection DeviceCommands collect of deviceCommand Property Description DeviceName reference to a device by name ProfileName reference to a device profile by name CoreCommands array of core commands Property Description Name Get bool indicating a get command Set bool indicating a set command Path Url Parameters array of core command parameters Property Description ResourceName ValueType"},{"location":"microservices/core/command/Ch-Command/#high-level-interaction-diagrams","title":"High Level Interaction Diagrams","text":"The two following High Level Diagrams show:
Command PUT Request
Request for Devices and Available Commands
"},{"location":"microservices/core/command/Ch-Command/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Core Command.
Edgex 3.0
For EdgeX 3.0 the MessageQueue.Internal
configuration has been moved to MessageBus
in Common Configuration and MessageQueue.External
has been moved to ExternalMQTT
below
-cp/--configProvider
flag LogLevel INFO log entry severity level. Log entries not of the default level or higher are ignored. Property Default Value Description .mqtt --- Secrets for when connecting to secure External MQTT when running in non-secure mode Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics <TBD>
Service metrics that Core Command collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary Core Metadata service level tags to included with every metric that is reported. Property Default Value Description Unique settings for Core Command. The common settings can be found at Common Configuration Port 59882 Micro service port number StartupMsg This is the EdgeX Core Command Microservice Message logged when service completes bootstrap start-up Property Default Value Description Protocol http The protocol to use when building a URI to the service endpoint Host localhost The host name or IP address where the service is hosted Port 59881 The port exposed by the target service Property Default Value Description Unique settings for Core Command. The common settings can be found at Common Configuration ClientId \"core-command Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description Enabled false Indicates whether to connect to external MQTT broker for the Commands via messaging Url tcp://localhost:1883
Fully qualified URL to connect to the MQTT broker ClientId core-command
ClientId to connect to the broker with ConnectTimeout 5s Time duration indicating how long to wait before timing out broker connection, i.e \"30s\" AutoReconnect true Indicates whether or not to retry connection if disconnected KeepAlive 10 Seconds between client ping when no active data flowing to avoid client being disconnected. Must be greater then 2 QOS 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain true Retain setting for MQTT Connection SkipCertVerify false Indicates if the certificate verification should be skipped SecretName mqtt
Name of the path in secret provider to retrieve your secrets. Must be non-blank. AuthMode none
Indicates what to use when connecting to the broker. Must be one of \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". If a CA Cert exists in the SecretPath then it will be used for all modes except \"none\". Property Default Value Description Key-value mappings allow for publication and subscription to the external message bus CommandRequestTopic edgex/command/request/#
For subscribing to 3rd party command requests CommandResponseTopicPrefix edgex/command/response
For publishing responses back to 3rd party systems. /<device-name>/<command-name>/<method>
will be added to this publish topic prefix QueryRequestTopic edgex/commandquery/request/#
For subscribing to 3rd party command query requests QueryResponseTopic edgex/commandquery/response
For publishing command query responses back to 3rd party systems"},{"location":"microservices/core/command/Ch-Command/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"RequireMessageBus
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/core/command/Ch-Command/#commands-via-messaging","title":"Commands via Messaging","text":""},{"location":"microservices/core/command/Ch-Command/#introduction_1","title":"Introduction","text":"Previously, communications from a 3rd party system (enterprise application, cloud application, etc.) to EdgeX in order to acuate a device or get the latest information from a sensor was only accomplished via REST. The 3rd party system makes a REST call of the command service which then relays a request to a device service also using REST. There was no built-in means to make a message-based request of EdgeX or the devices/sensors it manages.
As of the Levski release, the core command service adds support for an external MQTT connection (in the same manner that app services provide an external MQTT connection), which allows it to act as a bridge between the internal message bus (implemented via either MQTT or Redis Pub/Sub) and an external MQTT message bus.
"},{"location":"microservices/core/command/Ch-Command/#core-command-as-message-bus-bridge","title":"Core Command as Message Bus Bridge","text":"The Core Command service will serve as the EdgeX entry point for external, commands via message bus requests to the south side.
3rd party systems should not be granted access to the EdgeX internal message bus. Therefore, in order to implement communications via message bus (specifically MQTT), the command service needs to take messages from the 3rd party or external MQTT topics and pass them internally onto the EdgeX internal message bus where they can eventually be routed to the device services and then on to the devices/sensors (southside).
In reverse, response messages from the southside will also be sent through the internal EdgeX message bus to the command service where they can then be bridged to the external MQTT topics and respond to the 3rd party system requester.
"},{"location":"microservices/core/command/Ch-Command/#message-structure","title":"Message Structure","text":"Since most message bus protocols lack a generic message header mechanism (as in HTTP), providing request/response metadata is accomplished by defining a MessageEnvelope
object associated with each request/response. The message topic names act like the HTTP paths and methods in REST requests. That is, the topic names specify the device receiver of any command request as paths do in the HTTP requests.
Below is an example of the MessageEnvelope
for a command query request:
{\n\"apiVersion\" : \"v3\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ContentType\": \"application/json\",\n\"QueryParams\": {\n\"offset\": \"0\",\n\"limit\": \"10\"\n}\n}\n
Below is an example of the MessageEnvelope
of command query response:
{\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ErrorCode\":0,\n\"Payload\":\"...\",\n\"ContentType\":\"application/json\"\n}\n
The formatted request and response messages share a common base structure. The outermost JSON object represents the message envelope, which is used to convey metadata about the request/response, including ApiVersion
, RequestID
, CorrelationID
...etc.
The Payload
field contains the base64-encoded response body. The ErrorCode
field provides the indication of error. The ErrorCode
will be 0 (no error) or 1 (indicating error) as the two enums for error conditions. When there is an error (with ErrorCode
set to 1), then the payload contains a message string indicating more information about the error. When there is no error (errorCode 0) then there is no message string in the payload.
Core Command service subscribes to the QueryRequestTopic
and publishes the response to QueryResponseTopic
defined in the configuration file. After receiving the request, Core Command service will try to parse the <device-name>
from the request topic levels. The 3rd party system or application must publish command query request messages and subscribe to responses from the same topics. Below is the default topic naming used by Core Command:
edgex/commandquery/request/#
edgex/commandquery/response
The last topic level in the request topic must be either all
or the <device-name>
to query for.
Example of querying device core commands by device name via messaging:
Send query request message to external MQTT broker on topic edgex/commandquery/request/Random-Boolean-Device
:
{\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\"\n}\n
Receive query response message from external MQTT broker on topic edgex/commandquery/response
:
{\n\"ReceivedTopic\":\"\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":\"eyJhcGlWZXJzaW9uIjoidjIiLCJyZXF1ZXN0SWQiOiJlNmU4YTJmNC1lYjE0LTQ2NDktOWUyYi0xNzUyNDc5MTEzNjkiLCJzdGF0dXNDb2RlIjoyMDAsImRldmljZUNvcmVDb21tYW5kIjp7ImRldmljZU5hbWUiOiJSYW5kb20tQm9vbGVhbi1EZXZpY2UiLCJwcm9maWxlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsImNvcmVDb21tYW5kcyI6W3sibmFtZSI6IldyaXRlQm9vbFZhbHVlIiwic2V0Ijp0cnVlLCJwYXRoIjoiL2FwaS92Mi9kZXZpY2UvbmFtZS9SYW5kb20tQm9vbGVhbi1EZXZpY2UvV3JpdGVCb29sVmFsdWUiLCJ1cmwiOiJodHRwOi8vZWRnZXgtY29yZS1jb21tYW5kOjU5ODgyIiwicGFyYW1ldGVycyI6W3sicmVzb3VyY2VOYW1lIjoiQm9vbCIsInZhbHVlVHlwZSI6IkJvb2wifSx7InJlc291cmNlTmFtZSI6IkVuYWJsZVJhbmRvbWl6YXRpb25fQm9vbCIsInZhbHVlVHlwZSI6IkJvb2wifV19LHsibmFtZSI6IldyaXRlQm9vbEFycmF5VmFsdWUiLCJzZXQiOnRydWUsInBhdGgiOiIvYXBpL3YyL2RldmljZS9uYW1lL1JhbmRvbS1Cb29sZWFuLURldmljZS9Xcml0ZUJvb2xBcnJheVZhbHVlIiwidXJsIjoiaHR0cDovL2VkZ2V4LWNvcmUtY29tbWFuZDo1OTg4MiIsInBhcmFtZXRlcnMiOlt7InJlc291cmNlTmFtZSI6IkJvb2xBcnJheSIsInZhbHVlVHlwZSI6IkJvb2xBcnJheSJ9LHsicmVzb3VyY2VOYW1lIjoiRW5hYmxlUmFuZG9taXphdGlvbl9Cb29sQXJyYXkiLCJ2YWx1ZVR5cGUiOiJCb29sIn1dfSx7Im5hbWUiOiJCb29sIiwiZ2V0Ijp0cnVlLCJzZXQiOnRydWUsInBhdGgiOiIvYXBpL3YyL2RldmljZS9uYW1lL1JhbmRvbS1Cb29sZWFuLURldmljZS9Cb29sIiwidXJsIjoiaHR0cDovL2VkZ2V4LWNvcmUtY29tbWFuZDo1OTg4MiIsInBhcmFtZXRlcnMiOlt7InJlc291cmNlTmFtZSI6IkJvb2wiLCJ2YWx1ZVR5cGUiOiJCb29sIn1dfSx7Im5hbWUiOiJCb29sQXJyYXkiLCJnZXQiOnRydWUsInNldCI6dHJ1ZSwicGF0aCI6Ii9hcGkvdjIvZGV2aWNlL25hbWUvUmFuZG9tLUJvb2xlYW4tRGV2aWNlL0Jvb2xBcnJheSIsInVybCI6Imh0dHA6Ly9lZGdleC1jb3JlLWNvbW1hbmQ6NTk4ODIiLCJwYXJhbWV0ZXJzIjpbeyJyZXNvdXJjZU5hbWUiOiJCb29sQXJyYXkiLCJ2YWx1ZVR5cGUiOiJCb29sQXJyYXkifV19XX19\",\n\"ContentType\":\"application/json\",\n\"QueryParams\":{}\n}\n
Base64-decoding the Payload:
{\n\"apiVersion\":\"v2\",\n\"requestId\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"statusCode\":200,\n\"deviceCoreCommand\":{\n\"deviceName\":\"Random-Boolean-Device\",\n\"profileName\":\"Random-Boolean-Device\",\n\"coreCommands\":[\n{\n\"name\":\"WriteBoolValue\",\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"Bool\", \"valueType\":\"Bool\"},\n{\"resourceName\":\"EnableRandomization_Bool\",\"valueType\":\"Bool\"}\n]\n},\n{\n\"name\":\"WriteBoolArrayValue\",\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/WriteBoolArrayValue\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"BoolArray\",\"valueType\":\"BoolArray\"},\n{\"resourceName\":\"EnableRandomization_BoolArray\",\"valueType\":\"Bool\"}\n]\n},\n{\n\"name\":\"Bool\",\n\"get\":true,\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/Bool\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"Bool\",\"valueType\":\"Bool\"}\n]\n},\n{\n\"name\":\"BoolArray\",\n\"get\":true,\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/BoolArray\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"BoolArray\",\"valueType\":\"BoolArray\"}\n]\n}\n]\n}\n}\n
"},{"location":"microservices/core/command/Ch-Command/#query-all","title":"Query All","text":"Example of querying all device core commands via messaging:
Send query request message to external MQTT broker on topic edgex/commandquery/request/all
:
{\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"QueryParams\": {\n\"offset\": \"0\",\n\"limit\": \"5\"\n}\n}\n
Receive query response message from external MQTT broker on topic edgex/commandquery/response
:
{\n\"ApiVersion\":\"v2\",\n\"ContentType\":\"application/json\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":\"...\"\n}\n
Core Command service subscribes to the CommandRequestTopic
defined in the configuration file. After receiving the request, Core Command service will try to parse <device-name>
<command-name>
and <method>
from the request topic levels, and send the response back with <device-name>
, <command-name>
and <method>
appended to CommandResponseTopicPrefix
defined in the configuration file. The 3rd party system or application must publish command request messages and subscribe to responses from the same topics. Below is the default topic naming used by Core Command:
edgex/command/request/#
edgex/command/response/<device-name>/<command-name>/<method>
The last topic level (<method>
) in the request topic must be either get
or set
.
Example of making get command request via messaging:
edgex/command/request/Random-Boolean-Device/Bool/get
: {\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"QueryParams\": {\n\"ds-pushevent\": \"false\",\n\"ds-returnevent\": \"true\"\n}\n}\n
edgex/command/response/#
: {\n\"ReceivedTopic\":\"edgex/command/response/Random-Boolean-Device/Bool/get\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":\"eyJhcGlWZXJzaW9uIjoidjIiLCJyZXF1ZXN0SWQiOiJlNmU4YTJmNC1lYjE0LTQ2NDktOWUyYi0xNzUyNDc5MTEzNjkiLCJzdGF0dXNDb2RlIjoyMDAsImV2ZW50Ijp7ImFwaVZlcnNpb24iOiJ2MiIsImlkIjoiM2JiMDBlODYtMTZkZi00NTk1LWIwMWEtMWFhNTM2ZTVjMTM5IiwiZGV2aWNlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsInByb2ZpbGVOYW1lIjoiUmFuZG9tLUJvb2xlYW4tRGV2aWNlIiwic291cmNlTmFtZSI6IkJvb2wiLCJvcmlnaW4iOjE2NjY1OTE2OTk4NjEwNzcwNzYsInJlYWRpbmdzIjpbeyJpZCI6IjFhMmM5NTNkLWJmODctNDhkZi05M2U3LTVhOGUwOWRlNDIwYiIsIm9yaWdpbiI6MTY2NjU5MTY5OTg2MTA3NzA3NiwiZGV2aWNlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsInJlc291cmNlTmFtZSI6IkJvb2wiLCJwcm9maWxlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsInZhbHVlVHlwZSI6IkJvb2wiLCJ2YWx1ZSI6ImZhbHNlIn1dfX0=\",\n\"ContentType\":\"application/json\",\n\"QueryParams\":{}\n}\n
Base64-decoding the Payload:
{\n\"apiVersion\":\"v2\",\n\"requestId\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"statusCode\":200,\n\"event\":{\n\"apiVersion\":\"v2\",\n\"id\":\"3bb00e86-16df-4595-b01a-1aa536e5c139\",\n\"deviceName\":\"Random-Boolean-Device\",\n\"profileName\":\"Random-Boolean-Device\",\n\"sourceName\":\"Bool\",\n\"origin\":1666591699861077076,\n\"readings\":[\n{\n\"id\":\"1a2c953d-bf87-48df-93e7-5a8e09de420b\",\n\"origin\":1666591699861077076,\n\"deviceName\":\"Random-Boolean-Device\",\n\"resourceName\":\"Bool\",\n\"profileName\":\"Random-Boolean-Device\",\n\"valueType\":\"Bool\",\n\"value\":\"false\"\n}\n]\n}\n}\n
"},{"location":"microservices/core/command/Ch-Command/#set-command","title":"Set Command","text":"Example of making put command request via messaging:
edgex/command/request/Random-Boolean-Device/WriteBoolValue/set
: {\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"Payload\": \"eyJCb29sIjogImZhbHNlIn0=\"\n}\n
The payload is the base64-encoded JSON struct:
{\"Bool\": \"false\"}\n
edgex/command/response/#
{\n\"ReceivedTopic\":\"edgex/command/response/Random-Boolean-Device/WriteBoolValue/set\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":null,\n\"ContentType\":\"application/json\",\n\"QueryParams\":{}\n}\n
Note
There are some cases in which the Core Command service will be unable to publish the response correctly, for example: - The response topic is not specified in the configuration file - Failure to JSON-decode the request MessageEnvelope
- Failure to parse either <device-name>
, <command-name>
or <method>
In the real world, users usually need to provide credentials or certificates to connect to an external MQTT broker. To seed such secrets into the Secret Store for the Command service, follow the instructions in the Seeding Service Secrets document.
The following example shows how to set up the Command service to connect to an external MQTT broker with usernamepassword
authentication.
Example - Setting SecretsFile and ExternalMQTT via environment override
environment:\nEXTERNALMQTT_ENABLED: \"true\"\nEXTERNALMQTT_URL: \"<url>\" # e.g. tcps://broker.hivemq.com:8883\nEXTERNALMQTT_AUTHMODE: usernamepassword\nSECRETSTORE_SECRETSFILE: \"/tmp/core-command/secrets.json\"\n...\nvolumes:\n- /tmp/core-command/secrets.json:/tmp/core-command/secrets.json\n
Example - secrets.json
{\n\"secrets\": [\n{\n\"secretName\": \"mqtt\",\n\"imported\": false,\n\"secretData\": [\n{\n\"key\": \"username\",\n\"value\": \"edgexuser\"\n},\n{\n\"key\": \"password\",\n\"value\": \"p@55w0rd\"\n}\n]\n}\n]\n}\n
Note
Since EdgeX 3.0, the SecretPath
configuration property of ExternalMQTT
section is renamed to SecretName
. However, in the source code it is still referred to as SecretPath
and will break the Command service if ExternalMQTT is enabled. This is a known issue and will be fixed in EdgeX 3.1. Until then, to work around this issue you need to manually add SecretPath
to the configuration via the Consul UI and restart the Command service for the change to take effect.
EdgeX 3.0
Regex Get Command is new in EdgeX 3.0
The Command service supports regex syntax for the command name. The regex is matched against all DeviceResources in the DeviceProfile.
Consider the following example device profile:
apiVersion: \"v2\"\nname: \"Simple-Device\"\ndeviceResources:\n-\nname: \"Xrotation\"\nisHidden: true\ndescription: \"X axis rotation rate\"\nproperties:\nvalueType: \"Int32\"\nreadWrite: \"RW\"\nunits: \"rpm\"\n-\nname: \"Yrotation\"\nisHidden: true\ndescription: \"Y axis rotation rate\"\nproperties:\nvalueType: \"Int32\"\nreadWrite: \"RW\"\n\"units\": \"rpm\"\n-\nname: \"Zrotation\"\nisHidden: true\ndescription: \"Z axis rotation rate\"\nproperties:\nvalueType: \"Int32\"\nreadWrite: \"RW\"\n\"units\": \"rpm\"\n
regex command name .rotation
will return an event including Xrotation
, Yrotation
and Zrotation
readings. Note that the RE2 syntax accepted by Go's regexp
package contains characters like .
, *
, +
...etc. These characters need to be URL-encoded before executing:
$ curl http://localhost:59882/api/v3/device/name/Simple-Device01/%2Erotation\n\n{\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"event\": {\n\"apiVersion\" : \"v3\",\n \"id\": \"821f9a5d-e521-4ea7-83f9-f6bce6881dce\",\n \"deviceName\": \"Simple-Device01\",\n \"profileName\": \"Simple-Device\",\n \"sourceName\": \".rotation\",\n \"origin\": 1679464105224933600,\n \"readings\": [\n{\n\"id\": \"c008960a-c3cc-4cfc-b9f7-a1f1516168ea\",\n \"origin\": 1679464105224933600,\n \"deviceName\": \"Simple-Device01\",\n \"resourceName\": \"Xrotation\",\n \"profileName\": \"Simple-Device\",\n \"valueType\": \"Int32\",\n \"units\": \"rpm\",\n \"value\": \"0\"\n},\n {\n\"id\": \"7f38677a-aa1f-446b-9e28-4555814ea79d\",\n \"origin\": 1679464105224933600,\n \"deviceName\": \"Simple-Device01\",\n \"resourceName\": \"Yrotation\",\n \"profileName\": \"Simple-Device\",\n \"valueType\": \"Int32\",\n \"units\": \"rpm\",\n \"value\": \"0\"\n},\n {\n\"id\": \"ad72be23-1d0e-40a3-b4ec-2fa0fa5aba58\",\n \"origin\": 1679464105224933600,\n \"deviceName\": \"Simple-Device01\",\n \"resourceName\": \"Zrotation\",\n \"profileName\": \"Simple-Device\",\n \"valueType\": \"Int32\",\n \"units\": \"rpm\",\n \"value\": \"0\"\n}\n]\n}\n}\n
"},{"location":"microservices/core/command/Ch-Command/#api-reference","title":"API Reference","text":"Core Command API Reference
"},{"location":"microservices/core/data/Ch-CoreData/","title":"Core Data","text":""},{"location":"microservices/core/data/Ch-CoreData/#introduction","title":"Introduction","text":"The core data micro service provides centralized persistence for data collected by devices. Device services that collect sensor data call on the core data service to store the sensor data on the edge system (such as in a gateway) until the data gets moved \"north\" and then exported to Enterprise and cloud systems. Core data persists the data in a local database. Redis is used by default, but a database abstraction layer allows for other databases to be used.
Other services and systems, both within EdgeX Foundry and outside of EdgeX Foundry, access the sensor data through the core data service. Core data could also provide a degree of security and protection of the data collected while the data is at the edge.
Note
Core data is completely optional. Device services can send data via message bus directly to application services. If local persistence is not needed, the service can be removed.
If persistence is needed, sensor data can be sent via message bus to core data which then persita the data. See below for more details.
Sensor data can be sent to core data via two different means:
Services (like devices services) and other systems can put sensor data on a message bus topic and core data can be configured to subscribed to that topic. This is the default means of getting data to core data. Any service (like an application service or rules engine service) or 3rd system could also subscribe to the same topic. If the sensor data does not need to persisted locally, core data does not have to subscribe to the message bus topic - making core data completely optional. By default, the message bus is implemented using Redis Pub/Sub. MQTT can be used as an alternate message bus implementation.
Services and systems can call on the core data REST API to send data to core data and have the data put in local storage. Prior to EdgeX 2.0, this was the default and only means to send data to core data. Today, it is an alternate means to send data to core data. When data is sent via REST to core data, core data re-publishes the data on to message bus so that other services can subscribe to it.
Core data moves data to the application service (and edge analytcs) via Redis Pub/Sub by default. MQTT or NATS (opt-in at build time) can alternately be used. Use of MQTT requires the installation of a broker such as ActiveMQ. Use of NATS requires all service to be built with NATS enabled and the installation of NATS Server. A messaging infrastructure abstraction is in place that allows for other message bus (e.g., AMQP) implementations to be created and used.
"},{"location":"microservices/core/data/Ch-CoreData/#core-data-streaming","title":"Core Data \"Streaming\"","text":"By default, core data persists all data sent to it by services and other systems. However, when the data is too sensitive to keep at the edge, or there is no use for the data at the edge by other local services (e.g., by an analytics micro service), the data can be \"streamed\" through core data without persisting it. A configuration change to core data (Writable.PersistData=false) has core data send data to the application services without persisting the data. This option has the advantage of reducing latency through this layer and storage needs at the network edge. But the cost is having no historical data to use for analytics that need to look back in time to make a decision.
Note
When persistence is turned off via the PersistData flag, it is off for all devices. At this time, you cannot specify which device data is persisted and which device data is not. Application services do allow filtering of device data before it is exported or sent to another service like the rules engine, but this is not based on whether the data is persisted or not.
Note
As mentioned, core data is completely optional. Therefore, if persistence is not needed, and if sensor data is sent from device services directly to application services via message bus, core data can be removed. In addition to reducing resource utilization (memory and CPU for core data), it also removes latency of throughput as the core data layer can be completely bypassed. However, if device services are still using REST to send data into the system, core data is the central receiving endpoint and must remain in place; even if persistence is turned off.
"},{"location":"microservices/core/data/Ch-CoreData/#events-and-readings","title":"Events and Readings","text":"Data collected from sensors is marshalled into EdgeX event and reading objects (delivered as JSON objects or a binary object encoded as CBOR to core data). An event represents a collection of one or more sensor readings. Some sensors or devices are only providing a single value \u2013 a single reading - at a time. Other sensors spew multiple values whenever they are read.
An event must have at least one reading. Events are associated to a sensor or device \u2013 the \u201cthing\u201d that sensed the environment and produced the readings. Readings represent a sensing on the part of a device or sensor. Readings only exist as part of (are owned by) an event. Readings are essentially a simple key/value pair of what was sensed (the key - called a ResourceName) and the value sensed (the value). A reading may include other bits of information to provide more context (for example, the data type of the value) for the users of that data. Consumers of the reading data could include things like user interfaces, data visualization systems and analytics tools.
In the diagram below, an example event/reading collection is depicted. The event coming from the \u201cmotor123\u201d device has two readings (or sensed values). The first reading indicates that the motor123 device reported the pressure of the motor was 1300 (the unit of measure might be something like PSI).
The value type property (shown as type above) on the reading lets the consumer of the information know that the value is an integer, base 64. The second reading indicates that the motor123 device also reported the temperature of the motor was 120 at the same time it reported the pressure (perhaps in degrees Fahrenheit).
"},{"location":"microservices/core/data/Ch-CoreData/#data-model","title":"Data Model","text":"The following diagram shows the Data Model for core data. Device services send Event objects containing a collection or Readings to core data when a device captures a sensor reading.
"},{"location":"microservices/core/data/Ch-CoreData/#data-dictionary","title":"Data Dictionary","text":"EventReading Property Description Event represents a single measurable event read from a device. Event has a one-to-many relationship with Reading. ID Uniquely identifies an event, for example a UUID. DeviceName DeviceName identifies the source of the event; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resources collected in the readings of the event. SourceName Name of the source request from the device profile (ResourceName or Command) associated to the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. Tags An arbitrary set of labels or additional information associated with the event. It can be used, for example, to add location information (like GPS coordinates) to the event. Readings A collection (one to many) of associated readings of a given event. Property Description ID Uniquely identifies a reading, for example a UUID. DeviceName DeviceName identifies the source of the reading; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resource collected in the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. ResourceName ResourceName-Value provide the key/value pair of what was sensed by a device. ResourceName specifies what was the value collected. ResourceName should match a device resource name in the device profile. Value The sensor data value ValueType The type of the sensor data - from a list of allowed value types that includes Bool, String, Uint8, Int8, ... BinaryValue Byte array of sensor data when the data captured is not structured; for example an image is captured. This information is not persisted in the Database and is expected to be empty when retrieving a Reading for the ValueType of Binary. MediaType Indicating the type of binary data when collected. ObjectValue Complex value of sensor data when the data captured is structured; for example a BACnet date object:\"date\":{ \"year\":2021, \"month\":8, \"day\":26, \"wday\":4 }
. This is expected to be empty when the Reading for the ValueType is not Object
."},{"location":"microservices/core/data/Ch-CoreData/#high-level-interaction-diagrams","title":"High Level Interaction Diagrams","text":"The two following High Level Interaction Diagrams show:
Core Data Add Sensor Readings
Core Data Request Event / Reading for a Device
"},{"location":"microservices/core/data/Ch-CoreData/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Core Data.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics Service metrics that Core Data collects. Boolean value indicates if reporting of the metric is enabled. Metrics.EventsPersisted false Enable/Disable reporting of number of events persisted. Metrics.ReadingsPersisted false Enable/Disable reporting of number of readings persisted. Tags <empty>
List of arbitrary Core Data service level tags to included with every metric that is reported. Property Default Value Description Unique settings for Core Data. The common settings can be found at Common Configuration Port 59880 Micro service port number StartupMsg This is the EdgeX Core Data Microservice Message logged when service completes bootstrap start-up Property Default Value Description Unique settings for Core Data. The common settings can be found at Common Configuration Name coredata Database or document store name Property Default Value Description Unique settings for Core Data. The common settings can be found at Common Configuration ClientId \"core-data Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description MaxEventSize 25000 maximum event size in kilobytes accepted via REST or MessageBus. 0 represents default to system max. Property Default Value Description Enabled false Enable or disable data retention. Interval 30s Purging interval defines when the database should be rid of readings above the MaxCap. MaxCap 10000 The maximum capacity defines where the high watermark of readings should be detected for purging the amount of the reading to the minimum capacity. MinCap 8000 The minimum capacity defines where the total count of readings should be returned to during purging."},{"location":"microservices/core/data/Ch-CoreData/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"No configuration updated
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/core/data/Ch-CoreData/#api-reference","title":"API Reference","text":"Core Data API Reference
"},{"location":"microservices/core/database/Ch-Redis/","title":"Redis Database","text":"EdgeX Foundry's reference implementation database (for sensor data, metadata and all things that need to be persisted in a database) is Redis.
Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory.
"},{"location":"microservices/core/database/Ch-Redis/#memory-utilization","title":"Memory Utilization","text":"Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see the list below) and those strategies has continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual).
Redis supports a number of different levels of on-disk persistence. By default, snapshots of the data are persisted every 60 seconds or after 1000 keys have changed. Beyond increasing the frequency of snapshots, append only files that log every database write are also supported. See https://redis.io/topics/persistence for a detailed discussion on how to balance the options.
Redis supports setting a memory usage limit and a policy on what to do if memory cannot be allocated for a write. See the MEMORY MANAGEMENT section of https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf for the configuration options. Since EdgeX and Redis do not currently communicate on data evictions, you will need to use the EdgeX scheduler to control memory usage rather than a Redis eviction policy.
"},{"location":"microservices/core/metadata/Ch-Metadata/","title":"Core Metadata","text":""},{"location":"microservices/core/metadata/Ch-Metadata/#introduction","title":"Introduction","text":"The core metadata micro service has the knowledge about the devices and sensors and how to communicate with them used by the other services, such as core data, core command, and so forth.
Specifically, metadata has the following abilities:
Although metadata has the knowledge, it does not do the following activities:
To understand metadata, its important to understand the EdgeX data objects it manages. Metadata stores its knowledge in a local persistence database. Redis is used by default, but a database abstraction layer allows for other databases to be used.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-profile","title":"Device Profile","text":"Device profiles define general characteristics about devices, the data they provide, and how to command them. Think of a device profile as a template of a type or classification of device. For example, a device profile for BACnet thermostats provides general characteristics for the types of data a BACnet thermostat sends, such as current temperature and humidity level. It also defines which types of commands or actions EdgeX can send to the BACnet thermostat. Examples might include actions that set the cooling or heating point. Device profiles are typically specified in YAML file and uploaded to EdgeX. More details are provided below.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-profile-details","title":"Device Profile Details","text":"Metadata device profile object model
General PropertiesDevice ResourcesAttributesPropertiesDevice CommandsCore CommandsA device profile has a number of high level properties to give the profile context and identification. Its name field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes:
Here is an example general information section for a sample KMC 9001 BACnet thermostat device profile provided with the BACnet device service (you can find the profile in Github) . Only the name is required in this section of the device profile. The name of the device profile must be unique in any EdgeX deployment. The manufacturer, model and labels are all optional bits of information that allow better queries of the device profiles in the system.
name: \"BAC-9001\"\nmanufacturer: \"KMC\"\nmodel: \"BAC-9001\"\nlabels: - \"B-AAC\"\ndescription: \"KMC BAC-9001 BACnet thermostat\"\n
Labels provided a way to tag, organize or categorize the various profiles. They serve no real purpose inside of EdgeX.
A device resource (in the deviceResources section of the YAML file) specifies a sensor value within a device that may be read from or written to either individually or as part of a device command (see below). Think of a device resource as a specific value that can be obtained from the underlying device or a value that can be set to the underlying device. In a thermostat, a device resource may be a temperature or humidity (values sensed from the devices) or cooling point or heating point (values that can be set/actuated to allow the thermostat to determine when associated heat/cooling systems are turned on or off). A device resource has a name for identification and a description for informational purposes.
The properties section of a device resource has also been greatly simplified. See details below.
Back to the BACnet example, here are two device resources. One will be used to get the temperature (read) the current temperature and the other to set (write or actuate) the active cooling set point. The device resource name must be provided and it must also be unique in any EdgeX deployment.
name: Temperature\ndescription: \"Get the current temperature\"\nisHidden: false\n\nname: ActiveCoolingSetpoint\ndescription: \"The active cooling set point\"\nisHidden: false\n
Note
While made explicit in this example, isHidden
is false by default when not specified. isHidden
indicates whether to expose the device resource to the core command service.
The device service allows access to the device resources via REST endpoint. Values specified in the device resources section of the device profile can be accessed through the following URL patterns:
The attributes associated to a device resource are the specific parameters required by the device service to access the particular value. In other words, attributes are \u201cinward facing\u201d and are used by the device service to determine how to speak to the device to either read or write (get or set) some of its values. Attributes are detailed protocol and/or device specific information that informs the device service how to communication with the device to get (or set) values of interest.
Returning to the BACnet device profile example, below are the complete device resource sections for Temperature and ActiveCoolingSetPoint \u2013 inclusive of the attributes \u2013 for the example device.
-\nname: Temperature\ndescription: \"Get the current temperature\"\nisHidden: false\nattributes: { type: \"analogValue\", instance: \"1\", property: \"presentValue\", index: \"none\" }\n-\nname: ActiveCoolingSetpoint\ndescription: \"The active cooling set point\"\nisHidden: false\nattributes:\n{ type: \"analogValue\", instance: \"3\", property: \"presentValue\", index: \"none\" }\n
The properties of a device resource describe the value obtained or set on the device. The properties can optionally inform the device service of some simple processing to be performed on the value. Again, using the BACnet profile as an example, here are the properties associated to the thermostat's temperature device resource.
name: Temperature\ndescription: \"Get the current temperature\"\nattributes: { type: \"analogValue\", instance: \"1\", property: \"presentValue\", index: \"none\" }\nproperties: valueType: \"Float32\"\nreadWrite: \"R\"\nunits: \"Degrees Fahrenheit\"\n
The 'valueType' property of properties gives more detail about the value collected or set. In this case giving the details of the temperature value to be set. The value provides details such as the type of the data collected or set, whether the value can be read, written or both.
The following fields are available in the value property:
The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI)
Device commands (in the deviceCommands section of the YAML file) define access to reads and writes for multiple simultaneous device resources. Device commands are optional. Each named device command should contain a number of get and/or set resource operations, describing the read or write respectively.
Device commands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes (X, Y and Z) together.
A device command consists of the following properties:
Each resourceOperation will specify:
The device commands can also be accessed through a device service\u2019s REST API in a similar manner as described for device resources.
If a device command and device resource have the same name, it will be the device command which is available.
Device resources or device commands that are not hidden are seen and available via the EdgeX core command service.
Other services (such as the rules engine) or external clients of EdgeX, should make requests of device services through the core command service, and when they do, they are calling on the device service\u2019s unhidden device commands or device resources. Direct access to the device commands or device resources of a device service is frowned upon. Commands, made available through the EdgeX command service, allow the EdgeX adopter to add additional security or controls on who/what/when things are triggered and called on an actual device.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device","title":"Device","text":"Data about actual devices is another type of information that the metadata micro service stores and manages. Each device managed by EdgeX Foundry registers with metadata (via its owning device service. Each device must have a unique name associated to it.
Metadata stores information about a device (such as its address) against the name in its database. Each device is also associated to a device profile. This association enables metadata to apply knowledge provided by the device profile to each device. For example, a thermostat profile would say that it reports temperature values in Celsius. Associating a particular thermostat (the thermostat in the lobby for example) to the thermostat profile allows metadata to know that the lobby thermostat reports temperature value in Celsius.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-service","title":"Device Service","text":"Metadata also stores and manages information about the device services. Device services serve as EdgeX's interfaces to the actual devices and sensors.
Device services are other micro services that communicate with devices via the protocol of that device. For example, a Modbus device service facilitates communications among all types of Modbus devices. Examples of Modbus devices include motor controllers, proximity sensors, thermostats, and power meters. Device services simplify communications with the device for the rest of EdgeX.
When a device service starts, it registers itself with metadata. When EdgeX provisions a new devices the device gets associated to its owning device service. That association is also stored in metadata.
Metadata Device, Device Service and Device Profile Model
Metadata's Device Profile, Device and Device Service object model and the association between them
"},{"location":"microservices/core/metadata/Ch-Metadata/#provision-watcher","title":"Provision Watcher","text":"Device services may contain logic to automatically provision new devices. This can be done statically or dynamically. In static device configuration (also known as static provisioning) the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. The devices and the connection information for those devices is known at the point that the device service starts.
In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there \u2013 and the device may not be there at start up. It must continually scan during its operations (typically on some sort of schedule) for new devices within the guides of the location and device parameters provided by configuration.
Not all device services support dynamic discovery. If it does support dynamic discovery, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher, is specific configuration information provided to a device service (usually at startup) that gets stored in metadata. In addition to providing details about what devices to look for during a scan, a provision watcher may also contain \u201cblocking\u201d indicators, which define parameters about devices that are not to be automatically provisioned. This allows the scope of a device scan to be narrowed or allow specific devices to be avoided.
Metadata's provision watcher object model
"},{"location":"microservices/core/metadata/Ch-Metadata/#data-dictionary","title":"Data Dictionary","text":"EdgeX 3.0
Two fields--LastConnected and LastReported--of Device Service are removed in EdgeX 3.0. A new field Properties is added into Device in EdgeX 3.0, so that device-level properties can be defined and then consumed by the implementation of device services to retrieve extra device-level information. For example, assume a device service may require extra device-level information, such as DeviceInstance
, Firmware
, InstanceID
, and ObjectName
in the runtime, and these extra device-level information can be defined in the properties. A new field Properties is added into ProvisionWatcher in EdgeX 3.0, so that the implementation of device services can retrieve extra information when automatically provisioning a device. For example, assume a device service would like to generate the device name in certain format during auto discovery, a property, e.g. DeviceNameTemplate
with the template format of device name can be defined in the ProvisionWatcher, so that the implementation of device service can generate the device name based on such property.
DeviceInstance
, Firmware
, InstanceID
, and ObjectName
in the runtime, and these extra device-level information can be defined in the properties Property Description Represents the attributes and operational capabilities of a device. It is a template for which there can be multiple matching devices within a given system. Id Uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources DeviceResource collection DeviceCommands Collect of deviceCommand Property Description The atomic description of a particular protocol level interface for a class of Devices; represents a value on a device that can be read or written Description Name Tags Tags for adding additional information on reading level Properties List of associated properties Attributes List of associated attributes Property Description Defines read/write capabilities native to the device Description Name isHidden Indicate the visibility of the DeviceCommand via a CoreCommand. Tags Tags for adding additional information on event level readWrite Read/Write Permissions set for this DeviceCommand. The value can be R, W, or RW. R enables GET command, and W enables SET command. resourceOperations List of associated resources and attributes. Should contain more than one, otherwise it is redundant to the single Resource. Property Description DeviceResource Name of a DeviceResource in this profile to be include in a Device Command DefaultValue Default value set to DeviceResource and it should be compatible with the Type field of the named DeviceResource Mappings Map the GET resourceOperation value to another string value and only valid where the Type of the named DeviceResource is String Property Description Represents a service that is responsible for proxying connectivity between a set of devices and the EdgeX Foundry core services; the current state and reachability information for a registered device service Id Uniquely identifies the device service, a UUID for example Name Labels BaseAddress Address (MQTT topic, HTTP address, serial bus, etc.) for reaching the service AdminState Property Description The transformation and constraint properties for a device resource. ValueType Type of the value ReadWrite Read/Write Permissions set for this property Minimum Minimum value that can be get/set from this property Maximum Maximum value that can be get/set from this property DefaultValue Default value set to this property if no argument is passed Mask Mask to be applied prior to get/set of property Shift Shift to be applied after masking, prior to get/set of property Scale Multiplicative factor to be applied after shifting, prior to get/set of property Offset Additive factor to be applied after multiplying, prior to get/set of property Base Base for property to be applied to, leave 0 for no power operation (i.e. base ^ property: 2 ^ 10) Assertion Required value of the property, set for checking error state. Failing an assertion condition will mark the device with an error state MediaType Property Description The metadata used by a Service for automatically provisioning matching Devices. Id Name Unique name and identifier of the provision watcher Labels Identifiers Set of key value pairs that identify property (MAC, HTTP,...) and value to watch for (00-05-1B-A1-99-99, 10.0.0.1,...) 
BlockingIdentifiers Set of key-values pairs that identify devices which will not be added despite matching on Identifiers ServiceName The base name of the device service that new devices will be associated to AdminState Administrative state for provision watcher - either unlocked or locked DiscoveredDevice A DiscoveredDevice defines the data to be assigned on the new discovered device Property Description A DiscoveredDevice defines the data to be assigned on the new discovered device. ProfileName Name of the device profile that should be applied to the devices available at the identifier addresses AdminState Administrative state for new devices - either unlocked or locked AutoEvents Associated auto events to this new devices Properties A map of extendable properties required by the implementation of device services to retrieve extra information when automatically provisioning a device. For example, assume a device service would like to generate the device name in certain format during auto discovery, a property, e.g. DeviceNameTemplate
with the template format of device name can be defined in the ProvisionWatcher, so that the implementation of device service can generate the device name based on such property"},{"location":"microservices/core/metadata/Ch-Metadata/#high-level-interaction-diagrams","title":"High Level Interaction Diagrams","text":"Sequence diagrams for some of the more critical or complex events regarding metadata. These High Level Interaction Diagrams show:
Add a New Device Profile (Step 1 to provisioning a new device)
Add a New Device (Step 2 to provisioning a new device)
What happens on a device service startup?
"},{"location":"microservices/core/metadata/Ch-Metadata/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Core Metadata.
EdgeX 3.0
Notifications configuration is removed in EdgeX 3.0. Metadata will leverage Device System Events to replace the original device change notifications.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
-cp/--configProvider
flag LogLevel INFO log entry severity level. Log entries not of the default level or higher are ignored. Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics <TBD>
Service metrics that Core Metadata collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary Core Metadata service level tags to included with every metric that is reported. Property Default Value Description StrictDeviceProfileChanges false Whether to allow device profile modifications, set to true
to reject all modifications which might impact the existing events and readings. Thus, the changes like manufacture
, isHidden
, or description
can still be made. StrictDeviceProfileDeletes false Whether to allow device profile deletionsm set to true
to reject all deletions. Property Default Value Description Validation false Whether to enable units of measure validation, set to true
to validate all device profile units
against the list of units of measure by core metadata. Property Default Value Description Unique settings for Core Metadata. The common settings can be found at Common Configuration Port 59881 Micro service port number StartupMsg This is the EdgeX Core Metadata Microservice Message logged when service completes bootstrap start-up Property Default Value Description UoMFile './res/uom.yaml' path to the location of units of measure configuration Property Default Value Description Unique settings for Core Metadata. The common settings can be found at Common Configuration Name metadata Database or document store name Property Default Value Description Unique settings for Core Metadata. The common settings can be found at Common Configuration ClientId \"core-metadata Id used when connecting to MQTT or NATS base MessageBus"},{"location":"microservices/core/metadata/Ch-Metadata/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"RequireMessageBus
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-system-events","title":"Device System Events","text":"Device System Events are events triggered by the add, update or delete of devices. A System Event DTO is published to the EdgeX MessageBus each time a new Device is added, an existing Device is updated or when an existing Device is deleted.
"},{"location":"microservices/core/metadata/Ch-Metadata/#system-event-dto","title":"System Event DTO","text":"Edgex 3.0
System Event types deviceservice
, deviceprofile
and provisionwatcher
are new in EdgeX 3.0
The System Event DTO has the following properties:
Property Description Value Type Type of System Eventdevice
, deviceservice
, deviceprofile
, or provisionwatcher
Action System Event action add
, update
, or delete
in this case Source Source of the System Event core-metadata
in this case Owner Owner of the data in the System Event In this case it is the name of the device service that owns the device or core-metadata
Tags Key value map of additional data empty in this case Details The data object that trigger the System Event the added, updated, or deleted Device/Device Profile/Device Service/Provision Watcher in this case Timestamp Date and time of the System Event timestamp"},{"location":"microservices/core/metadata/Ch-Metadata/#publish-topic","title":"Publish Topic","text":"The System Event DTO for Device System Events is published to the topic specified by the MessageQueue.PublishTopicPrefix
configuration setting above, which has a default of edgex/system-events
, plus the following data items, which are added to allow receivers to filter by subscription.
Example Device System Event publish topics
edgex/system-events/core-metadata/device/add/device-onvif-camera/onvif-camera\nedgex/system-events/core-metadata/device/update/device-rest/sample-numeric\nedgex/system-events/core-metadata/device/delete/device-virtual/Random-Boolean-Device\n
"},{"location":"microservices/core/metadata/Ch-Metadata/#units-of-measure","title":"Units of Measure","text":"Core metadata will read unit of measure configuration (see configuration example below) located in UoM.UoMFile
during startup. The specified configuration may be a local configuration file or the URI of the configuration. See the URI for Files section for more details.
EdgeX 3.1
Support for loading the UoM.UoMFile
configuration via URI is new in EdgeX 3.1.
Sample unit of measure configuration
Source: reference to source for all UoM if not specified below\nUnits:\ntemperature:\nSource: www.weather.com\nValues:\n- C\n- F\n- K\nweights:\nSource: www.usa.gov/federal-agencies/weights-and-measures-division\nValues:\n- lbs\n- ounces\n- kilos\n- grams\n
When validation is turned on (Writable.UoM.Validation
is set to true
), all device profile units
(in device resource, device properties) will be validated against the list of units of measure by core metadata.
In other words, when a device profile is created or updated via the core metadata API, the units specified in the device resource's units
field will be checked against the valid list of UoM provided via core metadata configuration.
If the units
value matches any one of the configuration units of measure, then the device resource is considered valid - allowing the create or update operation to continue. If the units
value does not match any one of the configuration units of measure, then the device profile or device resource operation (create or update) is rejected (error code 500 is returned) and an appropriate error message is returned in the response to the caller of the core metadata API.
Note
The units
field on a profile is and shall remain optional. If the units
field is not specified in the device profile, then it is assumed that the device resource does not have well-defined units of measure. In other words, core metadata will not fail a profile with no units
field specified on a device resource.
Core Metadata API Reference
"},{"location":"microservices/device/Ch-DeviceServices/","title":"Device Services Overview","text":""},{"location":"microservices/device/Ch-DeviceServices/#introduction","title":"Introduction","text":"The Device Services Layer interacts with Device Services.
Device services are the edge connectors interacting with the devices that include, but are not limited to: appliances in your home, alarm systems, HVAC equipment, lighting, machines in any industry, irrigation systems, drones, traffic signals, automated transportation, and so forth.
EdgeX device services translate information coming from devices via hundreds of protocols and thousands of formats and bring them into EdgeX. In other terms, device services ingest sensor data provided by \u201cthings\u201d. When it ingests the sensor data, the device service converts the data produced and communicated by the \u201cthing\u201d into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry.
Device services also receive and handle any request for actuation back to the device. Device services take a general command from EdgeX to perform some sort of action, translate it into a protocol-specific request, and forward the request to the desired device.
Device services serve as the main means EdgeX interacts with sensors/devices. So, in addition to getting sensor data and actuating devices, device services also:
Device services may service one or a number of devices at one time.
A device that a device service manages could be something other than a simple, single, physical device. The device could be an edge/IoT gateway (and all of that gateway's devices), a device manager, a sensor hub, a web service available over HTTP, or a software sensor that acts as a device, or collection of devices, to EdgeX Foundry.
The device service communicates with the devices through protocols native to each device object. EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, BLE, etc. EdgeX also provides the means to create new device services through device service software development kits (SDKs) when you encounter a new protocol and need EdgeX to communicate with a new device.
"},{"location":"microservices/device/Ch-DeviceServices/#device-service-abstraction","title":"Device Service Abstraction","text":"A device service is really just a software abstraction around a device and any associated firmware, software and protocol stack. It allows the rest of EdgeX (and users of EdgeX) to talk to a device via the abstraction API so that all devices look the same from the perspective of how you communicate with them. Under the covers, the implementation of the device service has some common elements, but can also vary greatly depending on the underlying device, protocol, and associate software.
A device service provides the abstraction between the rest of EdgeX and the physical device. In other terms, the device service "wraps" the protocol communication code, device driver/firmware and actual device.
Each device service in EdgeX is an independent micro service. Device services are typically created using a device service SDK. The SDK is really just a library that provides common scaffolding code and convenience methods that are needed by all device services. While not required, the EdgeX community uses the SDKs as the basis for all device services the community provides. The SDKs make it easier to create a device service by allowing a developer to focus on device specific communications, features, etc. versus having to code a lot of EdgeX service boilerplate code. Using the SDKs also helps to ensure the device services adhere to the rules required of device services.
Unless you need to create a new device service or modify an existing device service, you may not ever have to go under the covers, so to speak, to understand how a device service works. However, having some general understanding of what a device service does and how it does it can be helpful in customization, setting configuration and diagnosing problems.
"},{"location":"microservices/device/Ch-DeviceServices/#device-service-functionality","title":"Device Service Functionality","text":"All device services must perform the following tasks:
As you can imagine, many of these tasks (like registering with core metadata) are generic and the same for all device services and thereby provided by the SDK. Other tasks (like getting sensor data from the underlying device) are quite specific to the underlying device. In these cases, the device service SDK provides empty functions for performing the work, but the developer would need to fill in the function code as it relates to the specific device, the communication protocol, device driver, etc.
"},{"location":"microservices/device/Ch-DeviceServices/#device-service-functional-requirements","title":"Device Service Functional Requirements","text":"Requirements for the device service are provided in this documentation. These requirements are being used to define what functionality needs to be offered via any Device Service SDK to produce the device service scaffolding code. They may also help the reader further understand the duties and role of a device service.
"},{"location":"microservices/device/Ch-DeviceServices/#device-profile","title":"Device Profile","text":"EdgeX comes with a number of existing device services for communicating with devices that speak many IoT protocols \u2013 such as Modbus, BACnet, BLE, etc. While these devices services know how to speak to devices that communicate by the associated protocol, the device service doesn\u2019t know the specifics of all devices that speak that protocol. For example, there are thousands of Modbus devices in the world. It is a common industrial protocol used in a variety of devices. Some Modbus devices measure temperature and humidity and provide thermostatic control over building HVAC systems, while other Modbus devices are used in automation control of flare gas meters in the oil and gas industry. This diversity of devices means that the Modbus device service could never know how to communicate with each Modbus device directly. The device service just knows the Modbus protocol generically and must be informed of how to communicate with each individual device based on what that device knows and communicates. Using an analogy, you may speak a language or two. Just because you speak English, doesn\u2019t mean you know everything about all English-speaking people. For example, just because someone spoke English, you would not know if they could solve a calculus problem for you or if they can sing your favorite song.
Device profiles describe a specific device to a device service. Each device managed by a device service has an associated device profile, which defines that device in terms of the data it reports and operations that it supports. General characteristics about the type of device, the data the device provides, and how to command the device are all provided in a device profile. A device profile is described in YAML, which is a human-readable data serialization language (similar to a markup language like XML). See the page on device profiles to learn more about how they provide the detail EdgeX device services need to communicate with a device.
Info
Device profiles, while normally provided to EdgeX in a YAML file, can also be specified to EdgeX in JSON. See the metadata API for uploading via JSON versus uploading a YAML file.
"},{"location":"microservices/device/Ch-DeviceServices/#device-discovery-and-provision-watchers","title":"Device Discovery and Provision Watchers","text":"Device Services may contain logic to automatically provision new devices. This can be done statically or dynamically.
"},{"location":"microservices/device/Ch-DeviceServices/#static-provisioning","title":"Static Provisioning","text":"In static device configuration (also known as static provisioning) the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. The devices and the connection information for those devices is known at the point that the device service starts.
"},{"location":"microservices/device/Ch-DeviceServices/#dynamic-provisioning","title":"Dynamic Provisioning","text":"In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there \u2013 and the device may not be there at start up. It must continually scan during its operations (typically on some sort of schedule) for new devices within the guides of the location and device parameters provided by configuration.
Not all device services support dynamic discovery. If a device service does support it, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher is created via a call to the core metadata provision watcher API (and is stored in the metadata database).
A Provision Watcher is a filter which is applied to any new devices found when a device service scans for devices. It contains a set of ProtocolProperty names and values; these values may be regular expressions. If a new device is to be added, each of these must match the corresponding properties of the new device. Furthermore, a provision watcher may also contain "blocking" identifiers: if any of these match the properties of the new device (note that matching here is not regex-based), the device will not be automatically provisioned. This allows the scope of a device scan to be narrowed or specific devices to be avoided.
More than one Provision Watcher may be provided for a device service, and discovered devices are added if they match with any one of them. In addition to the filtering criteria, a Provision Watcher includes specification of various properties to be associated with the new device which matches it: these are the Profile name, the initial AdminState, and optionally any AutoEvents to be applied.
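A minimal Go sketch of the matching rules just described, using the identifier patterns from the pre-defined provision watcher example later in this document (illustrative only, not the SDK's implementation):

package main

import (
	"fmt"
	"regexp"
)

// matches reports whether a discovered device's protocol properties satisfy a
// provision watcher: every identifier must regex-match, and no blocking
// identifier value may equal the corresponding property.
func matches(props map[string]string, identifiers map[string]string, blocking map[string][]string) bool {
	for key, pattern := range identifiers {
		value, ok := props[key]
		if !ok || !regexp.MustCompile(pattern).MatchString(value) {
			return false
		}
	}
	for key, blockedValues := range blocking {
		for _, blocked := range blockedValues {
			if props[key] == blocked {
				return false
			}
		}
	}
	return true
}

func main() {
	identifiers := map[string]string{"Address": "simple[0-9]+", "Port": "3[0-9]{2}"}
	blocking := map[string][]string{"Port": {"397", "398", "399"}}
	fmt.Println(matches(map[string]string{"Address": "simple01", "Port": "300"}, identifiers, blocking)) // true
	fmt.Println(matches(map[string]string{"Address": "simple01", "Port": "398"}, identifiers, blocking)) // false, blocked port
}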
"},{"location":"microservices/device/Ch-DeviceServices/#admin-state","title":"Admin State","text":"The adminState is either LOCKED
or UNLOCKED
for each device. This is an administrative condition applied to the device. This state is periodically set by an administrator of the system, perhaps for system maintenance or upgrade of the sensor. When LOCKED
, requests to the device via the device service are stopped and an indication that the device is locked (HTTP 423 status code) is returned to the caller.
Data collected from devices by a device service is marshalled into EdgeX event and reading objects (delivered as JSON objects in service REST calls). This is one of the primary responsibilities of a device service. Typically, a configurable schedule - called an auto event schedule - determines when a device service sends data to core data via core data's REST API (future EdgeX implementations may afford alternate means to send the data to core data or to send sensor data to other services).
"},{"location":"microservices/device/Ch-DeviceServices/#test-and-demonstration-device-services","title":"Test and Demonstration Device Services","text":"Among the many available device services provided by EdgeX, there are two device services that are typically used for demonstration, education and testing purposes only. The random device service (device-random-go) is a very simple device service used to provide device service authors a bare bones example inclusive of a device profile. It can also be used to create random integer data (either 8, 16, or 32 bit signed or unsigned) to simulate integer readings when developing or testing other EdgeX micro services. It was created from the Go-based device service SDK.
The virtual device service (device-virtual-go) is also used for demonstration, education and testing. It is a more complex simulator in that it allows any type of data to be generated on a scheduled basis and uses an embedded SQL database (ql) to provide simulated data. Manipulating the data in the embedded database allows the service to mimic almost any type of sensing device. More information on the virtual device service is available in this documentation.
"},{"location":"microservices/device/Ch-DeviceServices/#running-multiple-instances","title":"Running multiple instances","text":"Device services support one additional command-line argument, --instance
or -i
. This allows for running multiple instances of a device service in an EdgeX deployment, by giving them different names.
For example, running device-modbus -i 1
results in a service named device-modbus_1
, ie the parameter given to the instance
argument is added as a suffix to the device service name. The same effect may be obtained by setting the EDGEX_INSTANCE_NAME
environment variable.
Device services now have the capability to publish Events directly to the EdgeX MessageBus, rather than POST the Events to Core Data via REST. This capability is controlled by the Device.UseMessageBus
configuration property (see below), which is set to true
by default. Core Data is configured by default to subscribe to the EdgeX MessageBus to receive and persist the Events. Application services, as in EdgeX 1.x, subscribe to the EdgeX MessageBus to receive and process the Events.
EdgeX 3.0
Upon successful PUT command, Device services will also publish an Event with the updated Resource value(s) to the EdgeX MessageBus as long as the Resource(s) are not write-only.
"},{"location":"microservices/device/Ch-DeviceServices/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services.
EdgeX 3.0
UpdateLastConnected is removed in EdgeX 3.0.
EdgeX 3.0
For EdgeX 3.0, the MessageQueue configuration has been moved to MessageBus in the Common Configuration.
Note
The *
on the configuration section names below denotes that these sections are pulled from the device service common configuration and thus are not in the individual device service's private configuration file.
false
to not include units in the Reading. Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics Service metrics that the device service collects. Boolean value indicates if reporting of the metric is enabled. Common and custom metrics are also included. EventsSent
= false Enable/disable reporting of the built-in EventsSent metric ReadingsSent
= false Enable/disable reporting of the built-in ReadingsSent metric <CustomMetric>
= false Enable/disable reporting of custom device service's custom metric. See Custom Device Service Metrics for more details. Tags <empty>
List of arbitrary service level tags to included with every metric that is reported. Property Default Value Description Protocol http The protocol to use when building a URI to the service endpoint Host localhost The host name or IP address where the service is hosted Port 59881 The port exposed by the target service Property Default Value Description Properties that determine how the device service communicates with a device DataTransform true Controls whether transformations are applied to numeric readings MaxCmdOps 128 Maximum number of resources in a device command (hence, readings in an event) MaxCmdResultLen 256 Maximum JSON string length for command results ProfilesDir './res/profiles' If set, directory or index URI containing profile definition files to upload to core-metadata. See URI for Device Service Files for more information on URI index files. Also may be in device service private config, so it can be overridden with environment variable DevicesDir './res/devices' If set, directory or index URI containing device definition files to upload to core-metadata. See URI for Device Service Files for more information on URI index files. Also may be in device service private config, so it can be overridden with environment variable ProvisionWatchersDir '' If set, directory or index URI containing provision watcher definition files to upload to core-metadata (service specific when needed). See URI for Device Service Files for more information on URI index files. EnableAsyncReadings true Enables/Disables the Device Service ability to handle async readings AsyncBufferSize 16 Size of the buffer for async readings Discovery/Enabled false Controls whether device discovery is enabled Discovery/Interval 30s Interval between automatic discovery runs. Zero means do not run discovery automatically Property Default Value Description MaxEventSize 0 maximum event size in kilobytes sent to Core Data or MessageBus. 0 represents default to system max."},{"location":"microservices/device/Ch-DeviceServices/#uris-for-device-service-files","title":"URIs for Device Service Files","text":"EdgeX 3.1
Support for URIs for Devices, Profiles, and Provision Watchers is new in EdgeX 3.1.
When loading device definitions, device profiles, and provision watchers from a URI, the directory field (ie DevicesDir
, ProfilesDir
, ProvisionWatchersDir
) loads an index file instead of a folder name. The contents of the index file will specify the individual files to load by URI by appending the filenames to the URI as shown in the example below. Any authentication specified in the original URI will be used in subsequent URIs. See the URI for Files section for more details.
Example Device Dir loaded from URI in service configuration
...\nProfilesDir = \"./res/profiles\"\nDevicesDir = \"http://example.com/devices/index.json\"\nProvisionWatchersDir = \"./res/provisionwatchers\"\n...\n
"},{"location":"microservices/device/Ch-DeviceServices/#device-definition-uri-example","title":"Device Definition URI Example","text":"For device definitions, the index file contains the list of references to device files that contain one or more devices.
Example Device Index File at http://example.com/devices/index.json
and resulting URIs
[\n\"device1.yaml\", \"device2.yaml\"\n]\nwhich results in the following URIs:\nhttp://example.com/devices/device1.yaml\nhttp://example.com/devices/device2.yaml\n
"},{"location":"microservices/device/Ch-DeviceServices/#device-profile-and-provision-watchers-uri-example","title":"Device Profile and Provision Watchers URI Example","text":"For device profiles and provision watchers, the index file contains a dictionary of key-value pairs that map the name of the profile or provision watcher to its file. The name is mapped so that the resources are only loaded from a URI if a device profile or provision watcher by that name has not been loaded yet.
Example Device Profile Index File at http://example.com/profiles/index.json
and resulting URIs
{\n\"Simple-Device\": \"Simple-Driver.yaml\",\n\"Simple-Device2\": \"Simple-Driver2.yml\"\n}\nwhich results in the following URIs:\nhttp://example.com/profiles/Simple-Driver.yaml\nhttp://example.com/profiles/Simple-Driver2.yml\n
"},{"location":"microservices/device/Ch-DeviceServices/#custom-configuration","title":"Custom Configuration","text":"Device services can have custom configuration in one of two ways. See the table below for details.
DriverCustom Structured ConfigurationDriver
- The Driver section used for simple custom settings and is accessed via the SDK's DriverConfigs() API. The DriverConfigs API returns a map[string] string
containing the contents on the Driver
section of the configuration.yaml
file.
Driver:\nMySetting: \"My Value\"\n
For Go Device Services see Go Custom Structured Configuration for more details.
For C Device Service see C Custom Structured Configuration for more details.
"},{"location":"microservices/device/Ch-DeviceServices/#secrets","title":"Secrets","text":""},{"location":"microservices/device/Ch-DeviceServices/#configuration","title":"Configuration","text":"Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
All instances of Device Services running in secure mode require a SecretStore
to be created for the service by the Security Services. See Configuring Add-on Service for details on configuring a SecretStore
to be created for the Device Service. With the use of Redis Pub/Sub
as the default EdgeX MessageBus all Device Services need the redisdb
known secret added to their SecretStore
so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details.
Each Device Service also has detailed configuration to enable connection to its exclusive SecretStore
When running a Device Service in secure mode, secrets can be stored in the SecretStore by making an HTTP POST
call to the /api/v3/secret
API route on the Device Service. The secret data POSTed is stored to the service's secureSecretStore
. Once a secret is stored, only the service that added the secret will be able to retrieve it. See the Secret API Reference for more details and example.
When running in insecure mode, the secrets are stored and retrieved from the Writable.InsecureSecrets section of the service's configuration.yaml file. Insecure secrets and their paths can be configured as below.
Example - InsecureSecrets Configuration
Writable:\nInsecureSecrets: DB:\nSecretName: \"redisdb\"\nSecretData:\nusername: \"\"\npassword: \"\"\nMQTT:\nSecretName: \"credentials\"\nSecretData:\nusername: \"mqtt-user\"\npassword: \"mqtt-password\"\n
"},{"location":"microservices/device/Ch-DeviceServices/#retrieving-secrets","title":"Retrieving Secrets","text":"Device Services retrieve secrets from their SecretStore
using the SDK API. See Retrieving Secrets for more details using the Go SDK.
Device Service - SDK- API Reference
"},{"location":"microservices/device/V3Migration/","title":"V3 Device Service Migration Guide","text":""},{"location":"microservices/device/V3Migration/#all-device-services","title":"All Device Services","text":"This section is specific to changes made that impact only and all device services.
See Top Level V3 Migration Guide for details applicable to all EdgeX Services.
"},{"location":"microservices/device/V3Migration/#device-files","title":"Device Files","text":"LastConnected
and LastReported
configs.ProtocolProperties now supports typed values.
ProtocolProperty with typed values
protocols:\nother:\nAddress: simple01\nPort: 300\n
The boolean field notify
has been removed as it is never used.
properties
has been added to Device. See Metadata Dictionary and point to Device tab for complete details.tags
field to Device for event level tagging. See Metadata Dictionary and point to Device tab for complete details.optional
field in ResourceProperties to allow any additional or customized data.Change the data type of mask
, shift
, scale
, base
, offset
, maximum
and minimum
from string to number in ResourceProperties.
NOTE: When the device profile is in JSON format, please ensure that the values for mask
are specified in decimal, as the JSON number type does not support hexadecimal. YAML does not have this limitation.
Added tags
field in DeviceResource for reading level tagging. See Metadata Dictionary and point to DeviceResource tab for complete details.
tags
field in DeviceCommand for event level tagging. See Metadata Dictionary and point to DeviceCommand tab for complete details.DiscoveredDevice
; such as profileName
, Device adminState
, and autoEvents
.properties
field in the DiscoveredDevice
object.adminState
now. The Device adminState
is moved into the DiscoveredDevice
object.ProvisionWatcher can now be added during device service startup by loading the definition files from the ProvisionWatchersDir
configuration.
Example Configuration
Device:\nProvisionWatchersDir: ./res/provisionwatchers\n
ProvisionWatcher definition file is in YAML format.
Pre-defined ProvisionWatcher
name: Simple-Provision-Watcher\nserviceName: device-simple\nlabels:\n  - simple\nidentifiers:\n  Address: simple[0-9]+\n  Port: 3[0-9]{2}\nblockingIdentifiers:\n  Port:\n    - 397\n    - 398\n    - 399\nadminState: UNLOCKED\ndiscoveredDevice:\n  profileName: Simple-Device\n  adminState: UNLOCKED\n  autoEvents:\n    - interval: 15s\n      sourceName: SwitchButton\n  properties:\n    testPropertyA: weather\n    testPropertyB: meter\n
An extendable field properties
has been added to ProvisionWatcher. See Metadata Dictionary and point to DiscoveredDevice tab for complete details.
This section is specific to changes made that impact existing custom device services.
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#dependencies","title":"Dependencies","text":"You first need to update the go.mod
file to specify go 1.20
and the V3 versions of the Device SDK and any EdgeX go-mods directly used by your service. Note the extra /v3
for the modules.
Example go.mod for V3
module <your service>\n\ngo 1.20\n\nrequire (\ngithub.com/edgexfoundry/device-sdk-go/v3 v3.0.0\ngithub.com/edgexfoundry/go-mod-core-contracts/v3 v3.0.0\n...\n)\n
Once that is complete then the import statements for these dependencies must be updated to include the /v3
in the path.
Example import statements for V3
import (\n...\n\n\"github.com/edgexfoundry/device-sdk-go/v3/pkg/models\"\n\"github.com/edgexfoundry/go-mod-core-contracts/v3/common\"\n)\n
"},{"location":"microservices/device/V3Migration/#go-device-services","title":"Go Device Services","text":"map[string]any
instead of map[string]string
to support typed values.ProvisionWatchersDir
configuration to support adding provision watchers during device service startup.UpdateLastConnected
from configuration.UseMessageBus
from configuration. MessageBus is always enabled in 3.0 for sending events and receiving system events for callbacks.Start
method. The Start
method is called after the device service is completely initialized, allowing the service to run startup tasks.Discover
method. The Discover
method triggers protocol specific device discovery, asynchronously writes the results to the channel which is passed to the implementation via ProtocolDriver.Initialize()
. The results may be added to the device service based on a set of acceptance criteria (i.e. Provision Watchers).ValidateDevice
method. The ValidateDevice
method triggers device's protocol properties validation, returns error if validation failed and the incoming device will not be added into EdgeX.Initialize
method signature to pass DeviceServiceSDK interface as parameter.ds *DeviceService
in service package. Instead, the DeviceServiceSDK interface introduced in Levski release is passed to ProtocolDriver as the only parameter in Initialize method so that developer can still access, mock and test with it.Run
method.PatchDevice
method.DeviceExistsForName
method.AsyncValuesChannel
method.DiscoveredDeviceChannel
method.UpdateDeviceOperatingState
method to accept a OperatingState
value.AsyncReadings
to AsyncReadingsEnabled
.DeviceDiscovery
to DeviceDiscoveryEnabled
.GetLoggingClient
to LoggingClient
.GetSecretProvider
to SecretProvider
.GetMetricsManager
to MetricsManager
.Stop
method as it should only be called by SDK.SetDeviceOperatingState
method.Service
function that returns the device service SDK instance.RunningService
function that returns the Device Service instance.<PublishTopicPrefix>/<device-service-name>/<device-profile-name>/<device-name>/<source-name>
/validate/device
/callback/service
/callback/watcher
/callback/watcher/name/{name}
/callback/profile
/callback/device
/callback/device/name/{name}
/metrics
endpoint.There is a new dependency on IOTech's C Utilities which should be satisfied by installing the relevant package. Previous versions built the utilities into the SDK library. Installation instructions for the utility package may be found in the C SDK repository.
Configuration file changes:
UseMessageBus
from configuration. MessageBus is always enabled in 3.0 for sending events and receiving system events for callbacks.The type
field in both devsdk_resource_t
and devsdk_device_resources
is now an iot_typecode_t
rather than a pointer to one. Additionally the type
field in edgex_resourceoperation
is an iot_typecode_t
.
The edgex_propertytype
enum and the functions for obtaining one from iot_data_t
have been removed. Instead, first consult the type
field of an iot_typecode_t
. This is an instance of the iot_data_type_t
enumeration, the enumerands of which are similar to the EdgeX types, except that there are some additional values (not used in the C SDK) such as Vectors and Pointers, and there is a singular Array type. The type of array elements is held in the element_type
field of the iot_typecode_t
.
Binary data is now supported directly in the utilities, so instead of allocating an array of uint8, the iot_data_alloc_binary
function is available.
Add additional level in event publish topic for device service name. The topic is now <PublishTopicPrefix>/<device-service-name>/<device-profile-name>/<device-name>/<source-name>
The following REST callback endpoints are removed and replaced by the System Events mechanism:
/validate/device
/callback/service
/callback/watcher
/callback/watcher/name/{name}
/callback/profile
/callback/device
/callback/device/name/{name}
Remove old metrics collection and REST /metrics
endpoint.
This section is specific to changes made only to Device MQTT.
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#metadata-in-mqtt-topics","title":"Metadata in MQTT Topics","text":"For EdgeX 3.0, Device MQTT now only supports the multi-level topics. Publishing the metadata and command/reading data wrapped in a JSON object is no longer supported. The published payload is now always only the reading data.
Example V2 JSON object wrapper no longer used
{\n\"name\": \"<device-name>\",\n\"cmd\": \"<source-name>\",\n\"<source-name>\": Base64 encoded JSON containing\n{\n\"<resource1>\" : value1,\n\"<resource2>\" : value2,\n...\n}\n}\n
Your MQTT based device(s) must be migrated to use this new approach. See below for more details.
"},{"location":"microservices/device/V3Migration/#async-data","title":"Async Data","text":"A sync data is published to the incoming/data/{device-name}/{source-name}
topic where:
device-name is the name of the device sending the reading(s)
source-name is the command or resource name for the published data
If the source-name matches a command name the published data must be JSON object with the resource names specified in the command as field names.
Example async published command data
Topic=incoming/data/MQTT-test-device/allValues
{\n\"randfloat32\" : 3.32,\n\"randfloat64\" : 5.64,\n\"message\" : \"Hi World\"\n}\n
If the source-name only matches a resource name the published data can either be just the reading value for the resource or a JSON object with the resource name as the field name.
Example async published resource data
Topic=incoming/data/MQTT-test-device/randfloat32
5.67\n\nor\n\n{\n\"randfloat32\" : 5.67\n}\n
Commands send to the device will be sent on thecommand/{device-name}/{command-name}/{method}/{uuid}
topic where:
get
or set
If the command method is a set
, the published payload contains a JSON object with the resource names and the values to set those resources.
Example Data for Set Command
{\n\"randfloat32\" : 3.32,\n\"randfloat64\" : 5.64\n}\n
The device is expected to publish an empty response to the topic command/response/{uuid}
where uuid is the unique identifier sent in command request topic.
If the command method is a get
, the published payload is empty and the device is expected to publish a response to the topic command/response/{uuid}
where uuid is the unique identifier sent in command request topic. The published payload contains a JSON object with the resource names for the specified command and their values.
Example Response Data for Get Command
{\n\"randfloat32\" : 3.32,\n\"randfloat64\" : 5.64,\n\"message\" : \"Hi World\"\n}\n
"},{"location":"microservices/device/V3Migration/#device-onvif-camera","title":"Device ONVIF Camera","text":"This section is specific to changes made only to Device ONVIF Camera.
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#configuration","title":"Configuration","text":"DiscoverySubnets
.Some commands have been renamed for clarity. See the latest Swagger API Documentation for full details.
EdgeX v2 Command Name EdgeX v3 Command Name Profiles MediaProfiles Scopes DiscoveryScopes AddScopes AddDiscoveryScopes RemoveScopes RemoveDiscoveryScopes GetNodes PTZNodes GetNode PTZNode GetConfigurations PTZConfigurations Configuration PTZConfiguration GetConfigurationOptions PTZConfigurationOptions AbsoluteMove PTZAbsoluteMove RelativeMove PTZRelativeMove ContinuousMove PTZContinuousMove Stop PTZStop GetStatus PTZStatus SetPreset PTZPreset GetPresets PTZPresets GotoPreset PTZGotoPreset RemovePreset PTZRemovePreset GotoHomePosition PTZGotoHomePosition SetHomePosition PTZHomePosition SendAuxiliaryCommand PTZSendAuxiliaryCommand GetAnalyticsConfigurations Media2AnalyticsConfigurations AddConfiguration Media2AddConfiguration RemoveConfiguration Media2RemoveConfiguration GetSupportedRules AnalyticsSupportedRules Rules AnalyticsRules CreateRules AnalyticsCreateRules DeleteRules AnalyticsDeleteRules GetRuleOptions AnalyticsRuleOptions SetSystemFactoryDefault SystemFactoryDefault GetVideoEncoderConfigurations VideoEncoderConfigurations GetEventProperties EventProperties OnvifCameraEvent CameraEvent GetSupportedAnalyticsModules SupportedAnalyticsModules GetAnalyticsModuleOptions AnalyticsModuleOptionsSnapshot
command requires a media profile token to be sent in the jsonObject parameter, similar to StreamUri
command.Capabilities
command's Category
field format is now an array of strings instead of a single string. This now matches the spec.VideoStream
has been removed. It was never tested, and the same functionality can be done through the use of MediaProfiles
and StreamUri
calls.This section is specific to changes made only to Device USB Camera
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#rtsp-authentication","title":"RTSP Authentication","text":"All USB camera rtsp streams need authentication by default. To properly configure credentials for the stream refer here. This will require the building of custom images. To see how to use this feature once the service is deployed, see here.
"},{"location":"microservices/device/profile/Ch-DeviceProfile/","title":"Device Profile","text":"The device profile describes a type of device within the EdgeX system. Each device managed by a device service has an association with a device profile, which defines that device type in terms of the operations which it supports.
For a full list of device profile fields and their required values see the device profile reference.
For a detailed look at the device profile model and all its properties, see the metadata device profile data model.
"},{"location":"microservices/device/profile/Ch-DeviceProfile/#identification","title":"Identification","text":"The profile contains various identification fields. The Name
field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes:
A deviceResource specifies a sensor value within a device that may be read from or written to either individually or as part of a deviceCommand. It has a name for identification and a description for informational purposes.
The device service allows access to deviceResources via its device
REST endpoint.
The Attributes
in a deviceResource are the device-service-specific parameters required to access the particular value. Each device service implementation will have its own set of named values that are required here, for example a BACnet device service may need an Object Identifier and a Property Identifier whereas a Bluetooth device service could use a UUID to identify a value.
The Properties
of a deviceResource describe the value and optionally request some simple processing to be performed on it. The following fields are available:
Bool
, Int8
- Int64
, Uint8
- Uint64
, Float32
, Float64
, String
, Binary
, Object
and arrays of the primitive types (ints, floats, bool). Arrays are specified as eg. Float32Array
, BoolArray
etc.R
, RW
, or W
indicating whether the value is readable or writable.Binary
value.The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI)
"},{"location":"microservices/device/profile/Ch-DeviceProfile/#devicecommands","title":"DeviceCommands","text":"DeviceCommands define access to reads and writes for multiple simultaneous device resources. Each named deviceCommand should contain a number of resourceOperations
.
DeviceCommands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes together.
A resourceOperation consists of the following properties:
The device service allows access to deviceCommands via the same device
REST endpoint as is used to access deviceResources.
This chapter details the structure of a Device Profile and allowable values for its fields.
"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#device-profile","title":"Device Profile","text":"Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://datatracker.ietf.org/doc/html/rfc3986#section-2.3. description String N manufacturer String N model String N labels Array of String N deviceResources Array of DeviceResource Y deviceCommands Array of DeviceCommand N"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#deviceresource","title":"DeviceResource","text":"Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://datatracker.ietf.org/doc/html/rfc3986#section-2.3. description String N isHidden Bool N Expose the DeviceResource to Command Service or not, default false tag String N attributes String-Interface Map N Each Device Service should define required and optional keys properties ResourceProperties Y"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#resourceproperties","title":"ResourceProperties","text":"Field Name Type Required? Notes valueType Enum YUint8
, Uint16
, Uint32
, Uint64
, Int8
, Int16
, Int32
, Int64
, Float32
, Float64
, Bool
, String
, Binary
, Object
, Uint8Array
, Uint16Array
, Uint32Array
, Uint64Array
, Int8Array
, Int16Array
, Int32Array
, Int64Array
, Float32Array
, Float64Array
, BoolArray
readWrite Enum Y R
, W
, RW
units String N Developer is open to define units of value minimum Float64 N Error if SET command value out of minimum range maximum Float64 N Error if SET command value out of maximum range defaultValue String N If present, should be compatible with the Type field mask Uint64 N Only valid where Type is one of the unsigned integer types shift Int64 N Only valid where Type is one of the unsigned integer types scale Float64 N Only valid where Type is one of the integer or float types offset Float64 N Only valid where Type is one of the integer or float types base Float64 N Only valid where Type is one of the integer or float types assertion String N String value to which the reading is compared mediaType String N Only required when valueType is Binary
optional String-Any Map N Optional mapping for the given resource"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#devicecommand","title":"DeviceCommand","text":"Field Name Type Required? Notes name String Y Must be unique in this profile. A DeviceCommand with a single DeviceResource is redundant unless renaming and/or restricting R/W access. For example DeviceResource is RW, but DeviceCommand is read-only. Only allow unreserved characters as defined in https://datatracker.ietf.org/doc/html/rfc3986#section-2.3. isHidden Bool N Expose the DeviceCommand to Command Service or not, default false readWrite Enum Y R
, W
, RW
resourceOperations Array of ResourceOperation Y"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#resourceoperation","title":"ResourceOperation","text":"Field Name Type Required? Notes deviceResource String Y Must name a DeviceResource in this profile defaultValue String N If present, should be compatible with the Type field of the named DeviceResource mappings String-String Map N Map the GET resourceOperation value to another string value"},{"location":"microservices/device/sdk/Ch-DeviceSDK/","title":"Device Services SDK","text":""},{"location":"microservices/device/sdk/Ch-DeviceSDK/#introduction-to-the-sdks","title":"Introduction to the SDKs","text":"EdgeX provides two software development kits (SDKs) to help developers create new device services. While the EdgeX community and the larger EdgeX ecosystem provide a number of open source and commercially available device services for use with EdgeX, there is no way that every protocol and every sensor can be accommodated and connected to EdgeX with a pre-existing device service. Even if all the device service connectivity were provided, your use case, sensor or security infrastructure may require customization. Therefore, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity.
EdgeX is mostly written in Go and C. There is a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, alternate language SDKs may be provided by the community or made available by the larger ecosystem.
The SDKs are really libraries to be incorporated into a new micro service. They make writing a new device service much easier. By importing the SDK library of choice into your new device service project, you can focus on the details associated with getting and manipulating sensor data from your device via the specific protocol of your device. Other details, such as initialization of the device service, getting the service configured, sending sensor data to core data, managing communications with core metadata, and much more are handled by the code in the SDK library. The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX \u2013 such as making sure the service registers with the EdgeX registry service when it starts up.
The EdgeX Foundry Device Service Software Development Kit (SDK) takes the developer through the step-by-step process to create an EdgeX Foundry device service micro service. Then setup the SDK and execute the code to generate the device service scaffolding to get you started using EdgeX.
The Device Service SDK supports:
This page provides detail on the API provided by the C SDK. A device service implementation will define a number of callback functions, and a main
function which registers these functions with the SDK and uses the SDK lifecycle methods to start the service and shut it down. The implementation may also use some of the helper functions which the SDK provides.
In various places information is passed between the SDK and the DS implementation using the iot_data_t
type. This is a holder for data of different types, and its use is described in its own page : Use of iot_data_t
This struct represents a running device service. An instance of it is created by calling devsdk_service_new
, and this instance should be passed in subsequent sdk function calls.
This struct type holds pointers to the various callback functions which the device service implementor needs to define in order to do the device-specific work of the service
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_address_t","title":"devsdk_address_t","text":"This is an alias to void*
. Implementations should define their own structure for device addresses and cast devsdk_address_t*
to pointers to that structure.
This is an alias to void*
. Implementations should define their own structure for device resource information and cast devsdk_resource_attr_t*
to pointers to that structure.
This is an opaque structure which holds protocol properties. The devsdk_protocols_properties
function is used to find the properties for a particular protocol.
This structure is used to pass errors back from the device service startup and shutdown functions
Field Type Content code uint32_t A numeric code indicating the error. Zero is used for success reason const char * A string describing the errorAn instance of devsdk_error with the code field set to zero should be passed by reference when calling startup and shutdown functions
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_device_t","title":"devsdk_device_t","text":"Specifies a device
Field Type Content name char* The device's name (for logging purposes) address devsdk_address_t Address of the device in parsed form"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_resource_t","title":"devsdk_resource_t","text":"Specifies a resource on a device
Field Type Content name char* The resource name (for logging purposes) attrs devsdk_resource_attr_t Resource attributes in parsed form type iot_typecode_t Expected type of values read from or written to the resource"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_commandrequest","title":"devsdk_commandrequest","text":"Specifies a resource in a get or put request
Field Type Content resource devsdk_resource_t* The resource definition mask uint64_t Mask to be applied (put requests only)"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_commandresult","title":"devsdk_commandresult","text":"Holds a value which has been read from a resource
Field Type Content value iot_data_t* The value which has been read origin uint64_t Timestamp of the valueThe timestamp is specified in nanoseconds past the epoch. It should only be set if one is provided by the device itself. Otherwise the timestamp should be left at zero and the SDK will use the current time.
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_device_resources","title":"devsdk_device_resources","text":"A list of device resources available on a device
Field Type Content resname char* Name of the resource attributes iot_data_t* String-keyed map of the resource attributes type iot_typecode_t Type of the data which may be read or written readable bool Whether this resource is readable writable bool Whether this resource is writable next devsdk_device_resources* The next resource in the list, or NULL if this is the last"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_devices","title":"devsdk_devices","text":"A description of a device or a list of such descriptions
Field Type Content device devsdk_device_t* The device's name and addressing information resources devsdk_device_resources* Information on the device's resources next devsdk_devices* The next device in the list, or NULL if this is the last"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#callbacks","title":"Callbacks","text":"Note that each of the callback functions has as its first parameter a void*
pointer. This pointer is specified by the implementation when the device service is created, and is passed to all callbacks. It may therefore be used to hold whatever state is required by the implementation.
This function is called during the service start operation. Its purpose is to supply the implementation with a logger and configuration.
Parameter Type Description impl void* The context data passed in when the service was created lc iot_logger_t* A logging client for the device service config iot_data_t* A string-keyed map containing the configuration specified in the service's \"Driver\" sectionThe function should return true to indicate that initialization was successful, or false to abort the service startup - eg if the supplied configuration was invalid or resources were not available
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_create_address","title":"devsdk_create_address","text":"This function should take the protocol properties that were specified for a device, and create an object representing the device's address in a form suitable for subsequent access.
Parameter Type Description impl void* The context data passed in when the service was created protocols const devsdk_protocols* The protocol properties for the device exception iot_data_t** Additional information in the event of an errorIf the supplied protocol properties are valid (ie, mandatory elements are supplied and have valid values), the function should return an allocated structure representing the address. Otherwise the function should return NULL, and set *exception
to a string (using eg. iot_data_alloc_string
) containing an error message.
This function should free a structure that was previously allocated in the devsdk_create_address
implementation.
This function should take the attributes that were specified for a deviceResource, and create an object representing these attributes in a form suitable for subsequent access.
Parameter Type Description impl void* The context data passed in when the service was created attributes const iot_data_t* The attributes for the device exception iot_data_t** Additional information in the event of an errorIf the supplied attributes are valid (ie, mandatory elements are supplied and have valid values), the function should return an allocated structure representing the resource within the device. Otherwise the function should return NULL, and set *exception
to a string (using eg. iot_data_alloc_string
) containing an error message.
This function should free a structure that was previously allocated in the devsdk_create_resource_attr
implementation
This function is called when a get (read) request on a deviceResource or deviceCommand is made. In the former case, the request is for a single reading and in the latter, for multiple readings. These readings will be packaged by the SDK into an Event.
Parameter Type Description impl void* The context data passed in when the service was created device devsdk_device_t* The name and address of the device to be queried nreadings uint32_t The number of readings being requested requests devsdk_commandrequest* Array containing details of the resources to be queried readings devsdk_commandresult* Array that the function should populate, with results of this request options iot_data_t* Any options which were specified in this request exception iot_data_t** Additional information in the event of an errorThe readings array will have been allocated in the SDK; the implementation should set the results into readings[0]...readings[nreadings - 1]
.
Options
will be a string-keyed map which contains any options set specifically on this request. In the current implementation these may have been set via query parameters in the URL used to make the request.
The function should return true if all of the requested resources were successfully read. Otherwise, *exception
should be allocated with a string value indicating the problem (this will be logged and returned to the caller), and false returned.
This function is called when a put (write) request on a deviceResource or deviceCommand is made. In the former case, the request is for a single resource and in the latter, for multiple resources.
Parameter Type Description impl void* The context data passed in when the service was created device devsdk_device_t* The name and address of the device to be written to nreadings uint32_t The number of resources to be written requests devsdk_commandrequest* Array containing details of the resources to be written values iot_data_t*[] Array of values to be written options iot_data_t* Any options which were specified in this request exception iot_data_t** Additional information in the event of an errorIf the mask
field in an element of the request array is nonzero, the implementation should implement the following:
new-value = (current-value & mask) | request-value\n
Options
will be a string-keyed map which contains any options set specifically on this request. In the current implementation these may have been set via query parameters in the URL used to make the request.
The function should return true if all of the requested resources were successfully written. Otherwise, *exception
should be allocated with a string value indicating the problem (this will be logged and returned to the caller), and false returned.
The implementation should perform any cleanup necessary before shutdown. At the time that this function is called, the service will be quiescent, ie there will be no new incoming requests.
Parameter Type Description impl void* The context data passed in when the service was created force bool An unclean shutdown may be performed if necessary. Long or indefinite timeouts should not occur."},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_init","title":"devsdk_callbacks_init","text":"Call this function in order to create a devsdk_callbacks object containing the required callback functions. This may then be passed to the SDK when starting the service
Parameter Type init devsdk_initialize gethandler devsdk_handle_get puthandler devsdk_handle_put stop devsdk_stop create_addr devsdk_create_address free_addr devsdk_free_address create_res devsdk_create_resource_attr free_res devsdk_free_resource_attr"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#optional-callback-functions","title":"Optional callback functions","text":""},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_reconfigure","title":"devsdk_reconfigure","text":"Implement this function in order to allow changes in the device-specific configuration to be made without restarting the service.
Parameter Type Description impl void* The context data passed in when the service was created config iot_data_t* The new configuration (contains all elements, not just those which have changed)"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_set_reconfiguration","title":"devsdk_callbacks_set_reconfiguration","text":"Call this to add your reconfiguration function to the callbacks structure
Parameter Type Description cb devsdk_callbacks* structure to be modified reconf devsdk_reconfigure function to add"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_discover","title":"devsdk_discover","text":"This function is called when a request for discovery is made. This may occur automatically at intervals or due to an external request. The SDK implements locking such that multiple invocations of this function will not be made in parallel.
Implementations should perform a scan for devices, and use the devsdk_add_discovered_devices
function to register them.
This is a placeholder function for future use. Its purpose will be to allow automatic generation of device profiles. It is not used in current versions of EdgeX.
Parameter Type Description impl void* The context data passed in when the service was created dev devsdk_device_t* The device which is to be described options iot_data_t* Service specific discovery options map. May be NULL resources devsdk_device_resources** The operations supported by the device exception iot_data_t** Additional information in the event of an errorImplementations should populate the resources
parameter and return true if it is possible to automatically describe the device. Otherwise return false and set exception
.
Call this to add your discovery functions to the callbacks structure
Parameter Type Description cb devsdk_callbacks* structure to be modified discover devsdk_discover device discovery function describe devsdk_describe device description function, may be NULL (currently unused)"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_add_device_callback","title":"devsdk_add_device_callback","text":"To be notified when a device is added to the system (and assigned to this device service), provide an implementation of this function
Parameter Type Description impl void* The context data passed in when the service was created devname char* The name of the new device protocols devsdk_protocols* The protocol properties that comprise the device's address resources devsdk_device_resources* The operations supported by the device adminEnabled bool Whether the device is administratively enabled"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_update_device_callback","title":"devsdk_update_device_callback","text":"To be notified when a device managed by this service is modified, provide an implementation of this function
Parameter Type Description impl void* The context data passed in when the service was created devname char* The name of the updated device protocols devsdk_protocols* The protocol properties that comprise the device's address state bool Whether the device is administratively enabled"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_remove_device_callback","title":"devsdk_remove_device_callback","text":"To be notified when a device managed by this service is removed, provide an implementation of this function
Parameter Type Description impl void* The context data passed in when the service was created devname char* The name of the removed device protocols devsdk_protocols* The protocol properties that comprise the device's address"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_set_listeners","title":"devsdk_callbacks_set_listeners","text":"Call this to add your add, remove and/or update listener functions to the callbacks structure. Any of the functions may be NULL
Parameter Type Description cb devsdk_callbacks* structure to be modified device_added devsdk_add_device_callback device addition listener device_updated devsdk_update_device_callback device update listener device_removed devsdk_remove_device_callback device removal listener"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_autoevent_start_handler","title":"devsdk_autoevent_start_handler","text":"Some device types may be configured to generate readings automatically at intervals. Such behavior may be enabled by providing implementations of this function and the stop handler described below. If \"AutoEvents\" have been defined for a device, this function will be called to request that automatic events should begin. The events when generated should be posted using the devsdk_post_readings
function. In the absence of an implementation of this function, the SDK will poll the device via the get handler.
The function should return a pointer to a data structure that will be provided in a subsequent call to the stop handler when this autoevent is to be stopped
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_autoevent_stop_handler","title":"devsdk_autoevent_stop_handler","text":"This function is called to request that automatic events should cease
Parameter Type Description impl void* The context data passed in when the service was created handle void* The data structure returned by a previous call to the start handler"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_set_autoevent_handlers","title":"devsdk_callbacks_set_autoevent_handlers","text":"Call this to add your autoevent management functions to the callbacks structure. Both start and stop handlers are required
Parameter Type Description cb devsdk_callbacks* structure to be modified ae_starter devsdk_autoevent_start_handler Autoevent start handler ae_stopper devsdk_autoevent_stop_handler Autoevent stop handler"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#initialisation-and-shutdown","title":"Initialisation and Shutdown","text":"These functions manage the lifecycle of the device service and should be called in the order presented here
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_service_new","title":"devsdk_service_new","text":"This function creates a new device service
Parameter Type Description defaultname char* The device service name, used in logging, metadata lookups and to scope configuration. This may be overridden via the commandline version char* The version string for this service. This is for information only, and will be logged during startup impldata void* An object pointer which will be passed back whenever one of the callback functions is invoked implfns devsdk_callbacks* Structure containing the device implementation functions. The SDK will call these functions in order to carry out its various actions argc int* A pointer to argc as passed into main(). This will be adjusted to account for arguments consumed by the SDK argv char** argv as passed into main(). This will be adjusted to account for arguments consumed by the SDK err devsdk_error* Nonzero reason codes will be set here in the event of errors. The newly created service is represented by an object of type devsdk_service_t, which is returned if the service is created successfully
The SDK modifies the commandline argument parameters argc
and argv
, removing those arguments which it supports. The implementation may support additional arguments by inspecting these modified values after the create function has been called
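Tying devsdk_callbacks_init and devsdk_service_new together, a service's main function starts out roughly as sketched below. The handler names are hypothetical, devsdk_callbacks_init is assumed to return a pointer to a newly allocated callbacks structure, and the discovery registration function is assumed to be named devsdk_callbacks_set_discovery by analogy with the other devsdk_callbacks_set_* helpers.

#include "devsdk/devsdk.h"

int main (int argc, char *argv[])
{
  devsdk_error err = { 0 };

  /* example_init, example_get, example_put, example_stop and the address/resource
     attribute helpers are hypothetical functions matching the typedefs listed above */
  devsdk_callbacks *cbs = devsdk_callbacks_init
    (example_init, example_get, example_put, example_stop,
     example_create_addr, example_free_addr, example_create_res, example_free_res);
  devsdk_callbacks_set_discovery (cbs, example_discover, NULL);  /* optional: enable discovery */

  example_driver driver = { NULL };
  devsdk_service_t *service = devsdk_service_new
    ("device-example", "1.0.0", &driver, cbs, &argc, argv, &err);

  /* argc/argv now contain only the options the SDK did not consume;
     implementation-specific arguments could be inspected here */

  /* starting, running and shutting down the service is sketched after devsdk_service_start below */
  return 0;
}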
Start the device service. Default values for the implementation-specific configuration are passed in here. These must be provided in a string-keyed iot_data_t map. A value named \"X\" may be over-ridden in the configuration file by an entry for X in the [Driver]
section. For dynamically-updatable configuration, set a value for \"Writable/X\". This will correspond to a configuration file entry in the [Writable.Driver]
section and updates may be received by implementing the devsdk_reconfigure
function
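Continuing the sketch above, the driver defaults and the remaining lifecycle calls would look roughly as follows. The configuration keys are hypothetical, and devsdk_service_start is assumed to take the service, the defaults map and an error pointer.

  iot_data_t *defaults = iot_data_alloc_map (IOT_DATA_STRING);
  /* hypothetical key, overridable by a Host entry in the [Driver] section */
  iot_data_string_map_add (defaults, "Host", iot_data_alloc_string ("localhost", IOT_DATA_COPY));
  /* hypothetical dynamic key, overridable in [Writable.Driver] and delivered via devsdk_reconfigure */
  iot_data_string_map_add (defaults, "Writable/LogRawData", iot_data_alloc_bool (false));

  devsdk_service_start (service, defaults, &err);

  /* ... wait here for a shutdown signal ... */

  devsdk_service_stop (service, true, &err);
  devsdk_service_free (service);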
Stop the device service. Any automatic events will be cancelled and the REST API for the device service will be shut down
Parameter Type Description svc devsdk_service_t* The device service force bool Force stop. Currently unused but is passed through to the stop handler err devsdk_error* Nonzero reason codes will be set here in the event of errors"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_service_free","title":"devsdk_service_free","text":"This function disposes of the device service object and all associated resources
Parameter Type Description svc devsdk_service_t* The device service"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#additional-functionality","title":"Additional functionality","text":""},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_usage","title":"devsdk_usage","text":"This function writes out the commandline options supported by the SDK. It may be useful if a --help
option is to be implemented
This function returns a map of properties (keyed on string) for the named protocol.
Parameter Type Description prots devsdk_protocols* The protocols to search name char* The name of the protocol to search for"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_protocols_new","title":"devsdk_protocols_new","text":"This function creates a new protocols object, or adds a property set to an existing one.
Parameter Type Description name char* The name of the new protocol properties iot_data_t* The properties of the new protocol list devsdk_protocols* The protocols object to extend, or NULL to create a new one"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_protocols_dup","title":"devsdk_protocols_dup","text":"This function duplicates a protocols object
Parameter Type Description e devsdk_protocols* object to duplicate"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_protocols_free","title":"devsdk_protocols_free","text":"This function disposes of the memory used by a protocols object
Parameter Type Description e devsdk_protocols* object to free"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_get_secrets","title":"devsdk_get_secrets","text":"This function returns secrets (credentials) for the service. In insecure mode these will be part of the service configuration, in secure mode they will be retrieved from the secret store (eg, Vault).
The secrets are returned as a string-keyed map. This should be disposed after use using iot_data_free
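A short sketch of looking up credentials follows; devsdk_get_secrets is assumed to take the service handle and the secret name, and "credentials001" is a hypothetical secret name.

  iot_data_t *secrets = devsdk_get_secrets (service, "credentials001");
  if (secrets)
  {
    const iot_data_t *u = iot_data_string_map_get (secrets, "username");
    const iot_data_t *p = iot_data_string_map_get (secrets, "password");
    const char *username = u ? iot_data_string (u) : NULL;
    const char *password = p ? iot_data_string (p) : NULL;
    /* authenticate with the device using username and password */
    iot_data_free (secrets);
  }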
This function posts readings to EdgeX. Depending on configuration this may be via REST to core-data or via the Message Bus to various upstream services. The readings are assembled into an Event and then posted
This function may be used in services which implement the autoevent handlers or by any other service where the natural operation is that readings are generated by the device rather than being explicitly requested
Parameter Type Description svc devsdk_service_t* The device service device_name char* Name of the device that has generated the readings resource_name char* Name of the resource (or command) corresponding to this set of readings values devsdk_commandresult* The readings to be posted. The cardinality of the values
array will depend on the resource - if it is a deviceResource
there should be a single reading; for a deviceCommand
there may be several
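For example, posting a single reading for a deviceResource might look like the sketch below; the devsdk_commandresult field name (value) is an assumption, and the device and resource names are illustrative.

  devsdk_commandresult results[1] = {{ 0 }};
  results[0].value = iot_data_alloc_f64 (21.5);  /* assumed field name; origin left at zero */
  devsdk_post_readings (service, "Sensor01", "Temperature", results);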
This function should be called in response to a request for device discovery, but may be called at any time if for a particular device class immediate automatic discovery is appropriate. The function takes an array of devices in order to allow for batching, but it may be called multiple times during the course of a single invocation of discovery if necessary
Parameter Type Description svc devsdk_service_t* The device service ndevices uint32_t Number of devices discovered devices devsdk_discovered_device* Array of discovered devices"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_set_device_opstate","title":"devsdk_set_device_opstate","text":"This function can be used to indicate that a device has become non-operational or non-responsive, or that a device has returned from such a state. The SDK will return errors for requests for a device marked non-operational without calling the get or set handler
Parameter Type Description svc devsdk_service_t* The device service devname char* The device that has changed state operational bool The new operational state"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_get_devices","title":"devsdk_get_devices","text":"Returns a list of devices registered with this service
Parameter Type Description svc devsdk_service_t* The device service. The returned list should be disposed after use using devsdk_free_devices
Returns information on a device
Parameter Type Description svc devsdk_service_t* The device service name char* The device to query for. The returned device should be disposed after use using devsdk_free_devices
Frees a devices structure returned by devsdk_get_devices
or devsdk_get_device
The iot_data_t
type is a holder for various types of data, and it is used in the SDK API to hold reading values and name-value collections (maps keyed by string). This chapter describes how to use iot_data_t
in interactions with the SDK. It is not a complete guide to either the type or to the IOT utilities package which includes it
The type of data held in an iot_data_t
object is represented by the iot_typecode_t
type. This has a field type
, which is an iot_data_type_t
, and can take the following values:
IOT_DATA_INT8 IOT_DATA_INT16 IOT_DATA_INT32 IOT_DATA_INT64
for signed integers; IOT_DATA_UINT8 IOT_DATA_UINT16 IOT_DATA_UINT32 IOT_DATA_UINT64 for unsigned integers; IOT_DATA_FLOAT32 IOT_DATA_FLOAT64 for floating point values; IOT_DATA_BOOL for booleans; IOT_DATA_STRING for strings; IOT_DATA_ARRAY for arrays; IOT_DATA_BINARY for binary data; IOT_DATA_MAP for maps (used for EdgeX Object type). For the array case, the iot_typecode_t
has an element_type
field, also of type iot_data_type_t
which indicates the type of the array elements - integers, floats and booleans are supported.
Instances of iot_data_t
are created with the iot_data_alloc_*
functions
For primitive types, use
iot_data_alloc_i8 iot_data_alloc_i16 iot_data_alloc_i32 iot_data_alloc_i64
for signed integers; iot_data_alloc_ui8 iot_data_alloc_ui16 iot_data_alloc_ui32 iot_data_alloc_ui64 for unsigned integers; iot_data_alloc_f32 iot_data_alloc_f64 for floats; iot_data_alloc_bool for booleans. Each takes a single parameter which is the value to hold
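For example (an illustrative fragment; the variable names are arbitrary):

  iot_data_t *count = iot_data_alloc_ui32 (42);         /* unsigned integer value */
  iot_data_t *temperature = iot_data_alloc_f64 (21.5);  /* floating point value */
  iot_data_t *enabled = iot_data_alloc_bool (true);     /* boolean value */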
"},{"location":"microservices/device/sdk/Ch-Using-iot-data-t/#strings","title":"Strings","text":"Strings are allocated using iot_data_alloc_string
. In addition to the const char*
which specifies the string to hold, a further parameter of type iot_data_ownership_t
must be provided. This sets the ownership semantics for the string, and can take the following values:
iot_data_t
object is freed IOT_DATA_COPY A copy will be made of the string. This copy will be freed when the iot_data_t
object is freed, but the calling code remains responsible for the original"},{"location":"microservices/device/sdk/Ch-Using-iot-data-t/#arrays","title":"Arrays","text":"For array readings use iot_data_alloc_array
For binary data use iot_data_alloc_binary
Object-typed readings are represented by a map. Allocate it using
iot_data_alloc_map (IOT_DATA_STRING)
Values are added to the map using the iot_data_string_map_add
function
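An illustrative fragment combining the string and Object (map) allocations described above; the key names are arbitrary.

  /* IOT_DATA_COPY duplicates the literal, so the original remains the caller's responsibility */
  iot_data_t *label = iot_data_alloc_string ("site-42", IOT_DATA_COPY);

  /* Object-typed value with two fields */
  iot_data_t *location = iot_data_alloc_map (IOT_DATA_STRING);
  iot_data_string_map_add (location, "latitude", iot_data_alloc_f64 (51.5));
  iot_data_string_map_add (location, "longitude", iot_data_alloc_f64 (-0.13));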
The accessors for primitive types are
iot_data_i8 iot_data_i16 iot_data_i32 iot_data_i64
iot_data_ui8 iot_data_ui16 iot_data_ui32 iot_data_ui64
iot_data_f32 iot_data_f64
iot_data_bool
Each function takes an iot_data_t*
as parameter and returns the value in the expected C type
The iot_data_string
function returns the char*
held in the data object
iot_data_array_length
returns the length of an array; iot_data_address returns a pointer to the first element; iot_data_array_type returns the type of the elements (as iot_data_type_t). For binary data, iot_data_address returns a pointer to the binary data and iot_data_array_length returns the length in bytes. Use iot_data_string_map_get
to obtain the iot_data_t
instance representing a field
Instances of iot_data_t
are freed using the iot_data_free
function
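Continuing the allocation fragments above, reading values back and releasing them looks roughly like this:

  uint32_t n = iot_data_ui32 (count);           /* primitive accessor */
  const char *name = iot_data_string (label);   /* string accessor */
  const iot_data_t *lat = iot_data_string_map_get (location, "latitude");
  double d = lat ? iot_data_f64 (lat) : 0.0;    /* accessing a map (Object) field */

  iot_data_free (count);
  iot_data_free (label);
  iot_data_free (location);
  /* free the remaining values in the same way */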
The DeviceServiceSDK
API provides the following APIs for the device service developer to use.
type DeviceServiceSDK interface {\nAddDevice(device models.Device) (string, error)\nDevices() []models.Device\nGetDeviceByName(name string) (models.Device, error)\nUpdateDevice(device models.Device) error\nRemoveDeviceByName(name string) error\nAddDeviceProfile(profile models.DeviceProfile) (string, error)\nDeviceProfiles() []models.DeviceProfile\nGetProfileByName(name string) (models.DeviceProfile, error)\nUpdateDeviceProfile(profile models.DeviceProfile) error\nRemoveDeviceProfileByName(name string) error\nAddProvisionWatcher(watcher models.ProvisionWatcher) (string, error)\nProvisionWatchers() []models.ProvisionWatcher\nGetProvisionWatcherByName(name string) (models.ProvisionWatcher, error)\nUpdateProvisionWatcher(watcher models.ProvisionWatcher) error\nRemoveProvisionWatcher(name string) error\nDeviceResource(deviceName string, deviceResource string) (models.DeviceResource, bool)\nDeviceCommand(deviceName string, commandName string) (models.DeviceCommand, bool)\nAddDeviceAutoEvent(deviceName string, event models.AutoEvent) error\nRemoveDeviceAutoEvent(deviceName string, event models.AutoEvent) error\nUpdateDeviceOperatingState(name string, state models.OperatingState) error\nDeviceExistsForName(name string) bool\nPatchDevice(updateDevice dtos.UpdateDevice) error\nRun() error\nName() string\nVersion() string\nAsyncReadingsEnabled() bool\nAsyncValuesChannel() chan *sdkModels.AsyncValues\nDiscoveredDeviceChannel() chan []sdkModels.DiscoveredDevice\nDeviceDiscoveryEnabled() bool\nDriverConfigs() map[string]string\nAddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error\nAddCustomRoute(route string, authenticated Authenticated, handler func(echo.Context) error, methods ...string) error\nLoadCustomConfig(customConfig UpdatableConfig, sectionName string) error\nListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error\nLoggingClient() logger.LoggingClient\nSecretProvider() interfaces.SecretProvider\nMetricsManager() interfaces.MetricsManager\n}\n
"},{"location":"microservices/device/sdk/SDK-Go-API/#apis","title":"APIs","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#auto-event","title":"Auto Event","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#adddeviceautoevent","title":"AddDeviceAutoEvent","text":"AddDeviceAutoEvent(deviceName string, event models.AutoEvent) error
This API adds a new AutoEvent to the Device with the given name. An error is returned if the AutoEvent cannot be added.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removedeviceautoevent","title":"RemoveDeviceAutoEvent","text":"RemoveDeviceAutoEvent(deviceName string, event models.AutoEvent) error
This API removes an AutoEvent from the Device with the given name. An error is returned if the AutoEvent cannot be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#device","title":"Device","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#adddevice","title":"AddDevice","text":"AddDevice(device models.Device) (string, error)
This API adds a new Device to Core Metadata and device service's cache. Returns new Device id or an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#updatedevice","title":"UpdateDevice","text":"UpdateDevice(device models.Device) error
This API updates the Device in Core Metadata and device service's cache. An error is returned if the Device can not be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#updatedeviceoperatingstate","title":"UpdateDeviceOperatingState","text":"UpdateDeviceOperatingState(deviceName string, state models.OperatingState) error
This API updates the Device's operating state for the given name in Core Metadata and the device service's cache. An error is returned if the operating state cannot be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removedevicebyname","title":"RemoveDeviceByName","text":"RemoveDeviceByName(name string) error
This API removes the specified Device by name from Core Metadata and the device service's cache. An error is returned if the Device cannot be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#devices","title":"Devices","text":"Devices() []models.Device
This API returns all managed Devices from the device service's cache
"},{"location":"microservices/device/sdk/SDK-Go-API/#getdevicebyname","title":"GetDeviceByName","text":"GetDeviceByName(name string) (models.Device, error)
This API returns the Device by its name if it exists in the device service's cache, or returns an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#patchdevice","title":"PatchDevice","text":"PatchDevice(updateDevice dtos.UpdateDevice) error
This API patches the specified device properties in Core Metadata. Device name is required to be provided in the UpdateDevice.
Note
All properties of UpdateDevice are pointers and anything that is nil
will not modify the device. In the case of Arrays and Maps, the whole new value must be sent, as it is applied as an overwrite operation.
Example - PatchDevice()
service := interfaces.Service()\nlocked := models.Locked\nreturn service.PatchDevice(dtos.UpdateDevice{\nName: &name,\nAdminState: &locked,\n})\n
"},{"location":"microservices/device/sdk/SDK-Go-API/#deviceexistsforname","title":"DeviceExistsForName","text":"DeviceExistsForName(name string) bool
This API returns true if a device exists in cache with the specified name, otherwise it returns false.
"},{"location":"microservices/device/sdk/SDK-Go-API/#device-profile","title":"Device Profile","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#adddeviceprofile","title":"AddDeviceProfile","text":"AddDeviceProfile(profile models.DeviceProfile) (string, error)
This API adds a new DeviceProfile to Core Metadata and device service's cache. Returns new DeviceProfile id or error
"},{"location":"microservices/device/sdk/SDK-Go-API/#updatedeviceprofile","title":"UpdateDeviceProfile","text":"UpdateDeviceProfile(profile models.DeviceProfile) error
This API updates the DeviceProfile in Core Metadata and device service's cache. An error is returned if the DeviceProfile can not be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removedeviceprofilebyname","title":"RemoveDeviceProfileByName","text":"RemoveDeviceProfileByName(name string) error
This API removes the specified DeviceProfile by name from Core Metadata and the device service's cache. An error is returned if the DeviceProfile cannot be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#deviceprofiles","title":"DeviceProfiles","text":"DeviceProfiles() []models.DeviceProfile
This API returns all managed DeviceProfiles from device service's cache.
"},{"location":"microservices/device/sdk/SDK-Go-API/#getprofilebyname","title":"GetProfileByName","text":"GetProfileByName(name string) (models.DeviceProfile, error)
This API returns the DeviceProfile by its name if it exists in the cache, or returns an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#provision-watcher","title":"Provision Watcher","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#addprovisionwatcher","title":"AddProvisionWatcher","text":"AddProvisionWatcher(watcher models.ProvisionWatcher) (string, error)
This API adds a new ProvisionWatcher to Core Metadata and the device service's cache. Returns the new ProvisionWatcher id or an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#updateprovisionwatcher","title":"UpdateProvisionWatcher","text":"UpdateProvisionWatcher(watcher models.ProvisionWatcher) error
This API updates the ProvisionWatcher in Core Metadata and the device service's cache. An error is returned if the ProvisionWatcher cannot be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removeprovisionwatcher","title":"RemoveProvisionWatcher","text":"RemoveProvisionWatcher(name string) error
This API removes the specified ProvisionWatcher by name from Core Metadata and the device service's cache. An error is returned if the ProvisionWatcher cannot be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#provisionwatchers","title":"ProvisionWatchers","text":"ProvisionWatchers() []models.ProvisionWatcher
This API returns all managed ProvisionWatchers from device service's cache.
"},{"location":"microservices/device/sdk/SDK-Go-API/#getprovisionwatcherbyname","title":"GetProvisionWatcherByName","text":"GetProvisionWatcherByName(name string) (models.ProvisionWatcher, error)
This API returns the ProvisionWatcher by its name if it exists in the device service's cache, or returns an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#resource-command","title":"Resource & Command","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#deviceresource","title":"DeviceResource","text":"DeviceResource(deviceName string, deviceResource string) (models.DeviceResource, bool)
This API retrieves the specific DeviceResource instance from device service's cache for the specified Device name and Resource name. Returns the DeviceResource and true if found in device service's cache or false if not found.
"},{"location":"microservices/device/sdk/SDK-Go-API/#devicecommand","title":"DeviceCommand","text":"DeviceCommand(deviceName string, commandName string) (models.DeviceCommand, bool)
This API retrieves the specific DeviceCommand instance from device service's cache for the specified Device name and Command name. Returns the DeviceCommand and true if found in device service's cache or false if not found.
"},{"location":"microservices/device/sdk/SDK-Go-API/#custom-configuration","title":"Custom Configuration","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#loadcustomconfig","title":"LoadCustomConfig","text":"LoadCustomConfig(customConfig service.UpdatableConfig, sectionName string) error
This API attempts to load the service's custom configuration. It uses the same command line flags to process the custom config in the same manner as the standard configuration. Returns an error if the custom configuration cannot be loaded. See the Custom Structured Configuration section for more details.
"},{"location":"microservices/device/sdk/SDK-Go-API/#listenforcustomconfigchanges","title":"ListenForCustomConfigChanges","text":"ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
This API attempts to start listening for changes to the specified custom configuration section. LoadCustomConfig API must be called before this API. See Custom Structured Configuration section for more details.
"},{"location":"microservices/device/sdk/SDK-Go-API/#miscellaneous","title":"Miscellaneous","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#name","title":"Name","text":"Name() string
This API returns the name of the Device Service.
"},{"location":"microservices/device/sdk/SDK-Go-API/#version","title":"Version","text":"Version() string
This API returns the version number of the Device Service.
"},{"location":"microservices/device/sdk/SDK-Go-API/#driverconfigs","title":"DriverConfigs","text":"DriverConfigs() map[string]string
This API returns the driver specific configuration
"},{"location":"microservices/device/sdk/SDK-Go-API/#asyncreadingsenabled","title":"AsyncReadingsEnabled","text":"AsyncReadingsEnabled() bool
This API returns a bool value to indicate whether the asynchronous reading is enabled via configuration.
"},{"location":"microservices/device/sdk/SDK-Go-API/#devicediscoveryenabled","title":"DeviceDiscoveryEnabled","text":"DeviceDiscoveryEnabled() bool
This API returns a bool value to indicate whether the device discovery is enabled via configuration.
"},{"location":"microservices/device/sdk/SDK-Go-API/#addroute-deprecated","title":"AddRoute (Deprecated)","text":"AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error
This API is deprecated in favor of AddCustomRoute()
which has an explicit parameter to indicate whether the route should require authentication.
AddCustomRoute(route string, authenticated interfaces.Authenticated, handler func(echo.Context) error, methods ...string) error
This API allows leveraging the existing internal web server to add routes specific to the Device Service. If the route is marked authenticated, it will require an EdgeX JWT when security is enabled. Returns an error if the route could not be added.
Note
The handler
function uses the signature of echo.HandlerFunc
which is func(echo.Context) error
. See echo API HandlerFunc section for more details.
LoggingClient() logger.LoggingClient
This API returns the LoggingClient
used to log messages.
SecretProvider() interfaces.SecretProvider
This API returns the SecretProvider used to get/save the service secrets. See Secret Provider API section for more details.
"},{"location":"microservices/device/sdk/SDK-Go-API/#metricsmanager","title":"MetricsManager","text":"MetricsManager () interfaces.MetricsManager
This API returns the MetricsManager used to register custom service metrics. See Service Metrics for more details
"},{"location":"microservices/device/sdk/SDK-Go-API/#asyncvalueschannel","title":"AsyncValuesChannel","text":"AsyncValuesChannel() chan *sdkModels.AsyncValues
This API returns a channel that allows the developer to send asynchronous readings back to the SDK.
"},{"location":"microservices/device/sdk/SDK-Go-API/#discovereddevicechannel","title":"DiscoveredDeviceChannel","text":"DiscoveredDeviceChannel() chan []sdkModels.DiscoveredDevice
This API returns a channel that allows the developer to send discovered devices back to the SDK.
"},{"location":"microservices/device/sdk/SDK-Go-API/#internal","title":"Internal","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#run","title":"Run","text":"Run() error
This internal API call starts this Device Service. It should not be called directly by a device service. Instead, call startup.Bootstrap(...)
.
The following table lists the EdgeX device services and protocols they support.
Device Service Repository Protocol Status Comments Documentation device-onvif-camera ONVIF Active Full implementation of ONVIF spec. Note that not all cameras implement the complete ONVIF spec. device-onvif-camera docs device-usb-camera USB Active USB using V4L2 API. ONLY works on Linux with kernel v5.10 or higher. Includes RTSP server for video streaming. device-usb-camera docs device-rest-go REST Active provides one-way communications only. Allows posting of binary and JSON data via REST. Events are single reading only. device-rfid-llrp-go LLRP Active Communications with RFID readers via LLRP. device-snmp-go SNMP Active Basic implementation of SNMP protocol. Async callbacks and traps not currently supported. device-virtual-go Active Simulates sensor readings of type binary, Boolean, float, integer and unsigned integer device-virtual docs device-mqtt-go MQTT Active Two way communications via multiple MQTT topics device-modbus-go Modbus Active Supports Modbus over TCP or RTU device-gpio GPIO Active Linux only; uses sysfs ABI device-bacnet-c BACnet Active Supports BACnet via ethernet (IP) or serial (MSTP). Uses the Steve Karag BACnet stack device-coap-c CoAP Active This service is in the process of being redeveloped and expanded for upcoming release for Kamakura \u2013 and will support Thread as a subset of functionality. Currently supports CoAP-based REST and is one way communications (read-only) device-uart UART Active Linux only; for connecting serial UART devices to EdgeXNote
Check the above Device Service README(s) for known devices that have been tested with the Device Service. Not all Device Service READMEs will have this information.
"},{"location":"microservices/device/services/device-onvif-camera/General/","title":"General","text":""},{"location":"microservices/device/services/device-onvif-camera/General/#overview","title":"Overview","text":"The Open Network Video Interface Forum (ONVIF) Device Service is a microservice created to address the lack of standardization and automation of camera discovery and onboarding. EdgeX Foundry is a flexible microservice-based architecture created to promote the interoperability of multiple device interface combinations at the edge. In an EdgeX deployment, the ONVIF Device Service controls and communicates with ONVIF-compliant cameras, while EdgeX Foundry presents a standard interface to application developers. With normalized connectivity protocols and a vendor-neutral architecture, EdgeX paired with ONVIF Camera Device Service, simplifies deployment of edge camera devices.
Use the ONVIF Device Service to streamline and scale your edge camera device deployment.
"},{"location":"microservices/device/services/device-onvif-camera/General/#how-it-works","title":"How It Works","text":"The figure below illustrates the software flow through the architecture components.
Figure 1: Software Flow
A brief video demonstration of building and using the device service:
Get Started>
"},{"location":"microservices/device/services/device-onvif-camera/General/#examples","title":"Examples","text":"To see an example utilizing the ONVIF device service, refer to the camera management example application
"},{"location":"microservices/device/services/device-onvif-camera/General/#security","title":"Security","text":"This software has numerous security features. For production environments, it is recommended to use secure mode when running the EdgeX software stack. This documentation will contain warnings about any known security vulnerabilities or risks. In addition to the security features, it is suggested to use best security practices. These include, but are not limited to:
For more information, please visit the EdgeX Security documentation
"},{"location":"microservices/device/services/device-onvif-camera/General/#resources","title":"Resources","text":"Learn more about EdgeX Core Metadata Learn more about EdgeX Core Command
"},{"location":"microservices/device/services/device-onvif-camera/General/#references","title":"References","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/swagger/","title":"Device ONVIF Swagger API Documentation","text":"Use this RESTful API documentation to learn more about the capabilities of the device service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/","title":"Custom Build","text":"Follow this guide to make custom configurations and build the device service image from the source.
Warning
This is not the recommended method of deploying the service. To use the default images, see here.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#get-the-source-code","title":"Get the Source Code","text":"Clone the device-onvif-camera repository.
git clone https://github.com/edgexfoundry/device-onvif-camera.git\n
Navigate into the directory
cd device-onvif-camera\n
Checkout the latest release (main):
git checkout main\n
Configuring pre-defined devices will allow the service to automatically provision them into core-metadata. Create a list of devices with the appropriate information as outlined below.
Make a copy of the camera.yaml.example
:
cp ./cmd/res/devices/camera.yaml.example ./cmd/res/devices/camera.yaml\n
Warning
Be careful when storing any potentially important information in cleartext on files in your computer. Potentially sensitive information in this case could include the IP address of your ONVIF camera or any custom metadata you configure.
Open the cmd/res/devices/camera.yaml
file using your preferred text editor and update the Address
and Port
fields to match the IP address of the Camera and port used for ONVIF services:
Sample: Snippet from camera.yaml
deviceList:\n- name: Camera001 # Modify as desired\nprofileName: onvif-camera # Default profile\ndescription: onvif conformant camera # Modify as desired\nprotocols:\nOnvif:\nAddress: 191.168.86.34 # Set to your camera IP address\nPort: '2020' # Set to the port your camera uses\nCustomMetadata:\nCommonName: Outdoor camera\n
Optionally, modify the Name
and Description
fields to more easily identify the camera. The Name
is the camera name used when using ONVIF Device Service Rest APIs. The Description
is simply a more detailed explanation of the camera.
You can also optionally configure the CustomMetadata
with custom fields and values to store any extra information you would like.
To add more pre-defined devices, copy the above configuration and edit to match your extra devices.
Open the cmd/res/configuration.yaml
file using your preferred text editor
Make sure secret name
is set to match SecretName
in camera.yaml
. In the sample below, it is \"credentials001\"
. If you have multiple cameras, make sure the secret names match.
Under secretName
, set username
and password
to your camera credentials. If you have multiple cameras copy the Writable.InsecureSecrets
section and edit to include the new information.
Warning
Be careful when storing any potentially important information in cleartext on files in your computer. In this case, the credentials for the camera(s) are stored in cleartext in the configuration.yaml
file on your system. InsecureSecrets
is for non-production use only.
Sample: Snippet from configuration.yaml
Writable:\nLogLevel: INFO\nInsecureSecrets:\ncredentials001:\nSecretName: credentials001\nSecretData:\nusername: <Credentials 1 username>\npassword: <Credentials 1 password>\nmode: usernametoken # assign \"digest\" | \"usernametoken\" | \"both\" | \"none\"\ncredentials002:\nSecretName: credentials002\nSecretData:\nusername: <Credentials 2 username>\npassword: <Credentials 2 password>\nmode: usernametoken # assign \"digest\" | \"usernametoken\" | \"both\" | \"none\"\n
For optional configurations, see here.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#build-the-docker-image","title":"Build the Docker Image","text":"In the device-onvif-camera
directory, run make docker:
make docker\n
[Optional] Build with NATS Messaging Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. This means that the published Docker image and Snaps do not include the NATS messaging capability. To build the docker image using NATS, run make docker-nats: make docker-nats\n
See Compose Builder nat-bus
option to generate compose file for NATS and local dev images. Verify the ONVIF Device Service Docker image was successfully created:
docker images\n
REPOSITORY TAG IMAGE ID CREATED SIZE\nedgexfoundry-holding/device-onvif-camera 0.0.0-dev 75684e673feb 6 weeks ago 21.3MB\n
Navigate to edgex-compose
and enter the compose-builder
directory. bash cd edgex-compose/compose-builder
Update .env
file to add the registry and image version variable for device-onvif-camera: Add the following registry and version information:
DEVICE_ONVIFCAM_VERSION=0.0.0-dev\n
Update the add-device-onvif-camera.yml
to point to the local image.
services:\n device-onvif-camera:\n image: edgexfoundry/device-onvif-camera:${DEVICE_ONVIFCAM_VERSION}\n
Here is some information on how to specially configure parts of the service beyond the provided defaults.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#configure-the-device-profiles","title":"Configure the Device Profiles","text":"The device profile contains general information about the camera and includes all of the device resources and commands that the device resources can use to manage the cameras. The default profile found at cmd/res/devices/camera.yaml
contains all possible resources a camera could implement. Enable and disable supported resources in this file, or create an entirely new profile. It is important to set up the device profile to match the capabilities of the camera. Information on the resources supported by specific cameras can be found here. Learn more about device profiles in EdgeX here.
Sample: Snippet from camera.yaml
name: \"onvif-camera\" # general information about the profile\nmanufacturer: \"Generic\"\nmodel: \"Generic ONVIF\"\nlabels:\n- \"onvif\"\ndescription: \"EdgeX device profile for ONVIF-compliant IP camera.\" deviceResources:\n# Network Configuration\n- name: \"Hostname\" # an example of a resource with get/set values\nisHidden: false\ndescription: \"Camera Hostname\"\nattributes:\nservice: \"Device\"\ngetFunction: \"GetHostname\"\nsetFunction: \"SetHostname\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"RW\"\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#configure-the-provision-watchers","title":"Configure the Provision Watchers","text":"The provision watcher sets up parameters for EdgeX to automatically add devices to core-metadata. They can be configured to look for certain features, as well as block features. The default provision watcher is sufficient unless you plan on having multiple different cameras with different profiles and resources. Learn more about provision watchers here.
Sample: Snippet from generic.provision.watcher.yaml
name: Generic-Onvif-Provision-Watcher\nidentifiers:\nAddress: .\nblockingIdentifiers: {}\nadminState: UNLOCKED\ndiscoveredDevice:\nserviceName: device-onvif-camera\nprofileName: onvif-camera\nadminState: UNLOCKED\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#next-steps","title":"Next Steps","text":"Deploy and Run the Service>
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#license","title":"License","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/","title":"Deployment","text":"Follow this guide to deploy and run the service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#deploy-edgex-and-onvif-device-camera-microservice","title":"Deploy EdgeX and ONVIF Device Camera Microservice","text":"DockerNativeNavigate to the EdgeX compose-builder
directory:
cd edgex-compose/compose-builder/\n
Checkout the latest release (main):
git checkout main\n
Run Edgex with the ONVIF microservice in secure or non-secure mode.
Note
Go version 1.20+ is required to run natively. See here for more information.
Navigate to the EdgeX compose-builder
directory:
cd edgex-compose/compose-builder/\n
Checkout the latest release (main):
git checkout main\n
Run EdgeX:
make run no-secty\n
Navigate out of the edgex-compose
directory to the device-onvif-camera
directory:
cd device-onvif-camera\n
Checkout the latest release (main):
git checkout main\n
Run the service
make run\n
[Optional] Run with NATS
make run-nats\n
make run no-secty ds-onvif-camera\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#secure-mode","title":"Secure mode","text":"Note
Recommended for secure and production level deployments.
make run ds-onvif-camera\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#token-generation-secure-mode-only","title":"Token Generation (secure mode only)","text":"Note
Need to wait for sometime for the services to be fully up before executing the next set of commands. Securely store Consul ACL token and the JWT token generated which are needed to map credentials and execute apis. It is not recommended to store these secrets in cleartext in your machine.
Note
The JWT token expires after 119 minutes, and you will need to generate a new one.
Generate the Consul ACL Token. Use the token generated anywhere you see <consul-token>
in the documentation.
make get-consul-acl-token\n
Example output: 12345678-abcd-1234-abcd-123456789abc\n
Generate the JWT Token. Use the token generated anywhere you see <jwt-token>
in the documentation.
make get-token\n
Example output: eyJhbGciOiJFUzM4NCIsImtpZCI6IjUyNzM1NWU4LTQ0OWYtNDhhZC05ZGIwLTM4NTJjOTYxMjA4ZiJ9.eyJhdWQiOiJlZGdleCIsImV4cCI6MTY4NDk2MDI0MSwiaWF0IjoxNjg0OTU2NjQxLCJpc3MiOiIvdjEvaWRlbnRpdHkvb2lkYyIsIm5hbWUiOiJlZGdleHVzZXIiLCJuYW1lc3BhY2UiOiJyb290Iiwic3ViIjoiMGRjNThlNDMtNzBlNS1kMzRjLWIxM2QtZTkxNDM2ODQ5NWU0In0.oa8Fac9aXPptVmHVZ2vjymG4pIvF9R9PIzHrT3dAU11fepRi_rm7tSeq_VvBUOFDT_JHwxDngK1VqBVLRoYWtGSA2ewFtFjEJRj-l83Vz33KySy0rHteJIgVFVi1V7q5
Note
Secrets such as passwords, certificates, tokens and more in Edgex are stored in a secret store which is implemented using Vault a product of Hashicorp. Vault supports security features allowing for the issuing of consul tokens. JWT token is required for the API Gateway which is a trust boundry for Edgex services. It allows for external clients to be verified when issuing REST requests to the microservices. For more info refer Secure Consul, API Gateway and Edgex Security.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#verify-service-and-device-profiles","title":"Verify Service and Device Profiles","text":"via Command Linevia EdgeX UICheck the status of the container:
docker ps\n
The status column will indicate if the container is running, and how long it has been up.
Example output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n33f9c5ecb70e nexus3.edgexfoundry.org:10004/device-onvif-camera:latest \"/device-onvif-camer\u2026\" 7 weeks ago Up 48 minutes 127.0.0.1:59985->59985/tcp edgex-device-onvif-camera\n
Check whether the device service is added to EdgeX:
Note
If running in secure mode all the api executions need the JWT token generated previously. E.g.
curl --location --request GET 'http://localhost:59881/api/v3/deviceservice/name/device-onvif-camera' \\\n--header 'Authorization: Bearer <jwt-token>' \\\n--data-raw ''\n
curl -s http://localhost:59881/api/v3/deviceservice/name/device-onvif-camera | jq .\n
Good response: {\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"service\": {\n\"created\": 1657227634593,\n\"modified\": 1657291447649,\n\"id\": \"e1883aa7-f440-447f-ad4d-effa2aeb0ade\",\n\"name\": \"device-onvif-camera\",\n\"baseAddress\": \"http://edgex-device-onvif-camera:59984\",\n\"adminState\": \"UNLOCKED\"\n} }\n
Bad response: {\n\"apiVersion\" : \"v3\",\n\"message\": \"fail to query device service by name device-onvif-camer\",\n\"statusCode\": 404\n}\n
Check whether the device profile is added:
curl -s http://localhost:59881/api/v3/deviceprofile/name/onvif-camera | jq -r '\"profileName: \" + '.profile.name' + \"\\nstatusCode: \" + (.statusCode|tostring)'\n
Good response: profileName: onvif-camera\nstatusCode: 200\n
Bad response: profileName: \nstatusCode: 404\n
Note
jq -r
is used to reduce the size of the displayed response. The entire device profile with all resources can be seen by removing -r '\"profileName: \" + '.profile.name' + \"\\nstatusCode: \" + (.statusCode|tostring)', and replacing it with '.'
Note
Secure mode login to Edgex UI requires the JWT token generated in the above step
Entering the JWT token
Visit http://localhost:4000 to go to the dashboard for EdgeX Console GUI:
Figure 1: EdgeX Console Dashboard
To see Device Services, Devices, or Device Profiles, click on their respective tab:
Figure 2: EdgeX Console Device Service List
Figure 3: EdgeX Console Device List
Figure 4: EdgeX Console Device Profile List
Additionally, ensure that the service config has been deployed and that Consul is reachable.
Note
If running in secure mode this command needs the Consul ACL token generated previously.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera?keys=true\"\n
Example output:
[\"edgex/v3/device-onvif-camera/AppCustom/BaseNotificationURL\", \"edgex/v3/device-onvif-camera/AppCustom/CheckStatusInterval\",\n \"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/NoAuth\", ... , \"edgex/v3/device-onvif-camera/Writable/InsecureSecrets/credentials001/SecretData/username\", \"edgex/v3/device-onvif-camera/Writable/InsecureSecrets/credentials001/SecretName\",\n \"edgex/v3/device-onvif-camera/Writable/LogLevel\"]\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#manage-devices","title":"Manage Devices","text":"Follow these instructions to add and update devices manually.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#curl-commands","title":"Curl Commands","text":""},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#add-device","title":"Add Device","text":"Warning
Be careful when storing any potentially important information in cleartext on files in your computer. This includes information such as your camera IP and MAC addresses.
Edit the information to appropriately match the camera. The fields Address
, MACAddress
and Port
should match that of the camera:
Note
If running in secure mode the commands might need the JWT or consul token generated previously.
curl -X POST -H 'Content-Type: application/json' \\\nhttp://localhost:59881/api/v3/device \\\n-d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\":\"Camera001\",\n \"serviceName\": \"device-onvif-camera\",\n \"profileName\": \"onvif-camera\",\n \"description\": \"My test camera\",\n \"adminState\": \"UNLOCKED\",\n \"operatingState\": \"UP\",\n \"protocols\": {\n \"Onvif\": {\n \"Address\": \"10.0.0.0\",\n \"Port\": \"10000\",\n \"MACAddress\": \"aa:bb:cc:11:22:33\",\n \"FriendlyName\":\"Default Camera\"\n },\n \"CustomMetadata\": {\n \"Location\":\"Front door\"\n }\n }\n }\n }\n]'\n
Example output:
[{\"apiVersion\" : \"v3\",\"statusCode\":201,\"id\":\"fb5fb7f2-768b-4298-a916-d4779523c6b5\"}]\n
Update credentials in Secret Store.
Secure modeNon-secure modeNote
If running in secure mode all the api executions need the JWT token generated previously.
Enter your chosen username, password, and authentication mode and credentials name and then execute the command to create the secrets.
Note
The options for authentication mode are: usernametoken
, digest
, or both
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"<creds-name>\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<password>\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"<auth-mode>\"\n }\n ]\n }' --header 'Authorization:Bearer <jwt-token>' -X POST \"http://localhost:59984/api/v3/secret\"\n
Example output: {\"apiVersion\":\"v3\",\"statusCode\":201}\n
Enter your chosen username, password, and authentication mode and credentials name and then execute the command to create the secrets.
Note
The options for authentication mode are: usernametoken
, digest
, or both
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"<creds-name>\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<password>\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"<auth-mode>\"\n }\n ]\n }' -X POST \"http://localhost:59984/api/v3/secret\"\n
Example output:
{\"apiVersion\":\"v3\",\"statusCode\":201}\n
Map credentials to devices.
Secure ModeNon-secure modea. Enter your mac-address(es) and then execute the command to add the mac address(es) to the mapping.
Note
If you want to map multiple mac addresses, enter a comma separated list in the command
curl --data '<mac-address>' -H \"X-Consul-Token:<consul-token>\" -X PUT \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>\"\n
Example output: true\n
b. Check the status of the credentials map.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap?keys=true\" | jq .\n
Example output: [\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/NoAuth\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials001\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials002\"\n]\n
c. Check the mac addresses mapped to a specific credenential name. Insert the credential name in the command to see the mac addresses associated with it.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>?raw=true\"\n
Example output: 11:22:33:44:55:66\n
a. Enter your mac-address(es) and then execute the command to add the mac address(es) to the mapping.
Note
If you want to map multiple mac addresses, enter a comma separated list in the command
curl --data '<mac-address>' -X PUT \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>\"\n
Example output:
true\n
b. Check the status of the credentials map.
curl -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap?keys=true\" | jq .\n
Example output: [\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/NoAuth\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials001\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials002\"\n]\n
c. Check the mac addresses mapped to a specific credenential name. Insert the credential name in the command to see the mac addresses associated with it.
curl -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>?raw=true\"\n
Example response: 11:22:33:44:55:66\n
Note
The helper scripts may also be used, but they have been deprecated.
Verify device(s) have been successfully added to core-metadata.
curl -s http://localhost:59881/api/v3/device/all | jq -r '\"deviceName: \" + '.devices[].name''\n
Example output:
deviceName: Camera001\ndeviceName: device-onvif-camera\n
Note
jq -r
is used to reduce the size of the displayed response. The entire device with all information can be seen by removing -r '\"deviceName: \" + '.devices[].name'', and replacing it with '.'
There are multiple commands that can update aspects of the camera entry in meta-data. Refer to the Swagger documentation for Core Metadata for more information. For editing specific fields, see the General Usage tab.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#delete-device","title":"Delete Device","text":"curl -X 'DELETE' \\\n'http://localhost:59881/api/v3/device/name/<device name>' \\\n-H 'accept: application/json'
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#shutting-down","title":"Shutting Down","text":"To stop all EdgeX services (containers), execute the make down
command. This will stop all services but not the images and volumes, which still exist.
edgex-compose/compose-builder
directory.make down\n
make clean\n
Learn how to use the device service>
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#license","title":"License","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/","title":"General Usage","text":"This document will describe how to execute some of the most important commands used with the device service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/#execute-getstreamuri-command-through-edgex","title":"Execute GetStreamURI Command through EdgeX","text":"Note
Make sure to replace Camera001
in all the commands below, with the proper deviceName.
Get the profile token by executing the GetProfiles
command:
curl -s http://0.0.0.0:59882/api/v3/device/name/Camera001/MediaProfiles | jq -r '\"profileToken: \" + '.event.readings[].objectValue.Profiles[].Token''\n
Example output: profileToken: profile_1\nprofileToken: profile_2\n
To get the RTSP URI from the ONVIF device, execute the GetStreamURI
command, using a profileToken found in step 1: In this example, profile_1
is the profileToken:
curl -s \"http://0.0.0.0:59882/api/v3/device/name/Camera001/StreamUri?jsonObject=$(base64 -w 0 <<< '{\n \"StreamSetup\" : {\n \"Stream\" : \"RTP-Unicast\",\n \"Transport\" : {\n \"Protocol\" : \"RTSP\"\n }\n },\n \"ProfileToken\": \"profile_1\"\n}')\" | jq -r '\"streamURI: \" + '.event.readings[].objectValue.MediaUri.Uri''\n
Example output: streamURI: rtsp://192.168.86.34:554/stream1\n
Stream the RTSP stream.
Warning
RTSP streams are insecure, as the credentials are included in plaintext. Always keep this in mind when streaming via RTSP.
ffplay can be used to stream. The command follows this format:ffplay -rtsp_transport tcp \"rtsp://<user>:<password>@<IP address>:<port>/<streamname>\"\n
Using the streamURI
returned from the previous step, run ffplay: ffplay -rtsp_transport tcp \"rtsp://admin:Password123@192.168.86.34:554/stream1\"\n
While the streamURI
returned did not contain the username and password, those credentials are required in order to correctly authenticate the request and play the stream. Therefore, it is included in both the VLC and ffplay streaming examples.
If the password uses special characters, you must use percent-encoding.
To shut down ffplay, use the ctrl-c command.
To learn more about the API, see here
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/#troubleshooting-guide","title":"Troubleshooting Guide","text":""},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/#axis-camera-authentication-failure","title":"Axis camera authentication failure","text":"If while using Axis cameras you face authentication failure it might help by disabling its replay attack protection
. For doing so please refer to Axis-replay-attack-protection. For more info on this refer to Axis-onvif-stackoverflow.
Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/","title":"Setup","text":"Follow this guide to set up your system to run the ONVIF Device Service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#system-requirements","title":"System Requirements","text":"Note
The instructions in this guide were developed and tested using Ubuntu 20.04 LTS and the Tapo C200 Pan/Tilt Wi-Fi Camera, referred to throughout this document as the Tapo C200 Camera. However, the software may work with other Linux distributions and ONVIF-compliant cameras. Refer to our list of tested cameras for more information
Other Requirements
You must have administrator (sudo) privileges to execute the user guide commands.
Make sure that the cameras are secured and the computer system runnning this software is secure.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#dependencies","title":"Dependencies","text":"The software has dependencies, including Git, Docker, Docker Compose, and assorted tools. Follow the instructions below to install any dependency that is not already installed.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#install-git","title":"Install Git","text":"Install Git from the official repository as documented on the Git SCM site.
Update installation repositories:
sudo apt update\n
Add the Git repository:
sudo add-apt-repository ppa:git-core/ppa -y\n
Install Git:
sudo apt install git\n
Install Docker from the official repository as documented on the Docker site.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#verify-docker","title":"Verify Docker","text":"To enable running Docker commands without the preface of sudo, add the user to the Docker group. Then run Docker with the hello-world
test.
Create Docker group:
sudo groupadd docker\n
Note
If the group already exists, groupadd
outputs a message: groupadd: group docker
already exists. This is OK.
Add User to group:
sudo usermod -aG docker $USER\n
Restart your computer for the changes to take effect.
To verify the Docker installation, run hello-world
:
docker run hello-world\n
A Hello from Docker! greeting indicates successful installation. Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n2db29710123e: Pull complete \nDigest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n...\n
Install Docker Compose from the official repository as documented on the Docker Compose site.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#install-tools","title":"Install Tools","text":"Install the build, media streaming, and parsing tools:
sudo apt install build-essential ffmpeg jq curl\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#tool-descriptions","title":"Tool Descriptions","text":"The table below lists command line tools this guide uses to help with EdgeX configuration and device setup.
| Tool | Description | Note |
|------|-------------|------|
| curl | Allows the user to connect to services such as EdgeX. Use curl to get transfer information either to or from this service. | In the tutorial, use curl to communicate with the EdgeX API. The call will return a JSON object. |
| jq | Parses the JSON object returned from the curl requests. The jq command includes parameters that are used to parse and format data. | In this tutorial, the jq command has been configured to return and format appropriate data for each curl command that is piped into it. |
| base64 | Converts data into the Base64 format. | |

Table 1: Command Line Tools
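As a quick illustration of how curl and jq are combined throughout this guide, the following one-liner pings an EdgeX service and pretty-prints the reply (a minimal sketch; it assumes core-command is already running on its default port 59882 and exposes the standard /api/v3/ping endpoint):
curl -s http://localhost:59882/api/v3/ping | jq .\n
If the service is up, a small JSON response is printed; the same curl-piped-into-jq pattern is used for most commands in this guide.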
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#download-edgex-compose","title":"Download EdgeX Compose","text":"Clone the EdgeX compose repository:
git clone https://github.com/edgexfoundry/edgex-compose.git\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#proxy-setup-optional","title":"Proxy Setup (Optional)","text":"Note
These steps are only required if a proxy is present in the user environment.
Set up the Docker Daemon or Docker Desktop to use the proxied environment.
Follow the guide here for Docker Daemon proxy setup (Linux)
Follow the guide here for Docker Desktop proxy setup (Windows)
Configuration file to set Docker Daemon proxy via daemon.json
{\n \"proxies\": {\n \"http-proxy\": \"http://proxy.example.com:3128\",\n \"https-proxy\": \"https://proxy.example.com:3129\",\n \"no-proxy\": \"*.test.example.com,.example.org,127.0.0.0/8\"\n }\n }\n
Note if building custom images
If building your own custom images, set environment variables for HTTP_PROXY, HTTPS_PROXY and NO_PROXY
Example
export HTTP_PROXY=http://proxy.example.com:3128\nexport HTTPS_PROXY=https://proxy.example.com:3129\nexport NO_PROXY=*.test.example.com,localhost,127.0.0.0/8\n
Note
Automated discovery of ONVIF devices requires setting the proper discovery subnets and network interface in the ONVIF configuration.yaml, or setting the corresponding EdgeX environment variables.
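For example, if running the service natively, the equivalent environment variables can be exported before starting it (a sketch; substitute your own subnet and interface, and see the Auto Discovery documentation later in this document for the docker-compose equivalents):
export APPCUSTOM_DISCOVERYSUBNETS=\"192.168.1.0/24\"\nexport APPCUSTOM_DISCOVERYETHERNETINTERFACE=\"eth0\"\n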
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#next-steps","title":"Next Steps","text":"Default Images>
Warning
While not recommended, you can follow the process for manually building the images.
Build Images>
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#license","title":"License","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/assets/onvif-mermaid/","title":"Onvif mermaid","text":"Render
sequenceDiagram Onvif Device Service->>Onvif Camera: WS-Discovery Probe Onvif Camera->>Onvif Device Service: Probe Response Onvif Device Service->>Onvif Camera: GetDeviceInformation Onvif Camera->>Onvif Device Service: GetDeviceInformation Response Onvif Device Service->>Onvif Camera: GetNetworkInterfaces Onvif Camera->>Onvif Device Service: GetNetworkInterfaces Response Onvif Device Service->>EdgeX Core-Metadata: Create Device EdgeX Core-Metadata->>Onvif Device Service: Device AddedRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD %% -------- Node Definitions -------- %% Multicast[/Devices Discoveredvia Multicast/] Netscan[/Devices Discoveredvia Netscan/] DupeFilter[Filter Duplicate Devicesbased on EndpointRef] MACMatches{MAC Addressmatches existingdevice?} RefMatches{EndpointRefmatches existingdevice?} IPChanged{IP AddressChanged?} MACChanged{MAC AddressChanged?} UpdateIP[Update IP Address] UpdateMAC(Update MAC Address) RegisterDevice(Register New DeviceWith EdgeX) DeviceNotRegistered(Device Not Registered) PWMatches{Device matchesProvision Watcher?} %% -------- Graph Definitions -------- %% Multicast --> DupeFilter Netscan --> DupeFilter DupeFilter --> ForEachDevice subgraph ForEachDevice[For Each Unique Device] MACMatches -->|Yes| IPChanged MACMatches -->|No| RefMatches RefMatches -->|Yes| IPChanged RefMatches -->|No| ForEachPW ForEachPW --> PWMatches PWMatches-->|No Matches| DeviceNotRegistered IPChanged -->|No| MACChanged IPChanged -->|Yes| UpdateIP UpdateIP --> MACChanged MACChanged -->|Yes| UpdateMAC PWMatches -->|Yes| RegisterDevice endRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD; %% -------- Node Definitions -------- %% DiscoveredDevice[/Discovered Device/] UseDefault[Use Default Credentials] EndpointRefHasMAC{Does EndpointRefcontainMAC Address?} InNoAuthGroup{MAC Belongsto NoAuth group?} AuthModeNone[Set AuthMode to 'none'] ApplyCreds[Apply Credentials] InSecretStore{Credentials existin SecretStore?} CreateClient[Create Onvif Client] GetDeviceInfo[Get Device Information] GetNetIfaces[Get Network Interfaces] CreateDevice(Create Device:<Mfg>-<Model>-<EndpointRef>) CreateUnknownDevice(Create Device:unknown_unknown_<EndpointRef>) %% -------- Graph Definitions -------- %% DiscoveredDevice --> ForAllMAC subgraph ForAllMAC[For all MAC Addresses in CredentialsMap] EndpointRefHasMAC end EndpointRefHasMAC -->|Yes| InNoAuthGroup EndpointRefHasMAC -- No Matches --> UseDefault InNoAuthGroup -->|Yes| AuthModeNone InNoAuthGroup -->|No| InSecretStore UseDefault --> InSecretStore AuthModeNone --> CreateClient InSecretStore -->|Yes| ApplyCreds InSecretStore -->|No| AuthModeNone ApplyCreds --> CreateClient CreateClient --> GetDeviceInfo GetDeviceInfo -->|Failed| CreateUnknownDevice GetDeviceInfo -->|Success| GetNetIfaces GetNetIfaces ----> CreateDeviceRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD; %% -------- Node Definitions -------- %% ExistingDevice[/Existing Device/] ContainsMAC{Device Metadata containsMAC Address?} ValidMAC{Is it a validMAC Address?} InMap{MAC exists inCredentialsMap?} InNoAuth{MAC Belongsto NoAuth group?} UseDefault[Use Default Credentials] InSecretStore{Credentials existin SecretStore?} AuthModeNone(Set AuthMode to 'none') ApplyCreds(Apply Credentials) CreateClient(Create Onvif Client) %% -------- Edge Definitions -------- %% ExistingDevice --> ContainsMAC ContainsMAC -->|Yes| ValidMAC ValidMAC -->|Yes| InMap ValidMAC -->|No| AuthModeNone InMap -->|Yes| InNoAuth InMap -->|No| AuthModeNone ContainsMAC -->|No| UseDefault InNoAuth -->|Yes| AuthModeNone InNoAuth -->|No| InSecretStore UseDefault --> InSecretStore InSecretStore -->|Yes| ApplyCreds InSecretStore -->|No| AuthModeNone AuthModeNone ----> CreateClient ApplyCreds ----> CreateClientRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD; %% -------- Node Definitions -------- %% CheckDeviceStatus(Check Device Status) UpdateDeviceStatus[Update Device Statusin Core-Metadata] SetLastSeen[Set LastSeen = Now] UpdateMetadata[Update Core-Metadata] CheckNowUpWithAuth{Status Changed&&Status == UpWithAuth?} DeviceHasMAC{Device HasMAC Address?} CreateClient[Create Onvif Client] GetCapabilities[Device::GetCapabilities] CheckUpdatedMAC[Check CredentialsMap forupdated MAC Address] TCPProbe[TCP Probe] GetDeviceInfo[GetDeviceInformation] UpdateDeviceInfo[Update Device Information] UpdateMACAddress[Update MAC Address] UpdateEndpointRef[Update EndpointRefAddress] DeviceUnknown{Device Namebegins withunknown_unknown_?} RemoveDevice[Remove Deviceunknown_unknown_<EndpointRef>] CreateDevice[Create Device<Mfg>-<Model>-<EndpointRef>] %% -------- Graph Definitions -------- %% CheckDeviceStatus --> DeviceHasMAC DeviceHasMAC -->|No| CheckUpdatedMAC DeviceHasMAC -->|Yes| CreateClient CheckUpdatedMAC --> CreateClient subgraph TestConnection[Test Connection Methods] CreateClient --> GetCapabilities GetCapabilities -->|Failed| TCPProbe GetCapabilities -->|Success| GetDeviceInfo GetDeviceInfo -->|Success| UpWithAuth GetDeviceInfo -->|Failed| UpWithoutAuth TCPProbe -->|Failed| Unreachable TCPProbe -->|Success| Reachable end UpWithAuth --> SetLastSeen UpWithoutAuth --> SetLastSeen Reachable --> SetLastSeen Unreachable --> UpdateDeviceStatus UpdateDeviceStatus --> CheckNowUpWithAuth SetLastSeen --> UpdateDeviceStatus CheckNowUpWithAuth -->|Yes| RefreshDevice subgraph RefreshDevice[Refresh Device] UpdateDeviceInfo --> UpdateMACAddress UpdateMACAddress --> UpdateEndpointRef UpdateEndpointRef --> DeviceUnknown DeviceUnknown -->|No| UpdateMetadata DeviceUnknown -->|Yes| RemoveDevice RemoveDevice --> CreateDevice end"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/","title":"Onvif Camera Device Service Specifications","text":"This Onvif Camera Device Service is developed to control/communicate ONVIF-compliant cameras accessible via http in an EdgeX deployment
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#table-of-contents","title":"Table of Contents","text":"The latest version main of the device service API specifications can be found here.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#onvif-device-service-protocol-properties","title":"ONVIF Device Service Protocol Properties","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#onvif-protocol","title":"ONVIF Protocol","text":"All properties in the Onvif
protocol field are defined by internal device information and some user defined information.
All properties in the CustomMetadata
protocol field are user defined. It can hold multiple different entries. For more information, see here
The device service supports the onvif features listed in the following table:
Feature Onvif Web Service Onvif Function EdgeX Value Type User Authentication Core WS-Usernametoken Authentication HTTP Digest Auto Discovery Core WS-Discovery Device GetDiscoveryMode Object SetDiscoveryMode Object GetScopes Object SetScopes Object AddScopes Object RemoveScopes Object Network Configuration Device GetHostname Object SetHostname Object GetDNS Object SetDNS Object GetNetworkInterfaces Object SetNetworkInterfaces Object GetNetworkProtocols Object SetNetworkProtocols Object GetNetworkDefaultGateway Object SetNetworkDefaultGateway Object System Function Device GetDeviceInformation Object GetSystemDateAndTime Object SetSystemDateAndTime Object SetSystemFactoryDefault Object SystemReboot Object User Handling Device GetUsers Object CreateUsers Object DeleteUsers Object SetUser Object Metadata Configuration Media GetMetadataConfiguration Object GetMetadataConfigurations Object GetCompatibleMetadataConfigurations Object GetMetadataConfigurationOptions Object AddMetadataConfiguration Object RemoveMetadataConfiguration Object SetMetadataConfiguration Object Video Streaming Media GetProfiles Object GetStreamUri Object VideoEncoder Config Media GetVideoEncoderConfiguration Object SetVideoEncoderConfiguration Object GetVideoEncoderConfigurationOptions Object PTZ Node PTZ GetNode Object GetNodes Object PTZ Configuration GetConfigurations Object GetConfiguration Object GetConfigurationOptions Object SetConfiguration Object Media AddPTZConfiguration Object Media RemovePTZConfiguration Object PTZ Actuation PTZ AbsoluteMove Object RelativeMove Object ContinuousMove Object Stop Object GetStatus Object GetPresets Object GotoPreset Object RemovePreset Object PTZ Home Position PTZ GotoHomePosition Object SetHomePosition Object PTZ Auxiliary Operations PTZ SendAuxiliaryCommand Object Event Handling Event Notify Object Subscribe Object Renew Object Unsubscribe Object CreatePullPointSubscription Object PullMessages Object TopicFilter Object MessageContentFilter Object Analytics Profile Configuration Media2 GetProfiles Object GetAnalyticsConfigurations Object AddConfiguration Object RemoveConfiguration Object Analytics Module Configuration Analytics GetSupportedAnalyticsModules Object GetAnalyticsModules Object CreateAnalyticsModules Object DeleteAnalyticsModules Object GetAnalyticsModuleOptions Object ModifyAnalyticsModules Object Rule Configuration Analytics GetSupportedRules Object GetRules Object CreateRules Object DeleteRules Object GetRuleOptions Object ModifyRule ObjectNote
The functions shown in bold text are mandatory for the Onvif protocol.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#custom-features","title":"Custom Features","text":"The device service also include custom function to enhance the usage for the EdgeX user.
| Feature | Service | Function | EdgeX Value Type | Description |
|---------|---------|----------|------------------|-------------|
| System Function | EdgeX | RebootNeeded | Bool | Read only. Used to indicate the camera should reboot to apply the configuration change |
| System Function | EdgeX | CameraEvent | Bool | A device resource which is used to send the async event to the north bound |
| System Function | EdgeX | SubscribeCameraEvent | Bool | Create a subscription to subscribe to events from the camera |
| System Function | EdgeX | UnsubscribeCameraEvent | Bool | Unsubscribe all subscriptions from the camera |
| Media | EdgeX | GetSnapshot | Binary | Get a snapshot from the snapshot URI |
| Custom Metadata | EdgeX | CustomMetadata | Object | Read and write custom metadata to the camera entry in EdgeX |
| Custom Metadata | EdgeX | DeleteCustomMetadata | Object | Delete custom metadata fields from the camera entry in EdgeX |
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#how-does-the-device-service-work","title":"How does the device service work?","text":"The Onvif camera uses Web Services standards such as XML, SOAP 1.2, and WSDL 1.1 over an IP network: XML is used as the data description syntax, SOAP is used for message transfer, and WSDL is used for describing the services.
For details, refer to the ONVIF-Core-Specification.
For example, we can send a SOAP request to the Onvif camera as below:
curl --request POST 'http://192.168.12.128:2020/onvif/service' \\\n--header 'Content-Type: application/soap+xml' \\\n--data-raw '<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<soap-env:Envelope xmlns:soap-env=\"http://www.w3.org/2003/05/soap-envelope\" xmlns:soap-enc=\"http://www.w3.org/2003/05/soap-encoding\" xmlns:tan=\"http://www.onvif.org/ver20/analytics/wsdl\" xmlns:onvif=\"http://www.onvif.org/ver10/schema\" xmlns:trt=\"http://www.onvif.org/ver10/media/wsdl\" xmlns:timg=\"http://www.onvif.org/ver20/imaging/wsdl\" xmlns:tds=\"http://www.onvif.org/ver10/device/wsdl\" xmlns:tev=\"http://www.onvif.org/ver10/events/wsdl\" xmlns:tptz=\"http://www.onvif.org/ver20/ptz/wsdl\" >\n <soap-env:Header>\n <Security xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\">\n <UsernameToken>\n <Username>myUsername</Username>\n <Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest\">+HKcvc+LCGClVwuros1sJuXepQY=</Password>\n <Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">w490bn6rlib33d5rb8t6ulnqlmz9h43m</Nonce>\n <Created xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">2021-10-21T03:43:21.02075Z</Created>\n </UsernameToken>\n </Security>\n </soap-env:Header>\n <soap-env:Body>\n <trt:GetStreamUri>\n <trt:ProfileToken>profile_1</trt:ProfileToken>\n </trt:GetStreamUri>\n </soap-env:Body>\n </soap-env:Envelope>'\n
And the response should be like the following XML data: <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<SOAP-ENV:Envelope\nxmlns:SOAP-ENV=\"http://www.w3.org/2003/05/soap-envelope\" xmlns:SOAP-ENC=\"http://www.w3.org/2003/05/soap-encoding\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsa=\"http://schemas.xmlsoap.org/ws/2004/08/addressing\"\nxmlns:wsdd=\"http://schemas.xmlsoap.org/ws/2005/04/discovery\" xmlns:chan=\"http://schemas.microsoft.com/ws/2005/02/duplex\"\nxmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\"\nxmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsa5=\"http://www.w3.org/2005/08/addressing\"\nxmlns:xmime=\"http://tempuri.org/xmime.xsd\" xmlns:xop=\"http://www.w3.org/2004/08/xop/include\" xmlns:wsrfbf=\"http://docs.oasis-open.org/wsrf/bf-2\"\nxmlns:wstop=\"http://docs.oasis-open.org/wsn/t-1\" xmlns:wsrfr=\"http://docs.oasis-open.org/wsrf/r-2\" xmlns:wsnt=\"http://docs.oasis-open.org/wsn/b-2\"\nxmlns:tt=\"http://www.onvif.org/ver10/schema\" xmlns:ter=\"http://www.onvif.org/ver10/error\" xmlns:tns1=\"http://www.onvif.org/ver10/topics\"\nxmlns:tds=\"http://www.onvif.org/ver10/device/wsdl\" xmlns:trt=\"http://www.onvif.org/ver10/media/wsdl\"\nxmlns:tev=\"http://www.onvif.org/ver10/events/wsdl\" xmlns:tdn=\"http://www.onvif.org/ver10/network/wsdl\" xmlns:timg=\"http://www.onvif.org/ver20/imaging/wsdl\"\nxmlns:trp=\"http://www.onvif.org/ver10/replay/wsdl\" xmlns:tan=\"http://www.onvif.org/ver20/analytics/wsdl\" xmlns:tptz=\"http://www.onvif.org/ver20/ptz/wsdl\">\n<SOAP-ENV:Header></SOAP-ENV:Header>\n<SOAP-ENV:Body>\n<trt:GetStreamUriResponse>\n<trt:MediaUri>\n<tt:Uri>rtsp://192.168.12.128:554/stream1</tt:Uri>\n<tt:InvalidAfterConnect>false</tt:InvalidAfterConnect>\n<tt:InvalidAfterReboot>false</tt:InvalidAfterReboot>\n<tt:Timeout>PT0H0M2S</tt:Timeout>\n</trt:MediaUri>\n</trt:GetStreamUriResponse>\n</SOAP-ENV:Body>\n</SOAP-ENV:Envelope>\n
Since the SOAP message is carried over HTTP, the device service simply transforms between REST (JSON) and SOAP (XML).
The implementation concept is as follows:
- The device service accepts the REST request from the client, transforms it to SOAP format, and forwards it to the Onvif camera.
- Once the device service receives the response from the Onvif camera, it transforms the SOAP response back to REST format for the client.
- Onvif Web Service\n\n - Onvif Function \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Input Parameter \u2502 Device Service \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 \u2502 REST request \u2502 \u2502 SOAP request \u2502 \u2502\n\u2502 Client \u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u25ba Transform \u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u25ba Onvif Camera \u2502\n\u2502 \u2502 \u2502 to SOAP request \u2502 \u2502 \u2502\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 Device Service \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 \u2502 REST response \u2502 \u2502 SOAP response \u2502 \u2502\n\u2502 Client \u25c4\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500 Transform \u25c4\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500 Onvif Camera \u2502\n\u2502 \u2502 \u2502 to REST response \u2502 \u2502 \u2502\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
Warning
Both REST and SOAP commands over the network can be subject to attacks while in transit. Please take all necessary precautions to protect network traffic.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#tested-onvif-cameras","title":"Tested Onvif Cameras","text":"The following table shows the Onvif functions tested for various Onvif cameras:
Use these links to access manufacturer documentation.
Warning
Information in this page may be outdated.
The device-onvif-camera service implements the analytics functions according to Onvif Profile M
to manage the Analytics Module and Rule configuration.
For details, refer to the ONVIF Profile M specification.
This page uses the BOSCH DINION IP starlight 6000 HD as the test camera and the BOSCH Configuration Manager as the camera viewer.
- The product page is at https://commerce.boschsecurity.com/tw/en/DINION-IP-starlight-6000-HD/p/20827877387/
- The Configuration Manager can be downloaded from https://downloadstore.boschsecurity.com/index.php?type=CM
Within the scope of Profile M, the device-onvif-camera service should be able to manage the Analytics Module and Rule configuration. The API scope can be illustrated with the following example.
For more information, please refer to Annex D (Radiometry) of the ONVIF Analytics Service Specification: https://www.onvif.org/specs/srv/analytics/ONVIF-Analytics-Service-Spec.pdf
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#manage-the-analytics-module-configuration","title":"Manage the Analytics Module Configuration","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#query-the-analytics-module","title":"Query the Analytics Module","text":"curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsModules?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n...\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n...\n \"objectValue\" : {\n\"AnalyticsModule\" : [\n{\n\"Name\" : \"Viproc\",\n \"Parameters\" : {\n\"SimpleItem\" : [\n{\n\"Name\" : \"Mode\",\n \"Value\" : \"Profile 1\"\n},\n {\n\"Name\" : \"AnalysisType\",\n \"Value\" : \"Intelligent Video Analytics\"\n}\n]\n},\n \"Type\" : \"tt:Viproc\"\n}\n]\n},\n }\n],\n \"sourceName\" : \"AnalyticsModules\"\n},\n \"statusCode\" : 200\n}\n
Note
The jsonObject parameter is encoded from {\"ConfigurationToken\": \"{ANALYTIC_CONFIG_TOKEN}\"}
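For instance, the encoded value used above can be reproduced from the shell, assuming the configuration token is 1:
echo -n '{\"ConfigurationToken\":\"1\"}' | base64 -w 0\n
Example output: eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=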
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/SupportedAnalyticsModules' | jq .\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 692 100 692 0 0 2134 0 --:--:-- --:--:-- --:--:-- 2217\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n...\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"70545263-30e7-4c03-9741-0011300f2f9c\",\n \"objectValue\" : {\n\"SupportedAnalyticsModules\" : {\n\"AnalyticsModuleDescription\" : [\n{\n\"Fixed\" : true,\n \"MaxInstances\" : 1,\n \"Name\" : \"tt:Viproc\",\n \"Parameters\" : {\n\"SimpleItemDescription\" : [\n{\n\"Name\" : \"Mode\",\n \"Type\" : \"xs:string\"\n},\n {\n\"Name\" : \"AnalysisType\",\n \"Type\" : \"xs:string\"\n}\n]\n}\n}\n]\n}\n},\n }\n],\n \"sourceName\" : \"SupportedAnalyticsModules\"\n},\n \"statusCode\" : 200\n}\n
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsModuleOptions?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"deviceName\" : \"Camera003\",\n \"profileName\" : \"onvif-camera\",\n ...\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"43f0e59b-6f3e-4119-978e-299ccd59049d\",\n \"objectValue\" : {\n\"Options\" : [\n{\n\"AnalyticsModule\" : \"tt:Viproc\",\n \"Name\" : \"Mode\",\n \"StringItems\" : {\n\"Item\" : [\n\"Off\",\n \"Silent VCA\",\n \"Profile 1\",\n \"Profile 2\",\n \"Scheduled\",\n \"Event Triggered\"\n]\n}\n},\n {\n\"AnalyticsModule\" : \"tt:Viproc\",\n \"Name\" : \"AnalysisType\",\n \"StringItems\" : {\n\"Item\" : [\n\"MOTION+\",\n \"Intelligent Video Analytics\"\n]\n}\n}\n]\n},\n ...\n \"resourceName\" : \"AnalyticsModuleOptions\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"AnalyticsModuleOptions\"\n},\n \"statusCode\" : 200\n}\n
Note
The jsonObject parameter is encoded from {\"ConfigurationToken\": \"{ANALYTIC_CONFIG_TOKEN}\"}
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#modify-the-analytics-module-options","title":"Modify the Analytics Module Options","text":"
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsModules' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"AnalyticsModules\": {\n \"ConfigurationToken\": \"1\",\n \"AnalyticsModule\": [\n {\n \"Name\": \"Viproc\",\n \"Type\": \"tt:Viproc\",\n \"Parameters\": {\n \"SimpleItem\": [\n {\n \"Name\": \"Mode\",\n \"Value\": \"Profile 1\"\n },\n {\n \"Name\": \"AnalysisType\",\n \"Value\": \"Intelligent Video Analytics\"\n }\n ]\n }\n\n }\n ]\n }\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#manage-the-rule-configuration","title":"Manage the Rule Configuration","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#query-the-rules","title":"Query the Rules","text":"curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsRules?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"profileName\" : \"onvif-camera\",\n ...\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"1abea901-ad51-4a55-b9bb-0b00271307df\",\n \"objectValue\" : {\n\"Rule\" : [\n{\n\"Name\" : \"Detect any object\",\n \"Parameters\" : {\n\"SimpleItem\" : [\n{\n\"Name\" : \"Armed\",\n \"Value\" : \"true\"\n}\n]\n},\n \"Type\" : \"tt:ObjectInField\"\n}\n]\n},\n \"origin\" : 1639480270526564000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"AnalyticsRules\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"AnalyticsRules\"\n},\n \"statusCode\" : 200\n}\n
Note
The jsonObject parameter is encoded from {\"ConfigurationToken\": \"{ANALYTIC_CONFIG_TOKEN}\"}
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsSupportedRules?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 9799 0 9799 0 0 9605 0 --:--:-- 0:00:01 --:--:-- 9740\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"id\" : \"07f7b42e-835b-4ecc-97b1-fe4d5f52575b\",\n \"origin\" : 1639482296788863000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"6fca707b-3c52-4694-be37-2e23ecf65de1\",\n \"objectValue\" : {\n\"SupportedRules\" : {\n\"RuleDescription\" : [\n....\n {\n\"MaxInstances\" : 16,\n \"Messages\" : {\n\"Data\" : {\n\"SimpleItemDescription\" : [\n{\n\"Name\" : \"Count\",\n \"Type\" : \"xs:int\"\n}\n]\n},\n \"IsProperty\" : true,\n \"ParentTopic\" : \"tns1:RuleEngine/CountAggregation/Counter\",\n \"Source\" : {\n\"SimpleItemDescription\" : [\n{\n\"Name\" : \"VideoSource\",\n \"Type\" : \"tt:ReferenceToken\"\n},\n {\n\"Name\" : \"Rule\",\n \"Type\" : \"xs:string\"\n}\n]\n}\n},\n \"Name\" : \"tt:LineCounting\",\n \"Parameters\" : {\n\"ElementItemDescription\" : [\n{\n\"Name\" : \"Segments\"\n}\n],\n \"SimpleItemDescription\" : [\n{\n\"Name\" : \"Armed\",\n \"Type\" : \"xs:boolean\"\n},\n {\n\"Name\" : \"Direction\",\n \"Type\" : \"tt:Direction\"\n},\n {\n\"Name\" : \"MinObjectHeight\",\n \"Type\" : \"xs:int\"\n},\n ...\n {\n\"Name\" : \"ClassFilter\",\n \"Type\" : \"tt:StringList\"\n}\n]\n}\n}\n]\n}\n},\n \"origin\" : 1639482296788863000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"AnalyticsSupportedRules\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"AnalyticsSupportedRules\"\n},\n \"statusCode\" : 200\n}\n
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/RuleOptions?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 1168 100 1168 0 0 755 0 0:00:01 0:00:01 --:--:-- 759\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"id\" : \"3ac81a5c-48f2-46d7-a3f9-d4919f97ae8d\",\n \"origin\" : 1639482979553667000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"6eae2e16-71f7-4b92-95b6-32e398be25ca\",\n \"objectValue\" : {\n\"RuleOptions\" : [\n...\n {\n\"MaxOccurs\" : \"3\",\n \"MinOccurs\" : \"0\",\n \"Name\" : \"Field\",\n \"PolygonOptions\" : {\n\"VertexLimits\" : {\n\"Max\" : 16,\n \"Min\" : 3\n}\n}\n},\n {\n\"IntRange\" : {\n\"Max\" : 16,\n \"Min\" : 2\n},\n \"MaxOccurs\" : \"3\",\n \"MinOccurs\" : \"1\",\n \"Name\" : \"Segments\"\n},\n {\n\"Name\" : \"Direction\",\n \"StringList\" : \"Any Right Left\"\n},\n {\n\"Name\" : \"ClassFilter\",\n \"StringList\" : \"Person Bike Car Truck\"\n}\n]\n},\n \"origin\" : 1639482979553667000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"RuleOptions\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"RuleOptions\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#add-the-rule","title":"Add the Rule","text":"curl --location --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsCreateRules' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"AnalyticsCreateRules\": {\n \"ConfigurationToken\": \"1\",\n \"Rule\": [\n {\n \"Name\": \"Object Counting\",\n \"Type\": \"tt:LineCounting\",\n \"Parameters\": {\n \"SimpleItem\": [\n {\n \"Name\":\"Armed\", \n \"Value\":\"true\"\n }\n ],\n \"ElementItem\": [\n {\n \"Name\":\"Segments\", \n \"Polyline\": {\n \"Point\": [\n {\n \"x\":\"0.16\",\n \"y\": \"0.5\"\n },\n {\n \"x\":\"0.16\",\n \"y\": \"-0.5\"\n }\n ]\n }\n }\n ]\n }\n\n }\n ]\n }\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/","title":"Event Handling","text":"Warning
Information in this page may be outdated.
The device service shall be able to use at least one of the following ways to retrieve events:
* PullPoint - \"Pull\" using the CreatePullPointSubscription and PullMessage operations
* BaseNotification - \"Push\" using the Notify, Subscribe and Renew operations from WSBaseNotification
The spec can refer to https://www.onvif.org/ver10/events/wsdl/event.wsdl and https://docs.oasis-open.org/wsn/wsn-ws_base_notification-1.3-spec-os.pdf
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-the-device-resources-for-event-handling","title":"Define the device resources for Event Handling","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-a-cameraevent-resource-for-device-service-to-publish-the-event","title":"Define a CameraEvent resource for device service to publish the event","text":"Before receiving the event data from the camera, we must define a device resource for the event.
- name: \"CameraEvent\"\nisHidden: true\ndescription: \"This resource is used to send the async event reading to north bound\"\nattributes:\nservice: \"EdgeX\"\ngetFunction: \"CameraEvent\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"R\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-device-resource-for-pullpoint","title":"Define device resource for PullPoint","text":"Define a SubscribeCameraEvent resource with PullPoint subscribeType for creating the subscription
- name: \"SubscribeCameraEvent\"\nisHidden: false\ndescription: \"Create a subscription to subscribe the event from the camera\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"SubscribeCameraEvent\"\n# PullPoint | BaseNotification\nsubscribeType: \"PullPoint\"\ndefaultSubscriptionPolicy: \"\"\ndefaultInitialTerminationTime: \"PT1H\"\ndefaultAutoRenew: true\ndefaultTopicFilter: \"tns1:RuleEngine/TamperDetector\"\ndefaultMessageContentFilter: \"boolean(//tt:SimpleItem[@Name=\u201dIsTamper\u201d])\"\ndefaultMessageTimeout: \"PT5S\"\ndefaultMessageLimit: 10\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Define an UnsubscribeCameraEvent resource for unsubscribing
- name: \"UnsubscribeCameraEvent\"\nisHidden: false\ndescription: \"Unsubscribe all event from the camera\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"UnsubscribeCameraEvent\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Define a SubscribeCameraEvent resource with BaseNotification subscribeType
- name: \"SubscribeCameraEvent\"\nisHidden: false\ndescription: \"Create a subscription to subscribe the event ...\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"SubscribeCameraEvent\"\n# PullPoint | BaseNotification\nsubscribeType: \"BaseNotification\"\ndefaultSubscriptionPolicy: \"\"\ndefaultInitialTerminationTime: \"PT1H\"\ndefaultAutoRenew: true\ndefaultTopicFilter: \"...\"\ndefaultMessageContentFilter: \"...\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Define a driver config BaseNotificationURL to indicate the device service network location
# configuration.yaml\nAppCustom:\n# BaseNotificationURL indicates the device service network location (which should be accessible from onvif devices on the network), when\n# configuring an Onvif Event subscription.\nBaseNotificationURL: 'http://192.168.12.112:59984'\n
The device service will generate the following path for pushing events from the camera to the device service:
- {BaseNotificationURL}/api/v3/resource/{DeviceName}/{ResourceName}
- For example: {BaseNotificationURL}/api/v3/resource/Camera1/CameraEvent
Note
The user can also override the config from the docker-compose environment variable:
export HOST_IP=$(ifconfig eth0 | grep \"inet \" | awk '{ print $2 }')\n
environment:\nDRIVER_BASENOTIFICATIONURL: http://${HOST_IP}:59984\n
Then the device service can be accessed by the external camera from the other subnetwork."},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-device-resource-for-unsubscribing-the-event","title":"Define device resource for unsubscribing the event","text":" - name: \"UnsubscribeCameraEvent\"\nisHidden: true\ndescription: \"Unsubscribe all subscription from the camera\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"UnsubscribeCameraEvent\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#find-the-supported-event-topics","title":"Find the supported Event Topics","text":"Finding out what notifications a camera supports and what information they contain:
curl --request GET 'http://localhost:59882/api/v3/device/name/Camera003/EventProperties'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#create-a-pull-point","title":"Create a Pull Point","text":"User can create pull point with the following command:
curl --request PUT 'http://localhost:59882/api/v3/device/name/Camera003/PullPointSubscription' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"PullPointSubscription\": {\n \"MessageContentFilter\": \"boolean(//tt:SimpleItem[@Name=\\\"Rule\\\"])\",\n \"InitialTerminationTime\": \"PT120S\",\n \"MessageTimeout\": \"PT20S\"\n }\n}'\n
Note
The user can create a subscription; the InitialTerminationTime is required and should be greater than ten seconds:
curl --request PUT 'http://localhost:59882/api/v3/device/name/Camera003/BaseNotificationSubscription' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"BaseNotificationSubscription\": {\n \"TopicFilter\": \"tns1:RuleEngine/TamperDetector/Tamper\",\n \"InitialTerminationTime\": \"PT180S\"\n }\n}'\n
Note
The user can unsubscribe all subscriptions (PullPoint and BaseNotification) from the camera with the following command:
curl --request PUT 'http://localhost:59882/api/v3/device/name/Camera003/UnsubscribeCameraEvent' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"UnsubscribeCameraEvent\": {\n }\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/","title":"User Handling","text":"Warning
Information in this page may be outdated.
The device service shall be able to create, list, modify and delete users from the device using the CreateUsers, GetUsers, SetUser and DeleteUsers operations.
The spec can refer to https://www.onvif.org/ver10/device/wsdl/devicemgmt.wsdl
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#getusers","title":"GetUsers","text":"This operation lists the registered users and corresponding credentials on a device.
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera001/Users'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#createusers","title":"CreateUsers","text":"This operation creates new camera users and corresponding credentials on a device for authentication purposes.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/CreateUsers' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"CreateUsers\": {\n \"User\": [\n {\n \"Username\": \"user1\",\n \"Password\": \"Password1\",\n \"UserLevel\": \"User\"\n },\n {\n \"Username\": \"user2\",\n \"Password\": \"Password1\",\n \"UserLevel\": \"User\"\n }\n ]\n }\n }'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#setuser","title":"SetUser","text":"This operation updates the settings for one or several users on a device for authentication purposes.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/Users' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"Users\": {\n \"User\": [\n {\n \"Username\": \"user1\",\n \"UserLevel\": \"Administrator\"\n },\n {\n \"Username\": \"user2\",\n \"UserLevel\": \"Operator\"\n }\n ]\n }\n }'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#deleteusers","title":"DeleteUsers","text":"This operation deletes users on a device.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/DeleteUsers' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"DeleteUsers\": {\n \"Username\": [\"user1\",\"user2\"]\n }\n }'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/","title":"Auto Discovery","text":"There are two methods that the device service can use to discover and add ONVIF compliant cameras using WS-Discovery: multicast and netscan.
For more info on how WS-Discovery works, see here.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#how-to","title":"How To","text":"Note
Ensure that the cameras are all installed and configured before attempting discovery.
Device discovery is triggered by the device SDK. Once the device service starts, it will discover the Onvif camera(s) at the specified interval.
Note
You can also manually trigger discovery using this command: curl -X POST http://<service-host>:59984/api/v3/discovery
See Configuration Section for full details
Note
Alternatively, for netscan
you can set the DiscoverySubnets
automatically after the service has been deployed by running the bin/configure-subnets.sh script
For netscan, there is a one-line command to determine the DiscoverySubnets
of your current machine: ip -4 -o route list scope link | sed -En \"s/ dev ($(find /sys/class/net -mindepth 1 -maxdepth 2 -not -lname '*devices/virtual*' -execdir grep -q 'up' \"{}/operstate\" \\; -printf '%f\\n' | paste -sd\\| -)).+//p\" | grep -v \"169.254.0.0/16\" | sort -u | paste -sd, -\n
Example Output: 192.168.1.0/24
Define the following configurations in cmd/res/configuration.yaml
for the auto-discovery mechanism:
Device:\n# The location of Provision Watcher yaml files to import when using auto-discovery\nProvisionWatchersDir: ./res/provisionwatchers\nDiscovery:\nEnabled: true\nInterval: 1h\n\n# Custom configs\nAppCustom:\nDefaultSecretName: credentials001\n# Select which discovery mechanism(s) to use\nDiscoveryMode: both # netscan, multicast, or both\n# The target ethernet interface for multicast discovering\nDiscoveryEthernetInterface: eth0\n# List of IPv4 subnets to perform netscan discovery on, in CIDR format (X.X.X.X/Y)\n# separated by commas ex: \"192.168.1.0/24,10.0.0.0/24\"\nDiscoverySubnets: \"192.168.1.0/24\" # Fill in with your actual subnet(s)\n
Define the following environment variables in docker-compose.yaml
:
device-onvif-camera:\nenvironment:\nDEVICE_DISCOVERY_ENABLED: \"true\" # enable device discovery\nDEVICE_DISCOVERY_INTERVAL: \"1h\" # set to desired interval\n\n# The target ethernet interface for multicast discovering\nAPPCUSTOM_DISCOVERYETHERNETINTERFACE: \"eth0\"\n# The Secret Name of the default credentials to use for devices\nAPPCUSTOM_DEFAULTSECRETNAME: \"credentials001\"\n# Select which discovery mechanism(s) to use\nAPPCUSTOM_DISCOVERYMODE: \"both\" # netscan, multicast, or both\n# List of IPv4 subnets to perform netscan discovery on, in CIDR format (X.X.X.X/Y)\n# separated by commas ex: \"192.168.1.0/24,10.0.0.0/24\"\nAPPCUSTOM_DISCOVERYSUBNETS: \"192.168.1.0/24\" # Fill in with your actual subnet(s)\n
Enter the subnet into this command, and execute it to set the DiscoverySubnets
Note
If you are operating in secure mode, you must use the Consul ACL Token generated previously. If not, you can omit the -H \"X-Consul-Token:<consul-token>\"
portion of the command.
curl --data '<subnet>' -H \"X-Consul-Token:<consul-token>\" -X PUT \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/DiscoverySubnets\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#step-2-set-credentialsmap","title":"Step 2. Set CredentialsMap","text":"See Credentials Guide for more information.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#configuration-guide","title":"Configuration Guide","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#discoverymode","title":"DiscoveryMode","text":"Note
For docker, set the env var APPCUSTOM_DISCOVERYMODE
DiscoveryMode
allows you to select which discovery mechanism(s) to use. The three options are: netscan
, multicast
, and both
.
netscan
works by sending unicast UDP WS-Discovery probes to a set of IP addresses on the CIDR subnet(s) configured via DiscoverySubnets
.
For example, if the provided CIDR is 10.0.0.0/24
, it will probe all IP addresses from 10.0.0.1
to 10.0.0.254
. This will result in a total of 254 probes on the network.
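As a quick sanity check, the number of host addresses probed for a given prefix length can be computed in the shell (a sketch; the /24 value matches the example above, with the network and broadcast addresses excluded):
PREFIX=24; echo $(( (1 << (32 - PREFIX)) - 2 ))\n
This prints 254; a /16 would yield 65534 probes, which is worth keeping in mind before enabling netscan on large subnets.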
This method is a little slower and more network-intensive than multicast WS-Discovery, because it has to make individual connections. However, it can reach a much wider set of networks and works better behind NATs (such as docker networks).
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#multicast","title":"multicast","text":"multicast
works by sending a single multicast UDP WS-Discovery Probe to the multicast address 239.255.255.250
on port 3702
. In certain networks this traffic is blocked, and it is also not forwarded across subnets, so it is not compatible with NATs such as docker networks (except in the case of running an Onvif simulator inside the same docker network).
multicast
requires some additional configuration. Edit the add-device-onvif-camera.yml
in the edgex-compose/compose-builder
as follows:
Example
services:\n device-onvif-camera:\n image: edgexfoundry/device-onvif-camera${ARCH}:0.0.0-dev\n container_name: edgex-device-onvif-camera\n hostname: edgex-device-onvif-camera\n read_only: true\n restart: always\n network_mode: \"host\"\n environment:\n SERVICE_HOST: 192.168.93.151 # set to internal ip of your machine\n MESSAGEQUEUE_HOST: localhost\n EDGEX_SECURITY_SECRET_STORE: \"false\"\n REGISTRY_HOST: localhost\n CLIENTS_CORE_DATA_HOST: localhost\n CLIENTS_CORE_METADATA_HOST: localhost\n # Host Network Interface, IP, Subnet\n APPCUSTOM_DISCOVERYETHERNETINTERFACE: wlp1s0 # determine this setting for your machine\n APPCUSTOM_DISCOVERYSUBNETS: 192.168.93.0/24 # determine this setting for your machine\n APPCUSTOM_DISCOVERYMODE: multicast\n depends_on:\n - consul\n - data\n - metadata\n security_opt:\n - no-new-privileges:true\n user: \"${EDGEX_USER}:${EDGEX_GROUP}\"\n command: --cp=consul.http://localhost:8500\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#both","title":"both","text":"This option combines both netscan and multicast.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#discoverysubnets","title":"DiscoverySubnets","text":"Note
For docker, set the env var APPCUSTOM_DISCOVERYSUBNETS
This is the list of IPv4 subnets to perform netscan discovery on, in CIDR format (X.X.X.X/Y) separated by commas ex: \"192.168.1.0/24,10.0.0.0/24\". See how to configure this value here.
Also, the following one-line command can determine the subnets of your machine:
ip -4 -o route list scope link | sed -En \"s/ dev ($(find /sys/class/net -mindepth 1 -maxdepth 2 -not -lname '*devices/virtual*' -execdir grep -q 'up' \"{}/operstate\" \\; -printf '%f\\n' | paste -sd\\| -)).+//p\" | grep -v \"169.254.0.0/16\" | sort -u | paste -sd, -\n
Example Output: 192.168.1.0/24
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#discoveryethernetinterface","title":"DiscoveryEthernetInterface","text":"Note
For docker, set the env var APPCUSTOM_DISCOVERYETHERNETINTERFACE
This is the target Ethernet interface to use for multicast discovery. Keep in mind this interface is relative to the environment it is being run under. For example, when running in docker, those interfaces are different from your host machine's interfaces.
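To see which interfaces are actually visible in the environment the service runs in (host or container), one option is the following (assumes the iproute2 ip tool is available):
ip -o link show | awk -F': ' '{print $2}'\n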
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#probeasynclimit","title":"ProbeAsyncLimit","text":"Note
For docker, set the env var APPCUSTOM_PROBEASYNCLIMIT
This is the maximum number of simultaneous network probes when running netscan discovery.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#probetimeoutmillis","title":"ProbeTimeoutMillis","text":"Note
For docker, set the env var APPCUSTOM_PROBETIMEOUTMILLIS
This is the maximum number of milliseconds to wait for each IP probe before timing out. This will also be the minimum time the discovery process can take.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#maxdiscoverdurationseconds","title":"MaxDiscoverDurationSeconds","text":"Note
For docker, set the env var APPCUSTOM_MAXDISCOVERDURATIONSECONDS
This is the maximum number of seconds the discovery process is allowed to run before it is cancelled. It is especially important to configure this for larger subnets such as /16 and /8.
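As a rough back-of-the-envelope estimate when tuning these values, the worst-case netscan duration can be approximated by assuming probes are issued in batches of ProbeAsyncLimit, each waiting up to ProbeTimeoutMillis (a sketch with illustrative numbers, not the service's exact algorithm):
HOSTS=65534; LIMIT=4000; TIMEOUT_MS=2000; echo $(( (HOSTS + LIMIT - 1) / LIMIT * TIMEOUT_MS / 1000 )) seconds\n
With these example settings a /16 subnet works out to roughly 34 seconds, which shows why MaxDiscoverDurationSeconds matters for large subnets.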
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#adding-the-devices-to-edgex","title":"Adding the Devices to EdgeX","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#rediscovery","title":"Rediscovery","text":"The device service is able to rediscover and update devices that have been discovered previously. Nothing additional is needed to enable this. It will run whenever the discover call is sent, regardless of whether it is a manual or automated call to discover.
The following logic determines whether the device is already registered.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#troubleshooting","title":"Troubleshooting","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#netscan-discovery-was-called-but-discoverysubnets-are-empty","title":"netscan discovery was called, but DiscoverySubnets are empty!","text":"This message occurs when you have not configured the AppCustom.DiscoverySubnets
configuration. It is required in order to know which subnets to scan for Onvif Cameras. See here
This message occurs when you have multicast discovery enabled, but AppCustom.DiscoveryEthernetInterface
is configured to a network interface that does not exist. See here
Control plane events have been added to enable the Core Metadata to emit events onto the message bus when a device has been added, updated, or deleted.
Refer to Device System Events for more information.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/","title":"Credentials","text":"Camera credentials are stored in the EdgeX Secret Store and referenced by MAC Address. All devices by default are configured with credentials from DefaultSecretName
unless configured as part of a group within AppCustom.CredentialsMap
.
Three things must be done in order to add an authenticated camera to EdgeX:
- Add the device to EdgeX (manually or via auto-discovery)
- Add Credentials to the Secret Store (manually or via utility scripts)
- Map Credentials to devices (manually, via utility scripts, or by configuring DefaultSecretName)

Key terms:
- Secret: key/value pairs stored in the Secret Store under a specific Secret Name key.
- Credentials: a Secret which contains a mapping of username, password, and authentication mode.
- Secret Store: where EdgeX stores Secrets (Vault in secure mode, Consul in non-secure mode). They can be pre-configured via configuration.yaml's Writable.InsecureSecrets section.
- Secret Name: the name of a Secret as it is stored in the Secret Store.
- CredentialsMap: (AppCustom.CredentialsMap) contains the mappings between Secret Name and MAC Address. Each key in the map is a Secret Name which points to Credentials in the Secret Store. The value for each key is a comma separated list of MAC Addresses which should use those Credentials.
- DefaultSecretName: the Secret Name which points to the Credentials to use as the default for all devices which are not configured in the CredentialsMap.
- NoAuth: a special Secret Name that does not exist in the Secret Store. It is pre-configured as Credentials with an Authentication Mode of none. NoAuth can be used most places where a Secret Name is expected.

Camera credentials are stored in the EdgeX Secret Store, which is Vault in secure mode and Consul in non-secure mode. The term Secret Name
is often used to refer to the name of the credentials as they are stored in the Secret Store. Credentials are then mapped to devices either using the DefaultSecretName
which applies to all devices by default, or by configuring the AppCustom.CredentialsMap
which maps one or more MAC Addresses to the desired credentials.
Credentials are SecretData comprised of three fields:
- username: the admin username for the camera
- password: the admin password
- mode: the type of authentication to use
  - usernametoken: use a username and token based authentication
  - digest: use a digest based authentication
  - both: use both usernametoken and digest
  - none: do not send any authentication headers
Note
Credentials can be added and modified via utility scripts after the service is running
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#non-secure-mode","title":"Non-Secure Mode","text":"Helper ScriptsManualSee here for the full guide.
Replace <secret-name>
with the name of the secret, <username>
with the username, <password>
with the password, and <mode>
with the auth mode.
Set SecretName to <device-name>
curl -X PUT --data \"<secret-name>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretName\"\n
Set username to <username>
curl -X PUT --data \"<username>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretData/username\"\n
Set password to <password>
curl -X PUT --data \"<password>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretData/password\"\n
Set auth mode to <auth-mode>
curl -X PUT --data \"<auth-mode>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretData/mode\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#secure-mode","title":"Secure Mode","text":"Helper ScriptsManual See here for the full guide.
Credentials can be added via EdgeX Secrets:
Replace <secret-name>
with the name of the secret, <username>
with the username, <password>
with the password, and <mode>
with the auth mode.
curl --location --request POST 'http://localhost:59984/api/v3/secret' \\\n--header 'Content-Type: application/json' \\\n--data-raw '\n{\n \"apiVersion\" : \"v3\",\n \"name\": \"<secret-name>\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<password>\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"<mode>\"\n }\n ]\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#mapping-credentials-to-devices","title":"Mapping Credentials to Devices","text":"Note
Credential mappings can be set via utility scripts after the service is running
The device service supports three types of credential mapping. All three types can be used in conjunction with each other.
- 1 to All: all devices are given the default credentials based on DefaultSecretName
- 1 to Many: in the CredentialsMap, one secret name can be assigned multiple MAC addresses
- 1 to 1: in the CredentialsMap, assign each secret name one MAC Address
must also exist in the secret store!
# AppCustom.CredentialsMap is a map of SecretName -> Comma separated list of mac addresses.\n# Every SecretName used here must also exist as a valid secret in the Secret Store.\n#\n# Note: Anything not defined here will be assigned the default credentials configured via `DefaultSecretName`.\n#\n# Example: (Single mapping for 1 mac address to 1 credential)\n# credentials001 = \"aa:bb:cc:dd:ee:ff\"\n#\n# Example: (Multi mapping for 3 mac address to 1 shared credentials)\n# credentials002 = \"11:22:33:44:55:66,ff:ee:dd:cc:bb:aa,ab:12:12:34:34:56:56\"\n#\n# These mappings can also be referred to as \"groups\". In the above case, the `credentials001` group has 1 MAC\n# Address, and the `credentials002` group has 3 MAC Addresses.\n#\n# The special group 'NoAuth' defines mac addresses of cameras where no authentication is needed.\n# The 'NoAuth' key does not exist in the SecretStore. It is not required to add MAC Addresses in here,\n# however it avoids sending the default credentials to cameras which do not need it.\n#\n# IMPORTANT: A MAC Address may only exist in one credential group. If a MAC address is defined in more\n# than one group, it is unpredictable which group the MAC will end up in! If you wish to change the group a MAC\n# address belongs to, first remove it from its existing group, and then add it to the new one.\nCredentialsMap:\nNoAuth: \"\"\ncredentials001: \"aa:bb:cc:dd:ee:ff\"\ncredentials002: \"11:22:33:44:55:66,ff:ee:dd:cc:bb:aa,ab:12:12:34:34:56:56\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#credential-lookup","title":"Credential Lookup","text":"Here is an in-depth look at the logic behind mapping Credentials
to Devices.
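As a rough illustration of that logic, here is a simplified Go sketch (not the service's actual implementation; it only mirrors the behavior described above, where a MAC address found in the CredentialsMap uses that group's secret name, the special NoAuth group means no credentials are sent, and any other MAC falls back to DefaultSecretName):
package main\n\nimport (\n\"fmt\"\n\"strings\"\n)\n\n// lookupSecretName sketches the credential lookup described above:\n// a MAC address found in the CredentialsMap uses that group's secret name\n// (including the special NoAuth group); any other MAC falls back to the default.\nfunc lookupSecretName(mac string, credsMap map[string]string, defaultSecretName string) string {\nfor secretName, macList := range credsMap {\nfor _, m := range strings.Split(macList, \",\") {\nif strings.EqualFold(strings.TrimSpace(m), mac) {\nreturn secretName\n}\n}\n}\nreturn defaultSecretName\n}\n\nfunc main() {\ncredsMap := map[string]string{\n\"NoAuth\": \"\",\n\"credentials001\": \"aa:bb:cc:dd:ee:ff\",\n\"credentials002\": \"11:22:33:44:55:66,ff:ee:dd:cc:bb:aa\",\n}\nfmt.Println(lookupSecretName(\"aa:bb:cc:dd:ee:ff\", credsMap, \"credentials001\")) // matched group\nfmt.Println(lookupSecretName(\"12:34:56:78:9a:bc\", credsMap, \"credentials001\")) // falls back to the default\n}\n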
Custom metadata can be applied and retrieved for each camera added to the service.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#usage","title":"Usage","text":"CustomMetadata
map is an element in the ProtocolProperties
device field. It is initialized to be empty on discovery, so the user can add their desired fields. Otherwise, the user can pre-define this field in a camera.yaml file. If you add pre-defined devices, set up the CustomMetadata
object as shown in the cmd/res/devices/camera.yaml.example
.
deviceList:\n- name: Camera001\nprofileName: onvif-camera\ndescription: onvif conformant camera\nprotocols:\n...\nCustomMetadata:\nLocation: Front door\nColor: Black and white\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#set-custom-metadata","title":"Set Custom Metadata","text":"Use the CustomMetadata resource to set the fields of CustomMetadata
. Choose the key/value pairs to represent your custom fields.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/CustomMetadata' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"CustomMetadata\": {\n \"Location\":\"Front Door\",\n \"Color\":\"Black and white\",\n \"Condition\": \"Good working condition\"\n }\n }' | jq .\n
{\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
Note
Ensure all data is properly formatted json, and that all special characters are escaped if necessary
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#get-custom-metadata","title":"Get Custom Metadata","text":"Use the CustomMetadata resource to get and display the fields of CustomMetadata
.
curl http://localhost:59882/api/v3/device/name/<device name>/CustomMetadata | jq .\n
2. The response from the curl command. {\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"ba3987f9-b45b-480a-b582-f5501d673c4d\",\n \"origin\" : 1655409814077374935,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"cf96e5c0-bde1-4c0b-9fa4-8f765c8be456\",\n \"objectValue\" : {\n\"Color\" : \"Black and white\",\n \"Condition\" : \"Good working condition\",\n \"Location\" : \"Front Door\"\n},\n \"origin\" : 1655409814077374935,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"CustomMetadata\",\n \"value\" : \"\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"CustomMetadata\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#get-specific-custom-metadata","title":"Get Specific Custom Metadata","text":"Pass the CustomMetadata
resource a query to get specific field(s) in CustomMetadata. The query must be a base64 encoded json object with an array of fields you want to access.
Json object holding an array of fields you want to query.
'[\n\"Color\",\n\"Location\"\n]'\n
Use this command to convert the json object to base64.
echo '[\n \"Color\",\n \"Location\"\n]' | base64\n
The json object converted to base64.
WwogICAgIkNvbG9yIiwKICAgICJMb2NhdGlvbiIKXQo=\n
Use this command to query the fields you provided in the json object.
curl http://localhost:59882/api/v3/device/name/<device name>/CustomMetadata?jsonObject=WwogICAgIkNvbG9yIiwKICAgICJMb2NhdGlvbiIKXQo= | jq .\n
Curl response.
{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"24c3eb0a-48b1-4afe-b874-965aeb2e42a2\",\n \"origin\" : 1655410556448058195,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"d0c26303-20b5-4ccd-9e63-fb02b87b8ebc\",\n \"objectValue\" : {\n\"Color\": \"Black and white\",\n \"Location\" : \"Front Door\"\n},\n \"origin\" : 1655410556448058195,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"CustomMetadata\",\n \"value\" : \"\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"CustomMetadata\"\n},\n \"statusCode\" : 200\n}\n
Use the DeleteCustomMetadata resource to delete entries in custom metadata
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/DeleteCustomMetadata' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"DeleteCustomMetadata\": [\n \"Color\", \"Condition\"\n ]\n }' | jq .\n
{\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
The device status goes hand in hand with the rediscovery of the cameras, but goes beyond the scope of just discovery. It is a separate background task running at a specified interval (default 30s) to determine the most accurate operating status of the existing cameras. This applies to all devices regardless of how or where they were added from.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/device-status/#states-and-descriptions","title":"States and Descriptions","text":"Currently, there are 4 different statuses that a camera can have
EnableStatusCheck
to enable the device status background service.CheckStatusInterval
is the interval at which the service will determine the status of each camera.# Enable or disable the built in status checking of devices, which runs every CheckStatusInterval.\nEnableStatusCheck: true\n# The interval in seconds at which the service will check the connection of all known cameras and update the device status \n# A longer interval will mean the service will detect changes in status less quickly\n# Maximum 300s (5 minutes)\nCheckStatusInterval: 30\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/device-status/#automatic-triggers","title":"Automatic Triggers","text":"Currently, there are some actions that will trigger an automatic status check: - Any modification to the CredentialsMap
from the config provider (Consul)
Friendly name and MAC address can be set and retrieved for each camera added to the service.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#preset-friendlyname","title":"Preset FriendlyName","text":"FriendlyName
is an element in the Onvif ProtocolProperties
device field. It is initialized to be empty or <Manufacturer+Model>
if credentials are provided on discovery. The user can also pre-define this field in a camera.yaml file.
If you add pre-defined devices, set up the FriendlyName
field as shown in the cmd/res/devices/camera.yaml.example
.
# Pre-defined Devices\ndeviceList:\n- name: Camera001\nprofileName: onvif-camera\ndescription: onvif conformant camera\nprotocols:\nOnvif:\nAddress: 192.168.12.123\nPort: '80'\nFriendlyName: Home camera\nCustomMetadata:\nLocation: Front door\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#set-friendly-name","title":"Set Friendly Name","text":"Friendly name can also be set via Edgex device command. FriendlyName device resource is used to set FriendlyName
of a camera.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/FriendlyName' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"FriendlyName\":\"Home camera\"\n }' | jq .\n
2. The response from the curl command. {\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
Note
Ensure all data is properly formatted json, and that all special characters are escaped if necessary
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#get-friendly-name","title":"Get Friendly Name","text":"Use the FriendlyName device resource to retrieve FriendlyName
of a camera.
curl http://localhost:59882/api/v3/device/name/<device name>/FriendlyName | jq .\n
2. Response from the curl command. The FriendlyName value can be found under the value
field in the json response. {\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"event\": {\n\"apiVersion\" : \"v3\",\n \"id\": \"5b924351-31c7-469e-a9ba-dea063fdbf3a\",\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"profileName\": \"onvif-camera\",\n \"sourceName\": \"FriendlyName\",\n \"origin\": 1658441317910501400,\n \"readings\": [\n{\n\"id\": \"62a0424b-a3c1-45ea-b640-58c7aa3ea476\",\n \"origin\": 1658441317910501400,\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"resourceName\": \"FriendlyName\",\n \"profileName\": \"onvif-camera\",\n \"valueType\": \"String\",\n \"value\": \"Home camera\"\n}\n]\n}\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#preset-macaddress","title":"Preset MACAddress","text":"MACAddress
is an element in the Onvif ProtocolProperties
device field. It will be set to empty string if no value is provided, or it will be set with the MAC address value of the camera if valid credentials are provided. The user can pre-define this field in a camera.yaml file.
If you add pre-defined devices, set up the MACAddress
field as shown in the cmd/res/devices/camera.yaml.example
.
MACAddress can also be set via an EdgeX device command. This is useful for setting the MAC Address for devices which do not contain the MAC Address in the Endpoint Reference Address, or have been added manually without a MAC Address. Since the MAC is used to map credentials for cameras, it is important to have this field filled out.
Note
When a camera successfully becomes UpWithAuth
, the MAC Address is automatically queried and overridden by the system if available.
The MACAddress device resource is used to set the MACAddress
of a camera.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/MACAddress' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"MACAddress\":\"11:22:33:44:55:66\"\n }' | jq .\n
{\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
Note
Ensure all data is properly formatted json, and that all special characters are escaped if necessary.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#get-mac-address","title":"Get MAC Address","text":"Use the MACAddress device resource to retrieve MACAddress
of a camera.
curl http://localhost:59882/api/v3/device/name/<device name>/MACAddress | jq .\n
2. Response from the curl command. The MACAddress value can be found under the value
field in the json response. {\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"event\": {\n\"apiVersion\" : \"v3\",\n \"id\": \"c13245b0-397f-47c0-84b2-4de3d2fb891d\",\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-1027f5ea8888\",\n \"profileName\": \"onvif-camera\",\n \"sourceName\": \"MACAddress\",\n \"origin\": 1658441498356294000,\n \"readings\": [\n{\n\"id\": \"7a7735ed-3b61-4426-84df-5e9a524e4022\",\n \"origin\": 1658441498356294000,\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-1027f5ea8888\",\n \"resourceName\": \"MACAddress\",\n \"profileName\": \"onvif-camera\",\n \"valueType\": \"String\",\n \"value\": \"11:22:33:44:55:66\"\n}\n]\n}\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/","title":"Getting Started With Docker (Security Mode)","text":"Warning
Information in this page may be outdated.
This section describes how to run device-onvif-camera with Docker and EdgeX security mode.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#1-build-docker-image","title":"1. Build docker image","text":"Build docker image named edgex/device-onvif-camera:0.0.0-dev with the following command:
make docker\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#2-prepare-edgex-composecompose-builder","title":"2. Prepare edgex-compose/compose-builder","text":"edgex-compose/compose-builder
make run ds-onvif-camera\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#31-check-whether-the-services-are-running-from-consul","title":"3.1 Check whether the services are running from Consul","text":"$ make get-consul-acl-token\n14891947-51b3-603d-9e35-628fb82993f4\n
http://localhost:8500/
curl --location --request POST 'http://0.0.0.0:59984/api/v3/secret' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"bosch\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"administrator\"\n },\n {\n \"key\":\"password\",\n \"value\":\"Password1!\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"digest\"\n }\n ]\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#5-add-the-device-profile-to-edgex","title":"5. Add the device profile to EdgeX","text":"Change directory back to the device-onvif-camera
and add the device profile to core-metadata service with the following command:
curl http://localhost:59881/api/v3/deviceprofile/uploadfile \\\n-F \"file=@./cmd/res/profiles/camera.yaml\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#6-add-the-device-to-edgex","title":"6. Add the device to EdgeX","text":"Add the device data to core-metadata service with the following command:
curl -X POST -H 'Content-Type: application/json' \\\nhttp://localhost:59881/api/v3/device \\\n-d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\":\"Camera003\",\n \"serviceName\": \"device-onvif-camera\",\n \"profileName\": \"onvif-camera\",\n \"description\": \"My test camera\",\n \"adminState\": \"UNLOCKED\",\n \"operatingState\": \"UNKNOWN\",\n \"protocols\": {\n \"Onvif\": {\n \"Address\": \"192.168.12.148\",\n \"Port\": \"80\",\n \"AuthMode\": \"digest\",\n \"SecretName\": \"bosch\"\n }\n }\n }\n }\n ]'\n
Check the available commands from core-command service:
$ curl http://localhost:59882/api/v3/device/name/Camera003 | jq .\n{\n\"apiVersion\" : \"v3\",\n \"deviceCoreCommand\" : {\n\"coreCommands\" : [\n{\n\"get\" : true,\n \"set\" : true,\n \"name\" : \"DNS\",\n \"parameters\" : [\n{\n\"resourceName\" : \"DNS\",\n \"valueType\" : \"Object\"\n}\n],\n \"path\" : \"/api/v3/device/name/Camera003/DNS\",\n \"url\" : \"http://edgex-core-command:59882\"\n},\n ...\n {\n\"get\" : true,\n \"name\" : \"StreamUri\",\n \"parameters\" : [\n{\n\"resourceName\" : \"StreamUri\",\n \"valueType\" : \"Object\"\n}\n],\n \"path\" : \"/api/v3/device/name/Camera003/StreamUri\",\n \"url\" : \"http://edgex-core-command:59882\"\n}\n],\n \"deviceName\" : \"Camera003\",\n \"profileName\" : \"onvif-camera\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#7-execute-a-get-command","title":"7. Execute a Get Command","text":"$ curl http://0.0.0.0:59882/api/v3/device/name/Camera003/Users | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"id\" : \"c0826f49-2840-421b-9474-7ad63a443302\",\n \"origin\" : 1639525215434025100,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"d4dc823a-d75f-4fe1-8ee4-4220cc53ddc6\",\n \"objectValue\" : {\n\"User\" : [\n{\n\"UserLevel\" : \"Operator\",\n \"Username\" : \"user\"\n},\n {\n\"UserLevel\" : \"Administrator\",\n \"Username\" : \"service\"\n},\n {\n\"UserLevel\" : \"Administrator\",\n \"Username\" : \"administrator\"\n}\n]\n},\n \"origin\" : 1639525215434025100,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"Users\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"Users\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-footnotes/","title":"ONVIF Footnotes","text":"Warning
Information in this page may be outdated.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-footnotes/#command-support","title":"Command Support","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-footnotes/#tapo-c200-user-management","title":"Tapo C200 - User Management","text":"Tapo returns 200 OK
for all User Management commands, but none of them actually do anything. The only way to modify the users is through the Tapo app.
Tapo does not support setting the DaylightSavings
field to false
. Regardless of the setting, the camera will always use daylight savings time.
You must use Digest Auth
or Both
as the Auth-Mode in order for this to work.
Warning
Information in this page may be outdated.
According to the Onvif user authentication flow, the device service shall: * Implement WS-Usernametoken according to WS-security as covered by the core specification. * Implement HTTP Digest as covered by the core specification.
For details, refer to the ONVIF Core Specification: https://www.onvif.org/specs/core/ONVIF-Core-Specification.pdf
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-user-authentication/#ws-usernametoken","title":"WS-Usernametoken","text":"When the Onvif camera requires authentication through WS-UsernameToken, the device service must set user information with the appropriate privileges in WS-UsernameToken.
This use case contains an example of setting that user information using GetHostname.
WS-UsernameToken requires the following parameters: * Username \u2013 The user name for a certified user. * Password \u2013 The password for a certified user. According to the ONVIF specification, Password should not be set in plain text. Setting a password generates PasswordDigest, a digest that is calculated according to an algorithm defined in the specification for WS-UsernameToken: Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) ) * Nonce \u2013 A random string generated by a client. * Created \u2013 The UTC Time when the request is made.
For example:
curl --request POST 'http://192.168.56.101:10000/onvif/device_service' \\\n--header 'Content-Type: application/soap+xml' \\\n-d '<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n <soap-env:Envelope xmlns:soap-env=\"http://www.w3.org/2003/05/soap-envelope\" ...>\n <soap-env:Header>\n <Security xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\">\n <UsernameToken>\n <Username>administrator</Username>\n <Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest\">\n +HKcvc+LCGClVwuros1sJuXepQY=\n </Password>\n <Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">\n w490bn6rlib33d5rb8t6ulnqlmz9h43m\n </Nonce>\n <Created xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">\n 2021-10-21T03:43:21.02075Z\n </Created>\n </UsernameToken>\n </Security>\n </soap-env:Header>\n <soap-env:Body>\n <tds:GetHostname>\n </tds:GetHostname>\n </soap-env:Body>\n </soap-env:Envelope>'\n
For details, refer to the ONVIF Application Programmer's Guide: https://www.onvif.org/wp-content/uploads/2016/12/ONVIF_WG-APG-Application_Programmers_Guide-1.pdf
You can inspect the request with a network tool like Wireshark:
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-user-authentication/#http-digest","title":"HTTP Digest","text":"The Digest scheme is based on a simple challenge-response paradigm and the spec can refer to https://datatracker.ietf.org/doc/html/rfc2617#page-6
The authentication flow can be illustrated as follows: 1. The device service sends the request without an acceptable Authorization header. 2. The Onvif camera returns a response with a \"401 Unauthorized\" status code and a WWW-Authenticate header. - The WWW-Authenticate header contains the required data: - qop: indicates what \"quality of protection\" the client has applied to the message. - nonce: a server-specified data string which should be uniquely generated each time a 401 response is made. The Onvif camera can limit the duration of the nonce's validity. - realm: the name of the host performing the authentication. - The device service then includes the qop, nonce and realm in the header of its next request. 3. The device service sends the request again, and the Authorization header must contain: - qop: retrieved from the previous response - nonce: retrieved from the previous response - realm: retrieved from the previous response - username: the user's name in the specified realm - uri: the request URI - nc: the hexadecimal count of the number of requests (including the current request) that the client has sent with the nonce value in this request - cnonce: a random string generated by the client - response: a string of 32 hex digits, computed as defined below, which proves that the user knows the password: - response = MD5(hash1:nonce:nc:cnonce:qop:hash2) - hash1 = MD5(username:realm:password) - hash2 = MD5(POST:uri)
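For illustration, here is a minimal Go sketch of computing the Digest response value from the formula above with qop set to auth (the parameter values are hypothetical placeholders, not taken from a real exchange):
package main\n\nimport (\n\"crypto/md5\"\n\"encoding/hex\"\n\"fmt\"\n)\n\n// md5Hex returns the lowercase hex MD5 digest of s.\nfunc md5Hex(s string) string {\nsum := md5.Sum([]byte(s))\nreturn hex.EncodeToString(sum[:])\n}\n\nfunc main() {\n// Hypothetical values; a real client takes realm, nonce and qop from the 401 response.\nusername, realm, password := \"administrator\", \"IP Camera\", \"Password1!\"\nmethod, uri := \"POST\", \"/onvif/device_service\"\nnonce, nc, cnonce, qop := \"5b614f6ee8254fa1\", \"00000001\", \"0a4f113b\", \"auth\"\n\nhash1 := md5Hex(username + \":\" + realm + \":\" + password)\nhash2 := md5Hex(method + \":\" + uri)\nresponse := md5Hex(hash1 + \":\" + nonce + \":\" + nc + \":\" + cnonce + \":\" + qop + \":\" + hash2)\nfmt.Println(\"response:\", response)\n}\n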
Inspect the request with Wireshark:
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/reboot-needed/","title":"RebootNeeded","text":"Warning
Information in this page may be outdated.
Currently, only the SetNetworkInterfaces function returns the RebootNeeded value. If RebootNeeded is true, the user needs to reboot the camera to apply the config changes.
Since the Set command can't return the RebootNeeded value in the command response, the device service stores the value, and the user can then check it through the custom EdgeX device resource RebootNeeded.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/reboot-needed/#how-does-the-rebootneeded-work-with-edgex","title":"How does the RebootNeeded work with EdgeX?","text":"curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/NetworkInterfaces' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"NetworkInterfaces\": {\n \"InterfaceToken\": \"eth0\",\n \"NetworkInterface\": {\n \"Enabled\": true,\n \"IPv4\": {\n \"DHCP\": true\n }\n } \n }\n}'\n
Check the RebootNeeded value:
$ curl 'http://0.0.0.0:59882/api/v3/device/name/Camera001/RebootNeeded' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera001\",\n \"id\" : \"e370bbb5-55d2-4392-84ca-8d9e7f097dae\",\n \"origin\" : 1635750695886624000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera001\",\n \"id\" : \"abd5c555-ef7d-44a7-9273-c1dbb4d14de2\",\n \"origin\" : 1635750695886624000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"RebootNeeded\",\n \"value\" : \"true\",\n \"valueType\" : \"Bool\"\n}\n],\n \"sourceName\" : \"RebootNeeded\"\n},\n \"statusCode\" : 200\n}\n
The RebootNeeded value is true, which indicates the camera should reboot to apply the necessary changes. Reboot the camera to apply the change:
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/SystemReboot' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"SystemReboot\": {}\n}'\n
Check The RebootNeeded value:
$ curl 'http://0.0.0.0:59882/api/v3/device/name/Camera001/RebootNeeded' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera001\",\n \"id\" : \"53585696-ec1a-4ac7-9a42-7d480c0a75d9\",\n \"origin\" : 1635750854455262000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera001\",\n \"id\" : \"87819d3a-25d0-4313-b69a-54c4a0c389ed\",\n \"origin\" : 1635750854455262000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"RebootNeeded\",\n \"value\" : \"false\",\n \"valueType\" : \"Bool\"\n}\n],\n \"sourceName\" : \"RebootNeeded\"\n},\n \"statusCode\" : 200\n}\n
Because of the reboot, RebootNeeded is now false
. This section describes how to test with the Postman REST client tool.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#test-onvif-api","title":"Test ONVIF API","text":"Before using device-onvif-camera
, the user can verify the camera's functionality via ONVIF APIs. We provide the following collections for testing: - Capabilities - Auto Discovery - Network Configuration - System Function - User Handling - Metadata Configuration - Video Streaming - Video Encoder Configuration - PTZ - Event Handling - Analytics
Download and import the following JSON files into Postman REST client tool: - onvif_camera_without_edgex_postman_collection.json - onvif_camera_without_edgex_postman_environment.json
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#set-up-the-authentication-for-onvif-security","title":"Set Up the Authentication for ONVIF security","text":"Replace the following onvif environment variable
on the Postman REST client. - WS_USERNAME - The username for a certified user - WS_NONCE - A random, unique number generated by a client - WS_UTC_TIME - The UtcTime when the request is made. - WS_PASSWORD_DIGEST - a digest that is calculated according to an algorithm defined in the specification for WS-UsernameToken: Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) )
According to the ONVIF spec and programmer guide, the client needs to provide the password digest for WS-UsernameToken. For example, we can generate the password digest in golang:
package main\n\nimport (\n\"crypto/sha1\"\n\"encoding/base64\"\n\"fmt\"\n)\n\nfunc main() {\nnonce := \"abcd\"\npassword := \"Password1!\"\ncreated := \"2022-06-06T12:26:37.769698Z\"\npasswordDigest := generatePasswordDigest(nonce, created, password)\n\nfmt.Println(\"Nonce:\", nonce)\nfmt.Println(\"Created:\", created)\nfmt.Println(\"PasswordDigest:\", passwordDigest)\n}\n\n//Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) )\nfunc generatePasswordDigest(Nonce string, Created string, Password string) string {\nsDec, _ := base64.StdEncoding.DecodeString(Nonce)\nhasher := sha1.New()\nhasher.Write([]byte(string(sDec) + Created + Password))\nreturn base64.StdEncoding.EncodeToString(hasher.Sum(nil))\n}\n
The runnable code: https://go.dev/play/p/ZnE2nZYorg9"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#set-up-the-api-endpoint","title":"Set Up the API Endpoint","text":"Generally, the device web service endpoint is http://${address}:${port}/onvif/device_service. We can then use the GetCapabilities
ONVIF function to query the other web services' endpoints:
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<env:Envelope ...>\n<env:Body>\n<tds:GetCapabilitiesResponse>\n<tds:Capabilities>\n<tt:Device>\n<tt:XAddr>http://192.168.12.123/onvif/device_service</tt:XAddr>\n...\n </tt:Device>\n<tt:Events>\n<tt:XAddr>http://192.168.12.123/onvif/Events</tt:XAddr>\n...\n </tt:Events>\n...\n </tds:GetCapabilitiesResponse>\n</env:Body>\n</env:Envelope>\n
And we should replace the following onvif environment variable
on the Postman REST client. - DEVICE_ENDPOINT - device web service endpoint - MEDIA_ENDPOINT - media web service endpoint - EVENT_ENDPOINT - event web service endpoint - PTZ_ENDPOINT - ptz web service endpoint
Then we can execute other ONVIF function via Postman REST client tool.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#test-device-onvif-camera-api","title":"Test device-onvif-camera API","text":"After adding the device according to the Getting Started Guide, then we can import the following Postman collections for testing the APIs: - onvif_camera_with_edgex_postman_collection.json - onvif_camera_with_edgex_postman_environment.json
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/","title":"Utility Scripts","text":"Note
If running EdgeX in Secure Mode, you will need a Consul ACL Token and JWT Token in order to use these scripts.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#use-cases","title":"Use Cases","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#create-new-credentials-and-assign-mac-addresses","title":"Create new credentials and assign MAC Addresses","text":"bin/map-credentials.sh
(Create New)
Note
Currently EdgeX is unable to provide a way to query the names of existing secrets from the secret store, so this method only works with credentials which have a key in the CredentialsMap. If the credentials were added via these utility scripts, a placeholder key was added for you to the CredentialsMap.
bin/map-credentials.sh
bin/edit-credentials.sh
Warning
This will modify the username/password for ALL devices using these credentials. Proceed with caution!
bin/query-mappings.sh
Output will look something like this:
Credentials Map:\n mycreds = 'aa:bb:cc:dd:ee:ff'\n mycreds2 = ''\n simcreds = 'cb:4f:86:30:ef:19,87:52:89:4d:66:4d,f0:27:d2:e8:9e:e1,9d:97:d9:d8:07:4b,99:70:6d:f5:c2:16'\n tapocreds = '10:27:F5:EA:88:F3'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#configure-discoverysubnets","title":"Configure DiscoverySubnets","text":"bin/configure-subnets.sh
bin/configure-subnets.sh [-s/--secure-mode] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about","title":"About","text":"The purpose of this script is to make it easier for an end user to configure Onvif device discovery without the need to have knowledge about subnets and/or CIDR format. The DiscoverySubnets
config option defaults to blank in the configuration.yaml
file, and needs to be provided before a discovery can occur. This allows the device-onvif-camera device service to be run in a NAT-ed environment without host-mode networking, because the subnet information is user-provided and does not rely on device-onvif-camera
to detect it.
This script finds the active subnet for any and all network interfaces that are on the machine which are physical (non-virtual) and online (up). It uses this information to automatically fill out the DiscoverySubnets
configuration option through Consul of a deployed device-onvif-camera
instance.
bin/edit-credentials.sh [-s/--secure-mode] [-u <username>] [-p <password>] [--auth-mode {usernametoken|digest|both}] [-P secret-name] [-M mac-addresses] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about_1","title":"About","text":"The purpose of this script is to allow end-users to modify credentials either through EdgeX InsecureSecrets via Consul, or EdgeX Secrets via the device service.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#map-credentialssh","title":"map-credentials.sh","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#usage_2","title":"Usage","text":"bin/map-credentials.sh [-s/--secure-mode] [-u <username>] [-p <password>] [--auth-mode {usernametoken|digest|both}] [-P secret-name] [-M mac-addresses] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about_2","title":"About","text":"The purpose of this script is to allow end-users to add credentials either through EdgeX InsecureSecrets via Consul, or EdgeX Secrets via the device service. It then allows the end-user to add a list of MAC Addresses to map to those credentials via Consul.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#query-mappingssh","title":"query-mappings.sh","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#usage_3","title":"Usage","text":"bin/query-mappings.sh [-s/--secure-mode] [-u <username>] [-p <password>] [--auth-mode {usernametoken|digest|both}] [-P secret-name] [-M mac-addresses] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about_3","title":"About","text":"The purpose of this script is to allow end-users to see what MAC Addresses are mapped to what credentials.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ws-discovery/","title":"How does WS-Discovery work?","text":"ONVIF devices support WS-Discovery, which is a mechanism that supports probing a network to find ONVIF capable devices.
Probe messages are sent over UDP to a standardized multicast address and UDP port number.
WS-Discovery is generally faster than netscan becuase it only sends out one broadcast signal. However, it is normally limited by the network segmentation since the multicast packages typically do not traverse routers.
Example: 1. The client sends Probe message to find Onvif camera on the network.
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<soap-env:Envelope\nxmlns:soap-env=\"http://www.w3.org/2003/05/soap-envelope\"\nxmlns:soap-enc=\"http://www.w3.org/2003/05/soap-encoding\"\nxmlns:a=\"http://schemas.xmlsoap.org/ws/2004/08/addressing\">\n<soap-env:Header>\n<a:Action mustUnderstand=\"1\">http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</a:Action>\n<a:MessageID>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</a:MessageID>\n<a:ReplyTo>\n<a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>\n</a:ReplyTo>\n<a:To mustUnderstand=\"1\">urn:schemas-xmlsoap-org:ws:2005:04:discovery</a:To>\n</soap-env:Header>\n<soap-env:Body>\n<Probe\nxmlns=\"http://schemas.xmlsoap.org/ws/2005/04/discovery\"/>\n</soap-env:Body>\n</soap-env:Envelope>\n
The Onvif camera responds the Hello message according to the Probe message > The Hello message from HIKVISION
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<env:Envelope\nxmlns:env=\"http://www.w3.org/2003/05/soap-envelope\"\n...>\n<env:Header>\n<wsadis:MessageID>urn:uuid:cea94000-fb96-11b3-8260-686dbc5cb15d</wsadis:MessageID>\n<wsadis:RelatesTo>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</wsadis:RelatesTo>\n<wsadis:To>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsadis:To>\n<wsadis:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/ProbeMatches</wsadis:Action>\n<d:AppSequence InstanceId=\"1637072188\" MessageNumber=\"17\"/>\n</env:Header>\n<env:Body>\n<d:ProbeMatches>\n<d:ProbeMatch>\n<wsadis:EndpointReference>\n<wsadis:Address>urn:uuid:cea94000-fb96-11b3-8260-686dbc5cb15d</wsadis:Address>\n</wsadis:EndpointReference>\n<d:Types>dn:NetworkVideoTransmitter tds:Device</d:Types>\n<d:Scopes>onvif://www.onvif.org/type/video_encoder onvif://www.onvif.org/Profile/Streaming onvif://www.onvif.org/MAC/68:6d:bc:5c:b1:5d onvif://www.onvif.org/hardware/DFI6256TE http:123</d:Scopes>\n<d:XAddrs>http://192.168.12.123/onvif/device_service</d:XAddrs>\n<d:MetadataVersion>10</d:MetadataVersion>\n</d:ProbeMatch>\n</d:ProbeMatches>\n</env:Body>\n</env:Envelope>\n
The Hello message from Tapo C200
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<SOAP-ENV:Envelope\nxmlns:SOAP-ENV=\"http://www.w3.org/2003/05/soap-envelope\"\n...>\n<SOAP-ENV:Header>\n<wsa:MessageID>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</wsa:MessageID>\n<wsa:RelatesTo>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</wsa:RelatesTo>\n<wsa:ReplyTo SOAP-ENV:mustUnderstand=\"true\">\n<wsa:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsa:Address>\n</wsa:ReplyTo>\n<wsa:To SOAP-ENV:mustUnderstand=\"true\">urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>\n<wsa:Action SOAP-ENV:mustUnderstand=\"true\">http://schemas.xmlsoap.org/ws/2005/04/discovery/ProbeMatches</wsa:Action>\n</SOAP-ENV:Header>\n<SOAP-ENV:Body>\n<wsdd:ProbeMatches>\n<wsdd:ProbeMatch>\n<wsa:EndpointReference>\n<wsa:Address>uuid:3fa1fe68-b915-4053-a3e1-c006c3afec0e</wsa:Address>\n<wsa:ReferenceProperties></wsa:ReferenceProperties>\n<wsa:PortType>ttl</wsa:PortType>\n</wsa:EndpointReference>\n<wsdd:Types>tdn:NetworkVideoTransmitter</wsdd:Types>\n<wsdd:Scopes>onvif://www.onvif.org/name/TP-IPC onvif://www.onvif.org/hardware/MODEL onvif://www.onvif.org/Profile/Streaming onvif://www.onvif.org/location/ShenZhen onvif://www.onvif.org/type/NetworkVideoTransmitter </wsdd:Scopes>\n<wsdd:XAddrs>http://192.168.12.128:2020/onvif/device_service</wsdd:XAddrs>\n<wsdd:MetadataVersion>1</wsdd:MetadataVersion>\n</wsdd:ProbeMatch>\n</wsdd:ProbeMatches>\n</SOAP-ENV:Body>\n</SOAP-ENV:Envelope>\n
The USB Device Service is a microservice created to address the lack of standardization and automation of camera discovery and onboarding. EdgeX Foundry is a flexible microservice-based architecture created to promote the interoperability of multiple device interface combinations at the edge. In an EdgeX deployment, the USB Device Service controls and communicates with USB cameras, while EdgeX Foundry presents a standard interface to application developers. With normalized connectivity protocols and a vendor-neutral architecture, EdgeX paired with USB Camera Device Service, simplifies deployment of edge camera devices.
Specifically, the device service uses V4L2 API to get camera metadata, FFmpeg framework to capture video frames and stream them to an RTSP server, which is embedded in the dockerized device service. This allows the video stream to be integrated into the larger architecture.
Use the USB Device Service to streamline and scale your edge camera device deployment.
"},{"location":"microservices/device/services/device-usb-camera/General/#how-it-works","title":"How It Works","text":"The figure below illustrates the software flow through the architecture components.
Figure 1: Software Flow
Core Metadata
(Flow summary from the figure: discovered devices are added to Core Metadata; the service then queries Core Metadata for devices and associated configuration; and the Video Analytics Pipeline is initiated through an HTTP POST request.)
"},{"location":"microservices/device/services/device-usb-camera/General/#references","title":"References","text":"Apache-2.0
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/USB-protocol/","title":"USB Camera Device Service Specifications","text":""},{"location":"microservices/device/services/device-usb-camera/supplementary-info/USB-protocol/#usb-protocol-properties","title":"USB Protocol Properties","text":"Property Description EdgeX Value Type Path Specifies the internal /dev/video path for the camera device. DEPRECATED: Path will be removed in the next major release, use Paths. String Paths A list of internal /dev/video paths for the camera device. This list includes all streaming capable video paths for each device. Object SerialNumber The serial number of the camera device. String CardName The manufacturer specified name of the camera device. String AutoStreaming A value indicating if the device should automatically start streaming. String"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/","title":"Advanced Options","text":""},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#rtsp-authentication","title":"RTSP Authentication","text":"The device service allows for rtsp stream authentication using the rtsp-simple-server. Authentication is enabled by default.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#secret-configuration","title":"Secret Configuration","text":"To configure the username and password for rtsp authentication when building your own images, edit the fields in the 'configuration.yaml'.
Note
This should only be used when you are in non-secure mode.
Warning
Be careful when storing any potentially important information in cleartext on files in your computer. In this case, the credentials for the stream are stored in cleartext in the configuration.yaml
file on your system. InsecureSecrets
is for non-production use only.
Note
Leaving the fields blank will NOT disable authentication. The stream will not be able to be authenticated until credentials are provided.
Snippet from configuration.yaml
...\nWritable:\nLogLevel: \"INFO\"\nInsecureSecrets:\nrtspauth:\nSecretName: rtspauth\nSecretData:\nusername: \"<enter-username>\"\npassword: \"<enter-password>\"\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#authentication-server-configuration","title":"Authentication Server Configuration","text":"externalAuthenticationURL line from the Dockerfile
RUN sed -i 's,externalAuthenticationURL:,externalAuthenticationURL: http://localhost:8000/rtspauth,g' rtsp-simple-server.yml\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#set-device-configuration-parameters","title":"Set Device Configuration Parameters","text":""},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#set-frame-rate","title":"Set frame rate","text":"This command sets the frame rate for the capture device.
Before setting the frame rate first execute the DataFormat
command to see the available frame rates of a device for any of its video streaming path or stream format:
Example DataFormat Command with PathIndex
query parameter
curl http://localhost:59882/api/v3/device/name/<device name>/DataFormat?PathIndex=<path_index>\n
OR
Example DataFormat Command with StreamFormat
query parameter
curl http://localhost:59882/api/v3/device/name/<device name>/DataFormat?StreamFormat=<stream_format>\n
Note
The PathIndex
refers to the index of the device video streaming path from the path list. For example if a usb device has one video streaming path such as /dev/video0 the PathIndex
value will be 0. In case of Intel\u2122 RealSense\u00ae cameras there are three video streaming paths, hence the user will have 3 options for PathIndex
which are 0, 1 and 2. The default value is 0 if no PathIndex
input is provided. StreamFormat
refers to different video streaming formats and the formats currently supported by the service are RGB
, Depth
or Greyscale
.
Example DataFormat Response
{\n\"apiVersion\": \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\": \"v3\",\n\"id\": \"bf48b7c6-5e94-4831-a7ba-cea4e9773ae1\",\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"profileName\": \"USB-Camera-General\",\n\"sourceName\": \"DataFormat\",\n\"origin\": 1689621129335558590,\n\"readings\": [\n{\n\"id\": \"7f4918ca-31c9-4bcf-9490-a328eb62beab\",\n\"origin\": 1689621129335558590,\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"resourceName\": \"DataFormat\",\n\"profileName\": \"USB-Camera-General\",\n\"valueType\": \"Object\",\n\"value\": \"\",\n\"objectValue\": {\n\"/dev/video6\": {\n\"BytesPerLine\": 1280,\n\"Colorspace\": \"sRGB\",\n\"Field\": \"none\",\n\"FrameRates\": [\n{\n\"Denominator\": 1,\n\"Numerator\": 30\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 24\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 20\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 15\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 10\n},\n{\n\"Denominator\": 2,\n\"Numerator\": 15\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 5\n}\n],\n\"Height\": 480,\n\"PixelFormat\": \"YUYV 4:2:2\",\n\"Quantization\": \"Limited range\",\n\"SizeImage\": 614400,\n\"Width\": 640,\n\"XferFunc\": \"Rec. 709\",\n\"YcbcrEnc\": \"ITU-R 601\"\n}\n}\n]\n}\n}\n
Use one of the supported FrameRates
value from the previous command output to set the frame rate based on PathIndex
or StreamFormat
.
Example Set FrameRate Command
curl -X PUT -d '{\n \"FrameRate\": {\n \"FrameRateValueDenominator\": \"1\"\n \"FrameRateValueNumerator\": \"10\",\n }\n }' http://localhost:59882/api/v3/device/name/<device name>/FrameRate?PathIndex=<path_index>\n
Example Set FrameRate Response
{\n\"apiVersion\": \"v3\",\n \"statusCode\": 200\n}
The newly set framerate can be verified using a GET request:
Example Get FrameRate command
curl -X GET http://localhost:59882/api/v3/device/name/<device name>/FrameRate?PathIndex=<path_index>\n
Example Get FrameRate response
{\n\"apiVersion\": \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\": \"v3\",\n\"id\": \"8ee12059-fed6-401c-b268-992fede19840\",\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"profileName\": \"USB-Camera-General\",\n\"sourceName\": \"FrameRate\",\n\"origin\": 1692730015347762386,\n\"readings\": [{\n\"id\": \"b991d703-b7ac-4139-a598-87e0f190d617\",\n\"origin\": 1692730015347762386,\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"resourceName\": \"FrameRate\",\n\"profileName\": \"USB-Camera-General\",\n\"valueType\": \"Object\",\n\"value\": \"\",\n\"objectValue\": {\n\"/dev/video6\": {\n\"Denominator\": 1,\n\"Numerator\": 10\n}\n}\n}]\n}\n}\n
Warning
3rd party applications such as vlc or ffplay may overwrite your chosen frame rate value, so make sure to keep that in mind when using other applications.
This command sets the desired pixel format for the capture device.
Before setting the pixel format the ImageFormats
command can be executed to see the available pixel formats for a camera for any of its video streaming path or stream format (RGB, Greyscale or Depth)
Example Get ImageFormats Command
curl -X GET http://localhost:59882/api/v3/device/name/<device name>/ImageFormats?PathIndex=<path_index>\n
Use one of the supported PixelFormat
values to set the pixel format based on PathIndex
or StreamFormat
.
Note
PixelFormat
has to be specified in the set request with a specific code which is acceptable by the v4l2 driver. This service currently supports the formats whose codes are YUYV
,GREY
,MJPG
,Z16
,RGB
,JPEG
,MPEG
,H264
,MPEG4
,UYVY
,BYR2
,Y8I
,Y12I
. Refer to V4l2 Image Formats for more info. The service only supports setting of height, width or pixel format.
Example Set PixelFormat Command
curl -X PUT -d '{\n \"PixelFormat\": {\n \"Width\":\"640\",\n \"Height\":\"480\",\n \"PixelFormat\": \"YUYV\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/PixelFormat?PathIndex=<path_index>\n
Example Set PixelFormat Response
{\n\"apiVersion\": \"v3\",\n \"statusCode\": 200\n}\n
The newly set pixel format can be verified using a GET request:
Example Get PixelFormat command
curl -X GET http://localhost:59882/api/v3/device/name/<device name>/PixelFormat?PathIndex=<path_index>\n
Example Get PixelFormat Response
{\n\"apiVersion\": \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\": \"v3\",\n\"id\": \"03cc2182-6a48-4869-ac00-52f968850452\",\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"profileName\": \"USB-Camera-General\",\n\"sourceName\": \"PixelFormat\",\n\"origin\": 1692728351448270645,\n\"readings\": [\n{\n\"id\": \"ded64ad7-955a-4979-9acd-ff5f1cbc9e9c\",\n\"origin\": 1692728351448270645,\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"resourceName\": \"PixelFormat\",\n\"profileName\": \"USB-Camera-General\",\n\"valueType\": \"Object\",\n\"value\": \"\",\n\"objectValue\": {\n\"BytesPerLine\": 1280,\n\"Colorspace\": \"sRGB\",\n\"Field\": \"none\",\n\"Flags\": 0,\n\"HSVEnc\": \"Default\",\n\"Height\": 480,\n\"PixelFormat\": \"YUYV 4:2:2\",\n\"Priv\": 4276996862,\n\"Quantization\": \"Default\",\n\"SizeImage\": 614400,\n\"Width\": 640,\n\"XferFunc\": \"Default\",\n\"YcbcrEnc\": \"Default\"\n}\n}\n]\n}\n}\n
There are two types of options:
Input
prefix are used for the camera, such as specifying the image size and pixel format. Output
prefix are used for the output video, such as specifying aspect ratio and quality. These options can be passed in through object value when calling the StartStreaming
command.
Query parameter: - device name
: The name of the camera
Example StartStreaming Command
curl -X PUT -d '{\n \"StartStreaming\": {\n \"InputImageSize\": \"640x480\",\n \"OutputVideoQuality\": \"5\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/StartStreaming\n
Supported Input options:
InputFps
: Ignore original timestamps and instead generate timestamps assuming constant frame rate fps. (default - same as source) InputImageSize
: Specifies the image size of the camera. The format is wxh
, for example \"640x480\". (default - automatically selected by FFmpeg) InputPixelFormat
: Set the preferred pixel format (for raw video). (default - automatically selected by FFmpeg)Supported Output options:
OutputFrames
: Set the number of video frames to output. (default - no limitation on frames) OutputFps
: Duplicate or drop input frames to achieve constant output frame rate fps. (default - same as InputFps) OutputImageSize
: Performs image rescaling. The format is wxh
, for example \"640x480\". (default - same as InputImageSize) OutputAspect
: Set the video display aspect ratio specified by aspect. For example \"4:3\", \"16:9\". (default - same as source) OutputVideoCodec
: Set the video codec. For example \"mpeg4\", \"h264\". (default - mpeg4) OutputVideoQuality
: Use fixed video quality level. Range is a integer number between 1 to 31, with 31 being the worst quality. (default - dynamically set by FFmpeg) You can also set default values for these options by adding additional attributes to the device resource StartStreaming
. The attribute name consists of a prefix \"default\" and the option name.
Snippet from device.yaml
deviceResources:\n- name: \"StartStreaming\"\ndescription: \"Start streaming process.\"\nattributes:\n{ command: \"VIDEO_START_STREAMING\",\n defaultInputFrameSize: \"320x240\",\n defaultOutputVideoQuality: \"31\"\n}\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Note
It's NOT recommended to set default video options in the 'cmd/res/profiles/general.usb.camera.yaml' as they may not be supported by every camera.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#keep-the-paths-of-existing-cameras-up-to-date","title":"Keep the paths of existing cameras up to date","text":"The paths (/dev/video*) of the connected cameras may change whenever the cameras are re-connected or the system restarts. To ensure the paths of the existing cameras are up to date, the device service scans all the existing cameras to check whether their serial numbers match the connected cameras. If there is a mismatch between them, the device service will scan all paths to find the matching device and update the existing device with the correct path.
This check can also be triggered by using the Device Service API /refreshdevicepaths
.
curl -X POST http://localhost:59983/api/v3/refreshdevicepaths\n
It's recommended to trigger a check after re-plugging cameras.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#configurable-rtsp-server-hostname-and-port","title":"Configurable RTSP server hostname and port","text":"Enable/Disable RTSP server and set hostname and port of the RTSP server to which the device service publishes video streams can be configured in the [Driver] section of the service configuration located in the cmd/res/configuration.yaml
file. RTSP server is enabled by default.
Snippet from configuration.yaml
Driver:\nEnableRtspServer: \"true\"\nRtspServerHostName: \"localhost\"\nRtspTcpPort: \"8554\"\nRtspAuthenticationServer: \"localhost:8000\"\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#camerastatus-command","title":"CameraStatus Command","text":"Use the following query to determine the status of the camera. URL parameter:
Example CameraStatus Command
curl -X GET http://localhost:59882/api/v3/device/name/<DeviceName>/CameraStatus?InputIndex=0 | jq -r '\"CameraStatus: \" + (.event.readings[].value|tostring)'\n
Example Output:
CameraStatus: 0\n
Response meanings:
Response Description 0 Ready 1 No Power 2 No Signal 3 No Color"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/","title":"Dynamic Discovery","text":"The device service supports dynamic discovery. During dynamic discovery, the device service scans all connected USB devices and sends the discovered cameras to Core Metadata. The device name of the camera discovered by the device service is comprised of Card Name and Serial Number, and the characters colon, space and dot will be replaced with underscores as they are invalid characters for device names in EdgeX. Take the camera Logitech C270 as an example, it's Card Name is \"C270 HD WEBCAM\" and the Serial Number is \"B1CF0E50\" hence the device name - \"C270_HD_WEBCAM-B1CF0E50\".
Note
Card Name and Serial number are used by the device service to uniquely identify a camera. Some manufactures, however, may not support unique serial numbers for their cameras. Please check with your camera manufacturer.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#dynamic-discovery-function","title":"Dynamic Discovery function","text":"Dynamic discovery is enabled by default to make setup easier. It can be disabled by changing the Enabled
option to false
as shown below.
Snippet from device.yaml
Device: ...\nDiscovery:\nEnabled: false\nInterval: \"1h\"\n
export DEVICE_DISCOVERY_ENABLED=false\nexport DEVICE_DISCOVERY_INTERVAL=1h\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#configure-discovery-interval","title":"Configure discovery interval","text":"configuration.yamlDocker / Env Vars Snippet from device.yaml
Device: ...\nDiscovery:\nEnabled: true\nInterval: \"1h\"\n
export DEVICE_DISCOVERY_ENABLED=true\nexport DEVICE_DISCOVERY_INTERVAL=1h\n
To manually trigger a Dynamic Discovery, use this device service API.
curl -X POST http://<service-host>:59983/api/v3/discovery\n
The interval value must be a Go duration.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#rediscovery","title":"Rediscovery","text":"The device service is able to rediscover and update devices that have been discovered previously. Nothing additional is needed to enable this. It will run whenever the discover call is sent, regardless of whether it is a manual or automated call to discover. The steps to configure discovery or to manually trigger discovery is explained here
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#configure-the-provision-watchers","title":"Configure the Provision Watchers","text":"Note
This section is for manually adding provision watchers, one is already added by default.
The provision watcher sets up parameters for EdgeX to automatically add devices to core-metadata. They can be configured to look for certain features, as well as block features. The default provision watcher is sufficient unless you plan on having multiple different cameras with different profiles and resources. Learn more about provision watchers here. The provision watchers are located at ./cmd/res/provision_watchers
.
Example Command
curl -X POST \\\n-d '[\n{\n \"provisionwatcher\":{\n \"apiVersion\" : \"v3\",\n \"name\":\"USB-Camera-Provision-Watcher\",\n \"adminState\":\"UNLOCKED\",\n \"identifiers\":{\n \"Path\": \".\"\n },\n \"serviceName\": \"device-usb-camera\",\n \"profileName\": \"USB-Camera-General\"\n },\n \"apiVersion\" : \"v3\"\n}\n]' http://localhost:59881/api/v3/provisionwatcher\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/","title":"Custom Build","text":""},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#get-the-device-usb-camera-source-code","title":"Get the Device USB Camera Source Code","text":"Change into the edgex directory:
cd ~/edgex\n
Clone the device-usb-camera repository:
git clone https://github.com/edgexfoundry/device-usb-camera.git\n
Checkout the latest release (main):
git checkout main\n
Each device resource should have a mandatory attribute named command
to indicate what action the device service should take for it.
Commands can be one of two types:
METADATA_
prefix are used to get camera metadata.Snippet from general.usb.device.yaml
deviceResources:\n- name: \"CameraInfo\"\ndescription: >-\nCamera information including driver name, device name, bus info, and capabilities.\nSee https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-querycap.html.\nattributes:\n{ command: \"METADATA_DEVICE_CAPABILITY\" }\nproperties:\nvalueType: \"Object\"\nreadWrite: \"R\"\n
VIDEO_
prefix are related to the video stream. Snippet from general.usb.device.yaml
deviceResources:\n- name: \"StreamURI\"\ndescription: \"Get video-streaming URI.\"\nattributes:\n{ command: \"VIDEO_STREAM_URI\" }\nproperties:\nvalueType: \"String\"\nreadWrite: \"R\"\n
For all supported commands, refer to the sample at cmd/res/profiles/general.usb.camera.yaml
.
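As a sketch of how these resources are used at runtime, a metadata resource such as CameraInfo can be read through the core command service once a device has been added (assuming the default core command port 59882 and the device name Camera001 used later in this walkthrough):
curl -s http://localhost:59882/api/v3/device/name/Camera001/CameraInfo | jq .\n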
Note
In general, this sample should be applicable to all types of USB cameras.
Note
You don't need to define device profile yourself unless you want to modify resource names or set default values for video options.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#define-the-device","title":"Define the device","text":"The device's protocol properties contain: * Path
is a file descriptor of the camera created by the OS. You can find the path of the connected USB camera through the v4l2-ctl utility. * AutoStreaming
indicates whether the device service should automatically start video streaming for cameras. Default value is false.
Snippet from general.usb.camera.yaml.example
deviceList:\n- name: \"example-camera\"\nprofileName: \"USB-Camera-General\"\ndescription: \"Example Camera\"\nlabels: [ \"device-usb-camera-example\", ]\nprotocols:\nUSB:\nPath: \"/dev/video0\"\nAutoStreaming: \"false\"\n
See the examples at cmd/res/devices
Note
When a new device is created in Core Metadata, a callback function of the device service will be called to add the device card name and serial number to protocol properties for identification purposes. These two pieces of information are obtained through V4L2
API and udev
utility.
Enable/Disable RTSP server and set hostname and port in the Driver
section of device-usb-camera/cmd/res/configuration.yaml
file. The default values can be used in this guide. The RtspAuthenticationServer value indicates the internal hostname and port on which the device service will listen for RTSP authentication requests. If this value is changed, you will also have to change the mediamtx configuration to point to the new hostname/port.
Snippet from configuration.yaml
Driver:\nEnableRtspServer: \"true\"\nRtspServerHostName: \"localhost\"\nRtspTcpPort: \"8554\"\nRtspAuthenticationServer: \"localhost:8000\"\n
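When running in Docker, these Driver values can also be overridden with environment variables instead of editing the file. The variable names below follow the usual EdgeX configuration override convention (section and key upper-cased and joined with an underscore) and are an illustrative assumption; verify them against your compose file:
export DRIVER_ENABLERTSPSERVER=true                       # assumed override name for Driver.EnableRtspServer\nexport DRIVER_RTSPSERVERHOSTNAME=localhost                # assumed override name for Driver.RtspServerHostName\nexport DRIVER_RTSPAUTHENTICATIONSERVER=localhost:8000     # assumed override name for Driver.RtspAuthenticationServer\n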
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#configure-rtsp-authentication","title":"Configure RTSP authentication","text":"Set the username and password
Snippet from configuration.yaml
...\nWritable:\nLogLevel: \"INFO\"\nInsecureSecrets:\nrtspauth:\nSecretName: rtspauth\nSecretData:\nusername: \"<set-username>\"\npassword: \"<set-password>\"\n
For more information on rtsp authentication, including how to disable it, see here
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#building-the-docker-image","title":"Building the docker image","text":"Change into newly created directory:
cd ~/edgex/device-usb-camera\n
Build the docker image of the device-usb-camera service:
make docker\n
[Optional] Build with NATS Messaging Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. This means that the published Docker image and Snaps do not include the NATS messaging capability. To build the docker image using NATS, run make docker-nats: make docker-nats\n
See Compose Builder nats-bus
option to generate a compose file for NATS and local dev images. Navigate to the EdgeX compose directory.
cd ~/edgex/edgex-compose/compose-builder\n
.env
file to add the registry and image version variable for device-usb-camera. Add the following registry and version information:
DEVICE_USBCAM_VERSION=0.0.0-dev\n
add-device-usb-camera.yml
to point to the local image:services:\ndevice-usb-camera:\n image: edgexfoundry/device-usb-camera${ARCH}:${DEVICE_USBCAM_VERSION}\n
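With the .env entry and image override in place, a compose file can be generated and started from the compose-builder directory. A sketch using the ds-usb-camera and no-secty options referenced elsewhere in this guide:
make gen ds-usb-camera no-secty   # generate a non-secure compose file that includes the USB camera service\nmake up                           # run the latest generated compose file\n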
Deploy the device service>
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/","title":"Deployment","text":"Follow this guide to deploy and run the service.
Docker / Native: Navigate to the EdgeX compose directory.
cd edgex-compose/compose-builder\n
Checkout the latest release (main):
git checkout main\n
Run EdgeX with the USB microservice in secure or non-secure mode:
Navigate to the EdgeX compose directory.
cd edgex-compose/compose-builder\n
Checkout the latest release (main):
git checkout main\n
Run EdgeX:
make run no-secty\n
Navigate out of the edgex-compose
directory to the device-usb-camera
directory:
cd device-usb-camera\n
Checkout the latest release (main):
git checkout main\n
Build the executable
make build\n
[Optional] Build with NATS Messaging Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. To build using NATS, run make build-nats:
make build-nats\n
Deploy the service
cd cmd && EDGEX_SECURITY_SECRET_STORE=false ./device-usb-camera\n
make run ds-usb-camera no-secty\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#secure-mode","title":"Secure mode","text":"Note
Recommended for secure and production level deployments.
make run ds-usb-camera\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#token-generation-secure-mode-only","title":"Token Generation (secure mode only)","text":"Note
Wait for some time for the services to be fully up before executing the next set of commands. Securely store the generated Consul ACL token and JWT token, which are needed to map credentials and execute APIs. It is not recommended to store these secrets in cleartext on your machine.
Note
The JWT token expires after 119 minutes, and you will need to generate a new one.
Generate the Consul ACL Token. Use the token generated anywhere you see <consul-token>
in the documentation.
make get-consul-acl-token\n
Example output: 12345678-abcd-1234-abcd-123456789abc\n
Generate the JWT Token. Use the token generated anywhere you see <jwt-token>
in the documentation.
make get-token\n
Example output: eyJhbGciOiJFUzM4NCIsImtpZCI6IjUyNzM1NWU4LTQ0OWYtNDhhZC05ZGIwLTM4NTJjOTYxMjA4ZiJ9.eyJhdWQiOiJlZGdleCIsImV4cCI6MTY4NDk2MDI0MSwiaWF0IjoxNjg0OTU2NjQxLCJpc3MiOiIvdjEvaWRlbnRpdHkvb2lkYyIsIm5hbWUiOiJlZGdleHVzZXIiLCJuYW1lc3BhY2UiOiJyb290Iiwic3ViIjoiMGRjNThlNDMtNzBlNS1kMzRjLWIxM2QtZTkxNDM2ODQ5NWU0In0.oa8Fac9aXPptVmHVZ2vjymG4pIvF9R9PIzHrT3dAU11fepRi_rm7tSeq_VvBUOFDT_JHwxDngK1VqBVLRoYWtGSA2ewFtFjEJRj-l83Vz33KySy0rHteJIgVFVi1V7q5
Note
Secrets such as passwords, certificates, and tokens in EdgeX are stored in a secret store, which is implemented using Vault, a product of HashiCorp. Vault supports security features that allow for the issuing of Consul tokens. The JWT token is required for the API Gateway, which is a trust boundary for EdgeX services. It allows external clients to be verified when issuing REST requests to the microservices. For more info, refer to Secure Consul, API Gateway and EdgeX Security.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#verify-service-device-profiles-and-device","title":"Verify Service, Device Profiles, and Device","text":"Check the status of the container:
docker ps -f name=device-usb-camera\n
The status column will indicate if the container is running and how long it has been up.
Example output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nf0a1c646f324 edgexfoundry/device-usb-camera:0.0.0-dev \"/docker-entrypoint.\u2026\" 26 hours ago Up 20 hours 127.0.0.1:8554->8554/tcp, 127.0.0.1:59983->59983/tcp edgex-device-usb-camera\n
Check whether the device service is added to EdgeX:
Note
If running in secure mode, all the API executions need the JWT token generated previously, e.g.
curl --location --request GET 'http://localhost:59881/api/v3/deviceservice/name/device-usb-camera' \\\n--header 'Authorization: Bearer <jwt-token>' \\\n--data-raw ''\n
curl -s http://localhost:59881/api/v3/deviceservice/name/device-usb-camera | jq .\n
Successful:
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"service\": {\n\"created\": 1658769423192,\n\"modified\": 1658872893286,\n\"id\": \"04470def-7b5b-4362-9958-bc5ff9f54f1e\",\n\"name\": \"device-usb-camera\",\n\"baseAddress\": \"http://edgex-device-usb-camera:59983\",\n\"adminState\": \"UNLOCKED\"\n}\n}\n
Unsuccessful: {\n\"apiVersion\" : \"v3\",\n\"message\": \"fail to query device service by name device-usb-camera\",\n\"statusCode\": 404\n}\n
Verify device(s) have been successfully added to core-metadata.
curl -s http://localhost:59881/api/v3/device/all | jq -r '\"deviceName: \" + '.devices[].name''\n
Example output:
deviceName: NexiGo_N930AF_FHD_Webcam_NexiG-20201217010\n
Note
The jq -r
option is used to reduce the size of the displayed response. The entire device with all information can be seen by removing -r '\"deviceName: \" + '.devices[].name'', and replacing it with '.'
Note
If running in secure mode this command needs the Consul ACL token generated previously.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-usb-camera?keys=true\"\n
Note
If you want to disable rtsp authentication entirely, you must build a custom image.
Non-secure Mode: Example credential command
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"rtspauth\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<pick-a-username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<pick-a-secure-password>\"\n }\n ]\n}' -X POST http://localhost:59983/api/v3/secret\n
Secure Mode: Navigate to the edgex-compose/compose-builder directory and generate a JWT token:
make get-token\n
Example credential command
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"rtspauth\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<pick-a-username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<pick-a-secure-password>\"\n }\n ]\n}' -H Authorization:Bearer \"<enter your JWT token here (make get-token)>\" -X POST http://localhost:59983/api/v3/secret\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#manage-devices","title":"Manage Devices","text":"Warning
This section only needs to be performed if discovery is disabled. Discovery is enabled by default.
Devices can be added to the service by defining them in a static configuration file, by discovering them dynamically, or via the REST API. For this example, the device will be added using the REST API.
Run the following command to determine the Path
to the usb camera for video streaming:
v4l2-ctl --list-devices\n
The output should look similar to this:
NexiGo N930AF FHD Webcam: NexiG (usb-0000:00:14.0-1):\n /dev/video6\n /dev/video7\n /dev/media2\n
For this example, the Path
is /dev/video6
.
Edit the information to appropriately match the camera. Find more information about the device protocol properties here.
Example Command
curl -X POST -H 'Content-Type: application/json' \\\nhttp://localhost:59881/api/v3/device \\\n-d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\": \"Camera001\",\n \"serviceName\": \"device-usb-camera\",\n \"profileName\": \"USB-Camera-General\",\n \"description\": \"My test camera\",\n \"adminState\": \"UNLOCKED\",\n \"operatingState\": \"UP\",\n \"protocols\": {\n \"USB\": {\n \"CardName\": \"NexiGo N930AF FHD Webcam: NexiG\",\n \"Paths\": [\"/dev/video6\",],\n \"AutoStreaming\": \"false\"\n }\n }\n }\n }\n]'\n
Example output:
[{\"apiVersion\" : \"v3\",\"statusCode\":201,\"id\":\"fb5fb7f2-768b-4298-a916-d4779523c6b5\"}]\n
Learn how to use the device service>
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/","title":"General Usage","text":"This document will describe how to execute some of the most important types of commands used with the device service.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#start-video-streaming","title":"Start Video Streaming","text":"Unless the device service is configured to stream video from the camera automatically, a StartStreaming
command must be sent to the device service.
Note
Streaming credentials for the rtsp stream must be added prior to starting the stream. Please refer to Deployment for additional information.
There are two types of options: - The options that start with Input
as a prefix are used for camera configuration, such as specifying the image size and pixel format. - The options that start with Output
as a prefix are used for video output configuration, such as specifying aspect ratio and quality.
These options can be passed in through Object value when calling StartStreaming.
Query parameter: - device name
: The name of the camera
Example StartStreaming Command
curl -X PUT -d '{\n \"StartStreaming\": {\n \"InputImageSize\": \"640x480\",\n \"OutputVideoQuality\": \"5\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/StartStreaming\n
Note
If running in secure mode, all the API executions (for this API and subsequent APIs) need the JWT token generated previously, e.g.
curl -X PUT -d '{\n \"StartStreaming\": {\n \"InputImageSize\": \"640x480\",\n \"OutputVideoQuality\": \"5\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/StartStreaming \\\n--header 'Authorization: Bearer <jwt-token>'\n
Example output:
{\"apiVersion\":\"v3\",\"statusCode\":200}\n
Supported Input options:
InputFps
: Ignore original timestamps and instead generate timestamps assuming constant frame rate fps. (default - same as source) InputImageSize
: Specifies the image size of the camera. The format is wxh
, for example \"640x480\". (default - automatically selected by FFmpeg) InputPixelFormat
: Set the preferred pixel format (for raw video). (default - automatically selected by FFmpeg) Supported Output options:
OutputFrames
: Set the number of video frames to output. (default - no limitation on frames) OutputFps
: Duplicate or drop input frames to achieve constant output frame rate fps. (default - same as InputFps) OutputImageSize
: Performs image rescaling. The format is wxh
, for example \"640x480\". (default - same as InputImageSize) OutputAspect
: Set the video display aspect ratio specified by aspect. For example \"4:3\", \"16:9\". (default - same as source) OutputVideoCodec
: Set the video codec. For example \"mpeg4\", \"h264\". (default - mpeg4) OutputVideoQuality
: Use fixed video quality level. Range is an integer between 1 and 31, with 31 being the worst quality. (default - dynamically set by FFmpeg) The device service provides a way to determine the stream URI of a camera.
Query parameter: - device name
: The name of the camera
Example StreamURI Command
curl -s http://localhost:59882/api/v3/device/name/<device name>/StreamURI | jq -r '\"StreamURI: \" + '.event.readings[].value''\n
Example output:
StreamURI: rtsp://localhost:8554/stream/NexiGo_N930AF_FHD_Webcam__NexiG-20201217010\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#play-the-rtsp-stream","title":"Play the RTSP stream.","text":"mplayer can be used to stream. The command follows this format:
mplayer rtsp://'<username>:<password>'@<IP address>:<port>/<streamname>\n
Using the streamURI
returned from the previous step, run mplayer:
Example Stream Command
mplayer rtsp://'admin:pass'@localhost:8554/stream/NexiGo_N930AF_FHD_Webcam__NexiG-20201217010\n
To shut down mplayer, use the ctrl-c command.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#stop-video-streaming","title":"Stop Video Streaming","text":"To stop the usb camera from live streaming, use the following command:
Query parameter: - device name
: The name of the camera
Example StopStreaming Command
curl -X PUT -d '{\n \"StopStreaming\": \"true\"\n}' http://localhost:59882/api/v3/device/name/<device name>/StopStreaming\n
Example output:
{\"apiVersion\":\"v3\",\"statusCode\":200}\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#optional-shutting-down","title":"Optional: Shutting Down","text":"To stop all EdgeX services (containers), execute the make down
command:
Navigate to the edgex-compose/compose-builder
directory.
cd ~/edgex/edgex-compose/compose-builder\n
Run this command
make down\n
To shut down and delete all volumes, run this command
Warning
This will delete all edgex-related data.
make clean\n
To verify the usb camera is set to stream video, use the command below
curl http://localhost:59882/api/v3/device/name/<device name>/StreamingStatus | jq -r '\"StreamingStatus: \" + (.event.readings[].objectValue.IsStreaming|tostring)'\n
If the StreamingStatus is false, the camera is not configured to stream video. Please try the Start Video Streaming section again here."},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#v4l2-error","title":"V4L2 error","text":"If you get an error like this:
.../go4vl@v0.0.2/v4l2/capability.go:48:33: could not determine kind of name for C.V4L2_CAP_IO_MC\n.../go4vl@v0.0.2/v4l2/capability.go:46:33: could not determine kind of name for C.V4L2_CAP_META_OUTPUT\n
You are missing the appropriate kernel headers needed by the github.com/vladimirvivien/go4vl
module. One possible solution is to manually download and install a more recent version of the libc-dev for your OS. In the case of Ubuntu 20.04, one is not available in the normal repositories, so you can get it via these steps:
wget https://launchpad.net/~canonical-kernel-team/+archive/ubuntu/bootstrap/+build/20950478/+files/linux-libc-dev_5.10.0-14.15_amd64.deb\nsudo dpkg -i linux-libc-dev_5.10.0-14.15_amd64.deb\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/","title":"Setup","text":"Follow this guide to set up your system to run the USB Device Service.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#system-requirements","title":"System Requirements","text":"The software has dependencies, including Git, Docker, Docker Compose, and assorted tools. Follow the instructions from the following link to install any dependency that are not already installed.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#install-git","title":"Install Git","text":"Install Git from the official repository as documented on the Git SCM site.
Update installation repositories:
sudo apt update\n
Add the Git repository:
sudo add-apt-repository ppa:git-core/ppa -y\n
Install Git:
sudo apt install git\n
Install Docker from the official repository as documented on the Docker site.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#verify-docker","title":"Verify Docker","text":"To enable running Docker commands without the preface of sudo, add the user to the Docker group. Then run Docker with the hello-world
test.
Create Docker group:
sudo groupadd docker\n
Note
If the group already exists, groupadd
outputs a message: groupadd: group docker already exists
. This is OK.
Add User to group:
sudo usermod -aG docker $USER\n
Please log out or reboot for the changes to take effect.
To verify the Docker installation, run hello-world
:
docker run hello-world\n
A Hello from Docker! greeting indicates successful installation.
Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n2db29710123e: Pull complete \nDigest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n...\n
Install Docker compose from the official repository as documented on the Docker Compose site.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#install-tools","title":"Install Tools","text":"Install the build, media streaming, and parsing tools:
sudo apt install build-essential jq curl v4l-utils mplayer\n
Note
The device service ONLY works on Linux with kernel v5.10 or higher.
The table below lists command line tools this guide uses to help with EdgeX configuration and device setup.
| Tool | Description | Note |
| --- | --- | --- |
| build-essential | Developer tools such as libc, gcc, g++ and make. | |
| jq | Parses the JSON object returned from the curl requests. | The jq command includes parameters that are used to parse and format data. In this tutorial, the jq command has been configured to return and format appropriate data for each curl command that is piped into it. |
| curl | Allows the user to connect to services such as EdgeX. | Use curl to get transfer information either to or from this service. In the tutorial, use curl to communicate with the EdgeX API. The call will return a JSON object. |
| v4l-utils | USB camera utility tools | This will be used to determine camera paths on the system for manual addition of cameras. |
| mplayer | Video player | Use this to view the camera stream. |
>Table 1: Command Line Tools"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#download-edgex-compose","title":"Download EdgeX Compose","text":"Clone the EdgeX compose repository:
git clone https://github.com/edgexfoundry/edgex-compose.git\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#proxy-setup-optional","title":"Proxy Setup (Optional)","text":"Note
These steps are only required if a proxy is present in the user environment.
Set up Docker Daemon or Docker Desktop to use the proxied environment.
Follow guide here for Docker Daemon proxy setup (Linux)
Follow guide here for Docker Desktop proxy setup (Windows)
Configuration file to set Docker Daemon proxy via daemon.json
{\n \"proxies\": {\n \"http-proxy\": \"http://proxy.example.com:3128\",\n \"https-proxy\": \"https://proxy.example.com:3129\",\n \"no-proxy\": \"*.test.example.com,.example.org,127.0.0.0/8\"\n }\n }\n
Note if building custom images
If building your own custom images, set environment variables for HTTP_PROXY, HTTPS_PROXY and NO_PROXY
Example
export HTTP_PROXY=http://proxy.example.com:3128\nexport HTTPS_PROXY=https://proxy.example.com:3129\nexport NO_PROXY=*.test.example.com,localhost,127.0.0.0/8\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#next-steps","title":"Next Steps","text":"Deploy the service with default images>
Warning
While not recommended, you can follow the process for manually building the images.
Build a custom image for the service>
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/","title":"Device Virtual","text":""},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#introduction","title":"Introduction","text":"The virtual device service simulates different kinds of devices to generate events and readings to the core data micro service, and users send commands and get responses through the command and control micro service. These features of the virtual device services are useful when executing functional or performance tests without having any real devices.
The virtual device service, built in Go and based on the device service Go SDK, can simulate sensors by generating data of the following data types:
By default, the virtual device service is included and configured to run with all EdgeX Docker Compose files. This allows users to have a complete EdgeX system up and running - with simulated data from the virtual device service - in minutes.
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#using-the-virtual-device-service","title":"Using the Virtual Device Service","text":"The virtual device service contains 4 pre-defined devices as random value generators:
These devices are created by the virtual device service in core metadata when the service first initializes. These devices are defined by device profiles that ship with the virtual device service. Each virtual device causes the generation of one to many values of the type specified by the device name. For example, Random-Integer-Device generates integer values: Int8, Int16, Int32 and Int64. As with all devices, the deviceResources in the associated device profile of the device defind what values are produced by the device service. In the case of Random-Integer-Device, the Int8, Int16, Int32 and Int64 values are defined as deviceResources (see the device profile).
Additionally, there is an accompanying deviceResource for each of the generated value deviceResource. Each deviceResources has an associated EnableRandomization_X deviceResource. In the case of the integer deviceResources above, there are the associated EnableRandomization_IntX deviceResources (see the device profile). The EnableRandomization deviceResources are boolean values, and when set to true, the associated simulated sensor value is generated by the device service. When the EnableRandomization_IntX value is set to false, then the associated simulator sensor value is fixed.
Info
The Enable_Randomization attribute of resource is automatically set to false when you use a PUT
command to set a specified generated value. Furtehr, the minimum and maximum values of generated value deviceResource can be specified in the device profile. Below, Int8 is set to be between -100 and 100.
deviceResources:\n-\nname: \"Int8\"\nisHidden: false\ndescription: \"Generate random int8 value\"\nproperties:\nvalueType: \"Int8\"\nreadWrite: \"RW\"\nminimum: \"-100\"\nmaximum: \"100\"\ndefaultValue: \"0\"\n
For the binary deviceResources, values are generated by the function rand.Read(p []byte) in Golang math package. The []byte size is fixed to MaxBinaryBytes/1000.
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#core-command-and-the-virtual-device-service","title":"Core Command and the Virtual Device Service","text":"Use the following core command service APIs to execute commands against the virtual device service for the specified devices. Both GET
and PUT
commands can be issued with these APIs. GET
command request the next generated value while PUT
commands will allow you to disable randomization (EnableRandomization) and set the fixed values to be returned by the device.
Note
Port 59882 is the default port for the core command service.
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services.
For each device, the virual device service will contain a DeviceList with associated Protocols and AutoEvents as shown by the example below.
DeviceListDeviceList/DeviceList.Protocols/DeviceList.Protocols.otherDeviceList/DeviceList.AutoEvents Property Example Value Description properties used in defining the static provisioning of each of the virtual devices Name 'Random-Integer-Device' name of the virtual device ProfileName 'Random-Integer-Device' device profile that defines the resources and commands of the virtual device Description 'Example of Device Virtual' description of the virtual device Labels ['device-virtual-example'] labels array used for searching for virtual devices Property Example Value Description Address 'device-virtual-int-01' address for the virtual device Protocol '300' Property Default Value Description properties used to define how often an event/reading is schedule for collection to send to core data from the virtual device Interval '15s' every 15 seconds OnChange false collect data regardless of change SourceName 'Int8' deviceResource to collect - in this case the Int8 resource"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#api-reference","title":"API Reference","text":"Device Service - SDK- API Reference
"},{"location":"microservices/general/","title":"Cross Cutting Concerns","text":""},{"location":"microservices/general/#event-tagging","title":"Event Tagging","text":"In an edge solution, it is likely that several instances of EdgeX are all sending edge data into a central location (enterprise system, cloud provider, etc.)
In these circumstances, it will be critical to associate the data to its origin. That origin could be specified by the GPS location of the sensor, the name or identification of the sensor, the name or identification of some edge gateway that originally collected the data, or many other means.
EdgeX provides the means to \u201ctag\u201d the event data from any point in the system. The Event object has a Tags
property which is a key/value pair map that allows any service that creates or otherwise handles events to add custom information to the Event in order to help identify its origin or otherwise label it before it is sent to the north side.
For example, a device service could populate the Tags
property with latitude and longitude key/value pairs of the physical location of the sensor when the Event is created to send sensed information to Core Data.
When the Event gets to the Application Service Configurable, for example, the service has an optional function (defined by Writable.Pipeline.Functions.AddTags
in configuration) that will add additional key/value pair to the Event Tags
. The key and value for the additional tag are provided in configuration (as shown by the example below). Multiple tags can be provide separated by commas.
AddTags:\nParameters:\ntags: \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\"\n
"},{"location":"microservices/general/#custom-application-service","title":"Custom Application Service","text":"In the case, of a custom application service, an AddTags function can be used to add a collection of specified tags to the Event's Tags collection (see Built in Transforms/Functions)
If the Event already has Tags
when it arrives at the application service, then configured tags will be added to the Tags
map. If the configured tags have the same key as an existing key in the Tags
map, then the configured key/value will override what is already in the Event Tags
map.
All services have the ability to collect Common Service Metrics, only Core Data, Application Services and Device Services are collecting additional service specific metrics. Additional service metrics will be added to all services in future releases. See Writable.Telemetry
at Common Configuration for details on configuring the reporting of service metrics.
See Custom Application Service Metrics for more detail on Application Services capability to collect their own custom service metrics via use of the App SDK API.
See Custom Device Service Metrics for more detail on Go Device Services capability to collect their own custom service metrics via use of the Go Device SDK API.
Each service defines (in code) a set of service metrics that it collects and optionally reports if configured. The names the service gives to its metrics are used in the service's Telemetry
configuration to enable/disable the reporting of those metrics. See Core Data's Writable.Telemetry
at Core Data Configuration as example of the names used for the service metrics that Core Data is currently collecting.
The following metric types are available to be used by the EdgeX services:
counter-count
gauge-value
gaugeFloat64-value
timer-count
, timer-min
, timer-max
, timer-mean
, timer-stddev
and timer-variance
histogram-count
, histogram-min
, histogram-max
, histogram-mean
, histogram-stddev
and histogram-variance
Service metrics which are enabled for reporting are published to the EdgeX MessageBug every configured interval using the configured Telemetry
base topic. See Writable.Telemetry
at Common Configuration for details on these configuration items. The service name
and the metric name
are added to the configured base topic. This allows subscribers to subscribe only for specific metrics or metrics from specific services. Each metric is published (reported) independently using the Metric DTO (Data Transfer Object) define in go-mod-core-contracts.
The aggregation of these service metrics is left to adopters to implement as best suits their deployment(s). This can be accomplished with a custom application service that sets the function pipeline Target Type
to the dtos.Metric
type. Then create a custom pipeline function which aggregates the metrics and provides them to the telemetry dashboard service of choice via push (export) or pull (custom GET endpoint). See App Services here for more details on Target Type
.
Example - DTO from Core Data in JSON format for the EventsPersisted
metric as publish to the EdgeX MessageBus
{\n\"apiVersion\" : \"v3\",\n\"name\": \"EventsPersisted\",\n\"fields\": [\n{\n\"name\": \"counter-count\",\n\"value\": 276\n}\n],\n\"tags\": [\n{\n\"name\": \"service\",\n\"value\": \"core-data\"\n}\n],\n\"timestamp\": 1650301898926166900\n}\n
Note
The service name is added to the tags for every metric reported from each service. Additional tags may be added via the service's Telemetry configuration. See the Writable.Telemetry
at Common Configuration for more details. A service may also add metric specific tags via code when it collects the individual metrics.
All services have the ability to collect the following common service metrics
EdgeX 3.1
Support for loading files from a remote location via URI is new in EdgeX 3.1.
Different files like configurations, units of measurements, device profiles, device definitions, and provision watchers can be loaded either from the local file system or from a remote location. For the remote location, HTTP and HTTPS URIs are supported. When using HTTPS, certificate validation is performed using the system's built-in trust anchors.
"},{"location":"microservices/general/#authentication","title":"Authentication","text":""},{"location":"microservices/general/#username-password-in-uri-not-recommended","title":"username-password in URI (not recommended)","text":"Users can specify the username-password (<username>:<password>@
) in the URI as plain text. This is ok network wise when using HTTPS, but if the credentials are specified in configuration or other service files, this is not a good practice to follow.
Example - configuration file with plain text username-password
in URI
[UoM]\n UoMFile = \"https://myuser:mypassword@example.com/uom.yaml\"\n
"},{"location":"microservices/general/#secure-credentials-preferred","title":"Secure Credentials (preferred)","text":"The edgexSecretName
query parameter can be specified in the URI as a secure way for users to specify credentials. When running in secure mode, this parameter specifies a Secret Name from the service's Secret Store where the credentials must be seeded. If insecure mode is running, edgexSecretName
must be specified in the InsecureSecrets section of the configuration.
Example - configuration file with edgexSecretName
query parameter
[UoM]\nUoMFile = \"https://example.com/uom.yaml?edgexSecretName=mySecretName\"\n
The authentication type and credentials are contained in the secret data specified by the Secret Name. Only httpheader
is currently supported. The headername
specifies the authentication method (ie Basic Auth, API-Key, Bearer)
Example - secret data using httpheader
type=httpheader\nheadername=<name>\nheadercontents=<contents>\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\n<name>: <contents>\n
"},{"location":"microservices/general/messagebus/","title":"EdgeX MessageBus","text":""},{"location":"microservices/general/messagebus/#introduction","title":"Introduction","text":"EdgeX has an internal message bus referred to as the EdgeX MessageBus , which is used for internal communications between EdgeX services. An EdgeX Service is any Core/Support/Application/Device Service from EdgeX or any custom Application or Device Service built with the EdgeX SDKs.
The following diagram shows how each of the EdgeX Service use the EdgeX MessageBus.
The EdgeX MessageBus is meant for internal EdgeX service to service communications. It is not meant as an entry point for external services to communicate with the internal EdgeX services. The eKuiper Rules Engine is an exception to this as it is tightly integrated with EdgeX.
The EdgeX services intended as external entry points are:
REST API on all the EdgeX services - Accessed directly in non-secure mode or via the API Gateway when running in secure mode
App Service using External MQTT Trigger - An App Service configured to use the External MQTT Trigger will accept data from external services on an \"external\" MQTT connection
App Service using HTTP Trigger - An App Service configured to use the HTTP Trigger will accept data from external services on an \"external\" REST connection. Accessed in the same manner as other EdgeX REST APIs.
App Service using Custom Trigger - An App Service configured to use a Custom Trigger can accept data from external services or over additional protocols with few limitations. See Custom Trigger Example for an example.
Core Command External MQTT Connection - Core Command now receives command requests and publishes responses via an external MQTT connection that is separate from the EdgeX MessageBus. The requests are forwarded to the EdgeX MessageBus and the corresponding responses are forwarded back to the external MQTT connection.
Originally, the EdgeX MessageBus was only used to send Event/Readings from Core Data to the Application Services layer. In recent releases, more services use the EdgeX MessageBus rather than REST for inter service communication.
All messages published to the EdgeX MessageBus are wrapped in a MessageEnvelope
. This envelope contains metadata describing the message payload, such as the payload Content Type (JSON or CBOR), Correlation Id, etc.
Note
Unless noted below, the MessageEnvelope
is JSON encoded when publishing it to the EdgeX MessageBus. This does result in the MessageEnvelope
's payload being double encoded.
The EdgeX MessageBus is defined by the message bus abstraction implemented in go-mod-messaging. This module defines an abstract client API which currently has four implementations of the API for the different underlying message bus protocols.
"},{"location":"microservices/general/messagebus/#common-messagebus-configuration","title":"Common MessageBus Configuration","text":"Each service that uses the EdgeX MessageBus has a configuration section which defines the implementation to use, the connection method, and the underlying protocol client. This section is the MessageBus:
section in the service common configuration for all EdgeX services. See the MessageBus tab in Common Configuration for more details.
The common MessageBus configuration elements for each implementation are:
Type=redis
Type=mqtt
Type=nats-core
Type=nats-jetstream
redis
for Redis Pub/Subtcp
for MQTT 3.1tcp
for NATS Coretcp
for NATS JetStreamNote
In general all EdgeX Services running in a deployment must be configured to use the same EdgeX MessageBus implementation. By default all services that use the EdgeX MessageBus are configured to use the Redis Pub/Sub implementation. NATS does support a compatibility mode with MQTT. See the NATS MQTT Mode section below for details.
"},{"location":"microservices/general/messagebus/#redis-pubsub","title":"Redis Pub/Sub","text":"As stated above this is the default implementation that all EdgeX Services are configured to use. It takes advantage of the existing Redis DB instance for the broker. Redis Pub/Sub is a fire and forget protocol, so delivery is not guaranteed. If more robustness is required, use the MQTT or NATS implementations.
"},{"location":"microservices/general/messagebus/#configuration","title":"Configuration","text":"See Common Configuration section above for the common configuration elements for all implementations.
"},{"location":"microservices/general/messagebus/#security-configuration","title":"Security Configuration","text":"Option Default Value Description AuthModeusernamepassword
Mode of authentication to use. Values are none
, usernamepassword
, clientcert
, or cacert
. In secure mode Redis Pub/Sub uses usernamepassword
SecretName redisb
Secret name used to look up credentials in the service's SecretStore"},{"location":"microservices/general/messagebus/#additional-configuration","title":"Additional Configuration","text":"This implementation does not have any additional configuration.
"},{"location":"microservices/general/messagebus/#mqtt-31","title":"MQTT 3.1","text":"Robust message bus protocol, which has additional configuration options for robustness and requires an additional MQTT Broker to be running. See MQTT Spec for more details on this protocol.
"},{"location":"microservices/general/messagebus/#configuration_1","title":"Configuration","text":"See Common Configuration section above for the common configuration elements for all implementations.
"},{"location":"microservices/general/messagebus/#security-configuration_1","title":"Security Configuration","text":"Option Default Value Description AuthModenone
Mode of authentication to use. Values are none
, usernamepassword
, clientcert
, or cacert
. In secure mode the MQTT Broker uses usernamepassword
SecretName blank Secret name used to look up credentials in the service's SecretStore"},{"location":"microservices/general/messagebus/#additional-configuration_1","title":"Additional Configuration","text":"Except where noted default values exist in the service common configuration.
Option Default Value Description ClientId service key Unique name of the client connecting to the MQTT broker (Set in each service's private configuration) Qos0
Quality of Service level 0: At most once delivery1: At least once delivery2: Exactly once deliverySee the MQTT QOS Spec for more details KeepAlive 10
Maximum time interval in seconds that is permitted to elapse between the point at which the client finishes transmitting one control packet and the point it starts sending the next. If exceeded, the broker will close the client connection Retained false
If true, Server MUST store the Application Message and its QoS, so that it can be delivered to future subscribers whose subscriptions match its topic name. See Retained Messages for more details. AutoReconnect true
If true, automatically attempts to reconnect to the broker when connection is lost ConnectTimeout 30
Timeout in seconds for the connection to the broker to be successful CleanSession false
if true, Server MUST discard any previous Session and start a new one. This Session lasts as long as the Network Connection"},{"location":"microservices/general/messagebus/#nats","title":"NATS","text":"NATS is a high performance messaging system that offers some interesting options for local deployments. It uses a lightweight text-based protocol notably similar to http. This protocol includes full header support that can allow conveyance of the EdgeX MessageEnvelope
across service boundaries without the need for double-encoding if all services in the deployment are using NATS. Currently services must be specially built with the include_nats_messaging
tag to enable this option.
An ordinary NATS server uses interest, or existence of a client subscription, as the basis for subject availability on the server. This makes Publish a fire and forget operation much like Redis, and gives the system an at most once
quality of service.
The JetStream persistence layer binds NATS subjects to persistent streams which enables the server to collect messages for subjects that have no registered interest, and allows support for at least once
quality of service. Notably, services running in core-nats
mode can still subscribe and publish to jetstream-enabled subjects without the additional overhead associated with publish acknowledgement.
See Common Configuration section above for the common configuration elements for all implementations.
"},{"location":"microservices/general/messagebus/#security-configuration_2","title":"Security Configuration","text":"Option Default Value Description AuthModenone
Mode of authentication to use. Values are none
, usernamepassword
, clientcert
, or cacert
. The NATS Server is currently not secured in secure mode. SecretName blank Secret name used to look up credentials in the service's SecretStore NKeySeedFile blank Path to a seed file to use for authentication. See the NATS documentation for more detail CredentialsFile blank Path to a credentials file to use for authentication. See the NATS documentation for more detail"},{"location":"microservices/general/messagebus/#additional-configuration_2","title":"Additional Configuration","text":"Except where noted default values exist in the service common configuration.
Option Default Value Description ClientId service key Unique name of the client connecting to the NATS Server (Set in each service's private configuration) Formatnats
Format of the actual message published. Valid values are:- nats : Metadata from the MessageEnvlope
are put into the NATS header and the payload from the MessageEnvlope
is published as is. Preferred format when all services are using NATS- json : JSON encodes the MessageEnvelope
and publish it as the message. Use this format for compatibility when other services using MQTT 3.1 and running the NATS Server in MQTT mode. ConnectTimeout 30
Timeout in seconds for the connection to the broker to be successful RetryOnFailedConnect false
Retry on connection failure - expects a string representation of a boolean QueueGroup blank Specifies a queue group to distribute messages from a stream to a pool of worker services Durable blank Specifies a durable consumer should be used with the given name. Note that if a durable consumer with the specified name does not exist it will be considered ephemeral and deleted by the client on drain / unsubscribe (JetStream only) Subject blank Specifies the subject for subscribing stream if a Durable is not specified - will also be formatted into a stream name to be used on subscription. This subject is used for auto-provisioning the stream if needed as well and should be configured with the 'root' topic common to all subscriptions (eg edgex/#
) to ensure that all topics on the bus are covered. (JetStream only) AutoProvision false
Automatically provision NATS streams. (JetStream only) Deliver new
Specifies delivery mode for subscriptions - options are \"new\", \"all\", \"last\" or \"lastpersubject\". See the NATS documentation for more detail (JetStream only) DefaultPubRetryAttempts 2
Number of times to attempt to retry on failed publish (JetStream only)"},{"location":"microservices/general/messagebus/#resource-provisioning-with-nats-box","title":"Resource Provisioning with nats-box","text":"While the SDK will attempt to auto-provision streams needed if configured to do so, if you need specific features or policies enabled it is generally best to provision your own. A nats-box docker image is available preloaded with various utilities to make this easier.
For information on stream provisioning using the nats cli see here.
For nkey generation a utility called nk is provided with nats-box. For generating nkey seed files see here.
For credential management a utility called nsc is provided with nats-box. For using credentials files see documentation on resolvers and the companion memory resolver tutorial.
"},{"location":"microservices/general/messagebus/#nats-mqtt-mode","title":"NATS MQTT Mode","text":"A JetStream enabled server can support MQTT connections on the same set of underlying subjects. This can be especially useful if you are using prebuilt EdgeX services like device-onvif-camera but want to transition your system towards using NATS. Note that format=json
must be used so that the NATS messagebus client can read the double-encoded envelopes sent by MQTT clients. For more information see NATS MQTT Documentation.
The EdgeX MessageBus uses multi-level topics and wildcards to allow filtering of data via subscriptions and has standardized on a MQTT like scheme. See MQTT multi-level topics and wildcards for more information.
The Redis implementation converts the Redis Pub/Sub multi-level topic scheme to match that of MQTT. In Redis Pub/Sub the \".\" is used as a level separator, \"*\" followed by a level separator is used as the single level wildcard and \"*\" at the end is used as the multiple level wildcard. These are converted to \"/\" and \"+\" and \"#\" respectively, which are used by MQTT.
The NATS implementations convert the NATS multi-level topic scheme to match that of MQTT. In NATS \".\" is used as a level separator, \"*\" is used as the single level wildcard and \">\" is used for the multi-level wild card. These are converted to \"/\", \"+\" and \"#\" respectively, which are compliant with the MQTT scheme.
Example Multi-level topics and wildcards for EdgeX MessageBus
edgex/events/#
All events coming from any device service or core data for any device profile, device or source
edgex/events/device/#
All events coming from any device service for any device profile, device or source
edgex/events/+/device-onvif-camera/#
Events coming from only device service \"device-onvif-camera\" for any device profile, device and source
edgex/events/+/+/+/camera-001/#
Events coming from any device service or core data for any device profile, but only for the device \"camera-001\" and for any source
edgex/events/device/+/onvif/+/status
Events coming from any device service for only the device profile \"onvif\", and any device and only for the source \"status\"
All EdgeX services are capable of using the Redis Pub/Sub without any changes to configuration. The released compose files and snaps use Redis Pub/Sub.
"},{"location":"microservices/general/messagebus/#mqtt-31_1","title":"MQTT 3.1","text":"All EdgeX services are capable of using MQTT 3.1 by simply making changes to each service's configuration.
Note
As mentioned above, the MQTT 3.1 implementation requires the addition of a MQTT Broker service to be running.
"},{"location":"microservices/general/messagebus/#configuration-changes","title":"Configuration Changes","text":"Edgex 3.0
For EdgeX 3.0 MessageQueue
configuration has been renamed to MessageBus and is now in common configuration.
The MessageBus configuration is in common configuration where the following changes only need to be made once and apply to all services. See the MessageBus tab in Common Configuration for more details.
Example MQTT Configurations changes for all services
The following MessageBus
configuration settings must be changed in common configuration for all EdgeX Services to use MQTT 3.1
MessageBus:\nType: \"mqtt\"\nProtocol: \"tcp\" Host: \"localhost\" # in docker this must be overriden to be the docker host name of the MQTT Broker\nPort: 1883\nAuthMode: \"none\" # set to \"usernamepassword\" when running in secure mode\nSecreName: \"message-bus\"\n...\n
Note
The optional settings that apply to MQTT are already in the common configuration, so are not included above.
"},{"location":"microservices/general/messagebus/#docker","title":"Docker","text":"The EdgeX Compose Builder utility provides an option to easily generate a compose file with all the selected services re-configured for MQTT 3.1 using environment overrides. This is accomplished by using the mqtt-bus
option. See Compose Builder README for details on all available options.
Example Secure mode compose generation for MQTT 3.1
make gen ds-virtual ds-rest mqtt-bus\n
Non-secure mode compose generation for MQTT 3.1
make gen no-secty ds-virtual ds-rest mqtt-bus\n
Note
The run
command can be used to generate and run the compose file in one command, but any changes made to the generated compose file will be overridden the next time run
is used. An alternative is to use the up
command, which runs the latest generated compose file with any modifications that may have been made.
For Snap deployment, each services' configuration has to modified manually or via environment overrides after install. For more details see the Configuration section in the Snaps getting started guide.
"},{"location":"microservices/general/messagebus/#nats_1","title":"NATS","text":"The EdgeX Go based services are not capable of using the NATS implementation without being rebuild using the include_nats_messaging
build tag. Any EdgeX Core/Support/Go Device/Application Service targeted to use NATS in a deployment must have the Makefile modified to add this build flag. The service can then be rebuild for native and/or Docker.
Core Data make target modified to include NATS
cmd/core-data/core-data:\n$(GOCGO) build -tags \"include_nats_messaging $(NON_DELAYED_START_GO_BUILD_TAG_FOR_CORE)\" $(CGOFLAGS) -o $@ ./cmd/core-data\n
Note
The C Device SDK does not currently have a NATS implementation, so C Devices can not be used with the NATS based EdgeX MessageBus.
"},{"location":"microservices/general/messagebus/#configuration-changes_1","title":"Configuration Changes","text":"Edgex 3.0
For EdgeX 3.0 MessageQueue
configuration has been renamed to MessageBus and is now in common configuration.
The MessageBus configuration is in common configuration where the following changes only need to be made once and apply to all services. See the MessageBus tab in Common Configuration for more details.
Example NATS Configurations changes for all services
The following MessageBus
configuration settings must be changed in common configuration for all EdgeX Services to use NATS Jetstream
MessageBus:\nType: \"nats-jetstream\"\nProtocol: \"tcp\" Host: \"localhost\" # in docker this must be overriden to be the docker host name of the NATS server\nPort: 4222\nAuthMode: \"none\" # Currently in secure mode the NATS server is not secured\n
Note
The optional setting that apply to NATS are already in the common configuration, so are not included above.
"},{"location":"microservices/general/messagebus/#docker_1","title":"Docker","text":"The EdgeX Compose Builder utility provides an option to easily generate a compose file with all the selected services re-configured for NATS using environment overrides. This is accomplished by using the nats-bus
option. This option configures the services to use the NATS Jetstream implementation. See Compose Builder README for details on all available options. If NATS Core is preferred, simply do a search and replace of nats-jetstream
with nats-core
in the generated compose file.
Example Secure mode compose generation for NATS
make gen ds-virtual ds-rest nats-bus\n
Non-secure mode compose generation for NATS
make gen no-secty ds-virtual ds-rest nats-bus\n
"},{"location":"microservices/general/messagebus/#snaps_1","title":"Snaps","text":"The published Snaps are built without NATS included, so the use of NATS in those Snaps is not possible. One could modify the Makefiles as described above and then build and install local snap packages. In this case it would be easier to modify each service's configuration as describe above so that the locally built and installed snaps are already configured for NATS.
"},{"location":"microservices/support/Ch-SupportingServices/","title":"Supporting Services","text":"The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Micro services in the supporting services layer perform normal software application duties such as scheduler, and notifications/alerting .
These services often need some amount of core services to function. In all cases, consider supporting service optional. Leave these services out of an EdgeX deployment depending on use case needs and system resources.
Supporting services include:
LF Edge eKuiper is the EdgeX reference implementation rules engine (or edge analytics) implementation.
"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#what-is-lf-edge-ekuiper","title":"What is LF Edge eKuiper?","text":"LF Edge eKuiper is a lightweight open source software (Apache 2.0 open source license agreement) package for IoT edge analytics and stream processing implemented in Go lang, which can run on various resource constrained edge devices. Users can realize fast data processing on the edge and write rules in SQL. The eKuiper rules engine is based on three components Source
, SQL
and Sink
.
The relationship among Source, SQL and Sink in eKuiper is shown below.
eKuiper runs very efficiently on resource constrained edge devices. For common IoT data processing, the throughput can reach 12k per second. Readers can refer to here to get more performance benchmark data for eKuiper.
"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#ekuiper-rules-engine-of-edgex","title":"eKuiper rules engine of EdgeX","text":"An extension mechanism allows eKuiper to be customized to analyze and process data from different data sources. By default for the EdgeX configuration, eKuiper analyzes data coming from the EdgeX message bus. EdgeX provides an abstract message bus interface, and implements the Redis Pub/Sub, MQTT and NATS protocols respectively to support information exchange between different micro-services. The integration of eKuiper and EdgeX mainly includes the following:
port 5566, on which the Application Service publishes messages. After the data from the Core Data Service is processed by the Application Service, it flows into the eKuiper rules engine for processing.
Info
The eKuiper tutorials and documentation are available in both English and Chinese.
For more information on the LF Edge eKuiper project, please refer to the following resources.
When another system or a person needs to know that something occurred in EdgeX, the alerts and notifications microservice sends that notification. Examples of alerts and notifications that other services could broadcast include the provisioning of a new device, sensor data detected outside of certain parameters (usually detected by a device service or rules engine), or system or service malfunctions (usually detected by system management services).
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#terminology","title":"Terminology","text":"Notifications are informative, whereas Alerts are typically of a more important, critical, or urgent nature, possibly requiring immediate action.
This diagram shows the high-level architecture of the notifications service. On the left side, the APIs are provided for other microservices, on-box applications, and off-box applications to use. The APIs could be in REST, AMQP, MQTT, or any standard application protocols.
This diagram was drawn with diagrams.net; the source file is EdgeX_SupportingServicesAlertsArchitecture.xml
Warning
Currently in EdgeX Foundry, only the RESTful interface is provided.
On the right side, the notifications receiver could be a person or an application system in the cloud or in a server room. By invoking the Subscription RESTful interface to subscribe to specific types of notifications, the receiver obtains the appropriate notifications through defined receiving channels when events occur. The receiving channels include SMS message, e-mail, REST callback, AMQP, MQTT, and so on.
Warning
Currently in EdgeX Foundry, e-mail and REST callback channels are provided.
When the notifications service receives notifications from any interface, the notifications are passed to the Notifications Handler internally. The Notifications Handler persists the received notifications first, and passes them to the Distribution Coordinator.
When the Distribution Coordinator receives a notification, it first queries the Subscription database to get receivers who need this notification and their receiving channel information. According to the channel information, the Distribution Coordinator passes this notification to the corresponding channel senders. Then, the channel senders send out the notifications to the subscribed receivers.
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#workflow","title":"Workflow","text":""},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#normalminor-notifications","title":"Normal/Minor Notifications","text":"When a client requests a notification to be sent with \"NORMAL\" or \"MINOR\" status, the notification is immediately sent to its receivers via the Distribution Coordinator, and the status is updated to \"PROCESSED\".
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#critical-notifications","title":"Critical Notifications","text":"Notifications with \"CRITICAL\" status are also sent immediately. When encountering any error during sending critical notification, an individual resend task is scheduled, and each transmission record persists. After exceeding the configurable limit (resend limit), the service escalates the notification and create a new notification to notify particular receivers of the escalation subscription (name = \"ESCALATION\") of the failure.
Note
All notifications are processed immediately. The resend feature is only provided for critical notifications. The resendLimit and resendInterval properties can be defined in each subscription. If the properties are not provided, the default values from the configuration properties are used.
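As an illustrative sketch of the critical-notification workflow (the sender, category and content values below are placeholders; consult the Support Notifications API reference for the authoritative schema, and add an Authorization header when security is enabled), a CRITICAL notification can be posted to the service on its default port 59860:
curl -X POST \"http://localhost:59860/api/v3/notification\" -H \"Content-Type: application/json\" -d '[{\"apiVersion\": \"v3\", \"notification\": {\"sender\": \"device-onvif-camera\", \"category\": \"device-health\", \"severity\": \"CRITICAL\", \"content\": \"Camera offline\", \"contentType\": \"text/plain\"}}]'\n
The service then distributes the notification to any subscription whose categories or labels match, retrying per that subscription's resendLimit and resendInterval.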
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#data-model","title":"Data Model","text":"The latest developed data model will be updated in the Swagger API document.
This diagram was drawn with diagrams.net; the source file is EdgeX_SupportingServicesNotificationsModel.xml
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#data-dictionary","title":"Data Dictionary","text":"SubscriptionChannelNotificationTransmissionTransmissionRecord Property Description The object used to describe the receiver and the recipient channels ID Uniquely identifies a subscription, for example a UUID Name Uniquely identifies a subscription Receiver The name of the party interested in the notification Description Human readable description explaining the subscription intent Categories Link the subscription to one or more categories of notification. Labels An array of associated means to label or tag for categorization or identification Channels An array of channel objects indicating the destination for the notification ResendLimit The retry limit for attempts to send notifications ResendInterval The interval in ISO 8691 format of resending the notification AdminState An enumeration string indicating the subscription is locked or unlocked Property Description The object used to describe the notification end point. Channel supports transmissions and notifications with fields for delivery via email or REST Type Object of ChannelType - indicates whether the channel facilitates email or REST MailAddress EmailAddress object for an array of string email addresses RESTAddress RESTAddress object for a REST API destination endpoint Property Description The object used to describe the message and sender content of a notification. ID Uniquely identifies a notification, for example a UUID Sender A string indicating the notification message sender Category A string categorizing the notification Severity An enumeration string indicating the severity of the notification - as either normal or critical Content The message sent to the receivers Description Human readable description explaining the reason for the notification or alert Status An enumeration string indicating the status of the notification as new, processed or escalated Labels Array of associated means to label or tag a notification for better search and filtering ContentType String indicating the type of content in the notification message Property Description The object used to group Notifications ID Uniquely identifies a transmission, for example a UUID Created A timestamp indicating when the notification was created NotificationId The notification id to be sent SubscriptionName The name of the subscription interested in the notification Channel A channel object indicating the destination for the notification Status An enumeration string indicating whether the transmission failed, was sent, was resending, was acknowledged, or was escalated ResendCount Number indicating the number of resent attempts Records An array of TransmissionRecords Property Description Information the status and response of a notification sent to a receiver Status An enumeration string indicating whether the transmission failed, was sent, was acknowledged, or escalated Response The response string from the receiver Sent A timestamp indicating when the notification was sent"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Support Notifications.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics TBD
Service metrics that Support Notification collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary service level tags to be included with every metric that is reported, e.g. Gateway=\"my-iot-gateway\"
Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration Port 59860 Micro service port number StartupMsg This is the Support Notifications Microservice Message logged when service completes bootstrap start-up Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration Name 'notifications' Document store or database name Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration ClientId \"support-notifications Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description Config to connect to applicable SMTP (email) service. All the properties with prefix \"smtp\" are for mail server configuration. Configure the mail server appropriately to send alerts and notifications. The correct values depend on which mail server is used. Smtp Host smtp.gmail.com SMTP service host name Smtp Port 587 SMTP service port number Smtp EnableSelfSignedCert false Indicates whether a self-signed cert can be used for secure connectivity. Smtp SecretPath smtp Specify the secret path to store the credential(username and password) for connecting the SMTP server via the /secret API, or set Writable SMTP username and password for insecure secrets Smtp Sender jdoe@gmail.com SMTP service sender/username Smtp Subject EdgeX Notification SMTP notification message subject Property Default Value Description Enabled false Enable or disable notification retention. Interval 30m Purging interval defines when the database should be rid of notifications above the MaxCap. MaxCap 5000 The maximum capacity defines where the high watermark of notifications should be detected for purging the amount of the notification to the minimum capacity. MinCap 4000 The minimum capacity defines where the total count of notifications should be returned to during purging."},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"No configuration updated
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#gmail-configuration-example","title":"Gmail Configuration Example","text":"Before using Gmail to send alerts and notifications, configure the sign-in security settings through one of the following two methods:
Then, use the following settings for the mail server properties:
Smtp Port=25\nSmtp Host=smtp.gmail.com\nSmtp Sender=${Gmail account}\nSmtp Password=${Gmail password or App password}\n
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#yahoo-mail-configuration-example","title":"Yahoo Mail Configuration Example","text":"Similar to Gmail, configure the sign-in security settings for Yahoo through one of the following two methods:
Then, use the following settings for the mail server properties:
Smtp Port=25\nSmtp Host=smtp.mail.yahoo.com\nSmtp Sender=${Yahoo account}\nSmtp Password=${Yahoo password or App password}\n
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#writable","title":"Writable","text":"The Writable.InsecureSecrets.SMTP
section has been added.
Example Writable.InsecureSecrets.SMTP section
Writable:\nInsecureSecrets:\nSMTP:\nSecretName: \"smtp\"\nSecretData:\nusername: \"username@mail.example.com\"\npassword: \"\"\n
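In secure mode, rather than relying on insecure secrets, the same credentials can be pushed into the service's secret store at runtime through its /secret endpoint. The following is only a sketch; the JWT and credential values are placeholders:
curl -X POST -H \"Authorization: Bearer <JWT>\" -H \"Content-Type: application/json\" \"http://localhost:59860/api/v3/secret\" -d '{\"apiVersion\": \"v3\", \"secretName\": \"smtp\", \"secretData\": [{\"key\": \"username\", \"value\": \"username@mail.example.com\"}, {\"key\": \"password\", \"value\": \"<app password>\"}]}'\n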
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#api-reference","title":"API Reference","text":"Support Notifications API Reference
"},{"location":"microservices/support/scheduler/Ch-Scheduler/","title":"Support Scheduler","text":""},{"location":"microservices/support/scheduler/Ch-Scheduler/#introduction","title":"Introduction","text":"The support scheduler microservice provide an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time (called an interval), the service calls on any EdgeX service API URL via REST to trigger an operation (called an interval action). For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#default-interval-actions","title":"Default Interval Actions","text":"Scheduled interval actions configured by default with the reference implementation of the service include:
NOTE The removal of stale records occurs on a configurable schedule. By default, this action is invoked once a day at midnight.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#scheduler-persistence","title":"Scheduler Persistence","text":"Support scheduler uses a data store to persist the Interval(s) and IntervalAction(s). Persistence is accomplished by the Scheduler DB located in your current configured database for EdgeX.
Info Redis DB is used by default to persist all scheduler service information, including intervals and interval actions.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#iso-8601-standard","title":"ISO 8601 Standard","text":"The times and frequencies defined in the scheduler service's intervals are specified using the international date/time standard - ISO 8601. So, for example, the start of an interval would be represented in YYYYMMDD'T'HHmmss format. 20180101T000000 represents January 1, 2018 at midnight. Frequencies are represented with ISO 8601 durations.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#data-model","title":"Data Model","text":"The latest developed data model will be updated in the Swagger API document.
NOTE Only RESTAddress is supported. The MQTTAddress may be implemented in a future release.
This diagram was drawn with diagrams.net, and the source file is here.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#data-dictionary","title":"Data Dictionary","text":"IntervalsIntervalActionsIntervalActions.Address Property Description An object defining a specific \"period\" in time Id Uniquely identifies an interval, for example a UUID Created A timestamp indicating when the interval was created in the database Modified A timestamp indicating when the interval was last modified Name the name of the given interval - unique for the EdgeX instance Start The start time of the given interval in ISO 8601 format using local system timezone End The end time of the given interval in ISO 8601 format using local system timezone Interval How often the specific resource needs to be polled. It represents as a duration string. The format of this field is to be an unsigned integer followed by a unit which may be \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Eg, \"100ms\", \"24h\" Property Description The action triggered by the service when the associated interval occurs Id Uniquely identifies an interval action, for example a UUID Created A timestamp indicating when the interval action was created in the database Modified A timestamp indicating when the interval action was last modified Name the name of the interval action Interval associated interval that defines when the action occurs AdminState interval action state - either LOCKED or UNLOCKED AuthMethod interval action authentication method - either NONE or JWT (EdgeX microservice authentication JWT) Content The actual content to be sent as the body ContentType Indicates which request contentType should be used (i.e.text/html
, application/json
), the default is application/json
Property Description An object inside IntervalActions
indicating how to contact a specific endpoint by HTTP protocol Type Currently only supports REST
Host The host targeted by the action when it activates Port The port on the targeted host HttpMethod Indicates which HTTP verb should be used for the REST endpoint (only used when type is REST) Path The HTTP path at the targeted host for fulfillment of the action (only used when type is REST). For more information, please see the Interval and IntervalAction endpoints.
Warning
AuthMethod: JWT
exposes a sensitive credential; it is required for authenticating to peer EdgeX microservices and should not be used for any other purpose.
Scheduler interval actions to expunge old and exported (pushed) records from Core Data
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Support Scheduler.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics TBD
Service metrics that Support Scheduler collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary service level tags to be included with every metric that is reported, e.g. Gateway=\"my-iot-gateway\"
Property Default Value Description ScheduleIntervalTime 500 the time, in milliseconds, to trigger any applicable interval actions Property Default Value Description Unique settings for Support Scheduler. The common settings can be found at Common Configuration Port 59861 Micro service port number StartupMsg This is the Support Scheduler Microservice Message logged when service completes bootstrap start-up Property Default Value Description Unique settings for Support Scheduler. The common settings can be found at Common Configuration Name 'scheduler' Document store or database name Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration ClientId \"support-scheduler Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description Default intervals for use with default interval actions Name midnight Name of the every day at midnight interval Start 20180101T000000 Indicates the start time for the midnight interval which is a midnight, Jan 1, 2018 which effectively sets the start time as of right now since this is in the past Interval 24h defines a frequency of every 24 hours Property Default Value Description Configuration of the core data clean old events operation which is to kick off every midnight Name scrub-aged-events name of the interval action Host localhost run the request on core data assumed to be on the localhost Port 59880 run the request against the default core data port Protocol http Make a RESTful request to core data Method DELETE Make a RESTful delete operation request to core data Path /api/v3/event/age/604800000000000 request core data's remove old events API with parameter of 7 days Interval midnight run the operation every midnight as specified by the configuration defined interval"},{"location":"microservices/support/scheduler/Ch-Scheduler/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"RequireMessageBus
AuthMethod
is added to IntervalActions.ScrubAged
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#api-reference","title":"API Reference","text":"Support Scheduler API Reference
"},{"location":"security/Ch-APIGateway/","title":"API Gateway","text":""},{"location":"security/Ch-APIGateway/#introduction","title":"Introduction","text":"EdgeX 3.0
This content is completely new for EdgeX 3.0. EdgeX 3.0 uses a brand new API gateway solution based on NGINX and Hashicorp Vault instead of Kong and Postgres. The new solution means that EdgeX 3.0 will be able to run in security enabled mode on more resource-constrained devices.
API gateways are used in microservice architectures that expose HTTP-accessible APIs to create a security layer that separates internal and external callers. An API gateway accepts client requests, authenticates the client, forwards the request to a backend microservice, and relays the results back to the client.
Although authentication is done at the microservice layer in EdgeX 3.0, EdgeX Foundry has elected to continue to use an API gateway for the following reasons:
It provides a convenient choke point and policy enforcement point for external HTTP requests and enables EdgeX adopters to easily replace the default authentication logic.
It defers the urgency of implementing fine-grained authorization at the microservice layer.
It provides defense-in-depth against microservice authentication bugs and other technical debt that might otherwise put EdgeX microservices at risk.
The API gateway listens on two ports:
8000: This is an unencrypted HTTP port exposed to localhost-only (also exposed to the edgex-network Docker network). When EdgeX is running in security-enabled mode, the EdgeX UI uses port 8000 for authenticated local microservice calls.
8443: This is a TLS 1.3 encrypted HTTP port exposed via the host's network interface to external clients. The TLS certificate on this port is untrusted by default and should be replaced with a trusted certificate for production usage.
EdgeX 3.0 uses NGINX as the API gateway implementation and delegates to EdgeX's secret store (powered by Hashicorp Vault) for user and JWT authentication.
"},{"location":"security/Ch-APIGateway/#start-the-api-gateway","title":"Start the API Gateway","text":"The API gateway is started by default in either the snap-based EdgeX deployment or the Docker-based EdgeX deployment using the Docker Compose files found at https://github.com/edgexfoundry/edgex-compose/.
In Docker, the command to start EdgeX inclusive of API gateway related services is (where \"somerelease\" denotes the EdgeX release, such as \"jakarta\" or \"minnesota\"):
git clone -b somerelease https://github.com/edgexfoundry/edgex-compose\ncd edgex-compose\nmake run\n
or
git clone -b somerelease https://github.com/edgexfoundry/edgex-compose\ncd edgex-compose\nmake run arm64\n
The API gateway is not started if EdgeX is started with security features disabled by appending no-secty
to the previous make
commands. This disables all EdgeX security features, not just the API gateway.
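For example, to bring the stack up with security disabled:
make run no-secty\n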
The API gateway will generate a default self-signed TLS certificate that is used for external communication. Since this certificate is not trusted by client software, it is commonplace to replace this auto-generated certificate with one generated from a known certificate authority, such as an enterprise PKI, or a commercial certificate authority.
The process for obtaining a certificate is out-of-scope for this document. For purposes of the example, the X.509 PEM-encoded certificate is assumed to be called cert.pem
and the unencrypted PEM-encoded private key is called key.pem
. Do not use an encrypted private key as the API gateway will hang on startup in order to prompt for a password.
Run the following command to install a custom certificate using the assumptions above:
docker compose -p edgex -f docker-compose.yml run --rm -v `pwd`:/host:ro --entrypoint /edgex/secrets-config proxy-setup proxy tls --inCert /host/cert.pem --inKey /host/key.pem\n
The following command can verify the certificate installation was successful.
echo \"GET /\" | openssl s_client -showcerts -servername edge001.example.com -connect 127.0.0.1:8443\n
(where edgex001.example.com
is the hostname by which the client is externally reachable)
The TLS certificate installed in the previous step should be among the output of the openssl
command.
A standard set of routes are configured statically via the security-proxy-setup
microservice. Additional routes can be added via the EDGEX_ADD_PROXY_ROUTE
environment variable. Here is an example:
security-proxy-setup:\n...\nenvironment:\n...\nEDGEX_ADD_PROXY_ROUTE: \"app-myservice.http://edgex-app-myservice:56789\"\n...\n\n...\n\napp-myservice:\n...\ncontainer_name: app-myservice-container\nhostname: edgex-app-myservice\n...\n
The value of EDGEX_ADD_PROXY_ROUTE
takes a comma-separated list of one or more paired additional prefix and URL for which to create proxy routes. The paired specification is given as the following:
<RoutePrefix>.<TargetRouteURL>\n
where RoutePrefix is the base path that will be created off of the root of the API gateway to route traffic to the target. This should typically be the service key that the app uses to register with the EdgeX secret store and configuration provider, as the name of the service in the docker-compose file has security implications when using delayed-start services.
TargetRouteURL is the fully qualified URL for the target service, like http://edgex-app-myservice:56789
as it is known on the network on which the API gateway is running. For Docker, the hostname should match the hostname specified in the docker-compose.yml
file.
For example, using the above docker-compose.yml
:
EDGEX_ADD_PROXY_ROUTE: \"app-myservice.http://edgex-app-myservice:56789\"\n
When a request to the API gateway is received, such as GET https://localhost:8443/app-myservice/api/v3/ping
, the API gateway will reissue the request as GET http://edgex-app-myservice:56789/api/v3/ping
. Note that the route prefix is stripped from the re-issued request.
If the EdgeX API gateway is not in use, a client can access and use any REST API provided by the EdgeX microservices by sending an HTTP request to the service endpoint. E.g., a client can consume the ping endpoint of the Core Data microservice with curl command like this:
curl http://<core-data-microservice-ip>:59880/api/v3/ping\n
Where <core-data-microservice-ip>
is the Docker IP address of the container running the core-data microservice (if using Docker), or additionally localhost
in the default configuration for snaps and Docker. This means that in the default configuration, EdgeX microservices are only accessible to local host processes.
The API gateway serves as single external endpoint for all the REST APIs. The curl command to ping the endpoint of the same Core Data service, as shown above, needs to change to:
curl https://<api-gateway-host>:8443/core-data/api/v3/ping\n
Comparing these two curl commands you may notice several differences.
http
is switched to https
as we enable the SSL/TLS for secure communication. This applies to any client side request. (If the certificate is not trusted, the -k
option to curl
may also be required.)/core-data/
path in the URL is used to identify which EdgeX micro service the request is routed to. As each EdgeX micro service has a dedicated service port open that accepts incoming requests, there is a mapping table kept by the API gateway that maps paths to micro service ports. A partial listing of the map between ports and URL paths is shown in the table below.
Note that any such request issued will be met with an
401 Not Authorized\n
response due to the lack of an authentication token on the request. Authentication will be explained later.
The EdgeX documentation maintains an up-to-date list of default service ports.
Microservice Host Name Port number Partial URL edgex-core-data 59880 core-data edgex-core-metadata 59881 core-metadata edgex-core-command 59882 core-command edgex-support-notifications 59860 support-notifications edgex-support-scheduler 59861 support-scheduler edgex-kuiper 59720 rules-engine device-virtual 59900 device-virtual"},{"location":"security/Ch-APIGateway/#creating-access-token-for-api-gateway-authentication","title":"Creating Access Token for API Gateway Authentication","text":"Authentication is more fully explained in the authentication chapter.
The authentication chapter goes into detail on:
The TL;DR version to get an API gateway token, for development and test purposes, is
make get-token\n
(in the edgex-compose repository, if using Docker).
The get-token
target will return a JWT in the form
eyJ.... \".\" base64chars \".\" base64chars\n
As a bearer token, it has a limited lifetime for security reasons. The get-token
process should be repeated to obtain fresh tokens periodically. In the long form process described in the authentication chapter, this means re-authenticating to the EdgeX secret store and requesting a fresh JWT.
EdgeX versions prior to 3.0 used to support registering a public key with the API gateway, and allowing clients to self-generate their JWT for API gateway authentication. Regrettably, this \"raw key JWT\" authentication method is no longer supported. As consolation, the EdgeX secret store backend, Hashicorp Vault, supports many other authentication backends. EdgeX only enables the userpass
auth engine by default, and only passes the userpass
auth endpoints through the API gateway by default. Customizing an EdgeX implementation to use alternative authentication methods is left as an exercise for the adopter.
Once the resource mapping and access token to API gateway are in place, a client can use the access token to use the protected EdgeX REST API resources behind the API gateway. Again, without the API Gateway in place, here is the sample request to hit the ping endpoint of the EdgeX Core Data microservice using curl:
curl http://<core-data-microservice-ip>:59880/api/v3/ping\n
With the security service and JWT authentication is enabled, the command changes to:
curl -k -H 'Authorization: Bearer <JWT>' https://myhostname:8443/core-data/api/v3/ping\n
In summary the difference between the two commands are listed below:
-k
tells curl to ignore certificate errors. This is for demonstration purposes. In production, a known certificate that the client trusts be installed on the proxy and this parameter omitted.-H \"Authorization: Bearer <JWT>\"
to pass the authentication token as part of the request.EdgeX 3.0
Microservice-level authentication is new for EdgeX 3.0.
"},{"location":"security/Ch-Authenticating/#introduction","title":"Introduction","text":"Starting in EdgeX 3.0, when EdgeX is run in secure mode, EdgeX microservices require an authentication token before they will respond to requests issued over the REST API. (These changes are detailed in the EdgeX microservice authentication ADR and were introduced to mitigate against certain threats that originate from behind the API gateway or have somehow bypassed the API gateway.)
Prior to EdgeX 3.0, requests that originated remotely were authenticated at the API gateway via an HTTP Authorization
header that contained a JWT bearer token. Internally-originated requests required no authentication. In EdgeX 3.0, the Authorization
header is additionally checked at the microservice level on a per-route basis, where the majority of URL paths require authentication.
In order to make an authenticated EdgeX service call to a REST API, an appropriate authentication token must be present on the HTTP Authorization
header. To be recognized as valid, these tokens must be issued by EdgeX's secret store.
Built-in EdgeX services already have a token that allows them access to the EdgeX secret store. The Configuring Add-on Services chapter contains details on what is required to enroll a new microservice into EdgeX, for the purpose of obtaining a secret store token. The secret store token is used to obtain a JWT that is used for authenticating EdgeX REST API calls. The service's secret store token is not used directly, as this would enable the receiver to access the senders private slice of the secret store. Instead, the identity of the caller is attested using a JWT authenticator.
Non-services such as interactive users and script clients are also required to obtain a secret store token and exchange it for a JWT authenticator for REST API calls.
There are several possible authentication scenarios:
Authentication for non-service clients (includes EdgeX UI)
Local service-to-service clients using EdgeX service clients
Local service-to-service clients using the SecretProvider interface
The service-to-service scenario using the API gateway is not currently supported. The built-in service clients are not reverse-proxy-aware, and the lack of service prefixes in generated URLs will result in the API gateway blocking requests.
"},{"location":"security/Ch-Authenticating/#authentication-for-non-service-clients","title":"Authentication for Non-service Clients","text":"Non-service clients include interactive users using the EdgeX UI, clients using hand-crafted REST API requests, or other API usages where the caller of an EdgeX microservice is not also an EdgeX microservice.
Authentication consists of three steps:
When running EdgeX in Docker using the edgex-compose
repository, steps 1, 2, and 3 above have been automated by the following command:
make get-token\n
This method should only be used for development and testing: the username is fixed by the script, and the password is reset every time the script is run.
The example will be done in the Docker environment. For snaps, refer here.
The long form of make get-token
is below:
Internally, a user identity is a paring of a Vault identity and an associated userpass
login method bound to that identity. Vault supports many other authentication backends besides userpass
, making it possible to federate with enterprise single sign-on, for example, but userpass
is the only authentication method enabled by default.
The provided secrets-config
tool includes two sub-functions, adduser
and deluser
, for creating user identities.
Let use first set a shell variable to hold a username:
username=exampleuser\n
Optional: Delete existing user
docker exec -ti edgex-security-proxy-setup ./secrets-config proxy deluser --user \"${username}\" --useRootToken\n
Create new user identity, capture the password. In this example, the Vault token has a 60 second time-to-live (TTL), and any JWTs that we create will have a 119 minute TTL. This is set at the time of account creation.
password=$(docker exec -ti edgex-security-proxy-setup ./secrets-config proxy adduser --user \"${username}\" --tokenTTL 60 --jwtTTL 119m --useRootToken | jq -r '.password')\n
The username and password created above should be saved for future use; they will be required in the future to obtain fresh JWT's.
"},{"location":"security/Ch-Authenticating/#2-obtaining-a-temporary-secret-store-token","title":"2. Obtaining a Temporary Secret Store Token","text":"Authenticate to the EdgeX secret store using the username and password generated above to obtain a temporary secret store token. This token must be exchanged for a JWT within the tokenTTL
liveness period.
vault_token=$(curl -ks \"http://localhost:8200/v1/auth/userpass/login/${username}\" -d \"{\\\"password\\\":\\\"${password}\\\"}\" | jq -r '.auth.client_token')\n
This temporary token can be discarded after the next step.
In the microservice-to-microservice authentication scenario, secret store tokens are periodically renewed and used to request further JWTs and access the service's secret store. Tokens associated with user identities, however, only be used to obtain a JWT.
"},{"location":"security/Ch-Authenticating/#3-obtaining-a-jwt-authentication-token","title":"3. Obtaining a JWT authentication token","text":"The token created in the previous step is passed as an authenticator to Vault's identity secrets engine. The output is a JWT that expires after jwtTTL
(see above) has passed.
id_token=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/token/${username}\" | jq -r '.data.token')\n\necho \"${id_token}\"\n
Optionally, if the secret store token (vault_token) isn't expired yet, it can be used to check the validity of an arbitrary JWT. This example checks the validity of the JWT that was issued above. Any JWT that passes this check should suffice for making an authenticated EdgeX microservice call.
introspect_result=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/introspect\" -d \"{\\\"token\\\":\\\"${id_token}\\\"}\" | jq -r '.active')\necho \"${introspect_result}\"\n
"},{"location":"security/Ch-Authenticating/#4-using-the-jwt-to-call-an-edgex-api-or-edgex-ui","title":"4. Using the JWT to Call an EdgeX API or EdgeX UI","text":""},{"location":"security/Ch-Authenticating/#calls-via-edgex-ui","title":"Calls via EdgeX UI","text":"EdgeX UI users should supply the id_token
to the prompt issued by the EdgeX UI. When the token eventually expires, obtain another token using the above process.
To call an EdgeX service directly from host context using a command-line interface, go directly to the service's localhost-mapped port, and pass the JWT as an HTTP Authorization
header:
curl -H\"Authorization: Bearer ${id_token}\" \"http://localhost:59xxx/api/v3/version\"\n
"},{"location":"security/Ch-Authenticating/#remote-calls-to-services-via-api-gateway","title":"Remote Calls to Services via API Gateway","text":"Calling an EdgeX service from a remote machine using the EdgeX API gateway looks similar to the above, with a few minor changes:
The docker network architecture is illustrated below:
In the example below, ca.crt
is the CA certificate that is used to verify the TLS certificate presented by the API gateway, and SERVICENAME
is the name of the EdgeX service that is being proxied by the API gateway, such as core-data
:
curl --cacert ca.crt -H\"Authorization: Bearer ${id_token}\" \"https://`hostname --fqdn`:8443/SERVICENAME/api/v3/version\"\n
This is identical to what was done in EdgeX versions prior to 3.0. The only thing that has changed is the method use to obtain the JWT.
"},{"location":"security/Ch-Authenticating/#local-service-to-service-using-edgex-service-clients","title":"Local Service-to-Service - Using EdgeX Service Clients","text":"The preferred method of making an authenticated call to an EdgeX microservice is to use the service proxies configured by go-mod-bootstrap.
Clients are retrieved from the dependency injection container using the helper functions in clients.go in go-mod-bootstrap. For example:
import \"github.com/edgexfoundry/go-mod-bootstrap/bootstrap/container\"\n\n// ... \n\ncommandClient := container.CommandClientFrom(dic.Get)\n
EdgeX methods invoked via the service proxies automatically authenticate to peer EdgeX microservices with no additional work needed on the part of the developer.
If EdgeX is run in non-secure mode, the built-in service clients that are configured in go-mod-bootstrap gracefully degrade to non-authenticating clients.
"},{"location":"security/Ch-Authenticating/#local-service-to-service-using-the-secretprovider-interface","title":"Local Service-to-Service - Using the SecretProvider interface","text":"In the example where two user-provided services directly invoke one-another, there will be no service client available. In this case, it is necessary to use go-mod-bootstrap's SecretProvider
interface to obtain a JWT.
See the following pseudo-code to add an Authorization
header to an outgoing HTTP request, req.
import (\nbootstrapContainer \"github.com/edgexfoundry/go-mod-bootstrap/v3/bootstrap/container\"\nclientInterfaces \"github.com/edgexfoundry/go-mod-core-contracts/v3/clients/interfaces\"\n\"github.com/edgexfoundry/go-mod-bootstrap/v3/bootstrap/secret\"\n)\n\n\n// Get the SecretProvider from bootstrap's DI container.\n// Internally, this is a wrapper for go-mod-secret's GetSelfJWT()\nsecretProvider := bootstrapContainer.SecretProviderFrom(dic.Get)\n\n// get an instance of the AuthenticationInjector helper\nvar jwtSecretProvider clientInterfaces.AuthenticationInjector\njwtSecretProvider = secret.NewJWTSecretProvider(m.secretProvider)\n\n// Call the AddAuthenticationData helper method\n// internally, this calls GetSelfJWT() on the SecretProvider\n// to obtain a JWT and adds an Authorization header to the HTTP request\nerr := jwtSecretProvider.AddAuthenticationData(req);\n
"},{"location":"security/Ch-Authenticating/#implementation-notes","title":"Implementation Notes","text":"Internally, the receiving microservice will call the secret store's token introspection endpoint to validate incoming JWT's. Note that as in all things dealing with the EdgeX secret store, calling the introspection endpoint is also an authenticated call, and a service must have explicit authorization to invoke this API.
Similarly, explicit authorization is required for a calling microservice to obtain a JWT to pass as an authentication token. In the EdgeX implementation, microservices use the userpass login authentication method to obtain an initial secret store token. This token is explicitly granted the ability to generate a JWT.
In the external user scenario of the API gateway, clients must manually log in to the secret store, and exchange the resulting token for JWT. In the internal usage scenario, EdgeX microservices are typically pre-seeded with a valid JWT, and obtain a fresh JWT for each outbound microservice call.
There are obvious opportunities for caching to reduce round trips to the EdgeX secret store, but none have been implemented at this time.
"},{"location":"security/Ch-CORS-Settings/","title":"CORS settings","text":"The EdgeX microservices provide REST APIs and those services might be called from a GUI through a browser. Browsers prevent service calls from a different origin, making it impossible to host a management GUI on one domain that manages an EdgeX device on a different domain. Thus, EdgeX supports Cross-Origin Resource Sharing (CORS) since Jakarta release (v2.1), and this feature can be controlled by the configurations. The default behavior of CORS is disabled. Here is a good reference to understand CORS.
Note
C Device SDK doesn't support CORS, and enabling CORS in Device Services is not recommended because browsers should not access Device Services directly.
"},{"location":"security/Ch-CORS-Settings/#enabling-cors","title":"Enabling CORS","text":"There are two different ways to enable CORS depending on whether EdgeX is deployed in the security-enabled configuration. In the non-security configuration, EdgeX microservices are directly exposed on host ports. EdgeX microservices receive client requests directly in this configuration, and thus, the EdgeX microservices themselves must respond to CORS requests. In the security-enabled configuration, EdgeX microservices are exposed behind an API gateway that will receive CORS requests first. Only authenticated calls will be forwarded to the EdgeX microservice, but CORS pre-flight requests are always unauthenticated.
CORS can be enabled at the API gateway in a security-enabled configuration, and at the individual microservice level in the non-security configuration. However, implementers should choose one or the other, not both.
"},{"location":"security/Ch-CORS-Settings/#enabling-cors-for-microservices","title":"Enabling CORS for Microservices","text":"There are two different options to enable CORS.
core-common-config-bootstrapper
service section on docker-compose.file. They can be set via SERVICE_CORSCONFIGURATION_*
environment variables. Please refer to the following example:Example - Set EnableCORS
to true
by environment variables override
core-common-config-bootstrapper:\nenvironment: SERVICE_CORSCONFIGURATION_ENABLECORS: \"true\"\n
Service.CORSConfiguration.EnableCORS
via Consul for the targeted service and restart the service.Service.CORSConfiguration.EnableCORS
to each service's private configuration file.
"},{"location":"security/Ch-CORS-Settings/#enabling-cors-for-api-gateway","title":"Enabling CORS for API Gateway","text":"The default CORS settings for the API gateway come from the following section in cat cmd/core-common-config-bootstrapper/res/configuration.yaml
in the edgex-go
repository
all-services:\n Service:\n CORSConfiguration:\n EnableCORS: false\n CORSAllowCredentials: false\n CORSAllowedOrigin: \"https://localhost\"\n CORSAllowedMethods: \"GET, POST, PUT, PATCH, DELETE\"\n CORSAllowedHeaders: \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\"\n CORSExposeHeaders: \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\"\n CORSMaxAge: 3600\n
In the Docker configuration if the EDGEX_SERVICE_CORSCONFIGURATION_*
environment variables are set on the security-proxy-setup
microservice, the CORS configuration will be applied to all microservices (EDGEX_SERVICE_CORSCONFIGURATION_ENABLECORS=true
). There is not a way, when using the API gateway, to turn CORS on for one microservice but not another without writing a custom security-proxy-setup
microservice.
Note
The settings under the CORSConfiguration configuration section are the same as those under the Service.CORSConfiguration so please refer to the Common Configuration page to learn the details. Note that these overrides are prefixed with EDGEX_
.
Note
The name of the configuration sections and environment variable overrides are intentionally different than the API gateway section, in alignment with the guidance that CORS should be enabled at the microservice level or the API gateway level, but not both. Thus, the security-enabled overrides are accomplished with EDGEX_SERVICE_CORSCONFIGURATION_*
overrides, and the no-security overrides with SERVICE_CORSCONFIGURATION_*
.
To enable CORS support in the API gateway in the EdgeX Snap, a slightly different procedure is required.
First, we need to override the EDGEX_SERVICE_CORSCONFIGURATION_*
environment variables like was done in Docker. However, we need to override this in the security-bootstrapper-nginx
service. This service runs before nginx.service
to write the NGINX configuration file. If started prior to this configuration, restart the security-bootstrapper-nginx
service to generate a new configuration, and also restart nginx
to put the new configuration into effect. Otherwise, start the services as usual. Lastly, we send a sample CORS preflight request at the API gateway to make sure everything is working.
Note
Setting CORSAllowedOrigin=\"*\"
is not a security best practice for an authenticated API; rather, it should be set to the domain that is hosting your user interface. The example provided is for illustrative purposes only.
Example, assuming the services are running:
$ sudo snap set edgexfoundry apps.security-bootstrapper-nginx.config.edgex-service-corsconfiguration-corsallowedorigin=\"*\"\n$ sudo snap set edgexfoundry apps.security-bootstrapper-nginx.config.edgex-service-corsconfiguration-enablecors=true\n$ sudo snap restart edgexfoundry.security-bootstrapper-nginx\n$ sudo snap restart edgexfoundry.nginx\n$ curl -ki -X OPTIONS -H\"Origin: http://localhost\" \"https://localhost:8443/core-data/api/v2/ping\"\nHTTP/1.1 204 No Content\nServer: nginx\nDate: Wed, 23 Aug 2023 03:08:18 GMT\nConnection: keep-alive\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET, POST, PUT, PATCH, DELETE\nAccess-Control-Allow-Headers: Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\nAccess-Control-Max-Age: 3600\nVary: origin\nContent-Type: text/plain; charset=utf-8\nContent-Length: 0\n
"},{"location":"security/Ch-Configuring-Add-On-Services/","title":"Configuring Add-on Service","text":"In the current EdgeX security serivces, we set up and configure all security related properties and environments for the existing default serivces like core-data
, core-metadata
, device-virtual
, and so on.
The settings and service environment variables are pre-wired and ready to run in secure mode without any update or modification to the Docker-compose files. However, there are some pre-built add-on services like some device services (e.g.device-camera
, device-modbus
), and some of application services (e.g. app-http-export
, app-mqtt-export
) are not pre-wired for by default. Also if you are adding on your custom application service, there is no pre-wiring for it and thus need some configuration efforts to make them run in secure mode.
EdgeX provides a way for a user to add and configure those add-on services into EdgeX Docker software stack running in secure mode. This can be done vai Docker-compose files with a few additional environment variables and some modification of micro-service's Dockerfile. From edgex-compose
repository, the compose-builder
utility provides some ways to deal with those add-on services like through add-security.yml
via make
targets to generate docker-compose
file for running them in secure mode. For more details, please refer to README documentation of compose-builder.
The above same guidelines can also be applied to custom device and application services, i.e. non-EdgeX built services.
One of the major security features in EdgeX Ireland release is to utilize the service security-bootstrapper
to ensure the right starting sequence so that all services have their needed security dependencies when they start up.
Currently EdgeX uses Vault
as the default implementation for secret store and Consul as the configuration and/or registry server if user chooses to do so. There are some default services pre-configured to have Secret Stores
created by default such as EdgeX core/support services, device-virtual, device-rest, and app-rules-engine services.
For running additional add-on services (e.g. device-camera
, app-http-export
) in secure mode, their Secret Stores
are not generated by default but they can be generated through some configuring steps as shown below.
In the following scenario, we assume the EdgeX services are running in Docker environments, and thus the examples are given in terms of Docker-compose ways. It should not be much or bigger difference for snap
running environment to apply the same steps or concepts if found to do so.
If users want to configure and set up an add-on service, e.g. device-camera
, they can achieve this by following the steps that are outlined below:
To use the Docker entrypoint scripts for gating mechanism from security-bootstrapper
, the Dockerfile of device-camera
should inherit shell scripting capability like alpine
-based as the base Docker image and should install dumb-init
(see details in Why you need an init system) via apk add --update
command.
Dockerfile example using alpine-base image and add dumb-init
:
......\nFROM alpine:3.12\n\n# dumb-init needed for injected secure bootstrapping entrypoint script when run in secure mode.\nRUN apk add --update --no-cache dumb-init\n......\n
and then for the service itself should add /edgex-init/ready_to_run_wait_install.sh
as the entrypoint script for the service in gating fashion and add related Docker volumes for edgex-init
and for Secret Store
token, which will be outlined in the next section.
A good example of this will be like app-service-rules
:
...\napp-service-rules:\nentrypoint: [\"/edgex-init/ready_to_run_wait_install.sh\"]\ncommand: \"/app-service-configurable ${DEFAULT_EDGEX_RUN_CMD_PARMS}\"\nvolumes:\n- edgex-init:/edgex-init:ro,z\n- /tmp/edgex/secrets/app-rules-engine:/tmp/edgex/secrets/app-rules-engine:ro,z\ndepends_on:\n- security-bootstrapper\n...\n
Note that we also add command
directive override in the above example because we override Docker's entrypoint script in the original Dockerfile and Docker ignores the original command when the entrypoint script is overridden. In this case, we also override the command
for app-service-rules
service with arguments to execute.
Secret Store
to use","text":"Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
Note that the service key , i.e.device-onvif-camera
, must be used for the Path
and in the TokenFile
path to keep it consistent and easier to maintain. These are now part of the built in default values for the SecretStore configuration. Then the add-on service's service key must be added to the EdgeX service secretstore-setup
'sEDGEX_ADD_SECRETSTORE_TOKENS
environment variable in the environment
section of docker-compose
as the example shown below:
...\nsecretstore-setup:\ncontainer_name: edgex-secretstore-setup\ndepends_on:\n- security-bootstrapper\n- vault\nenvironment:\nEDGEX_ADD_SECRETSTORE_TOKENS: 'device-onvif-camera'\n...\n
With that, secretstore-setup
then will generate Secret Store
token from Vault
and store it in the TokenFile
path specified in the SecretStore configuration.
Also note that the value of EDGEX_ADD_SECRETSTORE_TOKENS
can take more than one service in a form of comma separated list like \"device-camera
, device-modbus
\" if needed.
The EDGEX_ADD_KNOWN_SECRETS
environment variable on secretstore-setup
allows for known secrets to be added to an add-on service's Secret Store
.
For the Ireland release, the only known
secret is the Redis DB credentials
identified by the name redisdb
. Any add-on service needing access to the Redis DB
such as App Service HTTP Export with Store and Forward enabled will need the Redis DB credentials
put in its Secret Store
. Also, since the Redis DB
service is now used for the MessageBus implementation, all services that connect to the MessageBus also need the Redis DB credentials
Note that the steps needed for connecting add-on services to the Secure MessageBus
are:
security-bootstrapper
to ensure proper startup sequenceSecret Store
for the add-on serviceredisdb
's known secret to the add-on service's Secret Store
and if the add-on service is not connecting to the bus or the Redis database, then this step can be skipped.
So given an example for service device-virtual
to use the Redis
message bus in secure mode, we need to tell secretstore-setup
to add the redisdb
known secret to Secret Store
for device-virtual
. This can be done through the configuration of adding redisdb[device-virtual]
into the environment variable EDGEX_ADD_KNOWN_SECRETS
in secretstore-setup
service's environment section, in which redisdb
is the name of the known secret
and device-virtual
is the service key of the add-on service.
...\nsecretstore-setup:\ncontainer_name: edgex-secretstore-setup\ndepends_on:\n- security-bootstrapper\n- vault\nenvironment:\nEDGEX_ADD_SECRETSTORE_TOKENS: 'device-onvif-camera, my-service'\nEDGEX_ADD_KNOWN_SECRETS: redisdb[app-rules-engine],redisdb[device-rest],redisdb[device-virtual]\n...\n
In the above docker-compose
section of secretstore-setup
, we specify the known secret of redisdb
to add/copy the Redis database credentials to the Secret Store
for the app-rules-engine
, device-rest
, and device-virtual
services.
We can also use the alternative or simpler form of EDGEX_ADD_KNOWN_SECRETS
environment variable's value like
EDGEX_ADD_KNOWN_SECRETS: redisdb[app-rules-engine; device-rest; device-virtual]\n
in which all add-on services are put together in a comma separated list associated with the known secret redisdb
.
This is a new step coming from securing Consul
security features as part of EdgeX Ireland release.
If the add-on service uses Consul
as the configuration and/or registry service, then we also need to configure the environment variable EDGEX_ADD_REGISTRY_ACL_ROLES
to tell security-bootstrapper
to generate an ACL role for Consul
to associate with its token.
An example of configuring ACL roles of the registry Consul
for the add-on services device-modbus
and app-http-export
is shown as follows:
...\nconsul:\ncontainer_name: edgex-core-consul\ndepends_on:\n- security-bootstrapper\n- vault\nentrypoint:\n- /edgex-init/consul_wait_install.sh\nenvironment:\nEDGEX_ADD_REGISTRY_ACL_ROLES: app-http-export,device-modbus\n...\n
The configuration of Edgex service consul
's environment variable EDGEX_ADD_REGISTRY_ACL_ROLES
tells the security-bootstrapper
to set up Consul
ACL role so that the ACL token is generated, hence the permission is granted for that service with the access to Consul
in secure mode.
Without this step the add-on service will get status Forbidden
(HTTP status code = 403) error when the service is depending on Consul and attempting to access Consul for configuration or service registry.
If it is desirable to let user or other application services outside EdgeX's Docker network access the endpoint of the add-on service, then we can configure and add it via proxy-setup
service's EDGEX_ADD_PROXY_ROUTE
environment variable. proxy-setup
adds those services listed in that environment variable into the API gateway routes so that the endpoint can be accessible via the gateway.
One example of adding API gateway access routes for both device-camera
and device-modbus
is given as follows:
...\nedgex-proxy:\n...\nenvironment:\n...\nEDGEX_ADD_PROXY_ROUTE: \"device-camera.http://edgex-device-onvif-camera:59984, device-modbus.http://edgex-device-modbus:59901\"\n...\n...\n
where, in the comma-separated list, the first part of each configured value (device-camera
in the first entry) is the service key, and the URL is the service's hostname on the Docker network with its port number (59984
for the ONVIF camera device service). The same pattern applies to device-modbus
and its values.
With that setup, we can then access the endpoints of device-camera
from the host, for example https://<HostName>:8443/device-onvif-camera/{device-name}/name
, assuming the caller can resolve <HostName>
via DNS.
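As a hedged illustration only (the exact route prefix, port, and authentication requirements depend on the EdgeX release and gateway configuration), a request through the gateway from the host might look like the following, where GATEWAY_JWT is a placeholder for a valid gateway access token and /api/v3/ping is assumed as a simple health endpoint:
# -k skips TLS verification for the gateway's self-signed certificate (illustration only)
curl -k -H \"Authorization: Bearer $GATEWAY_JWT\" \
  \"https://<HostName>:8443/device-onvif-camera/api/v3/ping\"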
For more details on the API gateway and how it works, please see the API Gateway documentation page.
"},{"location":"security/Ch-DelayedStartServices/","title":"Delayed-Start Services","text":"In some use cases, it is not possible to deliver a secret store token to an EdgeX microservice at the time the framework is started. This may be because a service is optional, because it is transient (doesn't run all the time), or because it may be difficult to deliver the token generated by security-secretstore-setup.
To accommodate this use case, EdgeX microservices have an ability to obtain their secretstore tokens via SPIFFE workload attestation. Non-core EdgeX microservices have SPIFFE support compiled into their binaries by default, and core services are compiled with a non_delayedstart
build flag which removes this functionality for space reasons. Note that delayed start can be compiled into the core services as well, if desired, via a Makefile
change.
The article Remote Devices in Secure Mode describes how to use the delayed-start feature in a remote device service scenario. A workload attestation agent must be running on every node in order to use delayed start services.
"},{"location":"security/Ch-DelayedStartServices/#how-to-enable-docker","title":"How to Enable (Docker)","text":""},{"location":"security/Ch-DelayedStartServices/#enable-custom-application-or-device-services-optional","title":"Enable Custom Application or Device Services (Optional)","text":"If using EdgeX with custom Application or Device services in Secure mode, first generate a docker-compose.yml file by running the following command from edgex-compose/compose-builder
$ make gen delayed-start\n
Open the generated docker-compose.yml file and set the EDGEX_SPIFFE_CUSTOM_SERVICES
environment variable. To set multiple custom services, use a whitespace delimiter.
security-spire-config:\n...\nenvironment:\n...\nEDGEX_SPIFFE_CUSTOM_SERVICES: '<custom-service> <custom-service-2>'\n
Run the modified Docker Compose file
$ docker compose -p edgex up -d\n
Refer to the configuration steps below to finish setting up any custom/non-core services.
"},{"location":"security/Ch-DelayedStartServices/#running-in-delayed-start-mode","title":"Running in Delayed Start Mode","text":"Using the Docker run scripts, start the framework with the delayed-start
option:
$ make run delayed-start\n
This will cause the following microservices to be started:
Next, pass the following environment variables to any non-core EdgeX microservice that has SPIFFE/SPIRE support compiled-in:
SECRETSTORE_RUNTIMETOKENPROVIDER_ENABLED: \"true\"\nSECRETSTORE_RUNTIMETOKENPROVIDER_HOST: edgex-security-spiffe-token-provider\n
If the configuration is successfully applied the following log messages should appear in the output (device-virtual
service shown):
level=INFO ts=2023-04-04T01:10:04.805777526Z app=device-virtual source=secret.go:196 msg=\"runtime token provider enabled\"\nlevel=INFO ts=2023-04-04T01:10:04.805811012Z app=device-virtual source=methods.go:138 msg=\"using Unix Domain Socket at unix:///tmp/edgex/secrets/spiffe/public/api.sock\"\nlevel=INFO ts=2023-04-04T01:10:04.860221916Z app=device-virtual source=methods.go:150 msg=\"workload got X509 source\"\nlevel=INFO ts=2023-04-04T01:10:04.999743052Z app=device-virtual source=methods.go:120 msg=\"successfully got token from spiffe-token-provider!\"\nlevel=INFO ts=2023-04-04T01:10:04.999984978Z app=device-virtual source=secret.go:93 msg=\"Attempting to create secret client\"\nlevel=INFO ts=2023-04-04T01:10:05.001185555Z app=device-virtual source=secret.go:104 msg=\"Created SecretClient\"\nlevel=INFO ts=2023-04-04T01:10:05.001261424Z app=device-virtual source=secrets.go:277 msg=\"kick off token renewal with interval: 30m0s\"\n
These messages indicate that the workload has been successfully attested, a SPIFFE SVID obtained, and that SVID has been exchanged with the edgex-security-spiffe-token-provider
service for an EdgeX secret store token.
Workload attestation failures are indicated by a hang in the service's log messages:
level=INFO ts=2023-04-04T01:10:04.805777526Z app=device-virtual source=secret.go:196 msg=\"runtime token provider enabled\"\nlevel=INFO ts=2023-04-04T01:10:04.805811012Z app=device-virtual source=methods.go:138 msg=\"using Unix Domain Socket at unix:///tmp/edgex/secrets/spiffe/public/api.sock\"\n
Workload attestation failures can be confirmed by examining edgex-security-spire-agent
logs:
$ docker logs edgex-security-spire-agent\ntime=\"2023-04-04T21:51:58Z\" level=error msg=\"No identity issued\" method=FetchX509SVID pid=87411 registered=false service=WorkloadAPI subsystem_name=endpoints\n
This message is preceded by a set of key-value pairs collected by the agent to identify the workload:
type:\"docker\" value:\"label:com.docker.compose.image:sha256:9ddd29b3453149a799a0ec3549537fa3f59f8ee85eb0e4e5c54febf1b74f0fc4\"\ntype:\"docker\" value:\"label:com.docker.compose.service:app-http-export\"\ntype:\"unix\" value:\"path:/app-service-configurable\"\ntype:\"unix\" value:\"sha256:2c72b9f4a871ff98ba410c292ee97206df8ee584002b34a4d08b6355e686c3d2\"\n
The agent communicates with the server/controller to authorize the workload. The server/controller consults an authorization database that is seeded with a script: https://github.com/edgexfoundry/edgex-go/blob/main/cmd/security-spire-config/seed_builtin_entries.sh
This authorization database can be dumped with the following command:
$ docker exec -ti edgex-security-spire-server spire-server entry show -socketPath /tmp/edgex/secrets/spiffe/private/api.sock\n\nFound ### entries\n...\nEntry ID : 2034b8d2-fa29-48bc-bce1-4e30ea0b66c2\nSPIFFE ID : spiffe://edgexfoundry.org/service/device-virtual\nParent ID : spiffe://edgexfoundry.org/spire/agent/x509pop/cn/agent0\nRevision : 0\nTTL : default\nSelector : docker:label:com.docker.compose.service:device-virtual\nDNS name : edgex-device-virtual\n
The key-value pairs collected by the agent are matched against the Selector
in the authorization database to determine whether an SVID should be generated. The agent returns the authorization decision to the service; an unauthorized service will continue to retry authentication.
Authorization entries may be persistently added to the authorization database by modifying the above script or adding them manually, replacing the CAPITALIZED words with appropriate values:
$ docker exec -ti edgex-security-spire-server spire-server entry create -socketPath /tmp/edgex/secrets/spiffe/private/api.sock -parentID \"spiffe://edgexfoundry.org/spire/agent/x509pop/cn/agent0\" -dns \"SERVICE-DNS-NAME\" -spiffeID \"spiffe://edgexfoundry.org/service/SERVICEKEY\" -selector \"docker:label:com.docker.compose.service:DOCKERCOMPOSESERVICEKEY\"\n
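For instance, a hypothetical add-on service with service key my-custom-device, deployed as the Docker Compose service my-custom-device, could be authorized by instantiating the template above (all names here are illustrative only):
docker exec -ti edgex-security-spire-server spire-server entry create \
  -socketPath /tmp/edgex/secrets/spiffe/private/api.sock \
  -parentID \"spiffe://edgexfoundry.org/spire/agent/x509pop/cn/agent0\" \
  -dns \"edgex-my-custom-device\" \
  -spiffeID \"spiffe://edgexfoundry.org/service/my-custom-device\" \
  -selector \"docker:label:com.docker.compose.service:my-custom-device\"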
"},{"location":"security/Ch-RemoteDeviceServices/","title":"Remote Device Services in Secure Mode","text":"This page describes the remote device service example in the edgex-examples
GitHub repository.
Running a remote device service poses several problems when EdgeX is running in secure mode:
Network traffic between the primary EdgeX node and the remote device service node is unencrypted.
The remote device service will not have a Consul authentication token that allows it to talk to the registry and configuration services.
The remote device service will not have a secret store token that allows access to the EdgeX secret store (which is also needed to obtain a Consul authentication token).
This example resolves the above complications by:
Creating a secure SSH network tunnel between nodes to encrypt network communication.
Using the delayed-start feature introduced in EdgeX Kamakura to lazily obtain a secret store token that grants the device service access to the EdgeX secret store, EdgeX registry service, and EdgeX configuration service.
First, clone the edgex-examples repository
, check out main
, and change to the security/remote_devices/spiffe_and_ssh
directory.
Next, run the generate_keys.sh
script to generate an SSH keypair for the SSH tunnel. This keypair is used only for the SSH tunnel and should have no other privileges.
Once the generate_keys.sh
script has been run, copy the remote
folder to the remote device service machine.
Change directories to the local
folder.
Edit docker-compose.yml
and change the TUNNEL_HOST
environment variable to the IP address of the remote node.
Run
$ docker compose build\n$ docker compose up -d\n
After the framework has been built and is running, check the device-ssh-proxy
service
$ docker ps -a | grep device-ssh-proxy\na92ff2d6999c device-ssh-proxy:latest \"/edgex-init\u2026\" 2 minutes ago Restarting (1) 16 seconds ago edgex-device-ssh-proxy\n$ docker logs device-ssh-proxy\n+ scp -p -o 'StrictHostKeyChecking=no' -o 'UserKnownHostsFile=/dev/null' -P 2223 /srv/spiffe/remote-agent/agent.key 192.168.122.193:/srv/spiffe/remote-agent/agent.key\nssh: connect to host 192.168.122.193 port 2223: Connection refused\nlost connection\n
The SSH connection will continue to fail until the remote node is brought up.
Next, authorize the workload running on the remote node.
$ ./add-server-entry.sh\nEntry ID : f62bfec6-b19c-43ea-94b8-975f7e9a258e\nSPIFFE ID : spiffe://edgexfoundry.org/service/device-virtual\nParent ID : spiffe://edgexfoundry.org/spire/agent/x509pop/cn/remote-agent\nRevision : 0\nTTL : default\nSelector : docker:label:com.docker.compose.service:device-virtual\nDNS name : edgex-device-virtual\n
That is all to be done on the local node.
"},{"location":"security/Ch-RemoteDeviceServices/#on-the-remote-machine","title":"On the Remote Machine","text":"Change directories to the remote
folder and run
$ docker compose build\n$ docker compose up -d\n
After the framework has been built and is running for about a minute, check the device-virtual
service
$ docker logs -f edgex-device-virtual\nlevel=INFO ts=2022-05-05T14:28:30.005673094Z app=device-virtual source=config.go:391 msg=\"Loaded service configuration from ./res/configuration.yaml\"\nlevel=INFO ts=2022-05-05T14:28:30.006211643Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Port' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_PORT=59841\"\nlevel=INFO ts=2022-05-05T14:28:30.006286584Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Protocol' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_PROTOCOL=https\"\nlevel=INFO ts=2022-05-05T14:28:30.006341968Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Clients.core-metadata.Host' by environment variable: CLIENTS_CORE_METADATA_HOST=edgex-core-metadata\"\nlevel=INFO ts=2022-05-05T14:28:30.006382102Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'MessageQueue.Host' by environment variable: MESSAGEQUEUE_HOST=edgex-redis\"\nlevel=INFO ts=2022-05-05T14:28:30.006416098Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.EndpointSocket' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_ENDPOINTSOCKET=/tmp/edgex/secrets/spiffe/public/api.sock\"\nlevel=INFO ts=2022-05-05T14:28:30.006457406Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.RequiredSecrets' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_REQUIREDSECRETS=redisdb\"\nlevel=INFO ts=2022-05-05T14:28:30.006495791Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Enabled' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_ENABLED=true\"\nlevel=INFO ts=2022-05-05T14:28:30.006529808Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Host' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_HOST=edgex-security-spiffe-token-provider\"\nlevel=INFO ts=2022-05-05T14:28:30.006575741Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Clients.core-data.Host' by environment variable: CLIENTS_CORE_DATA_HOST=edgex-core-data\"\nlevel=INFO ts=2022-05-05T14:28:30.006617026Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.Host' by environment variable: SECRETSTORE_HOST=edgex-vault\"\nlevel=INFO ts=2022-05-05T14:28:30.006650922Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.Port' by environment variable: SECRETSTORE_PORT=8200\"\nlevel=INFO ts=2022-05-05T14:28:30.006691769Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.TrustDomain' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_TRUSTDOMAIN=edgexfoundry.org\"\nlevel=INFO ts=2022-05-05T14:28:30.006729711Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Service.Host' by environment variable: SERVICE_HOST=edgex-device-virtual\"\nlevel=INFO ts=2022-05-05T14:28:30.006764754Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Registry.Host' by environment variable: REGISTRY_HOST=edgex-core-consul\"\nlevel=INFO ts=2022-05-05T14:28:30.006904867Z app=device-virtual source=secret.go:55 msg=\"Creating SecretClient\"\nlevel=INFO ts=2022-05-05T14:28:30.006953018Z app=device-virtual source=secret.go:62 msg=\"Reading secret store 
configuration and authentication token\"\nlevel=INFO ts=2022-05-05T14:28:30.006994824Z app=device-virtual source=secret.go:165 msg=\"runtime token provider enabled\"\nlevel=INFO ts=2022-05-05T14:28:30.007064786Z app=device-virtual source=methods.go:138 msg=\"using Unix Domain Socket at unix:///tmp/edgex/secrets/spiffe/public/api.sock\"\n
If the workload was not authorized on the local side, the output will stop as shown above. The service would be hung waiting for a SPIFFE authentication token.
Since the local site was stuck in a retry loop trying to establish an SSH connection to the remote, the service may stay stuck in this state for several minutes until the network tunnels are established.
Otherwise the log would continue as follows:
level=INFO ts=2022-05-05T14:29:25.078483584Z app=device-virtual source=methods.go:150 msg=\"workload got X509 source\"\nlevel=INFO ts=2022-05-05T14:29:25.168325689Z app=device-virtual source=methods.go:120 msg=\"successfully got token from spiffe-token-provider!\"\nlevel=INFO ts=2022-05-05T14:29:25.169095621Z app=device-virtual source=secret.go:80 msg=\"Attempting to create secret client\"\nlevel=INFO ts=2022-05-05T14:29:25.172259336Z app=device-virtual source=secret.go:91 msg=\"Created SecretClient\"\nlevel=INFO ts=2022-05-05T14:29:25.172359472Z app=device-virtual source=secret.go:96 msg=\"SecretsFile not set, skipping seeding of service secrets.\"\nlevel=INFO ts=2022-05-05T14:29:25.172539631Z app=device-virtual source=secrets.go:276 msg=\"kick off token renewal with interval: 30m0s\"\nlevel=INFO ts=2022-05-05T14:29:25.172433598Z app=device-virtual source=config.go:551 msg=\"Using local configuration from file (14 envVars overrides applied)\"\nlevel=INFO ts=2022-05-05T14:29:25.172916142Z app=device-virtual source=httpserver.go:131 msg=\"Web server starting (edgex-device-virtual:59900)\"\nlevel=INFO ts=2022-05-05T14:29:25.172948285Z app=device-virtual source=messaging.go:69 msg=\"Setting options for secure MessageBus with AuthMode='usernamepassword' and SecretName='redisdb\"\nlevel=INFO ts=2022-05-05T14:29:25.174321296Z app=device-virtual source=messaging.go:97 msg=\"Connected to redis Message Bus @ redis://edgex-redis:6379 publishing on 'edgex/events/device' prefix topic with AuthMode='usernamepassword'\"\nlevel=INFO ts=2022-05-05T14:29:25.174585076Z app=device-virtual source=init.go:135 msg=\"Check core-metadata service's status by ping...\"\nlevel=INFO ts=2022-05-05T14:29:25.176202842Z app=device-virtual source=init.go:54 msg=\"Service clients initialize successful.\"\nlevel=INFO ts=2022-05-05T14:29:25.176377929Z app=device-virtual source=clients.go:124 msg=\"Using configuration for URL for 'core-metadata': http://edgex-core-metadata:59881\"\nlevel=INFO ts=2022-05-05T14:29:25.176559116Z app=device-virtual source=clients.go:124 msg=\"Using configuration for URL for 'core-data': http://edgex-core-data:59880\"\nlevel=INFO ts=2022-05-05T14:29:25.176806351Z app=device-virtual source=restrouter.go:55 msg=\"Registering v2 routes...\"\nlevel=INFO ts=2022-05-05T14:29:25.192658275Z app=device-virtual source=service.go:230 msg=\"device service device-virtual exists, updating it\"\nlevel=INFO ts=2022-05-05T14:29:25.195403199Z app=device-virtual source=profiles.go:54 msg=\"Loading pre-defined profiles from /res/profiles\"\nlevel=INFO ts=2022-05-05T14:29:25.197297762Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Binary-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.240099318Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Boolean-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.24221092Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Float-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.245516797Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Integer-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.250310838Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-UnsignedInteger-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.250961547Z app=device-virtual source=devices.go:49 msg=\"Loading pre-defined devices from /res/devices\"\nlevel=INFO ts=2022-05-05T14:29:25.252216571Z app=device-virtual 
source=devices.go:85 msg=\"Device Random-Boolean-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252274853Z app=device-virtual source=devices.go:85 msg=\"Device Random-Integer-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252290321Z app=device-virtual source=devices.go:85 msg=\"Device Random-UnsignedInteger-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252297541Z app=device-virtual source=devices.go:85 msg=\"Device Random-Float-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252304305Z app=device-virtual source=devices.go:85 msg=\"Device Random-Binary-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252698155Z app=device-virtual source=autodiscovery.go:33 msg=\"AutoDiscovery stopped: disabled by configuration\"\nlevel=INFO ts=2022-05-05T14:29:25.252726349Z app=device-virtual source=autodiscovery.go:42 msg=\"AutoDiscovery stopped: ProtocolDiscovery not implemented\"\nlevel=INFO ts=2022-05-05T14:29:25.252736451Z app=device-virtual source=message.go:50 msg=\"Service dependencies resolved...\"\nlevel=INFO ts=2022-05-05T14:29:25.252804946Z app=device-virtual source=message.go:51 msg=\"Starting device-virtual main \"\nlevel=INFO ts=2022-05-05T14:29:25.252817404Z app=device-virtual source=message.go:55 msg=\"device virtual started\"\nlevel=INFO ts=2022-05-05T14:29:25.252880346Z app=device-virtual source=message.go:58 msg=\"Service started in: 55.248960914s\"\n
At this point, the remote device service is up and running in secure mode.
"},{"location":"security/Ch-RemoteDeviceServices/#ssh-tunneling-explained","title":"SSH Tunneling Explained","text":"In this example, SSH port forwarding is used to establish an encrypted network channel between the local and remote nodes. The local machine as the primary host is running the whole EdgeX core services including core services and security services but without any device service. The device services are running on the remote machine.
The SSH communication is established by introducing some extra SSH-related services:
1) device-ssh-proxy
. This service runs on the local machine and is an SSH client that initiates communication with the remote node. The device-ssh-proxy
service has the private key needed to establish the network connection and also authorizes the network tunnels.
2) sshd-remote
. This service runs on the remote machine and provides an SSH server for the purpose of establishing network communication with the remote device service.
Running sshd
in Docker is a container anti-pattern, as one can enter a container for remote administration using docker exec
. In this use case, however, we are not using sshd
for remote administration, but instead to set up a network tunnel.
For an example of how to run an SSH server in Docker, check out the SPIFFE and SSH example for detailed instructions.
The generate-keys.sh
helper script generates an RSA keypair and copies the authorized_keys
file into the remote/sshd-remote
folder. The sample's Dockerfile
then builds this key into the remote sshd
container image and uses it for authentication. The private key remains on the local machine and is bind-mounted into the device-ssh-proxy
service's container.
In this use case, we want to impersonate a device service that is running on a remote machine. We use local port forwarding to receive inbound requests on the device service's port and forward that traffic through the SSH tunnel to a remote host and port. The -L flag of the ssh command accomplishes this.
ssh -N \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-L *:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \\\n-p $TUNNEL_SSH_PORT \\\n$TUNNEL_HOST
where environment variables are:
TUNNEL_HOST
is the remote host name or IP address that SSH daemon or server is running on;
TUNNEL_SSH_PORT
is the port number used for the SSH tunnel communication between the local machine and the remote machine;
SERVICE_PORT
is the port number on the local (primary) machine to be forwarded to the remote machine; without loss of generality, the port number on the remote machine is the same as the local one;
SERVICE_HOST
is the service host name or IP address of the Docker containers that are running on the remote machine
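As a concrete sketch using values that appear elsewhere in this example (the remote node 192.168.122.193 and tunnel port 2223 from the device-ssh-proxy log above, and the device-virtual service on port 59900), the forwarding command expands to roughly:
# Receive local requests on port 59900 and forward them through the tunnel
# to edgex-device-virtual:59900 as resolved on the remote side
ssh -N \
  -o StrictHostKeyChecking=no \
  -o UserKnownHostsFile=/dev/null \
  -L *:59900:edgex-device-virtual:59900 \
  -p 2223 \
  192.168.122.193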
In order to make the other containers aware of the port forwarding, the docker-compose.yml
is configured so that the device-ssh-proxy
service impersonates edgex-device-virtual
on the local Docker network.
device-ssh-proxy:\nimage: device-ssh-proxy:latest\nnetworks:\nedgex-network:\naliases:\n- edgex-device-virtual\n
The port-forwarding is transparent to the EdgeX services running on the local machine.
"},{"location":"security/Ch-RemoteDeviceServices/#remote-port-forwarding","title":"Remote Port Forwarding","text":"This step is to show the reverse direction of SSH tunneling: from the remote back to the local machine.
Reverse SSH tunneling is also needed because the device services depend on core services like core-data
, core-metadata
, Redis (for message queuing), Vault (for the secret store), and Consul (for registry and configuration). These core services run on the local machine and are reverse-tunneled so that they appear local on the remote machine. Essentially, the sshd
container impersonates these services on the remote side. This is achieved with the -R
flag of the ssh command. Extending the previous example:
ssh -N \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-L *:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \\\n-R 0.0.0.0:$SECRETSTORE_PORT:$SECRETSTORE_HOST:$SECRETSTORE_PORT \\\n-R 0.0.0.0:6379:$MESSAGEQUEUE_HOST:6379 \\\n-R 0.0.0.0:8500:$REGISTRY_HOST:8500 \\\n-R 0.0.0.0:5563:$CLIENTS_CORE_DATA_HOST:5563 \\\n-R 0.0.0.0:59880:$CLIENTS_CORE_DATA_HOST:59880 \\\n-R 0.0.0.0:59881:$CLIENTS_CORE_METADATA_HOST:59881 \\\n-R 0.0.0.0:$SECURITY_SPIRE_SERVER_PORT:$SECURITY_SPIRE_SERVER_HOST:$SECURITY_SPIRE_SERVER_PORT \\\n-R 0.0.0.0:$SECRETSTORE_RUNTIMETOKENPROVIDER_PORT:$SECRETSTORE_RUNTIMETOKENPROVIDER_HOST:$SECRETSTORE_RUNTIMETOKENPROVIDER_PORT \\\n-p $TUNNEL_SSH_PORT \\\n$TUNNEL_HOST
As was done on the local side, the remote side does the same in reverse, masquerading on the network as the core services needed by the device services:
sshd-remote:\nimage: edgex-sshd-remote:latest\nnetworks:\nedgex-network:\naliases:\n- edgex-core-consul\n- edgex-core-data\n- edgex-core-metadata\n- edgex-redis\n- edgex-security-spire-server\n- edgex-security-spiffe-token-provider\n- edgex-vault\n
"},{"location":"security/Ch-RemoteDeviceServices/#security-edgex-secret-store-token","title":"Security: EdgeX Secret Store Token","text":"Beyond port forwarding, extra steps need to be taken to enable the remote device service to use SPIFFE/SPIRE to obtain a token for the EdgeX secret store.
"},{"location":"security/Ch-RemoteDeviceServices/#local-side","title":"Local side","text":"On the local machine side, the device-ssh-proxy
service has some initialization code inserted into its entrypoint script. It is done this way to facilitate ease-of-use for the example. In a production deployment this should be done out-of-band.
# Wait for agent CA creation\n\nwhile test ! -f \"/srv/spiffe/ca/public/agent-ca.crt\"; do\necho \"Waiting for /srv/spiffe/ca/public/agent-ca.crt\"\nsleep 1\ndone\n\n# Pre-create remote agent certificate\n\nif test ! -f \"/srv/spiffe/remote-agent/agent.crt\"; then\nopenssl ecparam -genkey -name secp521r1 -noout -out \"/srv/spiffe/remote-agent/agent.key\"\nSAN=\"\" openssl req -subj \"/CN=remote-agent\" -config \"/usr/local/etc/openssl.conf\" -key \"/srv/spiffe/remote-agent/agent.key\" -sha512 -new -out \"/run/agent.req.$$\"\nSAN=\"\" openssl x509 -sha512 -extfile /usr/local/etc/openssl.conf -extensions agent_ext -CA \"/srv/spiffe/ca/public/agent-ca.crt\" -CAkey \"/srv/spiffe/ca/private/agent-ca.key\" -CAcreateserial -req -in \"/run/agent.req.$$\" -days 3650 -out \"/srv/spiffe/remote-agent/agent.crt\"\nrm -f \"/run/agent.req.$$\"\nfi\n\n\nwhile true; do\nscp -p \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-P $TUNNEL_SSH_PORT \\\n/srv/spiffe/remote-agent/agent.key $TUNNEL_HOST:/srv/spiffe/remote-agent/agent.key\n scp -p \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-P $TUNNEL_SSH_PORT \\\n/srv/spiffe/remote-agent/agent.crt $TUNNEL_HOST:/srv/spiffe/remote-agent/agent.crt\n scp -p \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-P $TUNNEL_SSH_PORT \\\n/tmp/edgex/secrets/spiffe/trust/bundle $TUNNEL_HOST:/tmp/edgex/secrets/spiffe/trust/bundle ssh \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-p $TUNNEL_SSH_PORT \\\n$TUNNEL_HOST -- \\\nchown -Rh 2002:2001 /tmp/edgex/secrets/spiffe\n\n ...\n
The one-time setup generates a new agent key and a certificate signed by the agent CA, which enables the SPIRE server to trust the new agent. There is also automation to copy the certificate and private key to the remote node as part of SSH session establishment. This entire flow could be done as an out-of-band process.
The last part, which is to copy the current trust bundle to the remote node as part of SSH session establishment, should be left as-is, as the trust bundle is on a temp file system and might be cleaned between reboots.
"},{"location":"security/Ch-RemoteDeviceServices/#remote-side","title":"Remote side","text":"On the remote side, the SPIRE agent looks mostly like the local side SPIRE agent, except that the paths are different, and there is a delay loop waiting for the agent key and certificate to be copied to the node via the above process.
The requirements for the remote side are:
The SPIRE server must be able to establish trust in the agent. There are many mechanisms available to do this. The example uses a public key infrastructure to establish trust.
The SPIRE agent must have network connectivity with the SPIRE server. This is provided by the SSH reverse proxy tunnel.
The easiest way to test the setup is to make a call from the local machine to the remote device-virtual
service:
$ curl -s http://127.0.0.1:59900/api/v3/config | jq\n{\n\"apiVersion\" : \"v3\",\n \"config\": {\n\"Writable\": {\n\"LogLevel\": \"INFO\",\n \"InsecureSecrets\": {\n\"DB\": {\n\"Path\": \"redisdb\",\n \"Secrets\": {\n\"password\": \"\",\n \"username\": \"\"\n}\n}\n},\n \"Reading\": {\n\"ReadingUnits\": true\n}\n},\n \"Clients\": {\n\"core-data\": {\n\"Host\": \"edgex-core-data\",\n \"Port\": 59880,\n \"Protocol\": \"http\"\n},\n \"core-metadata\": {\n\"Host\": \"edgex-core-metadata\",\n \"Port\": 59881,\n \"Protocol\": \"http\"\n}\n},\n \"Registry\": {\n\"Host\": \"edgex-core-consul\",\n \"Port\": 8500,\n \"Type\": \"consul\"\n},\n \"Service\": {\n\"HealthCheckInterval\": \"10s\",\n \"Host\": \"edgex-device-virtual\",\n \"Port\": 59900,\n \"ServerBindAddr\": \"\",\n \"StartupMsg\": \"device virtual started\",\n \"MaxResultCount\": 0,\n \"MaxRequestSize\": 0,\n \"RequestTimeout\": \"5s\",\n \"CORSConfiguration\": {\n\"EnableCORS\": false,\n \"CORSAllowCredentials\": false,\n \"CORSAllowedOrigin\": \"https://localhost\",\n \"CORSAllowedMethods\": \"GET, POST, PUT, PATCH, DELETE\",\n \"CORSAllowedHeaders\": \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\",\n \"CORSExposeHeaders\": \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\",\n \"CORSMaxAge\": 3600\n}\n},\n \"Device\": {\n\"DataTransform\": true,\n \"MaxCmdOps\": 128,\n \"MaxCmdValueLen\": 256,\n \"ProfilesDir\": \"./res/profiles\",\n \"DevicesDir\": \"./res/devices\",\n \"Discovery\": {\n\"Enabled\": false,\n \"Interval\": \"30s\"\n},\n \"AsyncBufferSize\": 16,\n \"EnableAsyncReadings\": true,\n \"Labels\": [],\n \"UseMessageBus\": true\n},\n \"Driver\": {},\n \"SecretStore\": {\n\"Type\": \"vault\",\n \"Host\": \"edgex-vault\",\n \"Port\": 8200,\n \"Path\": \"device-virtual/\",\n \"Protocol\": \"http\",\n \"Namespace\": \"\",\n \"RootCaCertPath\": \"\",\n \"ServerName\": \"\",\n \"Authentication\": {\n\"AuthType\": \"X-Vault-Token\",\n \"AuthToken\": \"\"\n},\n \"TokenFile\": \"/tmp/edgex/secrets/device-virtual/secrets-token.json\",\n \"SecretsFile\": \"\",\n \"DisableScrubSecretsFile\": false,\n \"RuntimeTokenProvider\": {\n\"Enabled\": true,\n \"Protocol\": \"https\",\n \"Host\": \"edgex-security-spiffe-token-provider\",\n \"Port\": 59841,\n \"TrustDomain\": \"edgexfoundry.org\",\n \"EndpointSocket\": \"/tmp/edgex/secrets/spiffe/public/api.sock\",\n \"RequiredSecrets\": \"redisdb\"\n}\n},\n \"MessageQueue\": {\n\"Type\": \"redis\",\n \"Protocol\": \"redis\",\n \"Host\": \"edgex-redis\",\n \"Port\": 6379,\n \"PublishTopicPrefix\": \"edgex/events/device\",\n \"SubscribeTopic\": \"\",\n \"AuthMode\": \"usernamepassword\",\n \"SecretName\": \"redisdb\",\n \"Optional\": {\n\"AutoReconnect\": \"true\",\n \"ClientId\": \"device-virtual\",\n \"ConnectTimeout\": \"5\",\n \"KeepAlive\": \"10\",\n \"Password\": \"(redacted)\",\n \"Qos\": \"0\",\n \"Retained\": \"false\",\n \"SkipCertVerify\": \"false\",\n \"Username\": \"redis5\"\n},\n \"SubscribeEnabled\": false\n},\n \"MaxEventSize\": 0\n},\n \"serviceName\": \"device-virtual\"\n}\n
"},{"location":"security/Ch-SecretProviderApi/","title":"Secret Provider API","text":""},{"location":"security/Ch-SecretProviderApi/#introduction","title":"Introduction","text":"The SecretProvider API is available to custom Application and Device Services to access the service's Secret Store. This API is available in both secure and non-secure modes. When in secure mode, it provides access to the service's Secret Store in Vault, otherwise it uses the service's [InsecureSecrets]
configuration section as the Secret Store. See InsecureSecrets section here for more details.
type SecretProvider interface {\nStoreSecret(secretName string, secrets map[string]string) error\nGetSecret(secretName string, keys ...string) (map[string]string, error)\nHasSecret(secretName string) (bool, error)\nListSecretNames() ([]string, error)\nSecretsLastUpdated() time.Time\nRegisterSecretUpdatedCallback(secretName string, callback func(secretName string)) error\nDeregisterSecretUpdatedCallback(secretName string)\n}\n
"},{"location":"security/Ch-SecretProviderApi/#storesecret","title":"StoreSecret","text":"StoreSecret(secretName string, secrets map[string]string) error
Stores new secrets into the service's SecretStore at the specified secretName
. An error is returned if the secrets can not be stored.
Note
This API is only valid to call when in secure mode. It will return an error when in non-secure mode. Insecure Secrets should be added/updated directly in the configuration file or via the Configuration Provider (aka Consul).
"},{"location":"security/Ch-SecretProviderApi/#getsecret","title":"GetSecret","text":"GetSecret(secretName string, keys ...string) (map[string]string, error)
Retrieves the secrets from the service's SecretStore for the specified secretName
. The list of keys is optional and limits the secret data returned to just those keys specified, otherwise all keys are returned. An error is returned if the secretName
doesn't exist in the service's Secret Store or if one or more of the optional keys specified are not present.
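As a minimal sketch of using this call from Go (assuming the service already holds a SecretProvider reference, e.g. supplied by the SDK; a local interface subset is declared here purely so the sketch stands on its own):
package example

import \"fmt\"

// secretReader is a local subset of the EdgeX SecretProvider interface,
// declared only so this sketch compiles by itself; a real service would
// use the SecretProvider supplied by the SDK / bootstrap code.
type secretReader interface {
	GetSecret(secretName string, keys ...string) (map[string]string, error)
}

// readCredentials fetches just the username and password keys stored
// under the secret name \"credentials001\".
func readCredentials(sp secretReader) (string, string, error) {
	secrets, err := sp.GetSecret(\"credentials001\", \"username\", \"password\")
	if err != nil {
		return \"\", \"\", fmt.Errorf(\"failed to read credentials001: %w\", err)
	}
	return secrets[\"username\"], secrets[\"password\"], nil
}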
HasSecret(secretName string) (bool, error)
Returns true if the service's Secret Store contains a secret at the specified secretName
. An error is returned if the Secret Store can not be accessed.
ListSecretNames() ([]string, error)
Returns a list of secret names from the current service's Secret Store. An error is returned if the Secret Store can not be accessed.
"},{"location":"security/Ch-SecretProviderApi/#secretslastupdated","title":"SecretsLastUpdated","text":"SecretsLastUpdated() time.Time
Returns the timestamp of the last time the service's secrets were updated in its Secret Store. This is useful when an external client is initialized with a secret and needs to be recreated if the secret has changed.
"},{"location":"security/Ch-SecretProviderApi/#registersecretupdatedcallback","title":"RegisterSecretUpdatedCallback","text":"RegisterSecretUpdatedCallback(secretName string, callback func(secretName string)) error\n
Registers a callback for when the specified secretName
is added or updated. The secretName
that changed is provided as an argument to the callback so that the same callback can be utilized for multiple secrets if desired.
Note
The constant value secret.WildcardName
can be used to register a callback for when any secret has changed. The actual secretName
that changed will be passed to the callback. Note that the callbacks set for a specific secretName
are given a higher precedence over wildcard ones, and will be called instead of the wildcard one if both are present.
Note
This function will return an error if there is already a callback registered for the specified secretName
. Please call DeregisterSecretUpdatedCallback
first before attempting to register a new one.
DeregisterSecretUpdatedCallback(secretName string)\n
Removes the registered callback for the specified secretName
. If none exist, this is a no-op.
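A similar hedged sketch for the callback API (again using a local interface subset rather than the SDK type), registering a callback that rebuilds a client whenever the redisdb secret is added or updated:
package example

import \"log\"

// callbackRegistrar is a local subset of the SecretProvider interface
// containing only the methods this sketch uses.
type callbackRegistrar interface {
	RegisterSecretUpdatedCallback(secretName string, callback func(secretName string)) error
	DeregisterSecretUpdatedCallback(secretName string)
}

// watchRedisCredentials registers a callback for the \"redisdb\" secret;
// rebuild is invoked each time that secret changes.
func watchRedisCredentials(sp callbackRegistrar, rebuild func()) error {
	return sp.RegisterSecretUpdatedCallback(\"redisdb\", func(secretName string) {
		log.Printf(\"secret %q updated, rebuilding client\", secretName)
		rebuild()
	})
}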
There are all kinds of secrets used within EdgeX Foundry micro services, such as tokens, passwords, certificates etc. The secret store serves as the central repository to keep these secrets. The developers of other EdgeX Foundry micro services utilize the secret store to create, store and retrieve secrets relevant to their corresponding micro services.
Currently the EdgeX Foundry secret store is implemented with Vault, a HashiCorp open source software product.
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, database credentials, service credentials, or certificates. Vault provides a unified interface to any secret, while providing tight access control and multiple authentication mechanisms (token, LDAP, etc.). Additionally, Vault supports pluggable \"secrets engines\". EdgeX uses the Consul secrets engine to allow Vault to issue Consul access tokens to EdgeX microservices.
In EdgeX, Vault's storage backend is the host file system.
"},{"location":"security/Ch-SecretStore/#start-the-secret-store","title":"Start the Secret Store","text":"The EdgeX secret store is started by default when using the secure version of the Docker Compose scripts found at https://github.com/edgexfoundry/edgex-compose/tree/ireland.
The command to start EdgeX with the secret store enabled is:
git clone -b ireland https://github.com/edgexfoundry/edgex-compose\nmake run\n
or
git clone -b ireland https://github.com/edgexfoundry/edgex-compose\nmake run arm64\n
The EdgeX secret store is not started if EdgeX is started with security features disabled by appending no-secty
to the previous commands. This disables all EdgeX security features, not just the API gateway.
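For example (a sketch assuming the same compose-builder targets), the non-secure variants would be:
make run no-secty
# or on ARM64
make run arm64 no-secty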
Documentation on how the EdgeX security store is sequenced with respect to all of the other EdgeX services is covered in the Secure Bootstrapping of EdgeX Architecture Decision Record(ADR).
"},{"location":"security/Ch-SecretStore/#using-the-secret-store","title":"Using the Secret Store","text":""},{"location":"security/Ch-SecretStore/#preferred-approach","title":"Preferred Approach","text":"The preferred approach for interacting with the EdgeX secret store is to use the SecretClient
interface in go-mod-secrets.
Each EdgeX microservice has access to a StoreSecrets()
method that allows setting of per-microservice secrets, and a GetSecrets()
method to read them back.
If manual \"super-user\" to the EdgeX secret store is required, it is necesary to obtain a privileged access token, called the Vault root token.
"},{"location":"security/Ch-SecretStore/#obtaining-the-vault-root-token","title":"Obtaining the Vault Root Token","text":"For security reasons (the Vault production hardening guide recommends revokation of the root token), the Vault root token is revoked by default. EdgeX automatically manages the secrets required by the framework, and provides a programmatic interface for individual microservices to interact with their partition of the secret store.
If global access to the secret store is required, it is necessary to obtain a copy of the Vault root token using the below recommended procedure. Note that following this procedure directly contradicts the Vault production hardening guide. Since the root token cannot be un-revoked, the framework must be started for the first time with root token revokation disabled.
Shut down the entire framework and remove the Docker persistent volumes using make clean
in edgex-compose
or docker volume prune
after stopping all the containers. Optionally remove /tmp/edgex
as well to clean the shared secrets volume.
Edit docker-compose.yml
and add an environment variable override for SECRETSTORE_REVOKEROOTTOKENS
secretstore-setup:\nenvironment:\nSECRETSTORE_REVOKEROOTTOKENS: \"false\"\n
Start EdgeX using make run
or some other mechanism.
Reveal the contents of the resp-init.json
file stored in a Docker volume.
docker run --rm -ti -v edgex_vault-config:/vault/config:ro alpine:latest cat /vault/config/assets/resp-init.json\n
root_token
field value from the resulting JSON output.As an alternative to overriding SECRETSTORE_REVOKEROOTTOKENS
from the beginning, it is possible to regenerate the root token from the Vault unseal keys in resp-init.json
using the Vault's documented procedure. The EdgeX framework executes this process internally whenever it requires root token capability. Note that a token created in this manner will again be revoked the next time EdgeX is restarted if SECRETSTORE_REVOKEROOTTOKENS
remains set to its default value: all root tokens are revoked every time the framework is started if SECRETSTORE_REVOKEROOTTOKENS
is true
.
Execute a shell session in the running Vault container:
docker exec -it edgex-vault sh -l\n
Login to Vault using Vault CLI and the gathered Root Token:
edgex-vault:/# vault login s.ULr5bcjwy8S0I5g3h4xZ5uWa\nSuccess! You are now authenticated. The token information displayed below\nis already stored in the token helper. You do NOT need to run \"vault login\"\nagain. Future Vault requests will automatically use this token.\n\nKey Value\n--- -----\ntoken s.ULr5bcjwy8S0I5g3h4xZ5uWa\ntoken_accessor Kv5FUhT2XgN2lLu8XbVxJI0o\ntoken_duration \u221e\ntoken_renewable false\ntoken_policies [\"root\"]\nidentity_policies []\npolicies [\"root\"]\n
Perform an introspection lookup
on the current token login. This proves the token works and is valid.
edgex-vault:/# vault token lookup\nKey Value\n--- -----\naccessor Kv5FUhT2XgN2lLu8XbVxJI0o\ncreation_time 1623371879\ncreation_ttl 0s\ndisplay_name root\nentity_id n/a\nexpire_time <nil>\nexplicit_max_ttl 0s\nid s.ULr5bcjwy8S0I5g3h4xZ5uWa\nmeta <nil>\nnum_uses 0\norphan true\npath auth/token/root\npolicies [root]\nttl 0s\ntype service\n
!!! Note: The Root Token is the only token that has no expiration enforcement rules (Time to Live TTL counter).
As an example, let's poke around and spy on the Redis database password:
edgex-vault:/# vault list secret \n\nKeys\n----\nedgex/\n\nedgex-vault:/# vault list secret/edgex\nKeys\n----\napp-rules-engine/\ncore-command/\ncore-data/\ncore-metadata/\ndevice-rest/\ndevice-virtual/\nsecurity-bootstrapper-redis/\nsupport-notifications/\nsupport-scheduler/\n\nedgex-vault:/# vault list secret/edgex/core-data\nKeys\n----\nredisdb\n\nedgex-vault:/# vault read secret/edgex/core-data/redisdb\nKey Value\n--- -----\nrefresh_interval 168h\npassword 9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3\nusername redis5\n
With the root token, it is possible to modify any Vault setting. See the Vault manual for available commands.
"},{"location":"security/Ch-SecretStore/#use-the-vault-rest-api","title":"Use the Vault REST API","text":"Vault also supports a REST API with functionality equivalent to the command line interface:
The equivalent of the
vault read secret/edgex/core-data/redisdb\n
command looks like the following using the REST API:
Displaying (GET) the redis credentials from Core Data's secret store:
curl -s -H 'X-Vault-Token: s.ULr5bcjwy8S0I5g3h4xZ5uWa' http://localhost:8200/v1/secret/edgex/core-data/redisdb | python -m json.tool\n{\n \"request_id\": \"9d28ffe0-6b25-c0a8-e395-9fbc633f20cc\",\n \"lease_id\": \"\",\n \"renewable\": false,\n \"lease_duration\": 604800,\n \"data\": {\n \"password\": \"9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3\",\n \"username\": \"redis5\"\n },\n \"wrap_info\": null,\n \"warnings\": null,\n \"auth\": null\n}\n
See HashiCorp Vault API documentation for further details on syntax and usage (https://developer.hashicorp.com/vault/api-docs).
"},{"location":"security/Ch-SecretStore/#using-the-vault-web-ui","title":"Using the Vault Web UI","text":"The Vault Web UI is not exposed via the API gateway. It must therefore be accessed via localhost
or a network tunnel of some kind.
Open a browser session on http://localhost:8200
and sign-in with the Root Token.
Upper left corner of the current Vault UI session, the sign-out menu displaying the current token name:
Select the Vault secret backend, and navigate to any secret that is of interest:
The Vault UI also allows entering Vault CLI commands (see above 1st alternative) using an embedded console:
"},{"location":"security/Ch-SecretStore/#see-also","title":"See also","text":"Some of the command used in implementing security services have man-style documentation:
In the current EdgeX architecture, Consul
is pre-wired as the default agent service for Service Configuration
, Service Registry
, and Service Health Check
purposes. Prior to EdgeX's Ireland release, the communication to Consul
uses plain HTTP calls without any access control (ACL) token header and thus are insecure. With the Ireland release, that situation is now improved by adding required ACL token header X-Consul-Token
in any HTTP calls. Moreover, Consul
itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. In other words, with the required Consul's ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in Key-Value (KV) store are now better protected.
In this documentation, we will highlight some major features incorporated into EdgeX framework system for Securing Consul
, including how the Consul
token is generated via the integration of secret store management system Vault
with Consul
via Vault's Consul Secrets Engine APIs. Also a brief overview on how Consul token is governed by Vault using Consul's ACL policy associated with a Vault role for that token is given. Finally, EdgeX provides an easy way for getting Consul token from edgex-compose
's compose-builder
utility for better developer experiences.
In order to reduce another token generation system to maintain, we utilize the Vault's feature of Consul Secrets Engine
APIs, governed by Vault itself, and integrated with Consul. Consul service itself provides ACL system and is enabled via Consul's configuration settings like:
acl = {\nenabled = true\ndefault_policy = \"deny\"\nenable_token_persistence = true\n}\n
and this is set as part of EdgeX security-bootstrapper
service's process. Note that the default ACL policy is set to \"deny\" so that anything is not listed in the ACL list will get access denied by nature. The flag enable_token_persistence
is related to the persistence of Consul's agent token and is set to true so as to re-use the same agent token when EdgeX system restarts again.
During the process of Consul bootstrapping, the first main step of security-bootstrapper
for Consul is to bootstrap Consul's ACL system with Consul's API endpoint /acl/bootstrap
.
Once Consul's ACL is successfully bootstrapped, security-bootstrapper
stores the Consul's ACL bootstrap token onto the pre-configured folder under /tmp/edgex/secrets/consul-acl-token
.
As part of security-bootstrapper
process for Consul, Consul service's agent token is also set via Consul's sub-command: consul acl set-agent-token agent
or Consul's HTTP API endpoint /agent/token/<agent_token>
using Consul's ACL bootstrap token for the authentication. This agent token provides the identity for Consul service itself and access control for any agent-based API calls from client and thus provides better security.
The management token provides the identity for Consul service itself and access control for remote configuration from client and thus provides better security. It's created and stored onto the pre-configured folder under /tmp/edgex/secrets/consul-acl-token
.
security-bootstrapper
service also uses Consul's bootstrap token to generate Vault's role based from Consul Secrets Engine API /consul/role/<role_name>
for all internal default EdgeX services and add-on services via environment variable EDGEX_ADD_REGISTRY_ACL_ROLES
. Please see more details and some examples in Configuring Add-on Service documentation section for how to configure add-on services' ACL roles.
security-bootstrapper
then automatically associated with Consul's ACL policy rules with this provided ACL role so that Consul token will be created or generated with that ACL rules and hence enforced access controls by Consul when the service is communicating with it.
Note that Consul token is generated via Vault's /consul/creds/<role_name>
API with Vault's secretstore token and hence the generated Consul token is inherited the time-restriction nature from Vault system itself. Thus Consul token will be revoked by Vault if Vault's token used to generate it expires or is revoked. Currently in EdgeX we utilize the auto-renewal feature of Vault's token implemented in go-mod-secrets
to keep Consul token alive and not expire.
Consul's access token can be obtained from the compose-builder
of edgex-compose
repository via command make get-consul-acl-token
. One example of this will be like:
$ make get-consul-acl-token ef4a0580-d200-32bf-17ba-ba78e3a546e7\n
This output token is Consul's ACL management token and thus one can use it to login and access Consul service's features from Consul's GUI on http://localhost:8500/ui.
From the upper right-hand corner of Consul's GUI or the \"Log in\" button in the center, one can login with the obtained Consul token in order to access Consul's GUI features:
If the end user wants to access consul from the command line and since by default now Consul is running in ACL enabled mode, any API call to Consul's endpoints will requires the access token and thus one needs to give the access token into the header X-Consul-Token
of HTTP calls.
One example using curl
command with Consul access token to do local Consul KV store is given as follows:
curl -v -H \"X-Consul-Token:8775c1db-9340-d07b-ac95-bc6a1fa5fe57\" -X PUT --data 'TestKey=\"My key values\"' \\\n http://localhost:8500/v1/kv/my-test-key\n
where the Consul access token is passed into the header X-Consul-Token
and assuming it has write permission for accessing and updating data in Consul's KV store.
All the default services (Core Data, App Service Rules, Device Virtual, eKuiper, etc.) that utilize the MessageBus
are configured out of the box to connect securely.
Additional add-on services that require Secure MessageBus
access (App and/or Device services) need to follow the steps outlined in the Configuring Add-On Services for Security section.
Security elements, both inside and outside of EdgeX Foundry, protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. With security service enabled, the administrator of the EdgeX would be able to initialize the security components, set up running environment for security services, manage user access control, and create JWT( JSON Web Token) for resource access for other EdgeX business services. There are two major EdgeX security components. The first is a security store, which is used to provide a safe place to keep the EdgeX secrets. The second is an API gateway, which is used as a reverse proxy to restrict access to EdgeX REST resources and perform access control related works. In summary, the current features are as below:
This page describes how to report EdgeX Foundry security issues and how they are handled.
"},{"location":"security/Ch-SecurityIssues/#security-announcements","title":"Security Announcements","text":"Join the edgexfoundry-announce group at: https://groups.google.com/d/forum/edgexfoundry-announce) for emails about security and major API announcements.
"},{"location":"security/Ch-SecurityIssues/#vulnerability-reporting","title":"Vulnerability Reporting","text":"The EdgeX Foundry Open Source Community is grateful for all security reports made by users and security researchers. All reports are thoroughly investigated by a set of community volunteers.
To make a report, please email the private list: security-issues@edgexfoundry.org, providing as much detail as possible. Use the security issue template: security_issue_template.
At this time we do not yet offer an encrypted bug reporting option.
"},{"location":"security/Ch-SecurityIssues/#when-to-report-a-vulnerability","title":"When to Report a Vulnerability?","text":"Each report is acknowledged and analyzed by Security Issue Review (SIR) team within one week.
Any vulnerability information shared with SIR stays private, and is shared with sub-projects as necessary to get the issue fixed.
As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated.
In the case of 3 rd party dependency (code or library not managed and maintained by the EdgeX community) related security issues, while the issue report triggers the same response workflow, the EdgeX community will defer to owning community for fixes.
On receipt of a security issue report, SIR:
7. Uploads a Common Vulnerabilities and Exposures (CVE) style report of the issue and associated threat
The issue reporter will be kept in the loop as appropriate. Note that a critical or high severity issue can delay a scheduled release to incorporate a fix or mitigation.
"},{"location":"security/Ch-SecurityIssues/#public-disclosure-timing","title":"Public Disclosure Timing","text":"A public disclosure date is negotiated by the EdgeX Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible AFTER a mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure may be immediate (especially publicly known issues) to a few weeks. The EdgeX Foundry Product Security Committee holds the final say when setting a disclosure date.
"},{"location":"security/SeedingServiceSecrets/","title":"Seeding Service Secrets","text":"All EdgeX services now have the capability to specify a JSON file that contains the service's secrets which are seeded into the service's SecretStore
during service start-up. This allows the secrets to be present in the service's SecretStore
when the service needs to use them.
Note
The service must already have a SecretStore
configured. This is done by default for the Core/Support services. See Configure the service's Secret Store section for details for add-on App and Device services
Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
"},{"location":"security/SeedingServiceSecrets/#secrets-file","title":"Secrets File","text":"The new SecretsFile
setting on the SecretStore
configuration allows the service to specify the fully-qualified path to the location of the service's secrets file. Normally this setting is left blank when a service has no secrets to be seeded.
This setting can overridden with the SECRETSTORE_SECRETSFILE
environment variable. When EdgeX is deployed using Docker/docker-compose the setting can be overridden in the docker-compose file and the file can be volume mounted into the service's container.
Example - Setting SecretsFile via environment override
environment:\nSECRETSTORE_SECRETSFILE: \"/tmp/my-service/secrets.json\"\n...\nvolumes:\n- /tmp/my-service/secrets.json:/tmp/my-service/secrets.json\n
During service start-up, after SecretStore
initialization, the service's secrets JSON file is read, validated, and the secrets stored into the service's SecretStore
. The file is then scrubbed of the secret data, i.e rewritten without the sensitive secret data that was successfully stored. See Disable Scrubbing section below for detail on disabling the scrubbing of the secret data
Example - Initial service secrets JSON
{\n\"secrets\": [\n{\n\"secretName\": \"credentials001\",\n\"imported\": false,\n\"secretData\": [\n{\n\"key\": \"username\",\n\"value\": \"my-user-1\"\n},\n{\n\"key\": \"password\",\n\"value\": \"password-001\"\n}\n]\n},\n{\n\"secretName\": \"credentials002\",\n\"imported\": false,\n\"secretData\": [\n{\n\"key\": \"username\",\n\"value\": \"my-user-2\"\n},\n{\n\"key\": \"password\",\n\"value\": \"password-002\"\n}\n]\n}\n]\n}\n
Example - Re-written service secrets JSON after seeding complete
{\n\"secrets\": [\n{\n\"secretName\": \"credentials001\",\n\"imported\": true,\n\"secretData\": []\n},\n{\n\"secretName\": \"credentials002\",\n\"imported\": true,\n\"secretData\": []\n}\n]\n}\n
The secrets marked with imported=true
are ignored the next time the service starts up since they are already in the service's SecretStore
. If the Secret Store service's persistence is cleared, the original version of service's secrets file will need to be provided for the next time the service starts up.
Note
The secrets file must have write permissions for the file to be scrubbed of the secret data. If not the service will fail to start-up with an error re-writing the file.
"},{"location":"security/SeedingServiceSecrets/#disable-scrubbing","title":"Disable Scrubbing","text":"Scrubbing of the secret data can be disabled by setting SecretStore.DisableScrubSecretsFile
to true
. This can be done in the by using the SECRETSTORE_DISABLESCRUBSECRETSFILE
environment variable override.
Example - Set DisableScrubSecretsFile via environment variable
environment:\nSECRETSTORE_DISABLESCRUBSECRETSFILE: \"true\"\n
"},{"location":"security/V3Migration/","title":"V3 Security Migration Guide","text":""},{"location":"security/V3Migration/#whats-changed-in-edgex-30-security","title":"What's Changed in EdgeX 3.0 Security","text":"EdgeX 3.0 (\"Minnesota\") release implements a significant change to its security architecture.
In EdgeX \"Fuji\" release, EdgeX introduced an opt-in secure mode that featured a secret store capability based on Hashicorp Vault and an API gateway based on Kong. The API gateway served to separate the outside Internet-facing network, which was \"untrusted\", from the internally-facing network, which was a \"trusted\".
EdgeX 3.0 takes significant steps to put limits on that trust. Whereas in EdgeX 1.0 and 2.0, microservice security was enforced at the API gateway, in EdgeX 3.0 microservice security is now also enforced at the individual microservice level. EdgeX 2.0 already enabled authentication for third-party components such as the EdgeX database, the EdgeX service registry, the EdgeX configuration provider, the EdgeX secret store, the EdgeX API gateway and the EdgeX message bus, but the EdgeX microservices themselves did not require authentication if the request originated from behind the API gateway. In EdgeX 3.0, even internal calls to EdgeX microservices now require an authentication token.
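For illustration, in secure mode even a local call to a microservice must now carry a JWT in the Authorization header. A minimal sketch, assuming the default core-data port (59880) and that a valid JWT is already held in the JWT shell variable:
curl -H \"Authorization: Bearer ${JWT}\" http://localhost:59880/api/v3/ping\n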
Compared to EdgeX 2.0, the security footprint of EdgeX 3.0 is reduced through the removal of the third-party Postgres and Kong components and using a minimally-configured NGINX gateway instead. Measurements taken before and after show a ~300 MB savings in downloaded Docker images in the container version of EdgeX, and a ~150 MB reduction in memory usage. Achieving these impressive improvements to the EdgeX footprint unfortunately means that there are some breaking changes to API gateway authentication that will be detailed later.
Although not a functional change, a significant addition to EdgeX 3.0 has been made in the form of a STRIDE Threat Model contributed by IOTech. This threat model takes an outside-in view of EdgeX, treating the EdgeX services together as a unit. The STRIDE threat model should serve as a good starting point for EdgeX adopters' own threat models in which EdgeX is a component in the overall architecture. It should be noted, however, that since EdgeX services are taken together as a unit, the impact of the recent microservice authentication changes, which primarily affect EdgeX internals, is not reflected in the threat model.
"},{"location":"security/V3Migration/#api-gateway-breaking-authentication-changes","title":"API Gateway Breaking Authentication Changes","text":"In EdgeX 2.0, the secrets-config
utility was used to create a user account in the API gateway (Kong) and associate it to a user-specified public key. A user would then self-create a JWT, and use it for authentication against the API gateway. These tokens were opaque to EdgeX microservices because their contents were controlled by the user, and only the API gateway had the information needed to validate them.
In EdgeX 3.0, the secrets-config
utility is still used to create a user account, but instead of creating it in the API gateway, the user account is created in the EdgeX secret store, and the Vault identity secrets engine is used to generate and verify JWTs. All EdgeX services implicitly trust the EdgeX secret store and have a secret store token issued to them at startup that can be used to request a JWT from Vault.
Externally-originated requests are performed similarly to how they were done before: provide the JWT in the Authorization
header and direct the request at the API gateway with a path prefix denoting the desired service. The key difference is in obtaining the JWT. In EdgeX 2.0, the client simply generated the JWT using its private key. In EdgeX 3.0, obtaining a JWT is a two-step process. First, authenticate to the EdgeX secret store (Vault) to obtain a secret store token. Second, exchange the secret store token for a JWT. This process is described in detail in the authenticating chapter of the EdgeX documentation. Due to these changes, the secrets-config proxy jwt
helper command has been removed. This same chapter also explains that, similar to Kong, Vault has an extensible authentication mechanism, although only username/password (with a randomized strong password) is enabled out of the box.
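As a sketch of the new two-step flow, assuming the default Vault port (8200), the default NGINX gateway port (8443) with the core-data route prefix, and an API gateway user named example created with secrets-config proxy adduser:
username=example\npassword=<password-from-adduser>\n\n# step 1: authenticate to the EdgeX secret store (Vault) to obtain a secret store token\nvault_token=$(curl -ks \"http://localhost:8200/v1/auth/userpass/login/${username}\" -d \"{\\\"password\\\":\\\"${password}\\\"}\" | jq -r '.auth.client_token')\n\n# step 2: exchange the secret store token for a JWT\nid_token=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/token/${username}\" | jq -r '.data.token')\n\n# use the JWT at the API gateway with the desired service's path prefix\ncurl -k -H \"Authorization: Bearer ${id_token}\" \"https://localhost:8443/core-data/api/v3/ping\"\n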
As before, all requests (with the exception of a passthrough for Vault authentication) are checked at the API gateway prior to forwarding to the backend service for fulfillment.
"},{"location":"security/V3Migration/#microservice-level-breaking-authentication-changes","title":"Microservice-level Breaking Authentication Changes","text":"EdgeX microservices in EdgeX 3.0 will now require authentication on a per-route basis, even for requests that originate behind the API gateway. Peer-to-peer service requests (such as a device service calling core-metadata, or core-command forwarding a request to a device service) are authenticated automatically. This new behavior may create compatibility issues for custom components that worked fine in EdgeX 2.0 that may suddenly experience authentication failures in EdgeX 3.0. This new behavior may also create issues for 3rd party components, such as the eKuiper rules engine, because of its ability to issue ad-hoc HTTP requests in response to certain events. The main V3 migration guide contains specific guidance for handling eKuiper rules that call back in to EdgeX.
To revert to legacy EdgeX 2.0 behavior--no authentication at the microservice level--set the environment variable EDGEX_DISABLE_JWT_VALIDATION
to true
. JWT validation must be disabled on a per-microservice basis. This will not stop EdgeX microservices from sending JWTs to peer EdgeX microservices--it will only disable validation on the receiving side, allowing unauthenticated requests.
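A minimal sketch of disabling validation for one service when running it natively; the same variable can be set under that service's environment section in a compose file (the core-data binary name is just a placeholder):
EDGEX_DISABLE_JWT_VALIDATION=\"true\" ./core-data\n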
For sending JWTs, custom EdgeX services have two basic choices. The first is to use one of the pre-built service clients in go-mod-core-contracts
. The other is to use the GetSelfJWT()
method of the SecretProviderExt
interface. The authenticating chapter of the EdgeX documentation explains in greater detail how to use these two methods.
Some minor changes have been made to the secrets-config proxy tls
command:
--snis
argument is no longer supported: the supplied TLS certificate and key will be used for all TLS connections. The --incert
option is renamed to --inCert
, and the --inkey
option is renamed to --inKey
for consistency of flag names.Several security-related environment variables have been renamed in EdgeX 3.0:
Old Name New Name ADD_KNOWN_SECRETS EDGEX_ADD_KNOWN_SECRETS ADD_PROXY_ROUTE EDGEX_ADD_PROXY_ROUTE ADD_REGISTRY_ACL_ROLES EDGEX_ADD_REGISTRY_ACL_ROLES ADD_SECRETSTORE_TOKENS EDGEX_ADD_SECRETSTORE_TOKENS IKM_HOOK EDGEX_IKM_HOOK"},{"location":"security/V3Migration/#references","title":"References","text":"% secrets-config-proxy(1) User Manuals secrets-config-proxy(1)
"},{"location":"security/secrets-config-proxy/#name","title":"NAME","text":"secrets-config-proxy \u2013 Configure EdgeX API gateway service
"},{"location":"security/secrets-config-proxy/#synopsis","title":"SYNOPSIS","text":"secrets-config proxy SUBCOMMAND [OPTIONS]
"},{"location":"security/secrets-config-proxy/#description","title":"DESCRIPTION","text":"Configures the EdgeX API gateway service.
This command is used to configure the TLS certificate for external connections, create authentication tokens for inbound proxy access, and perform other related utility functions.
Proxy configuration commands (listed below) require access to the secret store master key in order to generate temporary secret store access credentials.
"},{"location":"security/secrets-config-proxy/#options","title":"OPTIONS","text":"--configDir /path/to/directory/with/configuration.yaml (optional)
Points to a directory containing a configuration.yaml file.
EdgeX 3.0
The --confdir
command line option is replaced by --configDir
in EdgeX 3.0.
tls
Configure inbound TLS certificate. This command will replace the default TLS certificate created when EdgeX is started for the first time. Requires additional arguments:
Path to TLS leaf certificate (PEM-encoded x.509) (the file extension is arbitrary). If intermediate certificates are required to chain to a certificate authority, these should also be included. The root certificate authority should not be included.
Path to TLS private key (PEM-encoded).
--keyFilename filename (optional)
Filename of the private key file on the target (default \"nginx.key\").
--targetFolder directory-path (optional)
Path to the folder where the TLS key file is installed on the target (default \"/etc/ssl/nginx\").
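An illustrative invocation using the flags described above (a sketch; the certificate and key paths are placeholders):
secrets-config proxy tls --inCert /path/to/server.crt --inKey /path/to/server.key\n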
adduser
Create an API gateway user by creating a user identity in the EdgeX secret store. Requires additional arguments:
Username of the user to add.
--jwtTTL duration (optional)
A JWT created by the Vault identity provider lasts this long (_s, _m, _h, or _d; seconds if no unit) (default \"1h\")
Clients have up to tokenTTL
time available to exchange the secret store token for a signed JWT. The validity period of that JWT is governed by jwtTTL
.
--tokenTTL duration (optional)
A Vault token created as a result of Vault login lasts this long (_s, _m, _h, or _d; seconds if no unit) (default \"1h\")
The adduser
command creates a credential that enables a user to request a token for the secret store. The intended purpose of this token is to exchange it for a signed JWT. The duration specified here governs the time period within which a signed JWT can be requested.
Note that although these tokens are renewable, there is nothing to be done with the token except for requesting a JWT. Thus, the token renew endpoint is not currently exposed externally.
Normally, secrets-config
uses a service token in the secret store token file. As this token expires from inactivity an hour after it is created, it is possible to point secrets-config
at a resp-init.json
and a root token will be created afresh from the key shares in that file. The --useRootToken
flag is used to tell secrets-config
to use this authentication method to talk to the EdgeX secret store.
Upon completion, adduser
returns a JSON object with a random password
field set. This password is generated from the kernel random source and overwrites any previous password set on the account.
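A sketch of creating a user follows; the --user flag name is an assumption here, so check edgex-secrets-config help proxy for the exact flags in your release:
secrets-config proxy adduser --user example --useRootToken   # prints a JSON object containing the generated password\n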
A sample shell script to turn this into a token that can be used for API gateway authentication is as follows:
username=example\npassword=password-from-above\n\nvault_token=$(curl -ks \"http://localhost:8200/v1/auth/userpass/login/${username}\" -d \"{\\\"password\\\":\\\"${password}\\\"}\" | jq -r '.auth.client_token')\n\nid_token=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/token/${username}\" | jq -r '.data.token')\n\necho \"${id_token}\"\n
It is expected that the username/password returned from adduser will be saved for later use. However, if the password is lost, adduser can be run a second time to reset the password.
deluser
Delete an API gateway user. Requires additional arguments:
Username of the user to delete.
jwt
EdgeX 3.0
The jwt
sub-command is no longer supported in EdgeX 3.0.
IKM_HOOK
Enables decryption of an encrypted secret store master key by pointing at an executable that returns an encryption seed that is formatted as a hex-encoded (typically 32-byte) string to its stdout. This optional feature, if enabled, requires pointing at the same executable that was used by security-secretstore-setup to provision and unlock the EdgeX secret store.
secrets-config(1)
EdgeX Foundry Last change: 2023
"},{"location":"security/secrets-config/","title":"Secrets config","text":"% edgex-secrets-config(1) User Manuals edgex-secrets-config(1)
"},{"location":"security/secrets-config/#name","title":"NAME","text":"edgex-secrets-config \u2013 Perform post-installation EdgeX secrets configuration
"},{"location":"security/secrets-config/#synopsis","title":"SYNOPSIS","text":"edgex-secrets-config [OPTIONS] COMMAND [ARG...]
"},{"location":"security/secrets-config/#description","title":"DESCRIPTION","text":"edgex-secrets-config performs post-installation EdgeX secrets configuration. edgex-secrets-config takes a command that specifies which module is being configured, and module-specific arguments thereafter.
"},{"location":"security/secrets-config/#commands","title":"COMMANDS","text":"help
Return a list of available commands. Use edgex-secrets-config help (command)
for an overview of available subcommands.
proxy
Configure secrets related to the EdgeX reverse proxy. Use edgex-secrets-config help proxy
for an overview of available subcommands.
edgex-secrets-config-proxy(1)
EdgeX Foundry Last change: 2021
"},{"location":"security/security-file-token-provider.1/","title":"NAME","text":"security-file-token-provider -- Generate Vault tokens for EdgeX services
"},{"location":"security/security-file-token-provider.1/#synopsis","title":"SYNOPSIS","text":"security-file-token-provider [-h--configDir \\<configDir>] [-p|--profile \\<name>]
EdgeX 3.0
The --confdir
command line option is replaced by --configDir
in EdgeX 3.0.
security-file-token-provider generates per-service Vault tokens for EdgeX services so that they can make authenticated connections to Vault to retrieve application secrets. security-file-token-provider implements a generic secret seeding mechanism based on pre-created files and is designed for maximum portability. security-file-token-provider takes a configuration file that specifies the services for which tokens shall be generated and the Vault access policy that shall be applied to those tokens. security-file-token-provider assumes that there is some underlying protection mechanism that will be used to prevent EdgeX services from reading each other's tokens.
"},{"location":"security/security-file-token-provider.1/#options","title":"OPTIONS","text":"-h, --help
: Display help text
-cd, --configDir \\<configDir>
: Look in this directory for configuration.yaml instead.
-p, --profile \\<name>
: Indicate configuration profile other than default
EdgeX 3.0
The -c, --confdir
command line option is replaced by -cd, --configDir
in EdgeX 3.0.
This file specifies the TCP/IP location of the Vault service and parameters used for Vault token generation.
SecretService:\nScheme: \"https\"\nServer: \"localhost\"\nPort: 8200\n\nTokenFileProvider:\nPrivilegedTokenPath: \"/run/edgex/secrets/security-file-token-provider/secrets-token.json\"\nConfigFile: \"token-config.json\"\nOutputDir: \"/run/edgex/secrets/\"\nOutputFilename: \"secrets-token.json\"\n
"},{"location":"security/security-file-token-provider.1/#secrets-tokenjson","title":"secrets-token.json","text":"This file contains a token used to authenticate to Vault. The filename is customizable via OutputFilename.
{\n \"auth\": {\n \"client_token\": \"s.wOrq9dO9kzOcuvB06CMviJhZ\"\n }\n}\n
"},{"location":"security/security-file-token-provider.1/#token-configjson","title":"token-config.json","text":"This configuration file tells security-file-token-provider which tokens to generate.
In order to avoid a directory full of .hcl
files, this configuration file uses the JSON serialization of HCL, documented at https://github.com/hashicorp/hcl/blob/master/README.md.
Note that all paths are keys under the \"path\" object.
{\n \"service-name\": {\n \"edgex_use_defaults\": true,\n \"custom_policy\": {\n \"path\": {\n \"secret/non/standard/location/*\": {\n \"capabilities\": [ \"list\", \"read\" ]\n }\n }\n },\n \"custom_token_parameters\": { }\n }\n}\n
When edgex-use-default is true (the default), the following is added to the policy specification for the auto-generated policy. The auto-generated policy is named edgex-secrets-XYZ
where XYZ
is service-name
from the JSON key above. Thus, the final policy created for the token will be the union of the policy below (if using the default policy) plus the custom_policy
defined above.
{\n \"path\": {\n \"secret/edgex/service-name/*\": {\n \"capabilities\": [ \"create\", \"update\", \"delete\", \"list\", \"read\" ]\n }\n }\n}\n
When edgex-use-default is true (the default), the following is inserted (if not overridden) to the token parameters for the generated token. (See https://developer.hashicorp.com/vault/api-docs/auth/token#create-token.)
\"display_name\": token-service-name\n\"no_parent\": true\n\"policies\": [ \"edgex-service-service-name\" ]\n
Note that display_name
is set by vault to be \"token-\" + the specified display name. This is hard-coded in Vault from versions 0.6 to 1.2.3 and cannot be changed.
Additionally, a meta property, edgex-service-name
is set to service-name
. The edgex-service-name property may be used by clients to infer the location in the secret store where service-specific secrets are held.
\"meta\": {\n \"edgex-service-name\": service-name\n}\n
"},{"location":"security/security-file-token-provider.1/#outputdirservice-nameoutputfilename","title":"{OutputDir}/{service-name}/{OutputFilename}","text":"For example: /run/edgex/secrets/edgex-security-proxy-setup/secrets-token.json
For each \"service-name\" in {ConfigFile}
, a matching directory is created under {OutputDir}
and the corresponding Vault token is stored as {OutputFilename}
. This file contains the authorization token generated to allow the indicated EdgeX service to retrieve its secrets.
PrivilegedTokenPath
points to a non-expired Vault token that the security-file-token-provider will use to install policies and create per-service tokens. It will create policies with the naming convention \"edgex-service-service-name\"
where service-name
comes from JSON keys in the configuration file and the Vault policy will be configured to allow creation and modification of policies using this naming convention. This token must have the following policy (edgex-privileged-token-creator
) configured.
path \"auth/token/create\" {\n capabilities = [\"create\", \"update\", \"sudo\"]\n}\n\npath \"auth/token/create-orphan\" {\n capabilities = [\"create\", \"update\", \"sudo\"]\n}\n\npath \"auth/token/create/*\" {\n capabilities = [\"create\", \"update\", \"sudo\"]\n}\n\npath \"sys/policies/acl/edgex-service-*\"\n{\n capabilities = [\"create\", \"read\", \"update\", \"delete\" ]\n}\n\npath \"sys/policies/acl\"\n{\n capabilities = [\"list\"]\n}\n
"},{"location":"security/security-file-token-provider.1/#author","title":"AUTHOR","text":"EdgeX Foundry \\<info@edgexfoundry.org>
"},{"location":"threat-models/secret-store/","title":"EdgeX Foundry Secret Management Threat Model","text":""},{"location":"threat-models/secret-store/#table-of-contents","title":"Table of Contents","text":"The secret management components comprise a very small portion of the EdgeX framework. Many components of an actual system are out-of-scope including the underlying hardware platform, the operating system on which the framework is running, the applications that are using it, and even the existence of workload isolation technologies, although the reference code does support deployment as Docker containers or Snaps.
The goal of the EdgeX secret store is to provide general-purpose secret management to EdgeX core services and applications.
"},{"location":"threat-models/secret-store/background/#motivation","title":"Motivation","text":"The EdgeX Foundry security roadmap is published on the Security WG Wiki:
The security roadmap establishes the requirement for a secret storage engine at the edge, and that furthermore that hardware secure storage should be supported:
Initial EdgeX secrets (needed to start Vault/Kong) will be encrypted on the file system using a secure storage abstraction layer \u2013 allowing other implementations to store these in hardware stores (based on hardware root of trust systems)
The current state of secret storage is described in the Hardware Secure Storage Draft.
The AS-IS architecture resembles the following diagram:
As the diagram notes, the critical secrets for securing the entire on-device infrastructure sit unencrypted on bulk storage media. While the deptiction that the Vault contents are encrypted is true, the key needed to decrypt it is in plaintext nearby.
The Hardware Secure Storage Draft proposes the following future state:
This future state proposes a security service that can encrypt the currently unencrypted data items.
A number of problems must be resolved to make this future state a reality:
Initialization order of containers: containers must block until their prerequisites have been satisfied. It is not sufficient to have only start-ordering, as initialization can take a variable amount of time, and the initialization tasks of a previous step are not necessarily completed before the next step is initiated.
Allowing for variability in the hardware encryption component. A simple bulk encryption/decryption interface does not allow for interesting scenarios based on local attestation, for example.
Distribution of Vault tokens to services.
When using Vault at the edge, there are a number of general problems that must be solved as illustrated in the below diagram:
Working top to bottom and left to right:
The secret management design for EdgeX can be said to be finished when there is a sufficiently secure solution to the above challenges for the supported execution models.
"},{"location":"threat-models/secret-store/background/#next-steps-for-edgex","title":"Next Steps for EdgeX","text":"All parts of the system must collaborate in order to ensure a robust secret management design. What is needed is a systematic approach to secret management that will close the gaps between the AS-IS and TO-BE future state. This systematic approach is based on formal threat model with the aim that the system will meet some critical security objectives. The threat model is built against a proposed design and validates the security architecture of the design. Through threat modeling, we can identify assets, adversaries, threats, and mitigations against those threats. We can then make a prioritized implementation plan to address those threats. More importantly, for someone adopting EdgeX, the documented threat model outlines the threats that the framework has been designed to protect against and by omission, the threats that it has not.
"},{"location":"threat-models/secret-store/high_level_design/","title":"Detailed Design","text":"This document gets into the design details of the proposed secret management architecture, starting with a design overview and going into greater detail for each subsystem.
"},{"location":"threat-models/secret-store/high_level_design/#design-overview","title":"Design Overview","text":"In context of the stated future goal to support hardware-based secret storage, it is important to note that in a Vault-based design, not every secret is actually wrapped by a hardware-backed key. Instead, the secrets in Vault are wrapped by a single master key, and the encryption and decryption of secrets are done in a user-level process in software. The Vault master key is then wrapped by one more additional keys, ultimately to a root key that is hardware-based using some authorization mechanism. In a PKCS#11 hardware token, authorization is typically a PIN. In a TPM, authorization is typically a set of PCR values and an optional password. The idea is that the Vault master key is eventually protected by some uncopyable unique secret attached to physical hardware.
The hardware may or may not have non-volatile tamper-resistant storage. Non-volatile storage is useful for integrity protection as well as in pre-OS scenarios. An example of the former would be to store a hash value for HTTP Public Key Pinning (HPKP) in a manner that makes it difficult for an attacker to pin a different key. An example of the latter would be storing a LUKS disk encryption key that can decrypt a root file system when normal file system storage is not yet available. If non-volatile storage is available, it is often available only in very limited quantity.
Obvious with the above design is that at some point along the line, the Vault master key or a wrapping key is observably exposed to user-mode software. In fact, the number two recommendation for Vault hardening is \"single tenancy\" which is further explained, in priority order, as (a) giving Vault its own physical machine, (b) giving Vault its own virtual machine, or (c) giving Vault its own container. The general solution to the exposure of the Vault master key or a wrapping key is to use a Trusted Execution Environment (TEE) to limit observability. There is currently no platform- and architecture-independent TEE solution.
"},{"location":"threat-models/secret-store/high_level_design/#high-level-design","title":"High-level design","text":"Figure 1: High-level design.
The secrets to be protected are the application secrets (P-1). The application secrets are protected with a per-service Vault service token (S-1). The Vault service token is delivered by a \"token server\" running in the security service to a pre-agreed rendezvous location, where mandatory access control, namespaces, or file system permissions constrain path accessibility. Vault access tokens are simply 128-bit random handles that are renewed at the Vault server. They can be shared across multiple instances of a load-balanced service, and unlike a JWT there is no need to periodically re-issue them if they have not expired.
The token server has its own non-root token-issuing token (S-3) that is created by the security service with the root token after it has initialized or unlocked the vault but before the root token is revoked. (S-4) Because of the sensitive nature of this token, it is co-located in the security service, and revoked immediately after use.
The actual application secrets are stored in the Vault encrypted data store (S-6) that is logically stored in Consul's data store (S-7). The vault data store is encrypted with a master key (S-5) that is held in Vault memory and forgotten across Vault restarts. The master key must be resupplied whenever Vault is restarted. The security service encrypts the master key using AES-256-GCM where the key (S-13) is derived using an RFC5869 key derivation function (KDF). The input key material for the KDF originates from a vendor-defined plugin that interfaces with a hardware security mechanism such as a TPM, PKCS11-compatible HSM, trusted execution environments (TEE), or enclave. An encrypted Vault master key is what is ultimately saved to storage.
Confidentiality of the secret management APIs is established using server-side TLS. The PKI initialization component is responsible for generating a root certificate authority (S-8), one or more intermediate certificate authorities (S-9), and several leaf certificates (S-10) needed for initialization of the core services. The PKI can be generated afresh every boot, or installed during initial provisioning and cached. PKI intialization is covered next.
"},{"location":"threat-models/secret-store/high_level_design/#pki-initialization","title":"PKI Initialization","text":"Figure 2: PKI initialization.
PKI initialization must happen before any other component in the secret management architecture is started because Vault requires a PKI to be in place to protect its HTTP API. Creation of a PKI is a multi-stage operation and care must be taken to ensure that critical secrets, such as the the CA private keys, are not written to a location where they can be recovered, such as bulk storage devices. The PKI can be created on-device at every boot, at device provisioning time, or created off-device and imported. Caching of the PKI is optional if the PKI is created afresh every boot, but required otherwise.
If the implementation allows, the private keys for certificate authorities should be destroyed after PKI generation to prevent unauthorized issuance of new leaf certificates, except where the certificate authority is stored in Vault and controlled with an appropriate policy. Following creation of the PKI, or retrieving it from cache, the PKI initialization is responsible for distributing keying material to pre-agreed per-service drop locations that service configuration files expect to find them.
PKI initialization is not instantaneous. Even if PKI initialization is started first, dependent services may also be started before PKI initialization is completed. It is necessary to implement init-blocking code in dependent services that delays service startup until PKI assets have been delivered to the service.
Most dependent services do not support encrypted TLS private keys. File access controls offered by the underlying execution environment are their only protection. A potential future enhancement might be to re-use the key derivation strategy used earlier to generate additional keys to encrypt the cached PKI keying material at rest.
(Update: ADR 0015, adopted after this threat model was written, stipulates that TLS will not be used for single-node deployments of EdgeX.)
"},{"location":"threat-models/secret-store/high_level_design/#vault-initialization-and-unsealing-flow","title":"Vault initialization and unsealing flow","text":"Figure 3: Vault initialization and unsealing flow
When the security service starts the first thing that it does is check to see if a hardware security hook has been defined. The presence of a hardware security hook is indicated by an environment variable, IKM_HOOK, that points to an executable program. The security service will run the program and look for a hex-encoded key on its standard output. If a key is found, it will be used as the input key material for the HMAC key deriviation function, otherwise, hardware security will not be used. The input key material is combined with a random salt that is also saved to disk for later retrieval. The salt ensures that unique encryption keys will be used each time EdgeX is installed on a platform, even if the underlying input key material does not change. The salt also defends against weak input key material.
"},{"location":"threat-models/secret-store/high_level_design/#initialization-flow","title":"Initialization flow","text":"Next, the security service will determine if Vault has been initialized. In the case that Vault is uninitialized, Vault's initialization API will be invoked, which results in a set of keys that can be used to reconstruct a Vault master key. When hardware security is enabled, the input key material and salt are fed into the key derivation function to generate a unique AES-256-GCM encryption key for each key shard. The encrypted keys along with nonces will be persisted to disk. AES-GCM protects against padding oracle attacks, but is sensitive to re-use of the salt value. This weakness is addressed both by using a unique encryption key for each shard, as well as the expectation that encryption is performed exactly once: when Vault is initialized. The Vault response is saved to disk directly in the case that hardware security is not enabled.
"},{"location":"threat-models/secret-store/high_level_design/#unseal-flow","title":"Unseal flow","text":"If Vault is found to be in an initialized and sealed state, the Vault master key shards are retrieved from disk. If they are encrypted, they will be encrypted by reversing the process performed during initialization. The key shards are then fed back to Vault until the Vault is unsealed and operational.
"},{"location":"threat-models/secret-store/high_level_design/#token-issuing-flow","title":"Token-issuing flow","text":"Figure 7: Token-issuing flow.
"},{"location":"threat-models/secret-store/high_level_design/#client-side","title":"Client side","text":"Every service that wants to query Vault must link to a secrets module either directly (go-mod-secrets) or indirectly (go-mod-bootstrap) or implement their own Vault interface. The module must take as input a path to a file that contains a Vault access token specific to that service. There is currently no secrets module for the C SDK.
Clients must be prepared to handle a number of error conditions while attempting to access the secret store:
Judicious use of retry loops should be sufficient to handle most of the above issues.
"},{"location":"threat-models/secret-store/high_level_design/#server-side","title":"Server side","text":"On the server side, the Vault master key will be used to generate a fresh \"root token\". The root token will generate a special \"token-issuing token\" that will generate tokens for the EdgeX microservices. The root token will then be revoked, and a \"token provider\" process with access to the token-issuing token will be launched in the background.
EdgeX will provide a single reference implementation for the token provider: * security-file-token-provider: This token provider will consume a list of services that require tokens, along with a set of customizable parameters. At startup, the service tokens are created in bulk and delivered via the host file system on a per-service basis.
The token-issuing token will be revoked upon termination of the token provider.
"},{"location":"threat-models/secret-store/high_level_design/#token-revocation","title":"Token revocation","text":"Vault tokens are persistent. Although they will automatically expire if they are not renewed, inadvertent disclosure of a token would be difficult to detect. This condition could allow an attacker to maintain an unauthorized connection to Vault indefinitely. Since tokens do expire if not renewed, it is necessary to generate fresh tokens on startup. Therefore, part of the startup process is the revokation of all previously Vault tokens, as a mitigation against token disclosure as well as garbage collection of obsolete tokens.
"},{"location":"threat-models/secret-store/threat_model/","title":"Threat Model","text":""},{"location":"threat-models/secret-store/threat_model/#historical-context","title":"Historical Context","text":"This threat model was written in the EdgeX Fuji timeframe. Significant changes have occured to EdgeX since that time. This document serves as a historical record of motification for security changes that occured in the Fuji, Geneva, Hanoi, and Ireland releases of EdgeX.
This threat model also covers ONLY THE EDGEX SECRET STORE and not the EdgeX project as a whole.
"},{"location":"threat-models/secret-store/threat_model/#assumptions","title":"Assumptions","text":"The EdgeX Framework is a API-based software framework that strives to be platform and architecture-independent. The threat model considers only the following two deployment scenarios:
The threat model presented in this document analyzes the secret management subsystem of EdgeX, and has considerations for both of the above runtime environments, both of which implement protections beyond a stock user/process runtime environment. In generic terms, the secret management threat model assumes:
Any particular of implementation of EdgeX should perform its own threat modeling activity as part of securing the implementation, and may use this document to supplement analysis of the secret management subsystem of EdgeX.
"},{"location":"threat-models/secret-store/threat_model/#recommended-hardening","title":"Recommended Hardening","text":"Physical security and hardening of the underlying platform is out-of-scope for implementation by the EdgeX reference code. But since the privileged administrator can bypass all access controls, such hardening is nevertheless recommended: the threat model assumes that there are no unauthorized privileged administrators. One should look to industry standard hardening guides, such as CIS Benchmarks for hardening operating system and container runtimes. Additionally, typical EdgeX base platforms are likely to support the following types of hardening out-of-the-box(1), and these should be enabled where possible.
The EdgeX secret store provides hooks for utilizing hardware secure storage to ensure that secrets stored on the device can only be decrypted on that device. Implementations should use hardware security features where a suitable plug-in is available. For maximum benefit, hardware security should be combined with verified/secure boot, file system protection, and other software-level hardening.
Lastly, due consideration should be given to the security of the software supply chain: it is important to ensure that code deployed to a device is what is expected and free of known vulnerabilities. This implies an ability to update a device in the field to ensure that it remains free of known vulnerabilities.
Footnotes:
(1) Most Linux distributions support verified/secure boot. Microsoft Windows enables verified/secure boot by default, and can automatically use TPM hardware if full disk encryption is enabled and will fail to decrypt if verified/secure boot is disabled.
"},{"location":"threat-models/secret-store/threat_model/#protections-afforded-by-modeled-runtime-environments","title":"Protections afforded by modeled runtime environments","text":"The threat model considers Docker-based and Snap-based deployments. Each of these deployment environments offer sandboxing protections that go beyond a standard Unix user and process model. As mentioned earlier, the threat model assumes the sandboxing protections:
In the Linux environment, most of these protections are based on a combination of two technologies: Linux namespaces and mandatory access control (MAC) based on Linux Security Module (LSM).
"},{"location":"threat-models/secret-store/threat_model/#docker-based-runtimes","title":"Docker-based runtimes","text":"All services running within a single container are assumed to be within the same trust boundary. Docker-based runtimes are expected to provide the following properties:
"},{"location":"threat-models/secret-store/threat_model/#general-protections","title":"General protections","text":"root
user in a container is subject to namespace constraints and restricted set of capabilities./var/lib/docker
where they are observable on the host and stored persistently.All services running within a single snap are assumed to be within the same trust boundary. However, even in a snap, due to the use of mandatory access control, there are stronger-than-normal process isolation policies in place, as documented below.
"},{"location":"threat-models/secret-store/threat_model/#general-protections_1","title":"General protections","text":"root
user in a snap is subject to namespace constraints and MAC rules enforced by Linux Security Modules (LSMs) configured as part of the snap.$XDG_RUNTIME_DIR
which is a user-private user-writable-directory that is also per-snap. Snaps can write persistent data local to the snap to the $SNAP_DATA
folder.mount(2)
, capability./proc/mem
or to ptrace(2)
other processes.The security objectives call out the security goals of the architecture/design. They are:
Primary assets are the assets at the level of the conceptual data model of the system and primarily represent \"real-world\" things.
AssetId Name Description Attack Points P-1 Application secrets The things we are trying to protect In use, in transit, in storage"},{"location":"threat-models/secret-store/threat_model/#secondary-assets","title":"Secondary Assets","text":"Secondary assets are assets are used to support or protect the primary assets and are usually implementation details versus being part of the conceptual data model.
AssetId Name Description Attack Points S-1 Vault service token Vault service tokens are issued per-service and used by services to authenticate to vault and retrieve per-service application secrets. In-flight via API, at rest S-3 Vault token-issuing-token Used by the token issuing service to create vault service tokens for other services. (Called out separately from S-1 due to its high privilege.) In-flight via API, at rest S-4 Vault root token A special token created at Vault initialization time that has all capabilities and never expires. In-flight via API, at rest S-5 Vault master key A root secret that encrypts all of Vault's other secrets. In-flight via API, at rest, in-use. S-6 Vault data store A data store encrypted with the Vault master key that contains the contents of the vault. In storage S-7 Consul data store Back-end storage engine for vault data store. In storage S-8 CA key Private keys for on-device PKI certificate authority. In use, in transit, in storage S-9 Issuing CA key Private keys for on-device PKI issuing authorities. In use, in transit, in storage S-10 Leaf TLS key Private keys for TLS server authentication for on-device services (e.g. Vault service, Consul service) In use, in transit, in storage S-13 IKM Initial keying material as input to HMAC KDF In use, in transit, in storageNote that asset S-9 (issuing CA key) is not currently implemented: in all current EdgeX releases all TLS leaf certificates are derived from the root CA.
"},{"location":"threat-models/secret-store/threat_model/#attack-surfaces","title":"Attack Surfaces","text":"This table lists components in the system architecture that have assets of potential value to an attacker and how a potential attacker may attempt to gain access to those components.
System Element Compromise Type Assets Exposed Attack Method Consul API IA Vault data store, service location data/registry, settings Data modification, DoS against API Vault API CIA All application secrets, all vault tokens Data channel snooping or data modification, DoS against API Host file system CIA PKI private keys, Vault tokens, Vault master key, Vault store, Consul store Snooping or data modification, deletion of critical files PKI initiazation agent CI Private keys for on-device PKI Snooping generation of assets or forcing predictable PKI Vault initialization agent CI Vault master key, Vault root token, token-issuing-token, encryption key for Vault master key Snooping generation of assets or tampering with assets Token server API CIA Token issuing token, service tokens Data channel snooping, tampering with asset policies, or forcing service down Process memory CIA Most assets excluding hardware and storage media Read or modify process memory through /proc or related IPC mechanisms"},{"location":"threat-models/secret-store/threat_model/#adversaries","title":"Adversaries","text":"The adversary model is use-case specific, but for the sake of discussion assume the following simplistic list:
Persona Motivation Starting Access Skill / Effort Thief (Larceny) Quick cash by reselling stolen components. None Low Remote hacker Financial gain by harvesting resellable information or performing ransomware attacks via exploitable vulnerabilities. Network Medium Malicious administrator Out of scope. Cannot defend against attacks originating at level of system software. N/A N/A Malicious non-privileged service Escalation of privilege and data exfiltration. Malicious services includes software supply chain attackers. User mode access Medium Industrial espionage / Malicious developer Financial gain or harm by obtaining access to back-end systems and/or competitive data. Unknown HighThe malicious administrator is out of scope: the threat model assumes that there are no unauthorized privileged administrators on the device. This must be ensured through hardening of the underlying platform, which is out of scope.
Malicious non-privileged services are a concern. This can occur through a wide variety of software supply chain attacks, as well as implementation bugs that permit a service to exhibit unintended functionality.
The industrial espionage or malicious developer adversary deserves some explanation. Whereas the remote hacker adversary is primarily motivated by a one-time attack, the industrial espionage attacker seeks to maintain a persistent foothold or to insert back-doors into an entire fleet of devices. Making each device unique (e.g. device-unique secrets) helps to mitigate against break-once-run-everywhere (BORE) attacks.
"},{"location":"threat-models/secret-store/threat_model/#threat-matrix","title":"Threat Matrix","text":"The threat matrix indicates what assets are at risk for the various attack surfaces in the system.
Consul API Vault API Host FS PKI agent Vault agent Token svc /proc /mem Application secrets *a *p Vault service token *bd *b *bd *p Token-issuing-token *e *e *e *e *p Vault root token *f *f *f *p Vault master key *g *g *g *p Vault DS *hi Consul DS *j *j PKI CA *m *k *p PKI intermediate *m *l *p PKI leaf *m *m *p IKM *q *p"},{"location":"threat-models/secret-store/threat_model/#threats-and-mitigations","title":"Threats and Mitigations","text":"Format:
(identifier) Threat name
The EdgeX secret store threat model calls out a particular aspect of the Vault-based secret store architecture upon which the whole EdgeX secret store depends: the Vault master key. Because plaintext storage of the Vault master key at rest would be a known security weakness, the high level design calls for the Vault master key to be encrypted on storage.
One way of doing this would be to simply encrypt the whole drive upon which the Vault master key is stored. This is a good solution: it would encrypt not only the Vault master key, but also other part of the system to harden them against offline tampering and information disclosure risks. This solution also has drawbacks as well: whole volume encryption may slow down boot times and have a runtime performance impact on constrained devices without hardware-accelerated crypto.
The Vault Master Key Encryption feature of EdgeX enables a system designer to specifically target encryption of the Vault master key, and enables a variety of flexible use cases that are not tied to volume encryption such as key escrow (where a key is stored on another machine on the network), smart cards or USB HSMs (where a key us stored in a dongle or chip card), or TPM (security hardware found on many PC-class motherboards).
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#internal-design","title":"Internal design","text":"As stated in the high level design, an RFC-5869 key derivation function (KDF) is used to produce a set of wrapping keys that are used by the vault-worker process to encrypt the Vault master key.
An RFC-5869 KDF requires three inputs. A change to any input results in a different output key:
Input keying material (IKM). It need not be (but should be) cryptographically strong, and is the \"secret\" part of the KDF.
A salt. A non-secret random number that adds to the strength of the KDF.
An \"info\" argument. The info argument allows multiple keys to be generated from the same IKM and salt. This allows the same KDF to generate multiple keys each used for a different purpose. For instance, the same KDF can be used to generate an encryption key to protect the PKI at-rest.
The Vault Master Key Encryption feature consumes the IKM from a Unix-style pipe. The IKM is provided by a vendor-defined mechanism, and is intended to be tied into security hardware on the device, be device-unique, and explicitly not stored in the file system.
To further strengthen the solution, an implementation could choose to engineer a solution whereby the IKM is only released a configurable number of times per boot, so that malware that runs on the system post-boot cannot retrieve it.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#ikm-hook","title":"IKM HOOK","text":"The Vault Master Key Encryption feature is embedded into the EdgeX security-secretsetore-setup
utility. It is enabled by setting an environment variable, EDGEX_IKM_HOOK
, containing the path to an executable that implements the IKM interface, described below, when the security-secretstore-setup
executable is run in early boot to initialize or unseal the EdgeX secret store.
When this feature is enabled, the Vault master key is encrypted at rest, and cannot be recovered unless the same IKM is provided as when the secretstore was initialized.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#ikm-interface","title":"IKM interface","text":""},{"location":"threat-models/secret-store/vault_master_key_encryption/#name","title":"NAME","text":"ikm - Return input key material for a hash-based KDF.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#synopsis","title":"SYNOPSIS","text":"ikm
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#description","title":"DESCRIPTION","text":"ikm outputs initial keying material to stdout as a lowercase hex string to be used for the default EdgeX software implementation of an RFC-5869 KDF.
The ikm can output any number of octets. Typically, the KDF will pad the ikm if it is shorter than hashlen, and hash the ikm if it is longer than hashlen. Thus, if ikm returns variable-length output it is advantageous to ensure that the output is always greater than hashlen, where hashlen depends on the hash function used by the KDF.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#example","title":"EXAMPLE","text":"ikm\n64acd82883269a5e46b8b0426d5a18e2b006f7d79041a68a4efa5339f25aba80\n
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#sample-implementations","title":"Sample implementations","text":"This section lists example implementations of the EdgeX Hardware Security Hook.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#tutorial-configuring-edgex-hardware-security-hooks-to-use-a-tpm-on-intel-developer-zone","title":"Tutorial: Configuring EdgeX Hardware Security Hooks to use a TPM on Intel\u00ae Developer Zone","text":"There is a tutorial published on Intel\u00ae Developer Zone that uses TPM hardware through a device driver interface to encrypt the Vault master key shares. The sample uses TPM-based local attestation to attest the system state prior to releasing the IKM. The sample is based on the tpm2-software project in GitHub and is specifically designed to run as a statically-linked executable that could be injected into a Docker container. Although not a complete solution, it is an illustrative sample that demonstrates in concrete terms how to use the TSS C API to access TPM functionality.
"},{"location":"threat-models/stride-model/EdgeX-STRIDE/","title":"EdgeX Foundry STRIDE Threat Model","text":"STRIDE is an acroymn standing for:
STRIDE is a type of security threat modeling to identify security vulnerabilities and risks associated with IT systems and then put methods (mitigation) in place to protect against the vulnerabilities and risks. Specifically, the STRIDE approach to threat modeling looks for common threats as represented in the acroymn in a consistent and methodical way.
"},{"location":"threat-models/stride-model/EdgeX-STRIDE/#report","title":"Report","text":"There are many tools to help create STRIDE threat models. Many of these tools will allow the developer to visually diagram the system and then automatically analyze the diagram and generate STRIDE risks which the developer must then explore and mitigate.
This EdgeX STRIDE model was created using Microsoft's Threat Modeling Tool (MTMT). It is available for free. Documentation on the product is available here.
If you wish to use the tool, make changes and/or generate your own reports you will need to import the following files into the Microsoft TMT:
Created on 12/27/2022 3:06:56 PM
generated from HTML by https://www.convertsimple.com/convert-html-to-markdown/ embedded images extracted with Pandoc https://pandoc.org (Pandoc did not do well with tables so just used for image extraction) using the command below
pandoc -o EdgeXFoundryThreatReportV2.2.md -t markdown -f markdown EdgeXFoundryThreatReportV2.2-original.md --extract-media=./images\n
Threat Model Name: EdgeX Foundry Threat Model
Owner: Jim White (IOTech Systems)
Reviewer: Bryon Nevis, Lenny Goodell, Jim Wang (all from Intel), Farshid Tavakolizadeh (Canonical), Rodney Hess (Beechwoods)
Contributors:
Description: General Threat Model for EdgeX Foundry - inclusive of security elements (Kong, Vault, etc).
Assumptions: EdgeX is platform agnostic, but this Threat model assumes the underlying OS is a Linux distribution. EdgeX can run containerized or non-containerized (natively). This Threat Model assumes EdgeX is running in a containerized environment (Docker). EdgeX micro services can run distributed, but this Threat Model assumes EdgeX is running on a single host (single Docker deamon with a single Docker network unless otherwise specified). Many different devices/sensors can be connected to EdgeX via its device services. This Threat model treats all sensors/devices the same (which is not always the case given the varoius protocols of support). Per https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/, additional hardening such as secure boot with hardware root of trust, and secure disk encryption are outside of EdgeX control but would greatly improve the threat mitigation.
External Dependencies: Operating system and hardware (including devices/sensors) Device/sensor drivers Possibly a cloud system or external enterprise system that EdgeX gets data to A message bus broker (such as an MQTT broker)
"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#notes","title":"Notes:","text":"Id Note Date Added By 1 Tampering with Data - This is a threat where information in the system is changed by an attacker. For example, an attacker changes an account balance Unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet 8/25/2022 6:40:40 PM DESKTOP-SL3KKHH\\jpwhi 2 XSS protections: filter input on arrival (don't do), encode data on oputput (don't do), use appropriate headers (do), use CSP (dont do) 8/25/2022 6:54:16 PM DESKTOP-SL3KKHH\\jpwhi 3 priority is determined by the likelihood of a threat occuring and the severity of the impact of its occurance 8/25/2022 7:11:40 PM DESKTOP-SL3KKHH\\jpwhi 4 Repudiation - don't track and log users actions; can't prove a transaction took place 8/25/2022 7:13:14 PM DESKTOP-SL3KKHH\\jpwhi 5 Elevation of privil - authorized or unauthorized user gains access to info not authorized 8/25/2022 7:16:24 PM DESKTOP-SL3KKHH\\jpwhi 6 Remote code execution: https://www.comparitech.com/blog/information-security/remote-code-execution-attacks/ buffer overflow sanitize user inputs proper auth use a firewall 8/25/2022 7:21:28 PM DESKTOP-SL3KKHH\\jpwhi 7 Privilege escalation attacks occur when bad actors exploit misconfigurations, bugs, weak passwords, and other vulnerabilities 8/27/2022 1:57:18 PM DESKTOP-SL3KKHH\\jpwhi"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#threat-model-summary","title":"Threat Model Summary:","text":"Not Started 0 Not Applicable 27 Needs Investigation 14 Mitigation Implemented 100 Total 141 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-edgex-foundry-big-picture","title":"Diagram: EdgeX Foundry (Big Picture)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#edgex-foundry-big-picture-diagram-summary","title":"EdgeX Foundry (Big Picture) Diagram Summary:","text":"Not Started 0 Not Applicable 20 Needs Investigation 3 Mitigation Implemented 96 Total 119 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-config","title":"Interaction: config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#1-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"1. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Consul (configuration) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: EdgeX services that use Consul must use a Vault access token provided in bootstrapping of the service. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/. There is also per service ACL rules in place to limit Consul access. As of the Ireland release, access of Consul requires ACL token header X-Consul-Token in any HTTP calls. Moreover, Consul itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. 
In other words, with the required Consul's ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in Key-Value (KV) store are now better protected. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#2-spoofing-of-source-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"2. Spoofing of Source Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-configuration","title":"Interaction: configuration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#3-spoofing-of-source-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"3. Spoofing of Source Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileaged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#4-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"4. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Configuration Files can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Disclosure of configuration files is not important. Configuration data is not considered sensitive. As long as the configuration files are not manipulated, then access to configuration files is not deemed a threat. All secret configuration is made available through Vault. 
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-data","title":"Interaction: data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#5-spoofing-of-source-data-store-redis-state-mitigation-implemented-priority-low","title":"5. Spoofing of Source Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case Redis) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#6-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"6. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Redis can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Access control credentials for Redis are secured in Vault (provided to EdgeX services at bootstrapping but otherwise unknown). Access without credentials is denied. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#7-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"7. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-published-message","title":"Interaction: published message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#8-potential-excessive-resource-consumption-for-edgex-foundry-or-message-bus-broker-state-mitigation-implemented-priority-medium","title":"8. 
Potential Excessive Resource Consumption for EdgeX Foundry or Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Message Bus Broker take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: The EdgeX message broker is either Redis Pub/Sub or an MQTT broker like Mosquitto and runs as a container in a Docker network that, by default with security on, does not allow direct access to the broker. Publishing or subscribing in a way that causes it to use excessive resources would require authorized access to the host, as the port to the internal message broker is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service producing more messages than the broker can handle) that result in a DoS event. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#9-spoofing-of-destination-data-store-message-bus-state-mitigation-implemented-priority-low","title":"9. Spoofing of Destination Data Store Message Bus\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Message Bus. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: The message bus, when requiring a broker (an MQTT broker, for example), is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileged access to the host. Message broker host and port are part of services' configuration (covered under threats against configuration). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-queries-data","title":"Interaction: queries & data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#10-spoofing-of-destination-data-store-redis-state-mitigation-implemented-priority-low","title":"10. Spoofing of Destination Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Redis. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case Redis) would not have the appropriate cert in place to participate in the communications. 
Database host and port are part of services' configuration (covered under threats against configuration). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#11-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"11. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. Docker containers do not share the host's network interface by default and instead rely on virtual ethernet adapters and bridges. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#12-potential-excessive-resource-consumption-for-edgex-foundry-or-redis-state-mitigation-implemented-priority-low","title":"12. Potential Excessive Resource Consumption for EdgeX Foundry or Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Redis take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Redis runs as a container in a Docker network that, by default with security on, does not allow direct access to the database. Querying or pushing data into it in a way that causes it to use excessive resources would require authorized access to the host, as the port to the database is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing too much data into it) that result in a DoS event. EdgeX does have a routine with customizable configuration that \"cleans up\" and removes older data so that \"normal\" or otherwise expected use of the database for persistence does not result in a DoS. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#13-spoofing-of-destination-data-store-vault-state-mitigation-implemented-priority-low","title":"13. Spoofing of Destination Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Vault. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case Vault) would not have the appropriate cert in place to participate in the communications. EdgeX services that use Vault must use the go-mod-secrets client or a Vault service token to access its secrets (which is revoked by default). See https://docs.edgexfoundry.org/2.3/security/Ch-SecretStore/#using-the-secret-store. Vault host and port are configured from static configuration or environment overrides (trusted input) and not Consul, making it difficult to misdirect services' access to Vault. See the EdgeX Threat Model documentation (https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/#threat-matrix) for additional considerations and mitigation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#14-potential-excessive-resource-consumption-for-edgex-foundry-or-vault-state-mitigation-implemented-priority-low","title":"14. Potential Excessive Resource Consumption for EdgeX Foundry or Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Vault take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Vault runs as a container in a Docker network that, by default with security on, does not allow direct access to the secret store. Querying or pushing data into it in a way that causes it to use excessive resources would require authorized access to the host, as the port to the secret store is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing too many secrets into it) that result in a DoS event. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query_1","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#15-spoofing-of-destination-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"15. Spoofing of Destination Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (REST authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the REST caller would not be properly authenticated by a spoofed Kong, and any query request would thereby be denied. 
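As a rough illustration of the token-gated access to Vault described above (threats 13 and 14), the Go sketch below reads a secret over Vault's HTTP API using the X-Vault-Token header. EdgeX services normally go through the go-mod-secrets client; the address, token, and secret path here are assumed placeholders, not the actual EdgeX secret store layout.

```go
// Minimal sketch of a token-authenticated read from Vault's HTTP API.
// URL, token, and secret path are placeholders for illustration only.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const (
		vaultURL   = "http://localhost:8200"            // assumed Vault address
		secretPath = "/v1/secret/data/example-service"  // hypothetical KV v2 path
		token      = "REPLACE_WITH_VAULT_SERVICE_TOKEN" // issued at bootstrapping
	)

	req, err := http.NewRequest(http.MethodGet, vaultURL+secretPath, nil)
	if err != nil {
		panic(err)
	}
	// Requests without a valid token are rejected, which is what blocks a
	// caller that only has network reachability to the Vault container.
	req.Header.Set("X-Vault-Token", token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```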
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#16-the-devicesensor-rest-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"16. The Device/Sensor (REST authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (REST authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#17-data-store-denies-devicesensor-rest-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"17. Data Store Denies Device/Sensor (REST authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (REST authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#18-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-medium","title":"18. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#19-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"19. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. 
Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for values outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query_2","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#20-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"20. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or the MQTT broker, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the subscriber to the broker. Physical and system security is required to protect these and mitigate this threat. Query requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#21-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"21. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or the MQTT broker, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the publisher to the broker. Physical and system security is required to protect these and mitigate this threat. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#22-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"22. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data that cause the broker or subscriber to go offline or appear unresponsive - depending on the capabilities of the broker or subscribing application. 
In the opposite direction, an MQTT publisher could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system or MQTT broker) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#23-data-flow-sniffing-state-mitigation-implemented-priority-high","title":"23. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#24-data-store-denies-devicesensor-via-external-mqtt-broker-authenticated-potentially-writing-data-state-mitigation-implemented-priority-high","title":"24. Data Store Denies Device/Sensor (via external MQTT broker - authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Device/Sensor (via external MQTT broker - authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#25-the-devicesensor-via-external-mqtt-broker-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"25. The Device/Sensor (via external MQTT broker - authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (via external MQTT broker - authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#26-spoofing-of-destination-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"26. 
Spoofing of Destination Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT query sender (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to publish. The EdgeX framework has support for storing secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#27-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"27. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT publisher of a query (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to make its request. The EdgeX framework has support for storing secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query-or-actuation","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#28-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"28. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#29-spoofing-of-destination-data-store-devicesensor-state-needs-investigation-priority-high","title":"29. Spoofing of Destination Data Store Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. 
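The authenticated-broker mitigations above (threats 26 and 27) rest on the MQTT client proving its identity to the external broker. The following is a hedged sketch, using the Eclipse Paho Go client, of a publisher that authenticates with a username and password before publishing; the broker address, credentials, and topic are invented for the example, and real EdgeX services would pull such credentials from the secret store rather than hard-coding them.

```go
// Illustrative MQTT publisher that must authenticate to the external broker
// before it can publish. A spoofed broker or an unauthenticated publisher
// fails this handshake. All connection details below are placeholders.
package main

import (
	"fmt"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883"). // assumed external broker
		SetClientID("edgex-illustration").
		SetUsername("device-service"). // placeholder credentials
		SetPassword("REPLACE_WITH_SECRET")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		// Connection or authentication failure: the broker rejected us.
		panic(token.Error())
	}
	defer client.Disconnect(250)

	// Publish a query/command message on a hypothetical topic.
	token := client.Publish("edgex/command/request", 1, false, `{"cmd":"read"}`)
	token.Wait()
	fmt.Println("published, err:", token.Error())
}
```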
Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#30-the-devicesensor-data-store-could-be-corrupted-state-not-applicable-priority-high","title":"30. The Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor. Ensure the integrity of the data flow to the data store. For example: a man-in-the-middle attack on the wire between EdgeX and the wired device/sensor or an attack on the sensor (jiggling a vibration sensor) Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device, or interception/use of the data flowing to the device/sensor, is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Additional optional mitigation ideas require modifications to the EdgeX device service. The device service could be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only mitigate the data from being used in the rest of EdgeX once received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#31-data-store-denies-devicesensor-potentially-writing-data-state-mitigation-implemented-priority-low","title":"31. 
Data Store Denies Device/Sensor Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#32-data-flow-sniffing-state-not-applicable-priority-high","title":"32. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#33-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-state-not-applicable-priority-high","title":"33. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor / device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#34-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"34. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. 
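The "filter data" and "expected ranges of values" mitigations described for threats 30 and 33 could look roughly like the sketch below inside a device service. The limits, types, and minimum interval are invented for the example (real device profiles carry their own min/max attributes); this is only one plausible shape for such a filter, not the EdgeX SDK's implementation.

```go
// Illustrative range check plus crude rate limiting for incoming readings:
// out-of-range values and readings that arrive too quickly are dropped
// before they reach the rest of EdgeX. All thresholds here are hypothetical.
package main

import (
	"fmt"
	"time"
)

type rangeFilter struct {
	min, max    float64       // expected value range (e.g. from a device profile)
	minInterval time.Duration // drop readings that arrive faster than this
	lastAccept  time.Time
}

// accept reports whether a reading should be passed on.
func (f *rangeFilter) accept(value float64, now time.Time) bool {
	if value < f.min || value > f.max {
		return false // out of range: likely a tampered or faulty sensor
	}
	if !f.lastAccept.IsZero() && now.Sub(f.lastAccept) < f.minInterval {
		return false // too frequent: protects against a flooding device
	}
	f.lastAccept = now
	return true
}

func main() {
	f := &rangeFilter{min: -40, max: 125, minInterval: time.Second}
	for _, r := range []float64{22.5, 999.0, 23.1} {
		fmt.Println(r, "accepted:", f.accept(r, time.Now()))
	}
}
```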
Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#35-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"35. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query-config","title":"Interaction: query & config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#36-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-configuration-state-mitigation-implemented-priority-low","title":"36. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (configuration) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Consul runs as a container in a Docker network that, by default with security on, does not allow direct access to the APIs and UI without the Consul access token (see https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/#how-to-get-consul-acl-token). A rogue authorized user or someone who illegally obtained the Consul token could force Consul to use too many resources by invoking its API or stuffing too much configuration in the system (or impact it enough to disrupt its ability to serve the EdgeX services). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#37-spoofing-of-destination-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"37. 
Spoofing of Destination Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (configuration). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Replacing/spoofing the Consul container would require administrative access to the Docker socket. EdgeX services will talk to any service that answers on the configured Consul hostname. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query-or-actuation_1","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#38-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"38. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#39-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"39. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#40-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"40. 
Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (physically connected authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor/device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system), causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#41-data-flow-sniffing-state-not-applicable-priority-high","title":"41. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. In cases where there is no way to secure the communications with the device/sensor under that protocol, mitigation is via physical security of the device/sensor (along with its connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#42-data-store-denies-devicesensor-physically-connected-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"42. Data Store Denies Device/Sensor (physically connected authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (physically connected authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. 
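The recurring "elevated log level" mitigation (threats 17, 24, 31, and 42) amounts to recording full device communications only when the writable log level is raised to DEBUG. The following is a small, generic Go sketch of that behaviour using the standard library logger as a stand-in for the EdgeX logging client; the payload and level handling are illustrative only.

```go
// Sketch: full payloads are only written when the logger runs at debug level,
// mirroring the effect of setting the writable LogLevel to DEBUG.
package main

import (
	"context"
	"log/slog"
	"os"
)

func main() {
	// Flip this to slog.LevelInfo and the raw payload line disappears.
	level := slog.LevelDebug
	logger := slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{Level: level}))

	payload := `{"deviceName":"thermo-1","reading":22.5}` // example device data

	logger.Info("data received", "device", "thermo-1")
	if logger.Enabled(context.Background(), slog.LevelDebug) {
		// Only at DEBUG do we record the full communication for audit purposes.
		logger.Debug("raw payload", "body", payload)
	}
}
```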
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#43-the-devicesensor-physically-connected-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"43. The Device/Sensor (physically connected authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor (physically connected authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: With authentication and encryption of the data between EdgeX and the device/sensor (ex: using TLS), the data on the wire can be protected. The physical security of the device/sensor still needs to be achieved to protect against someone tampering with the device/sensor (ex: holding a match to a thermostat). As with device/sensors that are not authenticated, additional optional mitigation ideas to mitigate unprotected devices/sensors require modifications to the EdgeX device service. The device service could be constructed to filter data or report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only mitigate the data from being used in the rest of EdgeX once received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#44-spoofing-of-destination-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"44. Spoofing of Destination Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has support for storing secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#45-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"45. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-read","title":"Interaction: read","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#46-spoofing-of-destination-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"46. Spoofing of Destination Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Configuration Files. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed the EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#47-potential-excessive-resource-consumption-for-edgex-foundry-or-configuration-files-state-mitigation-implemented-priority-low","title":"47. Potential Excessive Resource Consumption for EdgeX Foundry or Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Configuration Files take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: A configuration file does not consume resources other than file space. The configuration file is deployed with the service container and therefore, without access to the host and Docker, its size is controlled. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#48-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"48. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. 
Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). The JWT token expires in one hour by default. EdgeX UI today does not have the notion of \"users\" or \"permissions\"; it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request_1","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#49-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"49. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX Foundry may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: There is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, impersonating EdgeX would require access to the host system and the Docker network. With this access, many other severe issues could occur (stopping the system, sending incorrect data, etc.). Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#50-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"50. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX Foundry. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Kong, the service would not know that the response came from something other than Kong. I.e. - there is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, Kong is run as a container on the EdgeX Docker network. Replacing/spoofing the Kong container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Kong (with TLS cert in place). A spoofing service (in this case Kong) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request_2","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#51-elevation-by-changing-the-execution-flow-in-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"51. 
Elevation by Changing the Execution Flow in EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX UI - Web Application in order to change the flow of program execution within EdgeX UI - Web Application to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. In order to use the Web UI (with secure mode EdgeX), authentication is required via Kong. With proper authentication, a rogue user could invoke commands, change the rules engine rules (and alter workflows), stop services (and alter workflows), etc. - but these could also be accomplished directly with EdgeX. If the GUI is of extreme concern, it can be removed or turned off as it is a convenience mechanism and is not required for EdgeX operation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#52-edgex-ui-web-application-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-medium","title":"52. EdgeX UI - Web Application May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: Browser/API Caller may be able to remotely execute code for EdgeX UI - Web Application. Justification: <no mitigation provided> Possible Mitigation: Possible protections to be implemented: buffer overflow protection, sanitizing user inputs, and use of a firewall. Mitigator: EdgeX Foundry Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#53-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"53. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Browser/API Caller in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. The EdgeX GUI is deployed as a container as part of the EdgeX application set. Impersonation of the Web Application would require access to the host (with privilege) and require changing or removing the existing GUI Web application. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#54-data-flow-request-is-potentially-interrupted-state-not-applicable-priority-low","title":"54. Data Flow request Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. 
Kong can also be used to throttle requests coming from the GUI or other caller (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. It can be removed if it is a high-risk target without affecting the rest of EdgeX. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#55-potential-process-crash-or-stop-for-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"55. Potential Process Crash or Stop for EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: EdgeX UI - Web Application crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Other mechanisms exist to work with EdgeX (such as the service APIs). As with other EdgeX services, stopping this service requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges. The GUI service can be removed for extra security. The GUI is a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#56-data-flow-sniffing-state-mitigation-implemented-priority-medium","title":"56. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Data flowing across request may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: A VPN or HTTPS can be used to secure the communications with the EdgeX UI. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#57-potential-data-repudiation-by-edgex-ui-web-application-state-not-applicable-priority-low","title":"57. Potential Data Repudiation by EdgeX UI - Web Application\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX UI - Web Application claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web UI can use elevated logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#58-cross-site-scripting-state-mitigation-implemented-priority-low","title":"58. 
Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: X-XSS-Protection is enabled on all pages to protect against detected XSS. In environments where cross site scripting is a huge concern, the EdgeX UI Web application can be removed with no effect on the rest of the system. The UI is offered as a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#59-potential-lack-of-input-validation-for-edgex-ui-web-application-state-needs-investigation-priority-medium","title":"59. Potential Lack of Input Validation for EdgeX UI - Web Application\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Tampering Description: Data flowing across request may be tampered with by an attacker. This may lead to a denial of service attack against EdgeX UI - Web Application or an elevation of privilege attack against EdgeX UI - Web Application or an information disclosure by EdgeX UI - Web Application. Failure to verify that input is as expected is a root cause of a very large number of exploitable issues. Consider all paths and the way they handle data. Verify that all input is verified for correctness using an approved list input validation approach. Justification: <no mitigation provided> Possible Mitigation: Input validation should be added to the GUI. However, access to the Web GUI (and then EdgeX) requires the API gateway token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). If this threat is likely, the Web GUI can be removed as this does not impact the remainder of EdgeX operations. Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#60-spoofing-the-browserapi-caller-external-entity-state-not-applicable-priority-low","title":"60. Spoofing the Browser/API Caller External Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#61-spoofing-the-edgex-ui-web-application-process-state-mitigation-implemented-priority-low","title":"61. Spoofing the EdgeX UI - Web Application Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: EdgeX UI - Web Application may be spoofed by an attacker and this may lead to information disclosure by Browser/API Caller. Consider using a standard authentication mechanism to identify the destination process. 
Justification: <no mitigation provided> Possible Mitigation: As one of the services deployed as a container of EdgeX, spoofing of EdgeX GUI would require either replacing the container (requiring host access and elevated privileges) and/or intercepting and rerouting traffic. Further, the GUI must obtain and use a Kong JWT token to access the EdgeX APIs which a spoofer would not have. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request_3","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#62-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"62. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. EdgeX UI today does not have the notion of \"users\" or \"permissions\" and that it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#63-data-flow-request-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"63. Data Flow request Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#64-external-entity-kong-potentially-denies-receiving-data-state-not-applicable-priority-low","title":"64. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#65-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"65. 
Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response_1","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#66-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"66. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Kong is run as a container on the EdgeX Docker network. Replacing/spoofing Kong would require privileaged access to the host. Kong is exposed via TLS and we provide a cli tool to install a custom certificate that the web UI can validate if the CA is trusted. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#67-cross-site-scripting-state-mitigation-implemented-priority-low","title":"67. Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be a subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: Because the Web application is running as a container on the Docker network with Kong, access to the response traffic via Kong would require access to the Docker network (requiring access to the host with elevated privilege). The EdgeX Web GUI has X-XSS-Protection enabled. In environments where cross site scripting is a concern, the EdgeX UI Web application can be removed with no effect to the rest of the system. The UI is offered as a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#68-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"68. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: The Web GUI must authenticate with Kong using a JWT token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). Without the proper JWT token access, the Web GUI cannot get eleveated privilege to EdgeX as a whole. 
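For orientation only: the JWT protection referenced in these mitigations is enforced at the Kong gateway. A minimal, illustrative sketch of what that can look like in a Kong declarative configuration is shown below; the service name, upstream URL and route path are hypothetical, and EdgeX generates its actual gateway configuration through its security bootstrapping rather than from a hand-written file like this.

```yaml
# Illustrative kong.yml fragment (not the EdgeX-generated configuration):
# every request routed to the upstream service must carry a valid JWT.
_format_version: "2.1"
services:
  - name: edgex-core-data            # hypothetical upstream name
    url: http://edgex-core-data:59880
    routes:
      - name: core-data
        paths:
          - /core-data
    plugins:
      - name: jwt                    # Kong's bundled JWT authentication plugin
```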
An impersonating Web GUI might be used to have a user provide their JWT token, which could then be used to perform other operations in EdgeX. If this is a real threat, the GUI can be removed and not used without other impacts to EdgeX. The GUI is a convenience tool. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response_2","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#69-data-flow-response-is-potentially-interrupted-state-not-applicable-priority-low","title":"69. Data Flow response Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Kong can also be used to throttle requests coming from the GUI or other caller (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. It can be removed if it is a high-risk target without affecting the rest of EdgeX. Mitigator: Third Party Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#70-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"70. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web GUI can use elevated log level to log all requests. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#71-spoofing-of-the-browserapi-caller-external-destination-entity-state-not-applicable-priority-low","title":"71. Spoofing of the Browser/API Caller External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Browser/API Caller. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. 
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response_3","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#72-data-flow-response-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"72. Data Flow response Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#73-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"73. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging to document all requests. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#74-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"74. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#75-spoofing-of-source-data-store-devicesensor-state-not-applicable-priority-high","title":"75. Spoofing of Source Data Store Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof as a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. 
Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#76-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-low","title":"76. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#77-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"77. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#78-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"78. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with eleveated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#79-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"79. 
Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#80-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"80. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#81-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"81. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: buffer overflow attack and sending too much data for the service to consume for example - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#82-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"82. 
Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_1","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#83-external-entity-megaservice-cloud-or-enterprise-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"83. External Entity Megaservice - Cloud or Enterprise Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Megaservice - Cloud or Enterprise claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#84-spoofing-of-the-megaservice-cloud-or-enterprise-external-destination-entity-state-not-applicable-priority-low","title":"84. Spoofing of the Megaservice - Cloud or Enterprise External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Megaservice - Cloud or Enterprise may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Megaservice - Cloud or Enterprise. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. 
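Threat 82's range-checking mitigation (min/max attributes on device profiles) can be illustrated with a small, hypothetical device profile fragment; the resource name, value type and limits below are examples only, not part of any shipped profile.

```yaml
# Hypothetical device profile excerpt: readings outside [-40, 125] are rejected
# by the device service before an event/reading is created.
deviceResources:
  - name: BoilerTemperature
    description: "Example temperature resource"
    properties:
      valueType: Float32
      readWrite: "R"
      minimum: "-40"
      maximum: "125"
```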
In the case of a megacloud or enterprise, most communication is from EdgeX to that system vs sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#85-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"85. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the cloud). If the data is deemed critical and if by some means the data flow was interrupted, then store and forward mechanisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_2","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#86-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"86. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the external message bus). If the data is deemed critical and if by some means the data flow was interrupted, store and forward mechanisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#87-external-entity-message-topic-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"87. External Entity Message Topic Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Message Topic claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#88-spoofing-of-the-message-topic-external-destination-entity-state-not-applicable-priority-low","title":"88. 
Spoofing of the Message Topic External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Topic may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Message Topic. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. In the case of an external message bus, most communication is from EdgeX to that system vs sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_3","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#89-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"89. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#90-spoofing-of-source-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"90. Spoofing of Source Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#91-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"91. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. 
Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#92-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"92. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (physically connected authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. In cases where there is no way to secure the communications with the device/sensor under that protocol, then mitigation is via physical security of the device/sensor (along with their connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#93-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"93. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#94-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"94. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. 
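The elevated-log-level mitigation cited for threat 91 (and repeated for several other repudiation threats in this report) amounts to raising one writable setting in the device service. A minimal sketch, assuming the YAML configuration layout of newer EdgeX releases (older releases expressed the same setting in TOML), is:

```yaml
# Illustrative device service configuration fragment. In a running stack the
# Writable section is normally edited through the registry (Consul) so the
# change takes effect without restarting the service.
Writable:
  LogLevel: DEBUG   # log all incoming requests and sensor data handling
```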
Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#95-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"95. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#96-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"96. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (physically connected authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: buffer overflow attack and sending too much data for the service to consume for example - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#97-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"97. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. 
Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_4","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#98-spoofing-of-source-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"98. Spoofing of Source Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to Kong. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the REST caller would not get the proper authenticated by a spoofed Kong and thereby deny any query request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#99-external-entity-kong-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"99. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#100-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"100. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (REST authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#101-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"101. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. 
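As a concrete illustration of the throttling mitigation named just above, Kong's bundled rate-limiting plugin can be enabled in a declarative configuration; the limits below are arbitrary examples, not EdgeX defaults.

```yaml
# Illustrative kong.yml fragment: cap request rates at the gateway so a
# misbehaving caller cannot exhaust the services behind it.
_format_version: "2.1"
plugins:
  - name: rate-limiting
    config:
      minute: 120      # at most 120 requests per minute
      hour: 2000       # and 2000 per hour
      policy: local    # keep counters in Kong's local memory
```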
See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#102-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"102. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#103-weakness-in-sso-authorization-state-mitigation-implemented-priority-high","title":"103. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_5","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#104-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-low","title":"104. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Access to publish data through the external MQTT broker is protected with authentication. Wrong data can also be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#105-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-mitigation-implemented-priority-low","title":"105. 
EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (via external MQTT broker - authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor via MQTT (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: buffer overflow attack and sending too much data for the service to consume for example - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#106-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"106. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the publisher to the broker. Physical and system security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#107-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"107. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the publisher to the broker. Physical and system security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#108-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"108. 
Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#109-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"109. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (via external MQTT broker - authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#110-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"110. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#111-spoofing-of-source-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"111. Spoofing of Source Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT publisher of sensor data (or the spoofed external message broker) would not be properly authenticated and would thereby be denied any request. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#112-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"112. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT receiver of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby be unable to receive. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-service-registration","title":"Interaction: service registration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#113-spoofing-of-destination-data-store-consul-registry-state-mitigation-implemented-priority-low","title":"113. Spoofing of Destination Data Store Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (registry) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (registry). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#114-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-registry-state-mitigation-implemented-priority-low","title":"114. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (registry) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX services and Consul run as containers in a Docker network that, by default with security on, does not allow direct access to the service APIs. During the process of Consul bootstrapping, the EdgeX security bootstrapper ensures that the Consul APIs and GUI cannot be accessed without an ACL token (see https://docs.edgexfoundry.org/2.2/security/Ch-Secure-Consul/). Therefore, using the Consul APIs to cause a DoS attack would require access tokens. 
A rogue authorized user or someone able to illegally get the Consul token could cause excess use of resources that cause the services or Consul down. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#115-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"115. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileages on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then TLS or overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-service-secrets","title":"Interaction: service secrets","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#116-weak-access-control-for-a-resource-state-mitigation-implemented-priority-medium","title":"116. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Improper data protection of Vault can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: The Vault root and service level tokens are revoked after setup and then all interactions is via the programmatic interface (with properly authenticated token). There are additional options to Vault Master Key encryption provided here: https://docs.edgexfoundry.org/2.2/threat-models/secret-store/vault_master_key_encryption/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#117-spoofing-of-source-data-store-vault-state-mitigation-implemented-priority-low","title":"117. Spoofing of Source Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case Vault), would not have the appropriate cert in place to participate in the communications. 
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-subscribed-message","title":"Interaction: subscribed message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#118-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"118. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Message Bus Broker can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: When running EdgeX in secure mode the Redis database service is secured with a username/password. Redis Pub/Sub utilizes the existing Redis database service so that no additional broker service is required. This in turn creates a Secure MessageBus. See https://docs.edgexfoundry.org/2.2/security/Ch-Secure-MessageBus/. MQTTS can used for internal message bus communications but not provided by EdgeX Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#119-spoofing-of-source-data-store-message-bus-broker-state-mitigation-implemented-priority-low","title":"119. Spoofing of Source Data Store Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus Broker may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: The message bus when requiring a broker (MQTT broker for example) is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileaged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-edgex-service-to-service-http-comms","title":"Diagram: EdgeX Service to Service HTTP comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#edgex-service-to-service-http-comms-diagram-summary","title":"EdgeX Service to Service HTTP comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 2 Mitigation Implemented 0 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-http","title":"Interaction: HTTP","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#120-edgex-service-a-process-memory-tampered-state-needs-investigation-priority-high","title":"120. EdgeX Service A Process Memory Tampered\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. 
Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#121-elevation-using-impersonation-state-needs-investigation-priority-high","title":"121. Elevation Using Impersonation\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service APIs is restricted except through Kong. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternately, TLS can be used to encrypt all traffic. Service-to-service calls behind Kong are unauthenticated in the current implementation. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-edgex-service-to-service-message-bus-comms","title":"Diagram: EdgeX Service to Service message bus comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#edgex-service-to-service-message-bus-comms-diagram-summary","title":"EdgeX Service to Service message bus comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 2 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-message-bus-mqtt-redis-pubsub-nats","title":"Interaction: message bus (MQTT, Redis Pub/Sub, NATS)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#122-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"122. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: All services are required to authroize to the message bus, but all services authorized on the message bus have equal privilege to send and receive messages. Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service message bus is restricted to internal communications only. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternately, secure MQTT (MQTTS) message bus communications can be used. 
Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#123-edgex-service-a-process-memory-tampered-state-mitigation-implemented-priority-high","title":"123. EdgeX Service A Process Memory Tampered\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-access-via-vpn","title":"Diagram: Access via VPN","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#access-via-vpn-diagram-summary","title":"Access via VPN Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-host-access","title":"Diagram: Host Access","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#host-access-diagram-summary","title":"Host Access Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-open-port-protections","title":"Diagram: Open Port Protections","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#open-port-protections-diagram-summary","title":"Open Port Protections Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-device-protocol-threats-modbus-example","title":"Diagram: Device Protocol Threats - Modbus example","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#device-protocol-threats-modbus-example-diagram-summary","title":"Device Protocol Threats - Modbus example Diagram Summary:","text":"Not Started 0 Not Applicable 7 Needs Investigation 9 Mitigation Implemented 2 Total 18 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-binary-rtu-get-or-set","title":"Interaction: Binary RTU (GET or SET)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#124-spoofing-of-destination-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"124. Spoofing of Destination Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to insure no spoofing or unauthorized collection of data or actuation with the device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#125-potential-excessive-resource-consumption-for-modbus-device-service-or-modbus-devicesensor-state-needs-investigation-priority-high","title":"125. Potential Excessive Resource Consumption for Modbus Device Service or Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does Modbus Device Service or Modbus Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DOS attack (sending too much data), or send erroneous/faulty data, or disrupted / cut off and thereofore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help onboarding with an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#126-spoofing-the-modbus-device-service-process-state-needs-investigation-priority-high","title":"126. Spoofing the Modbus Device Service Process\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to unauthorized access to Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#127-the-modbus-devicesensor-data-store-could-be-corrupted-state-needs-investigation-priority-high","title":"127. The Modbus Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across Binary RTU (GET or SET) may be tampered with by an attacker. This may lead to corruption of Modbus Device/Sensor. Ensure the integrity of the data flow to the data store. 
Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#128-data-store-denies-modbus-devicesensor-potentially-writing-data-state-not-applicable-priority-high","title":"128. Data Store Denies Modbus Device/Sensor Potentially Writing Data\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: It is unlikely that a Modbus device/sensor has a log to provide an audit of requests. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#129-data-flow-sniffing-state-not-applicable-priority-high","title":"129. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across Binary RTU (GET or SET) may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized nor encrypted by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#130-weak-credential-transit-state-needs-investigation-priority-high","title":"130. Weak Credential Transit\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Credentials on the wire are often subject to sniffing by an attacker. Are the credentials re-usable/re-playable? Are credentials included in a message? For example, sending a zip file with the password in the email. Use strong cryptography for the transmission of credentials. Use the OS libraries if at all possible, and consider cryptographic algorithm agility, rather than hardcoding a choice. Justification: <no mitigation provided> Possible Mitigation: Modbus does not support any type of authentication/authorization in communications. Physical security of the device and wire are the only ways to thwart information disclosure. 
Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#131-data-flow-binary-rtu-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"131. Data Flow Binary RTU (GET or SET) Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#132-data-store-inaccessible-state-needs-investigation-priority-high","title":"132. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-binary-rtu-response-get-or-se","title":"Interaction: Binary RTU Response (GET or SE","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#133-spoofing-of-source-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"133. Spoofing of Source Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to Modbus Device Service. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DOS attack (sending too much data), or send erroneous/faulty data, or disrupted / cut off and thereofore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help onboarding with an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#134-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"134. 
Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Modbus Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: As Modbus is a simple protocol (reporting data or reacting to accuation requests), it is not possible for the device or sensor to gain other data from the device service (or EdgeX as a whole). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#135-spoofing-the-modbus-device-service-process-state-not-applicable-priority-high","title":"135. Spoofing the Modbus Device Service Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to information disclosure by Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to insure no spoofing or unauthorized collection of data or actuation with the device. Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#136-potential-data-repudiation-by-modbus-device-service-state-mitigation-implemented-priority-high","title":"136. Potential Data Repudiation by Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device Service claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level can be used to log all data communications from a device/sensor. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#137-potential-process-crash-or-stop-for-modbus-device-service-state-mitigation-implemented-priority-medium","title":"137. Potential Process Crash or Stop for Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Modbus Device Service crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with eleveated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#138-data-flow-binary-rtu-response-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"138. 
Data Flow Binary RTU Response (GET or SET Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#139-data-store-inaccessible-state-needs-investigation-priority-high","title":"139. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#140-modbus-device-service-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-high","title":"140. Modbus Device Service May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Modbus Device/Sensor may be able to remotely execute code for Modbus Device Service. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. 
Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#141-elevation-by-changing-the-execution-flow-in-modbus-device-service-state-not-applicable-priority-high","title":"141. Elevation by Changing the Execution Flow in Modbus Device Service\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into Modbus Device Service in order to change the flow of program execution within Modbus Device Service to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Physical security of the sensor and communications (wire) offer the best hope to mitigate this threat. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/","title":"Threat Modeling Report","text":"Created on 12/27/2022 3:06:56 PM
generated from HTML by https://www.convertsimple.com/convert-html-to-markdown/ embedded images extracted with Pandoc https://pandoc.org (Pandoc did not do well with tables so just used for image extraction) using the command below
pandoc -o EdgeXFoundryThreatReportV2.2.md -t markdown -f markdown EdgeXFoundryThreatReportV2.2-original.md --extract-media=./images\n
Threat Model Name: EdgeX Foundry Threat Model
Owner: Jim White (IOTech Systems)
Reviewer: Bryon Nevis, Lenny Goodell, Jim Wang (all from Intel), Farshid Tavakolizadeh (Canonical), Rodney Hess (Beechwoods)
Contributors:
Description: General Threat Model for EdgeX Foundry - inclusive of security elements (Kong, Vault, etc).
Assumptions: EdgeX is platform agnostic, but this Threat model assumes the underlying OS is a Linux distribution. EdgeX can run containerized or non-containerized (natively). This Threat Model assumes EdgeX is running in a containerized environment (Docker). EdgeX micro services can run distributed, but this Threat Model assumes EdgeX is running on a single host (single Docker deamon with a single Docker network unless otherwise specified). Many different devices/sensors can be connected to EdgeX via its device services. This Threat model treats all sensors/devices the same (which is not always the case given the varoius protocols of support). Per https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/, additional hardening such as secure boot with hardware root of trust, and secure disk encryption are outside of EdgeX control but would greatly improve the threat mitigation.
External Dependencies: Operating system and hardware (including devices/sensors) Device/sensor drivers Possibly a cloud system or external enterprise system that EdgeX gets data to A message bus broker (such as an MQTT broker)
"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#notes","title":"Notes:","text":"Id Note Date Added By 1 Tampering with Data - This is a threat where information in the system is changed by an attacker. For example, an attacker changes an account balance Unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet 8/25/2022 6:40:40 PM DESKTOP-SL3KKHH\\jpwhi 2 XSS protections: filter input on arrival (don't do), encode data on oputput (don't do), use appropriate headers (do), use CSP (dont do) 8/25/2022 6:54:16 PM DESKTOP-SL3KKHH\\jpwhi 3 priority is determined by the likelihood of a threat occuring and the severity of the impact of its occurance 8/25/2022 7:11:40 PM DESKTOP-SL3KKHH\\jpwhi 4 Repudiation - don't track and log users actions; can't prove a transaction took place 8/25/2022 7:13:14 PM DESKTOP-SL3KKHH\\jpwhi 5 Elevation of privil - authorized or unauthorized user gains access to info not authorized 8/25/2022 7:16:24 PM DESKTOP-SL3KKHH\\jpwhi 6 Remote code execution: https://www.comparitech.com/blog/information-security/remote-code-execution-attacks/ buffer overflow sanitize user inputs proper auth use a firewall 8/25/2022 7:21:28 PM DESKTOP-SL3KKHH\\jpwhi 7 Privilege escalation attacks occur when bad actors exploit misconfigurations, bugs, weak passwords, and other vulnerabilities 8/27/2022 1:57:18 PM DESKTOP-SL3KKHH\\jpwhi"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#threat-model-summary","title":"Threat Model Summary:","text":"Not Started 0 Not Applicable 27 Needs Investigation 14 Mitigation Implemented 100 Total 141 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-edgex-foundry-big-picture","title":"Diagram: EdgeX Foundry (Big Picture)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#edgex-foundry-big-picture-diagram-summary","title":"EdgeX Foundry (Big Picture) Diagram Summary:","text":"Not Started 0 Not Applicable 20 Needs Investigation 3 Mitigation Implemented 96 Total 119 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-config","title":"Interaction: config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#1-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"1. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Consul (configuration) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: EdgeX services that use Consul must use a Vault access token provided in bootstrapping of the service. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/. There is also per service ACL rules in place to limit Consul access. As of the Ireland release, access of Consul requires ACL token header X-Consul-Token in any HTTP calls. Moreover, Consul itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. In other words, with the required Consul's ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in Key-Value (KV) store are now better protected. 
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#2-spoofing-of-source-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"2. Spoofing of Source Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-configuration","title":"Interaction: configuration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#3-spoofing-of-source-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"3. Spoofing of Source Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileaged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#4-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"4. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Configuration Files can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Disclosure of configuration files is not important. Configuration data is not considered sensitive. As long as the configuration files are not manipulated, then access to configuration files is not deemed a threat. All secret configuration is made available through Vault. 
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-data","title":"Interaction: data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#5-spoofing-of-source-data-store-redis-state-mitigation-implemented-priority-low","title":"5. Spoofing of Source Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case Redis), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#6-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"6. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Redis can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Access control credentials for Redis are secured in Vault (provided to EdgeX services at bootstrapping but otherwise unknown). Access without credentials is denied. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#7-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"7. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileages on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-published-message","title":"Interaction: published message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#8-potential-excessive-resource-consumption-for-edgex-foundry-or-message-bus-broker-state-mitigation-implemented-priority-medium","title":"8. 
Potential Excessive Resource Consumption for EdgeX Foundry or Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Message Bus Broker take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: The EdgeX message broker is either Redis Pub/Sub or an MQTT broker like Mosquitto and runs as a container in a Docker network that, by default with security on, does not allow direct access to the broker. Access to publish or subscribe to cause it to use excessive resources would require authorized access to the host as the port to the internal message broker is protected. In other words, EdgeX mitigates unauthorized attacks resulting in DoS event, but would not mitigate authorized attacks (such as a service producing too many message than the broker can handle) that result in a DoS event. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#9-spoofing-of-destination-data-store-message-bus-state-mitigation-implemented-priority-low","title":"9. Spoofing of Destination Data Store Message Bus\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Message Bus. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: The message bus when requiring a broker (MQTT broker for example) is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileaged access to the host. Message broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-queries-data","title":"Interaction: queries & data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#10-spoofing-of-destination-data-store-redis-state-mitigation-implemented-priority-low","title":"10. Spoofing of Destination Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Redis. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case Redis), would not have the appropriate cert in place to participate in the communications. 
Database host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#11-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"11. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. Docker containers do not share the host's network interface by default and instead is based on virtual ethernet adapters and bridges. A hacker would need to gain access to the host and have elevated privileages on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#12-potential-excessive-resource-consumption-for-edgex-foundry-or-redis-state-mitigation-implemented-priority-low","title":"12. Potential Excessive Resource Consumption for EdgeX Foundry or Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Redis take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Redis runs as a container in a Docker network that, by default with security on, does not allow direct access to the database. Access to query or push data into it to cause it to use excessive resources would require authorized access to the host as the port to the database is protected. In other words, EdgeX mitigates unauthorized attacks resulting in DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing to much data into it) that result in a DoS event. EdgeX does have a routine with customizable configuration that \"cleans up\" and removes older data so that \"normal\" or otherwise expected use of the database for persistenct does not result in DoS. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#13-spoofing-of-destination-data-store-vault-state-mitigation-implemented-priority-low","title":"13. Spoofing of Destination Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Vault. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case Vault), would not have the appropriate cert in place to participate in the communications. EdgeX services that use Vault must use the go-mod-secrets client or a Vault service token to access its secrets (which is revoked by default). See https://docs.edgexfoundry.org/2.3/security/Ch-SecretStore/#using-the-secret-store Vault host and port are configured from static configuration or environment overrides (trusted input) and not Consul, making it difficult to misdirect services access to Vault. See EdgeX Threat Model documentation (https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/#threat-matrix) for additional considerations and mitigation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#14-potential-excessive-resource-consumption-for-edgex-foundry-or-vault-state-mitigation-implemented-priority-low","title":"14. Potential Excessive Resource Consumption for EdgeX Foundry or Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Vault take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Vault runs as a container in a Docker network that, by default with security on, does not allow direct access to the secret store. Access to query or push data into it to cause it to use excessive resources would require authorized access to the host as the port to the database is protected. In other words, EdgeX mitigates unauthorized attacks resulting in DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing to many secrets into it) that result in a DoS event. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query_1","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#15-spoofing-of-destination-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"15. Spoofing of Destination Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (REST authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the REST caller would not get the proper authenticated by a spoofed Kong and thereby deny any query request. 
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#16-the-devicesensor-rest-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"16. The Device/Sensor (REST authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (REST authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#17-data-store-denies-devicesensor-rest-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"17. Data Store Denies Device/Sensor (REST authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (REST authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#18-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-medium","title":"18. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for values outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#19-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"19. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. 
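Threats 18 and 19 point to the device service's "last connected" timestamp as the signal an adopter can watch to detect an interrupted data flow. The Go sketch below shows one way such a watchdog could look; the device names, the map, and the silence threshold are hypothetical, and a real monitor would raise a notification rather than print.

```go
package main

import (
	"fmt"
	"time"
)

// findSilentDevices returns devices whose last report is older than maxSilence.
// In EdgeX the "last connected" information is tracked by the device service;
// the map and threshold here are illustrative stand-ins.
func findSilentDevices(lastConnected map[string]time.Time, maxSilence time.Duration, now time.Time) []string {
	var silent []string
	for name, ts := range lastConnected {
		if now.Sub(ts) > maxSilence {
			silent = append(silent, name)
		}
	}
	return silent
}

func main() {
	now := time.Now()
	devices := map[string]time.Time{
		"temp-sensor-01": now.Add(-30 * time.Second),
		"temp-sensor-02": now.Add(-10 * time.Minute), // has gone quiet
	}
	for _, name := range findSilentDevices(devices, 2*time.Minute, now) {
		// A real watchdog would send an alert (for example via a notification
		// service) instead of printing.
		fmt.Println("device outside normal reporting range:", name)
	}
}
```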
The device service does track \"last connected\" and that timestamp could be monitored for values outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query_2","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#20-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"20. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or subscriber to the broker. Physical and system security is required to protect these and mitigate this threat. Query requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#21-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"21. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or publisher to the broker. Physical and system security is required to protect these and mitigate this threat. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#22-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"22. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data that cause the broker or subscriber to go offline or appear unresponsive - depending on the capabilities of the broker or subscribing application. In the opposite direction, an MQTT publisher could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system or MQTT broker) causing a DoS. 
Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#23-data-flow-sniffing-state-mitigation-implemented-priority-high","title":"23. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#24-data-store-denies-devicesensor-via-external-mqtt-broker-authenticated-potentially-writing-data-state-mitigation-implemented-priority-high","title":"24. Data Store Denies Device/Sensor (via external MQTT broker - authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Device/Sensor (via external MQTT broker - authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#25-the-devicesensor-via-external-mqtt-broker-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"25. The Device/Sensor (via external MQTT broker - authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (via external MQTT broker - authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#26-spoofing-of-destination-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"26. 
Spoofing of Destination Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT query sender (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to publish. The EdgeX framework has the support to store secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#27-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"27. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT publisher of a query (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to make its request. The EdgeX framework has the support to store secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query-or-actuation","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#28-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"28. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the Spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#29-spoofing-of-destination-data-store-devicesensor-state-needs-investigation-priority-high","title":"29. Spoofing of Destination Data Store Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. 
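Several of the MQTT-related threats above (23, 25, 26, 27) come down to authenticating and encrypting the broker connection. Assuming the Eclipse Paho Go client, a hedged sketch of connecting over TLS with credentials might look like the following; the broker address, username, and password are placeholders that would normally come from the device service configuration and the secret store.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Broker address and credentials are placeholders, not EdgeX defaults.
	opts := mqtt.NewClientOptions().
		AddBroker("ssl://mqtt-broker.example.local:8883"). // MQTTS endpoint
		SetUsername("edgex-device-mqtt").
		SetPassword("replace-with-secret").
		SetTLSConfig(&tls.Config{MinVersion: tls.VersionTLS12})

	client := mqtt.NewClient(opts)
	token := client.Connect()
	if !token.WaitTimeout(5*time.Second) || token.Error() != nil {
		fmt.Println("secure connect failed:", token.Error())
		return
	}
	defer client.Disconnect(250)
	fmt.Println("connected to broker over TLS")
}
```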
Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#30-the-devicesensor-data-store-could-be-corrupted-state-not-applicable-priority-high","title":"30. The Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor. Ensure the integrity of the data flow to the data store. I.e. - example: a man in the middle attack on the wire between EdgeX and the wired device/sensor or an attack on the sensor (jiggling a vibration sensor) Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device or intercept/use of the data to the device/sensor is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Additional optional mitigation ideas require modifications to the EdgeX device service. The device service could be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only mitigate the data from being used in the rest of EdgeX once received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#31-data-store-denies-devicesensor-potentially-writing-data-state-mitigation-implemented-priority-low","title":"31. 
Data Store Denies Device/Sensor Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#32-data-flow-sniffing-state-not-applicable-priority-high","title":"32. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#33-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-state-not-applicable-priority-high","title":"33. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor / device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#34-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"34. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. 
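Threat 30 above suggests using the min/max attributes on device profiles so that obviously wrong readings are caught before they flow into the rest of EdgeX. A minimal Go sketch of that range check follows; the struct and limit values are illustrative and not the device profile schema itself.

```go
package main

import "fmt"

// rangeCheck mirrors the idea behind minimum/maximum attributes on a device
// resource: readings outside the expected range are rejected before they are
// forwarded. The type and values are illustrative only.
type rangeCheck struct {
	Min, Max float64
}

func (rc rangeCheck) accept(value float64) bool {
	return value >= rc.Min && value <= rc.Max
}

func main() {
	tempRange := rangeCheck{Min: -40, Max: 125} // plausible sensor limits
	for _, v := range []float64{22.5, 400.0} {
		if tempRange.accept(v) {
			fmt.Printf("reading %.1f accepted\n", v)
		} else {
			fmt.Printf("reading %.1f rejected as out of range\n", v)
		}
	}
}
```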
Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#35-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"35. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query-config","title":"Interaction: query & config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#36-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-configuration-state-mitigation-implemented-priority-low","title":"36. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (configuration) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Consul runs as a container in a Docker network that, by default with security on, does not allow direct access to the APIs and UI without the Consul access token (see https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/#how-to-get-consul-acl-token). A rogue authorized user or someone who illegally obtained the Consul token could force Consul to use too many resources by invoking its API or stuffing too much configuration into the system (or impact it enough to disrupt its ability to service the EdgeX services). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#37-spoofing-of-destination-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"37. 
Spoofing of Destination Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (configuration). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Replacing/spoofing the Consul container would require administrative access to the Docker socket. EdgeX services will talk to any service that answers on the configured consul hostname. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query-or-actuation_1","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#38-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"38. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#39-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"39. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#40-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"40. 
Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (physically connected authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor / device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#41-data-flow-sniffing-state-not-applicable-priority-high","title":"41. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. In cases where there is no way to secure the communications with the device/sensor under that protocol, then mitigation is via physical security of the device/sensor (along with their connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#42-data-store-denies-devicesensor-physically-connected-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"42. Data Store Denies Device/Sensor (physically connected authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (physically connected authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. 
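Threats 22, 33, and 40 all note that the main software-side answer to a "too much data" flood is writing the device service to filter what it forwards. One simple way to sketch that in Go is a per-device minimum-interval filter; the names and thresholds below are illustrative only, not an EdgeX SDK feature.

```go
package main

import (
	"fmt"
	"time"
)

// intervalFilter drops readings that arrive faster than an expected minimum
// interval for a device, one way a device service could be written to blunt
// a flood of data before it reaches the rest of EdgeX.
type intervalFilter struct {
	minInterval time.Duration
	lastSeen    map[string]time.Time
}

func newIntervalFilter(min time.Duration) *intervalFilter {
	return &intervalFilter{minInterval: min, lastSeen: map[string]time.Time{}}
}

func (f *intervalFilter) allow(device string, now time.Time) bool {
	if last, ok := f.lastSeen[device]; ok && now.Sub(last) < f.minInterval {
		return false // arriving too fast; drop rather than forward
	}
	f.lastSeen[device] = now
	return true
}

func main() {
	filter := newIntervalFilter(500 * time.Millisecond)
	now := time.Now()
	fmt.Println(filter.allow("sensor-01", now))                          // true
	fmt.Println(filter.allow("sensor-01", now.Add(10*time.Millisecond))) // false, too soon
	fmt.Println(filter.allow("sensor-01", now.Add(time.Second)))         // true again
}
```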
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#43-the-devicesensor-physically-connected-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"43. The Device/Sensor (physically connected authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor (physically connected authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: With authentication and encryption of the data between EdgeX and the device/sensor (ex: using TLS), the data on the wire can be protected. The physical security of the device/sensor still needs to be achieved to protect against someone tampering with the device/sensor (ex: holding a match to a thermostat). As with device/sensors that are not authenticated, additional optional mitigation ideas to mitigate unprotected devices/sensors require modifications to the EdgeX device service. The device service could be constructed to filter data or report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only mitigate the data from being used in the rest of EdgeX once received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#44-spoofing-of-destination-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"44. Spoofing of Destination Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#45-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"45. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-read","title":"Interaction: read","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#46-spoofing-of-destination-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"46. Spoofing of Destination Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Configuration Files. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed the EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#47-potential-excessive-resource-consumption-for-edgex-foundry-or-configuration-files-state-mitigation-implemented-priority-low","title":"47. Potential Excessive Resource Consumption for EdgeX Foundry or Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Configuration Files take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: The config file does not consume resources other than file space. The configuration file is deployed with the service container and therefore, without access to the host and Docker, its size is controlled. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#48-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"48. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. 
Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). The JWT token expires in one hour by default. The EdgeX UI today does not have the notion of \"users\" or \"permissions\"; it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request_1","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#49-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"49. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX Foundry may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: There is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, impersonating EdgeX would require access to the host system and the Docker network. With this access, many other severe issues could occur (stopping the system, sending incorrect data, etc.). Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#50-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"50. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX Foundry. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Kong, the service would not know that the response came from something other than Kong. I.e. - there is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, Kong is run as a container on the EdgeX Docker network. Replacing/spoofing the Kong container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Kong (with TLS cert in place). A spoofing service (in this case Kong) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#51-elevation-by-changing-the-execution-flow-in-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"51. Elevation by Changing the Execution Flow in EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX UI - Web Application in order to change the flow of program execution within EdgeX UI - Web Application to the attacker's choosing. 
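The JWT mitigation described in threats 48 through 50 means every call that passes through Kong must carry a valid, unexpired bearer token. A minimal Go sketch of such a call is below; the gateway URL, the route, and the environment variable used to supply the token are placeholders rather than fixed EdgeX values.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// The token would be the JWT issued for the API gateway, supplied here
	// via an environment variable chosen for this example.
	token := os.Getenv("EDGEX_GATEWAY_JWT")
	req, err := http.NewRequest(http.MethodGet, "https://localhost:8443/core-data/api/v2/ping", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Kong validates this bearer token; without a valid, unexpired JWT the
	// request is rejected at the gateway and never reaches the EdgeX service.
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```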
Justification: <no mitigation provided> Possible Mitigation: EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. In order to use the Web UI (with secure mode EdgeX), authentication is required via Kong. With proper authentication, a rogue user could invoke commands, change the rules engine rules (and alter workflows), stop services (and alter workflows), etc. - but these could also be accomplished directly with EdgeX. If the GUI is of extreme concern, it can be removed or turned off as it is a convenience mechanism and is not required for EdgeX operation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#52-edgex-ui-web-application-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-medium","title":"52. EdgeX UI - Web Application May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: Browser/API Caller may be able to remotely execute code for EdgeX UI - Web Application. Justification: <no mitigation provided> Possible Mitigation: Possible protections to be implemented: buffer overflow protection, sanitize user inputs, use of a firewall Mitigator: EdgeX Foundry Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#53-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"53. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Browser/API Caller in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. The EdgeX GUI is deployed as a container as part of the EdgeX application set. Impersonation of the Web Application would require access to the host (with privilege) and require changing or removing the existing GUI Web application. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#54-data-flow-request-is-potentially-interrupted-state-not-applicable-priority-low","title":"54. Data Flow request Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Kong can also be used to throttle requests coming from the GUI or other caller (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. It can be removed if it is a high-risk target without affecting the rest of EdgeX. 
Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#55-potential-process-crash-or-stop-for-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"55. Potential Process Crash or Stop for EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: EdgeX UI - Web Application crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Other mechanisms exist to work with EdgeX (such as the service APIs). As with other EdgeX services, stopping the service requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges. The GUI service can be removed for extra security. The GUI is a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#56-data-flow-sniffing-state-mitigation-implemented-priority-medium","title":"56. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Data flowing across request may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: A VPN or HTTPS can be used to secure the communications with the EdgeX UI. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#57-potential-data-repudiation-by-edgex-ui-web-application-state-not-applicable-priority-low","title":"57. Potential Data Repudiation by EdgeX UI - Web Application\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX UI - Web Application claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web UI can use elevated logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#58-cross-site-scripting-state-mitigation-implemented-priority-low","title":"58. Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: X-XSS-Protection is enabled on all pages to protect against detected XSS. In environments where cross site scripting is a huge concern, the EdgeX UI Web application can be removed with no effect on the rest of the system. The UI is offered as a convenience. 
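The X-XSS-Protection response header noted in threat 58 is typically set, alongside related security headers, by middleware in front of the web application. The Go sketch below shows the general pattern only; the specific header values and listening address are common illustrative choices, not necessarily those the EdgeX UI uses.

```go
package main

import (
	"fmt"
	"net/http"
)

// securityHeaders wraps a handler and adds the kind of response headers the
// threat entry above refers to. Values shown are common defaults, not a
// statement of what the EdgeX UI actually sets beyond X-XSS-Protection.
func securityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-XSS-Protection", "1; mode=block")
		w.Header().Set("X-Content-Type-Options", "nosniff")
		w.Header().Set("Content-Security-Policy", "default-src 'self'")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// Listening address is illustrative only.
	http.ListenAndServe("127.0.0.1:4000", securityHeaders(mux))
}
```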
Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#59-potential-lack-of-input-validation-for-edgex-ui-web-application-state-needs-investigation-priority-medium","title":"59. Potential Lack of Input Validation for EdgeX UI - Web Application\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Tampering Description: Data flowing across request may be tampered with by an attacker. This may lead to a denial of service attack against EdgeX UI - Web Application or an elevation of privilege attack against EdgeX UI - Web Application or an information disclosure by EdgeX UI - Web Application. Failure to verify that input is as expected is a root cause of a very large number of exploitable issues. Consider all paths and the way they handle data. Verify that all input is verified for correctness using an approved list input validation approach. Justification: <no mitigation provided> Possible Mitigation: Input validation should be added to the GUI. However, access to the Web GUI (and then EdgeX) requires the API gateway token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). If this threat is likely, the Web GUI can be removed as this does not impact the remainder of EdgeX operations. Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#60-spoofing-the-browserapi-caller-external-entity-state-not-applicable-priority-low","title":"60. Spoofing the Browser/API Caller External Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#61-spoofing-the-edgex-ui-web-application-process-state-mitigation-implemented-priority-low","title":"61. Spoofing the EdgeX UI - Web Application Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: EdgeX UI - Web Application may be spoofed by an attacker and this may lead to information disclosure by Browser/API Caller. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: As one of the services deployed as a container of EdgeX, spoofing of the EdgeX GUI would require either replacing the container (requiring host access and elevated privileges) and/or intercepting and rerouting traffic. Further, the GUI must obtain and use a Kong JWT token to access the EdgeX APIs, which a spoofer would not have. 
Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request_3","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#62-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"62. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). The JWT token expires in one hour by default. The EdgeX UI today does not have the notion of \"users\" or \"permissions\"; it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#63-data-flow-request-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"63. Data Flow request Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#64-external-entity-kong-potentially-denies-receiving-data-state-not-applicable-priority-low","title":"64. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#65-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"65. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). 
The JWT token expires in one hour by default. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response_1","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#66-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"66. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Kong is run as a container on the EdgeX Docker network. Replacing/spoofing Kong would require privileged access to the host. Kong is exposed via TLS and we provide a CLI tool to install a custom certificate that the web UI can validate if the CA is trusted. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#67-cross-site-scripting-state-mitigation-implemented-priority-low","title":"67. Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: Because the Web application is running as a container on the Docker network with Kong, access to the response traffic via Kong would require access to the Docker network (requiring access to the host with elevated privilege). The EdgeX Web GUI has X-XSS-Protection enabled. In environments where cross site scripting is a concern, the EdgeX UI Web application can be removed with no effect on the rest of the system. The UI is offered as a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#68-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"68. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: The Web GUI must authenticate with Kong using a JWT token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). Without the proper JWT token access, the Web GUI cannot get elevated privilege to EdgeX as a whole. An impersonating Web GUI might be used to have a user provide their JWT token, which could then be used to perform other operations in EdgeX. If this is a real threat, the GUI can be removed and not used without other impacts to EdgeX. The GUI is a convenience tool. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response_2","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#69-data-flow-response-is-potentially-interrupted-state-not-applicable-priority-low","title":"69. 
Data Flow response Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Kong can also be used to throttle requests coming from the GUI or other caller (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. It can be removed if a high risk target without affect to the rest of EdgeX. Mitigator: Third Party Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#70-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"70. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web GUI can use elevated log level to log all requests. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#71-spoofing-of-the-browserapi-caller-external-destination-entity-state-not-applicable-priority-low","title":"71. Spoofing of the Browser/API Caller External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Browser/API Caller. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response_3","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#72-data-flow-response-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"72. Data Flow response Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. 
See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#73-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"73. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging to document all requests. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#74-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"74. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#75-spoofing-of-source-data-store-devicesensor-state-not-applicable-priority-high","title":"75. Spoofing of Source Data Store Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof as a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to pick up only trusted devices. Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#76-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-low","title":"76. 
Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#77-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"77. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#78-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"78. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with eleveated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#79-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"79. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/senosr causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. 
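Threats 79 and 80 suggest monitoring the device "last connected" timestamp for values outside normal reporting ranges. The following is a minimal, hypothetical Go sketch of such a watchdog; the core-metadata endpoint, port, JSON field names, and staleness threshold are all assumptions for illustration only.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// device is a minimal, assumed view of a device record for this sketch.
type device struct {
	Name          string `json:"name"`
	LastConnected int64  `json:"lastConnected"` // assumed epoch milliseconds
}

type deviceList struct {
	Devices []device `json:"devices"`
}

func main() {
	// Hypothetical core-metadata endpoint; adjust host/port/path to your deployment.
	resp, err := http.Get("http://localhost:59881/api/v2/device/all")
	if err != nil {
		fmt.Println("metadata unreachable:", err)
		return
	}
	defer resp.Body.Close()

	var list deviceList
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		fmt.Println("decode error:", err)
		return
	}

	stale := 5 * time.Minute // illustrative threshold
	for _, d := range list.Devices {
		last := time.UnixMilli(d.LastConnected)
		if time.Since(last) > stale {
			fmt.Printf("ALERT: %s silent since %s\n", d.Name, last)
		}
	}
}
```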
Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#80-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"80. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#81-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"81. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute arbitrary code based on input from a device or sensor (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large of a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#82-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"82. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. 
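Threat 81's mitigation notes that a device service can be written to reject overly large requests and to extract only the expected values from incoming data. Below is a minimal Go sketch of that idea using only the standard library; the route, payload shape, port, and 64 KiB limit are illustrative assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// reading is a minimal, hypothetical shape for incoming sensor data.
type reading struct {
	Resource string  `json:"resource"`
	Value    float64 `json:"value"`
}

func ingest(w http.ResponseWriter, r *http.Request) {
	// Cap the request body so an oversized payload cannot exhaust memory.
	r.Body = http.MaxBytesReader(w, r.Body, 64*1024) // 64 KiB: illustrative limit

	var in reading
	dec := json.NewDecoder(r.Body)
	dec.DisallowUnknownFields() // extract only the expected fields
	if err := dec.Decode(&in); err != nil {
		http.Error(w, "rejected: "+err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "accepted %s=%v\n", in.Resource, in.Value)
}

func main() {
	http.HandleFunc("/ingest", ingest)
	http.ListenAndServe(":59999", nil) // illustrative port
}
```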
Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_1","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#83-external-entity-megaservice-cloud-or-enterprise-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"83. External Entity Megaservice - Cloud or Enterprise Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Megaservice - Cloud or Enterprise claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#84-spoofing-of-the-megaservice-cloud-or-enterprise-external-destination-entity-state-not-applicable-priority-low","title":"84. Spoofing of the Megaservice - Cloud or Enterprise External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Megaservice - Cloud or Enterprise may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Megaservice - Cloud or Enterprise. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. In the case of a megacloud or enterprise, most communication is from EdgeX to that system vs sending requests to EdgeX (as an export) Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#85-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"85. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the cloud). If the data is deemed critical and if by some means the data flow was interrupted, then store and forward mechisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. 
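Threats 85 and 86 mention using MQTT quality of service so exported data is redelivered after an interruption. Below is a hedged Go sketch using the Eclipse Paho client; the broker address, client ID, topic, and payload are assumptions, and a real export would normally flow through an application service's store-and-forward pipeline rather than a hand-rolled publisher.

```go
package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Hypothetical external broker and topic names.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://broker.example.com:1883").
		SetClientID("edgex-export-demo").
		SetCleanSession(false) // keep session state so queued QoS>0 messages survive reconnects

	client := mqtt.NewClient(opts)
	if tok := client.Connect(); tok.Wait() && tok.Error() != nil {
		fmt.Println("connect failed:", tok.Error())
		return
	}
	defer client.Disconnect(250)

	payload := `{"deviceName":"demo","resource":"temperature","value":21.5}`
	// QoS 1: the broker acknowledges delivery, so the message is re-sent if lost.
	tok := client.Publish("edgex/events/demo", 1, false, payload)
	tok.WaitTimeout(5 * time.Second)
	if tok.Error() != nil {
		fmt.Println("publish failed:", tok.Error())
	}
}
```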
Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_2","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#86-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"86. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the external message bus). If the data is deemed critical and if by some means the data flow was interrupted, store and forward mechisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#87-external-entity-message-topic-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"87. External Entity Message Topic Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Message Topic claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#88-spoofing-of-the-message-topic-external-destination-entity-state-not-applicable-priority-low","title":"88. Spoofing of the Message Topic External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Topic may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Message Topic. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. In the case of an external message bus, most communication is from EdgeX to that system vs sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_3","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#89-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"89. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secure or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#90-spoofing-of-source-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"90. Spoofing of Source Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secure or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#91-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"91. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#92-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"92. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (physically connected authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. 
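For protocols that do support transport security (threat 92 cites BACnet Secure Connect, which is TLS based), the device service must be written to use it. The sketch below shows only the generic Go TLS client setup such a service might start from; the CA path, device address, and port are assumptions, and the framing actually carried over the connection is protocol specific.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	// Hypothetical CA certificate for the secured device/sensor.
	caPEM, err := os.ReadFile("/etc/edgex/device-ca.pem")
	if err != nil {
		fmt.Println("cannot read CA:", err)
		return
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		fmt.Println("invalid CA certificate")
		return
	}

	cfg := &tls.Config{
		RootCAs:    pool,             // trust only the device's CA
		MinVersion: tls.VersionTLS12, // refuse weaker protocol versions
	}

	// Hypothetical device address; the payload exchanged over this connection
	// would be whatever the secured protocol (e.g., BACnet/SC) defines.
	conn, err := tls.Dial("tcp", "device.local:47808", cfg)
	if err != nil {
		fmt.Println("TLS handshake failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("secured connection established to", conn.RemoteAddr())
}
```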
In cases where there is no way to secure the communications with the device/sensor under that protocol, mitigation is via physical security of the device/sensor (along with its connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#93-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"93. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#94-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"94. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#95-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"95. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#96-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"96. 
EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (physically connected authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute arbitrary code based on input from a device or sensor (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large of a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#97-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"97. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_4","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#98-spoofing-of-source-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"98. Spoofing of Source Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to Kong. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the REST caller would not be properly authenticated by a spoofed Kong, and any query request would thereby be denied. 
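Several of the repudiation mitigations above (for example threats 91 and 99) come down to recording the source, time, and a summary of received data at an elevated log level. Below is a minimal Go sketch of an audit middleware in that spirit; the route and port are illustrative, and an EdgeX service would normally use its own logging client rather than the standard library logger.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// audit wraps a handler and records source, time and a summary of each request,
// which is the kind of evidence the repudiation mitigations rely on.
func audit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("src=%s method=%s path=%s bytes=%d took=%s",
			r.RemoteAddr, r.Method, r.URL.Path, r.ContentLength, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v2/resource", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // placeholder handler
	})
	// Illustrative port only.
	log.Fatal(http.ListenAndServe(":59880", audit(mux)))
}
```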
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#99-external-entity-kong-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"99. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#100-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"100. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (REST authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#101-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"101. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#102-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"102. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#103-weakness-in-sso-authorization-state-mitigation-implemented-priority-high","title":"103. 
Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_5","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#104-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-low","title":"104. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Access to publish data through the external MQTT broker is protected with authentication. Wrong data can also be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#105-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-mitigation-implemented-priority-low","title":"105. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (via external MQTT broker - authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor via MQTT (as if it was from a web application with something like unsanitized inputs). All data is santized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: buffer overflow attack and sending too much data for the service to consume for example - see DoS threats). The device service in EdgeX can be written to reject to large of a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#106-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"106. 
Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or publisher to the broker. Physical and sytem security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#107-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"107. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or publisher to the broker. Physical and sytem security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#108-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"108. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with eleveated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#109-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"109. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (via external MQTT broker - authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. 
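Threat 109 notes that securing the external broker link requires encryption that is not in place by default, e.g. MQTTS. Below is a hedged Go sketch of a TLS-authenticated Paho connection; the broker URL, CA path, username, and environment variable are assumptions, and in a real deployment the credentials would come from the EdgeX secret store.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Hypothetical broker CA certificate.
	caPEM, err := os.ReadFile("/etc/edgex/broker-ca.pem")
	if err != nil {
		fmt.Println("cannot read CA:", err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	opts := mqtt.NewClientOptions().
		AddBroker("ssl://broker.example.com:8883"). // MQTTS endpoint (assumed)
		SetUsername("edgex-device").
		SetPassword(os.Getenv("MQTT_PASSWORD")).
		SetTLSConfig(&tls.Config{RootCAs: pool, MinVersion: tls.VersionTLS12})

	client := mqtt.NewClient(opts)
	if tok := client.Connect(); tok.Wait() && tok.Error() != nil {
		fmt.Println("connect failed:", tok.Error())
		return
	}
	defer client.Disconnect(250)
	fmt.Println("connected to broker over TLS")
}
```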
MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#110-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"110. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#111-spoofing-of-source-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"111. Spoofing of Source Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT publisher of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby deny any request. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#112-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"112. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT receiver of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby be unable to receive. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-service-registration","title":"Interaction: service registration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#113-spoofing-of-destination-data-store-consul-registry-state-mitigation-implemented-priority-low","title":"113. 
Spoofing of Destination Data Store Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (registry) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (registry). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#114-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-registry-state-mitigation-implemented-priority-low","title":"114. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (registry) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: EdgeX services and Consul run as containers in a Docker network that, by default with security on, does not allow direct access to the service APIs. During the process of Consul bootstrapping, the EdgeX security bootstrapper ensures that the Consul APIs and GUI cannot be accessed without an ACL token (see https://docs.edgexfoundry.org/2.2/security/Ch-Secure-Consul/). Therefore, using the Consul APIs to cause a DoS attack would require access tokens. A rogue authorized user or someone able to illegally get the Consul token could cause excessive use of resources that brings the services or Consul down. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#115-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"115. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. 
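Threat 114 above relies on Consul's ACL protection: without a token the registry APIs refuse access. The sketch below simply exercises that behavior from Go; the Consul address, KV key path, and environment variable are illustrative assumptions.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Hypothetical Consul address and key path.
	req, _ := http.NewRequest(http.MethodGet,
		"http://localhost:8500/v1/kv/edgex/sample/Writable/LogLevel", nil)
	// Without a valid ACL token, Consul denies the request, which is the
	// access control described in the mitigation for threat 114.
	req.Header.Set("X-Consul-Token", os.Getenv("CONSUL_ACL_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```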
If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then TLS or overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-service-secrets","title":"Interaction: service secrets","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#116-weak-access-control-for-a-resource-state-mitigation-implemented-priority-medium","title":"116. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Improper data protection of Vault can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: The Vault root and service level tokens are revoked after setup and then all interaction is via the programmatic interface (with a properly authenticated token). There are additional options for Vault Master Key encryption provided here: https://docs.edgexfoundry.org/2.2/threat-models/secret-store/vault_master_key_encryption/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#117-spoofing-of-source-data-store-vault-state-mitigation-implemented-priority-low","title":"117. Spoofing of Source Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case Vault) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-subscribed-message","title":"Interaction: subscribed message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#118-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"118. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Message Bus Broker can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: When running EdgeX in secure mode, the Redis database service is secured with a username/password. Redis Pub/Sub utilizes the existing Redis database service so that no additional broker service is required. This in turn creates a Secure MessageBus. 
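Threat 118 describes the Redis-backed secure message bus, where subscribers must authenticate with a username/password. Below is a minimal Go sketch of an authenticated subscriber using the go-redis client; the address, username, channel pattern, and environment variable are assumptions, and EdgeX services would normally use the go-mod-messaging client with credentials injected from the secret store.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Credentials would normally be injected from the EdgeX secret store;
	// host, port and channel pattern here are illustrative.
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Username: "default",
		Password: os.Getenv("REDIS_PASSWORD"),
	})

	// Redis Pub/Sub doubles as the message bus; unauthenticated clients
	// cannot subscribe when security is enabled.
	sub := rdb.PSubscribe(ctx, "edgex.events.*")
	defer sub.Close()

	for msg := range sub.Channel() {
		fmt.Printf("channel=%s payload=%s\n", msg.Channel, msg.Payload)
	}
}
```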
See https://docs.edgexfoundry.org/2.2/security/Ch-Secure-MessageBus/. MQTTS can be used for internal message bus communications but is not provided by EdgeX. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#119-spoofing-of-source-data-store-message-bus-broker-state-mitigation-implemented-priority-low","title":"119. Spoofing of Source Data Store Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus Broker may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: The message bus, when requiring a broker (an MQTT broker, for example), is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-edgex-service-to-service-http-comms","title":"Diagram: EdgeX Service to Service HTTP comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#edgex-service-to-service-http-comms-diagram-summary","title":"EdgeX Service to Service HTTP comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 2 Mitigation Implemented 0 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-http","title":"Interaction: HTTP","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#120-edgex-service-a-process-memory-tampered-state-needs-investigation-priority-high","title":"120. EdgeX Service A Process Memory Tampered\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#121-elevation-using-impersonation-state-needs-investigation-priority-high","title":"121. Elevation Using Impersonation\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service APIs are restricted except through Kong. 
If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternatively, TLS can be used to encrypt all traffic. Service-to-service calls behind Kong are unauthenticated in the current implementation. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-edgex-service-to-service-message-bus-comms","title":"Diagram: EdgeX Service to Service message bus comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#edgex-service-to-service-message-bus-comms-diagram-summary","title":"EdgeX Service to Service message bus comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 2 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-message-bus-mqtt-redis-pubsub-nats","title":"Interaction: message bus (MQTT, Redis Pub/Sub, NATS)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#122-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"122. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: All services are required to authorize to the message bus, but all services authorized on the message bus have equal privilege to send and receive messages. Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service message bus are restricted to internal communications only. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternatively, secure MQTT (MQTTS) message bus communications can be used. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#123-edgex-service-a-process-memory-tampered-state-mitigation-implemented-priority-high","title":"123. EdgeX Service A Process Memory Tampered\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. 
Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-access-via-vpn","title":"Diagram: Access via VPN","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#access-via-vpn-diagram-summary","title":"Access via VPN Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-host-access","title":"Diagram: Host Access","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#host-access-diagram-summary","title":"Host Access Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-open-port-protections","title":"Diagram: Open Port Protections","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#open-port-protections-diagram-summary","title":"Open Port Protections Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-device-protocol-threats-modbus-example","title":"Diagram: Device Protocol Threats - Modbus example","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#device-protocol-threats-modbus-example-diagram-summary","title":"Device Protocol Threats - Modbus example Diagram Summary:","text":"Not Started 0 Not Applicable 7 Needs Investigation 9 Mitigation Implemented 2 Total 18 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-binary-rtu-get-or-set","title":"Interaction: Binary RTU (GET or SET)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#124-spoofing-of-destination-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"124. Spoofing of Destination Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to insure no spoofing or unauthorized collection of data or actuation with the device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#125-potential-excessive-resource-consumption-for-modbus-device-service-or-modbus-devicesensor-state-needs-investigation-priority-high","title":"125. Potential Excessive Resource Consumption for Modbus Device Service or Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does Modbus Device Service or Modbus Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. 
Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DOS attack (sending too much data), or send erroneous/faulty data, or disrupted / cut off and thereofore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help onboarding with an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#126-spoofing-the-modbus-device-service-process-state-needs-investigation-priority-high","title":"126. Spoofing the Modbus Device Service Process\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to unauthorized access to Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#127-the-modbus-devicesensor-data-store-could-be-corrupted-state-needs-investigation-priority-high","title":"127. The Modbus Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across Binary RTU (GET or SET) may be tampered with by an attacker. This may lead to corruption of Modbus Device/Sensor. Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#128-data-store-denies-modbus-devicesensor-potentially-writing-data-state-not-applicable-priority-high","title":"128. Data Store Denies Modbus Device/Sensor Potentially Writing Data\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: It is unlikely that a Modbus device/sensor has a log to provide an audit of requests. 
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#129-data-flow-sniffing-state-not-applicable-priority-high","title":"129. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across Binary RTU (GET or SET) may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized nor encrypted by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#130-weak-credential-transit-state-needs-investigation-priority-high","title":"130. Weak Credential Transit\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Credentials on the wire are often subject to sniffing by an attacker. Are the credentials re-usable/re-playable? Are credentials included in a message? For example, sending a zip file with the password in the email. Use strong cryptography for the transmission of credentials. Use the OS libraries if at all possible, and consider cryptographic algorithm agility, rather than hardcoding a choice. Justification: <no mitigation provided> Possible Mitigation: Modbus does not support any type of authentication/authorization in communications. Physical security of the device and wire are the only ways to thwart information disclosure. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#131-data-flow-binary-rtu-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"131. Data Flow Binary RTU (GET or SET) Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#132-data-store-inaccessible-state-needs-investigation-priority-high","title":"132. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. 
Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-binary-rtu-response-get-or-se","title":"Interaction: Binary RTU Response (GET or SE","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#133-spoofing-of-source-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"133. Spoofing of Source Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to Modbus Device Service. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DOS attack (sending too much data), or send erroneous/faulty data, or disrupted / cut off and thereofore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help onboarding with an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#134-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"134. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Modbus Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: As Modbus is a simple protocol (reporting data or reacting to accuation requests), it is not possible for the device or sensor to gain other data from the device service (or EdgeX as a whole). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#135-spoofing-the-modbus-device-service-process-state-not-applicable-priority-high","title":"135. Spoofing the Modbus Device Service Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to information disclosure by Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to insure no spoofing or unauthorized collection of data or actuation with the device. 
Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#136-potential-data-repudiation-by-modbus-device-service-state-mitigation-implemented-priority-high","title":"136. Potential Data Repudiation by Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device Service claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level can be used to log all data communications from a device/sensor. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#137-potential-process-crash-or-stop-for-modbus-device-service-state-mitigation-implemented-priority-medium","title":"137. Potential Process Crash or Stop for Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Modbus Device Service crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with eleveated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#138-data-flow-binary-rtu-response-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"138. Data Flow Binary RTU Response (GET or SET Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#139-data-store-inaccessible-state-needs-investigation-priority-high","title":"139. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DOS attacts or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). 
Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#140-modbus-device-service-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-high","title":"140. Modbus Device Service May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Modbus Device/Sensor may be able to remotely execute code for Modbus Device Service. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#141-elevation-by-changing-the-execution-flow-in-modbus-device-service-state-not-applicable-priority-high","title":"141. Elevation by Changing the Execution Flow in Modbus Device Service\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into Modbus Device Service in order to change the flow of program execution within Modbus Device Service to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Physical security of the sensor and communications (wire) offer the best hope to mitigate this threat. 
Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"walk-through/Ch-Walkthrough/","title":"EdgeX Demonstration API Walk Through","text":"In order to better appreciate the EdgeX Foundry micro services (what they do and how they work), how they inter-operate with each other, and some of the more important API calls that each micro service has to offer, this demonstration API walk through shows how a device service and device are established in EdgeX, how data flows through the various services, and how data is then shipped out of EdgeX to the cloud or enterprise system.
Through this demonstration, you will play the part of various EdgeX micro services by manually making REST calls in a way that mimics EdgeX system behavior. After exploring this demonstration, and hopefully exercising the APIs yourself, you should have a much better understanding of how EdgeX Foundry works.
To be clear, this walkthrough is not the way you set up all your device services, devices, etc. In this walkthrough, you manually call EdgeX APIs to perform the work that a device service would do to get a new device set up and to send data to/through EdgeX. In other words, you are simulating the work that a device service does automatically by manually executing EdgeX APIs. You will also exercise APIs to see the results of the work accomplished by the device service and all of EdgeX.
Next>
"},{"location":"walk-through/Ch-WalkthroughCommands/","title":"Calling commands","text":"Recall that the device profile (the camera-monitor-profile
in this walkthrough) included a number of commands to get/set (read or write) information from any device of that type. Also recall that the device (the countcamera1
in this walkthrough) was associated to the device profile (again, the camera-monitor-profile
) when the device was provisioned.
See core command API for more details.
With the setup complete, you can ask the core command micro service for the list of commands associated to the device (the countcamera1
). The command micro service exposes the commands in a common, normalized way that enables simplified communications with the devices for
Use either the Postman or Curl tab below to walkthrough getting the list of commands.
PostmanCurlMake a GET request to http://localhost:59882/api/v3/device/name/countcamera1
.
Note
Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default.
Make a curl GET request as shown below.
curl -X GET localhost:59882/api/v3/device/name/countcamera1 | json_pp\n
Note
Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default.
Explore all of the URLs returned as part of this response! These are the URLs that clients (internal or external to EdgeX) can call to trigger the various get/set (read and write) offerings on the Device. However, do take note that the host for the URLs is edgex-core-command
. This is the name of the host for core command inside Docker. To exercise the URL outside of Docker, you would have to use the name of the system host (localhost
if executing on the same box).
While we're at it, check that no data has yet been shipped to core data from the camera device. Since the device service and device in this demonstration are wholly manually driven by you, no sensor data should yet have been collected. You can test this theory by asking for the count of events in core data.
"},{"location":"walk-through/Ch-WalkthroughCommands/#walkthrough-events","title":"Walkthrough - Events","text":"Use either the Postman or Curl tab below to walkthrough getting the list of events.
PostmanCurlMake a GET request to http://localhost:59880/api/v3/event/count/device/name/countcamera1
.
Make a curl GET request as shown below.
curl -X GET localhost:59880/api/v3/event/count/device/name/countcamera1\n
The response returned should indicate no events for the camera in core data.
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":0}\n
"},{"location":"walk-through/Ch-WalkthroughCommands/#execute-a-command","title":"Execute a Command","text":"While there is no real device or device service in this walkthrough, EdgeX doesn't know that. Therefore, with all the configuration and setup you have performed, you can ask EdgeX to set the scan depth or set the snapshot duration to the camera, and EdgeX will dutifully try to perform the task. Of course, since no device service or device exists, as expected EdgeX will ultimately responds with an error. However, through the log files, you can see a command made of the core command micro service, attempts to call on the appropriate command of the fictitious device service that manages our fictitious camera.
For example's sake, let's launch a command to set the scan depth of countcamera1
(the name of the single human/dog counting camera device in EdgeX right now). The first task to launch a request to set the scan depth is to get the URL for the command to set
or write a new scan depth on the device. Return to the results of the request to get a list of the commands by the device name above.
Locate and copy the URL and path for the set
depth command. Below is a picture containing a slice of the JSON returned by the GET request above and desired set
Command URL highlighted - yours will vary based on IDs.
Use either the Postman or Curl tab below to walkthrough actuating the device.
PostmanCurlMake a PUT request to http://localhost:59882/api/v3/device/name/countcamera1/ScanDepth
with the following body.
{\"depth\":\"9\"}\n
Warning
Notice that the URL above is a combination of both the command URL and path you found from your command list.
Make a curl PUT request as shown below.
curl -X PUT -d '{\"depth\":\"9\"}' localhost:59882/api/v3/device/name/countcamera1/ScanDepth\n
Warning
Notice that the URL above is a combination of both the command URL and path you found from your command list.
"},{"location":"walk-through/Ch-WalkthroughCommands/#check-command-service-log","title":"Check Command Service Log","text":"Again, because no device service (or device) actually exists, core command will respond with a Failed to send a http request
error. However, checking the logging output will prove that the core command micro service did receive the request and attempted to call on the non-existent device service (at the address provided for the device service - defined earlier in this walkthrough) to issue the actuating command. To see the core command service log, issue the following Docker command:
docker logs edgex-core-command\n
The last lines of the log entries should highlight the attempt to contact the non-existent device. level=ERROR ts=2021-09-16T20:50:09.965368572Z app=core-command source=http.go:47 X-Correlation-ID=49cc97f5-1e84-4a46-9eb5-543ae8bd5284 msg=\"failed to send a http request -> Put \\\"camera-device-service:59990/api/v3/device/name/countcamera1/ScanDepth?\\\": unsupported protocol scheme \\\"camera-device-service\\\"\"\n...\n
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/","title":"Defining your device","text":"A device profile can be thought of as a template or as a type or classification of device. General characteristics about the type of device, the data theses devices provide, and how to command them is all provided in a device profile. Other pages within this document set provide more details about a device profile and its purpose (see core metadata to start). It is typical that as part of the reference information setup sequence, the device service provides the device profiles for the types of devices it manages.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#device-profile","title":"Device Profile","text":"See core metadata API for more details.
Our fictitious device service will manage only the human/dog counting camera, so it only needs to make one POST
request to create the monitoring camera device profile. Since device profiles are often represented in YAML, you make a multi-part form-data POST
with the device profile file (find the example profile here) to create the Camera Monitor profile.
If you explore the sample profile, you will see that the profile begins with some general information.
name: \"camera-monitor-profile\"\nmanufacturer: \"IOTech\"\nmodel: \"Cam12345\"\nlabels: - \"camera\"\ndescription: \"Human and canine camera monitor profile\"\n
Each profile has a unique name along with a description, manufacturer, model and collection of labels to assist in queries for particular profiles. These are relatively straightforward attributes of a profile.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#resources-and-commands","title":"Resources and Commands","text":"The device profile defines how to communicate with any device that abides by the profile. In particular, it defines the deviceResources
and deviceCommands
used to send requests to the device (via the device service). See the Device Profile documentation for more background on each of these.
The device profile describes the elements of data that can be obtained from the device or sensor and how to change a setting on a device or sensor. The data that can be obtained or the setting that can be changed are called resources or more precisely they are referred to as device resources in Edgex. Learn more about deviceReources
in the Device Profile documentation.
In this walkthrough example, there are two pieces of data we want to be able to get or read from the camera: dog and human counts. Therefore, both are represented as device resources in the device profile. Additionally, we want to be able to set two settings on the camera: the scan depth and snapshot duration. These are also represented as device resources in the device profile.
deviceResources:\n-\nname: \"HumanCount\"\nisHidden: false #is hidden is false by default so this is just making it explicit for purpose of the walkthrough demonstration\ndescription: \"Number of people on camera\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"R\" #designates that this property can only be read and not set\ndefaultValue: \"0\"\n-\nname: \"CanineCount\"\nisHidden: false\ndescription: \"Number of dogs on camera\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"R\" #designates that this property can only be read and not set\ndefaultValue: \"0\"\n-\nname: \"ScanDepth\"\nisHidden: false\ndescription: \"Get/set the scan depth\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"RW\" #designates that this property can be read or set\ndefaultValue: \"0\"\n\n-\nname: \"SnapshotDuration\"\nisHidden: false\ndescription: \"Get the snaphot duration\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"RW\" #designates that this property can be read or set\ndefaultValue: \"0\"\n
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#understanding-device-commands","title":"Understanding Device Commands","text":"Command or more precisely device commands specify access to reads and writes for multiple simultaneous device resources. In other words, device commands allow you to ask for multiple pieces of data from a sensor at one time (or set multiple settings at one time). In this example, we can request both human and dog counts in one request by establishing a device command that specifies the request for both. Get more details on deviceCommands
in the Device Profile documentation.
deviceCommands:\n-\nname: \"Counts\"\nreadWrite: \"R\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"HumanCount\" }\n- { deviceResource: \"CanineCount\" }\n
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#walkthrough-device-profile","title":"Walkthrough - Device Profile","text":"Use either the Postman or Curl tab below to walkthrough uploading the device profile.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#download-the-device-profile","title":"Download the Device Profile","text":"Click on the link below to download and save the device profile (YAML) to your system.
EdgeX_CameraMonitorProfile.yml
Note
Device profiles are stored in core metadata. Therefore, note that the calls in the walkthrough are to the metadata service, which defaults to port 59881.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#upload-the-device-profile-to-edgex","title":"Upload the Device Profile to EdgeX","text":"PostmanCurlMake a POST request to http://localhost:59881/api/v3/deviceprofile/uploadfile
. The request should not include any additional headers (leave the defaults). In the Body, make sure \"form-data\" is selected and set the Key to file
and then select the device profile file where you saved it (as shown below).
If your API call is successful, you will get a generated id for your new DeviceProfile
in the response area.
Make a curl POST request as shown below.
curl -X POST -F 'file=@/path/to/your/profile/here/EdgeX_CameraMonitorProfile.yml' http://localhost:59881/api/v3/deviceprofile/uploadfile\n
If your API call is successful, you will get a generated id for your new DeviceProfile
in the response area.
Warning
Note that the file location in the curl command above needs to be replaced with your actual file location path. Also, if you do not save the device profile file to EdgeX_CameraMonitorProfile.yml
, then you will need to replace the file name as well.
If you make a GET call to the http://localhost:59881/api/v3/deviceprofile/all
URL (with Postman or curl) you will get a listing (in JSON) of all the device profiles (and all of its associated deviceResource
and deviceCommand
) currently defined in your instance of EdgeX, including the one you just added.
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughDeviceService/","title":"Register your device service","text":"Our next task in this walkthrough is to have the device service register or define itself in EdgeX. That is, it can proclaim to EdgeX that \"I have arrived and am functional.\"
"},{"location":"walk-through/Ch-WalkthroughDeviceService/#register-with-core-configuration-and-registration","title":"Register with Core Configuration and Registration","text":"Part of that registration process of the device service, indeed any EdgeX micro service, is to register itself with the core configuration & registration. In this process, the micro service provides its location to the Config/Reg micro service and picks up any new/latest configuration information from this central service. Since there is no real device service in this walkthrough demonstration, this part of the inter-micro service exchange is not explored here.
"},{"location":"walk-through/Ch-WalkthroughDeviceService/#device-service","title":"Device Service","text":"See core metadata API for more details.
At this point in your walkthrough, the device service must create a representative instance of itself in core metadata. It is in this registration that the device service is given an address that allows core command or any EdgeX service to communicate with it.
The name of the device service must be unique across all of EdgeX. When registering a device service, the initial admin state can be provided. The administrative state (aka admin state) provides control of the device service by man or other systems. It can be set to LOCKED
or UNLOCKED
. When a device service is set to LOCKED
, it is not suppose to respond to any command requests nor send data from the devices. See Admin State documentation for more details.
Use either the Postman or Curl tab below to walkthrough creating the DeviceService
.
Make a POST request to http://localhost:59881/api/v3/deviceservice
with the following body:
{\n\"apiVersion\" : \"v3\",\n\"service\": {\n\"name\": \"camera-control-device-service\",\n\"description\": \"Manage human and dog counting cameras\",\n\"adminState\": \"UNLOCKED\",\n\"labels\": [\n\"camera\",\n\"counter\"\n],\n\"baseAddress\": \"camera-device-service:59990\"\n}\n}\n
Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new DeviceService
in the response area.
Make a curl POST request as shown below.
curl -X 'POST' 'http://localhost:59881/api/v3/deviceservice' -d '[{\"apiVersion\" : \"v3\",\"service\": {\"name\": \"camera-control-device-service\",\"description\": \"Manage human and dog counting cameras\", \"adminState\": \"UNLOCKED\", \"labels\": [\"camera\",\"counter\"], \"baseAddress\": \"camera-device-service:59990\"}}]'\n
If your API call is successful, you will get a generated ID for your new DeviceService
.
If you make a GET call to the http://localhost:59881/api/v3/deviceservice/all
URL (with Postman or curl) you will get a listing (in JSON) of all the device services currently defined in your instance of EdgeX, including the one you just added.
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughExporting/","title":"Exporting your device data","text":"Great, so the data sent by the camera device makes its way to core data. How can that data be sent to an enterprise system or the Cloud? How can that data be used by an edge analytics system (like a rules engine) to actuate on a device?
"},{"location":"walk-through/Ch-WalkthroughExporting/#getting-data-to-the-rules-engine","title":"Getting data to the rules engine","text":"By default, data is already passed from the core data service to application services (app services) via Redis Pub/Sub messaging. Alternately, the data can be supplied between the two via MQTT. A preconfigured application service is provided with the EdgeX default Docker Compose files that gets this data and routes it to the eKuiper rules engine. The application service is called app-service-rules
(see below). More specifically, it is an app service configurable.
app-service-rules:\ncontainer_name: edgex-app-rules-engine\ndepends_on:\n- consul\n- data\nenvironment:\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nEDGEX_PROFILE: rules-engine\nEDGEX_SECURITY_SECRET_STORE: \"false\"\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nSERVICE_HOST: edgex-app-rules-engine\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\nhostname: edgex-app-rules-engine\nimage: edgexfoundry/app-service-configurable:2.0.1\nnetworks:\nedgex-network: {}\nports:\n- 127.0.0.1:59701:59701/tcp\nread_only: true\nsecurity_opt:\n- no-new-privileges:true\nuser: 2002:2001\n
"},{"location":"walk-through/Ch-WalkthroughExporting/#seeing-the-data-export","title":"Seeing the data export","text":"The log level of any EdgeX micro service is set to INFO
by default. If you tune the log level of the app-service-rules micro service to DEBUG
, you can see Event
s pass through the app service on the way to the rules engine.
To set the log level of any service, open the Consul UI in a browser by visiting http://[host]:8500
. When the Consul UI opens, click on the Key/Value tab on the top of the screen.
On the Key/Value display page, click on edgex
> appservices
> 2.0
> app-rules-engine
> Writable
> LogLevel
. In the Value entry field that presents itself, replace INFO
with DEBUG
and hit the Save
button.
The log level change will be picked up by the application service. In a terminal window, execute the Docker command below to view the service log.
docker logs -f edgex-app-rules-engine\n
Now push another event/reading into core data as you did earlier (see Send Event). You should see each new event/reading created by acknowledged by the app service. With the right application service and rules engine configuration, the event/reading data is published to the rules engine topic where it can then be picked up and used by the rules engine service to trigger commands just as you did manually in this walkthrough.
"},{"location":"walk-through/Ch-WalkthroughExporting/#exporting-data-to-anywhere","title":"Exporting data to anywhere","text":"You can create an additional application service to get the data to another application or service, REST endpoint, MQTT topic, cloud provider, and more. See the Getting Started guide on exporting data for more information on how to use another app service configurable to get EdgeX data to any client.
"},{"location":"walk-through/Ch-WalkthroughExporting/#building-your-own-solutions","title":"Building your own solutions","text":"Congratulations, you've made it all the way through the Walkthrough tutorial!
<Back
"},{"location":"walk-through/Ch-WalkthroughProvision/","title":"Provision a device","text":"In the last act of setup, a device service often discovers and provisions devices (either statically or dynamically) and that it is going to manage on the part of EdgeX. Note the word \"often\" in the last sentence. Not all device services will discover new devices or provision them right away. Depending on the type of device and how the devices communicate, it is up to the device service to determine how/when to provision a device. In some cases, the provisioning may be triggered by a human request of the device service once everything is in place and once the human can provide the information the device service needs to physically connected to the device.
"},{"location":"walk-through/Ch-WalkthroughProvision/#device","title":"Device","text":"See core metadata API for more details.
For the sake of this demonstration, the call to core metadata will provision the human/dog counting monitor camera as if the device service discovered it (by some unknown means) and provisioned the device as part of some startup process. To create a Device
, it must be associated to a DeviceProfile
, a DeviceService
, and contain one or more Protocols
that define how and where to communicate with the device (possibly providing its address).
When creating a device, you specify both the admin state (just as you did for a device service) and an operating state. The operating state (aka op state) provides an indication on the part of EdgeX about the internal operating status of the device. The operating state is not set externally (as by another system or man), it is a signal from within EdgeX (and potentially the device service itself) about the condition of the device. The operating state of the device may be either UP
or DOWN
(it may alsy be UNKNOWN
if the state cannot be determined). When the operating state of the device is DOWN
, it is either experiencing some difficulty or going through some process (for example an upgrade) which does not allow it to function in its normal capacity.
Use either the Postman or Curl tab below to walkthrough creating the Device
.
Make a POST request to http://localhost:59881/api/v3/device
with the following body:
[\n{\n\"apiVersion\" : \"v3\",\n\"device\": {\n\"name\": \"countcamera1\",\n\"description\": \"human and dog counting camera #1\",\n\"adminState\": \"UNLOCKED\",\n\"operatingState\": \"UP\",\n\"labels\": [\n\"camera\",\"counter\"\n],\n\"location\": \"{lat:45.45,long:47.80}\",\n\"serviceName\": \"camera-control-device-service\",\n\"profileName\": \"camera-monitor-profile\",\n\"protocols\": {\n\"camera-protocol\": {\n\"camera-address\": \"localhost\",\n\"port\": \"1234\",\n\"unitID\": \"1\"\n}\n}\n}\n}\n]\n
Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new Device
in the response area.
Note
The camera-monitor-profile
was created by the device profile uploaded in a previous walkthrough step. The camera-control-device-service
was created in the last walkthrough step. These names must match the previously created EdgeX objects in order to successfully provision your device.
Make a curl POST request as shown below.
curl -X 'POST' 'http://localhost:59881/api/v3/device' -d '[{\"apiVersion\" : \"v3\", \"device\": {\"name\": \"countcamera1\",\"description\": \"human and dog counting camera #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"camera\",\"counter\"],\"location\": \"{lat:45.45,long:47.80}\",\"serviceName\": \"camera-control-device-service\",\"profileName\": \"camera-monitor-profile\",\"protocols\": {\"camera-protocol\": {\"camera-address\": \"localhost\",\"port\": \"1234\",\"unitID\": \"1\"}}}}]'\n
If your API call is successful, you will get a generated ID (a UUID) for your new Device
.
Note
The camera-monitor-profile
was created by the device profile uploaded in a previous walkthrough step. The camera-control-device-service
was created in the last walkthrough step. These names must match the previously created EdgeX objects in order to successfully provision your device.
Ensure the monitor camera is among the devices known to core metadata. If you make a GET call to the http://localhost:59881/api/v3/device/all
URL (with Postman or curl) you will get a listing (in JSON) of all the devices currently defined in your instance of EdgeX that should include the one you just added.
There are many additional APIs on core metadata to retrieve a DeviceProfile
, Device
, DeviceService
, etc. As an example, here is one to find all devices associated to a given DeviceProfile
.
curl -X GET http://localhost:59881/api/v3/device/profile/name/camera-monitor-profile | json_pp\n
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughReading/","title":"Sending events and reading data","text":"In the real world, the human/dog counting camera would start to take pictures, count beings, and send that data to EdgeX. To simulate this activity in this section of the walkthrough, you will make core data API calls as if you were the camera's device and device service. That is, you will report human and dog counts to core data in the form of event/reading objects.
"},{"location":"walk-through/Ch-WalkthroughReading/#send-an-eventreading","title":"Send an Event/Reading","text":"See core data API for more details.
Data is submitted to core data as an Event
object. An event is a collection of sensor readings from a device (associated to a device by its name) at a particular point in time. A Reading
object in an Event
object is a particular value sensed by the device and associated to a Device Resource (by name) to provide context to the reading.
So, the human/dog counting camera might determine that there are 5 people and 3 dogs in the space it is monitoring. In the EdgeX vernacular, the device service upon receiving these sensed values from the camera device would create an Event
with two Reading
s - one Reading
would contain the key/value pair of HumanCount:5 and the other Reading
would contain the key/value pair of CanineCount:3.
The device service, on creating the Event
and associated Reading
objects would transmit this information to core data via REST call.
Use either the Postman or Curl tab below to walkthrough sending an Event
with Reading
s to core data.
Make a POST request to `http://localhost:59880/api/v3/event/camera-monitor-profile/countcamera1/HumanCount with the body below.
{\n\"apiVersion\" : \"v3\",\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"deviceName\": \"countcamera1\",\n\"profileName\": \"camera-monitor-profile\",\n\"sourceName\": \"HumanCount\",\n\"id\": \"d5471d59-2810-419a-8744-18eb8fa03465\",\n\"origin\": 1602168089665565200,\n\"readings\": [\n{\n\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abd\",\n\"origin\": 1602168089665565200,\n\"deviceName\": \"countcamera1\",\n\"resourceName\": \"HumanCount\",\n\"profileName\": \"camera-monitor-profile\",\n\"valueType\": \"Int16\",\n\"value\": \"5\"\n},\n{\n\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abe\",\n\"origin\": 1602168089665565200,\n\"deviceName\": \"countcamera1\",\n\"resourceName\": \"CanineCount\",\n\"profileName\": \"camera-monitor-profile\",\n\"valueType\": \"Int16\",\n\"value\": \"3\"\n} ]\n}\n}\n
If your API call is successful, you will get a generated ID for your new Event
as shown in the image below.
Note
Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event.
Make a curl POST request as shown below.
curl -X POST -d '{\"apiVersion\" : \"v3\",\"event\": {\"apiVersion\" : \"v3\",\"deviceName\": \"countcamera1\",\"profileName\": \"camera-monitor-profile\",\"sourceName\": \"HumanCount\",\"id\":\"d5471d59-2810-419a-8744-18eb8fa03464\",\"origin\": 1602168089665565200,\"readings\": [{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abc\",\"origin\": 1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"HumanCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"5\"},{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abf\",\"origin\":1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"CanineCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"3\"}]}}' localhost:59880/api/v3/event/camera-monitor-profile/countcamera1/HumanCount\n
Note
Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event.
"},{"location":"walk-through/Ch-WalkthroughReading/#origin-timestamp","title":"Origin Timestamp","text":"The device service will supply an origin property in the Event
and Reading
object to suggest the time (in Epoch timestamp/nanoseconds format) at which the data was sensed/collected.
EdgeX uses nanosecond because some devices and use cases may provide and need that degree of accuracy. Also, Collisions at nanosecond accuracy are unlikely.
The Event
origin is always set by device service SDK and it is intended to be unique for that device service instance. The Reading
origin should be set by the device service's ProtocolDriver implementation, SDK copies the Event
origin into it if it was not set.
Note
Smart devices will often timestamp sensor data and this timestamp can be used as the origin timestamp. In cases where the sensor/device is unable to provide a timestamp (\"dumb\" or brownfield sensors), it is the device service that creates a timestamp for the sensor data that it be applied as the origin timestamp for the device.
"},{"location":"walk-through/Ch-WalkthroughReading/#exploring-eventsreadings","title":"Exploring Events/Readings","text":"Now that an Event
and associated Readings
have been sent to core data, you can use the core data API to explore that data that is now stored in the database.
Recall from a previous walkthrough step, you checked that no data was yet stored in core data. Make a similar call to see event records have now been sent into core data..
"},{"location":"walk-through/Ch-WalkthroughReading/#walkthrough-query-eventsreadings","title":"Walkthrough - Query Events/Readings","text":"Use either the Postman or Curl tab below to walkthrough getting the list of events.
PostmanCurlMake a GET request to retrieve the Event
s associated to the countcamera1
device: http://localhost:59880/api/v3/event/device/name/countcamera1
.
Make a GET request to retrieve the Reading
s associated to the countcamera1
device: http://localhost:59880/api/v3/reading/device/name/countcamera1
.
Make a curl GET requests to retrieve 10 of the last Event
s associated to the countcamera1
device and to retrieve 10 of the human count readings associated to countcamera1
curl -X GET localhost:59880/api/v3/event/device/name/countcamera1 | json_pp\ncurl -X GET localhost:59880/api/v3/reading/device/name/countcamera1 | json_pp\n
There are many additional APIs on core data to retrieve Event
and Reading
data. As an example, here is one to find all events inside of a start and end time range.
curl -X GET localhost:59880/api/v3/event/start/1602168089665560000/end/1602168089665570000 | json_pp\n
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughSetup/","title":"Setup up your environment","text":""},{"location":"walk-through/Ch-WalkthroughSetup/#install-docker-docker-compose-edgex-foundry","title":"Install Docker, Docker Compose & EdgeX Foundry","text":"To explore EdgeX and walk through it's APIs and how it works, you will need:
If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry. If you have the tools and EdgeX already installed and running, you can proceed to the Walkthrough Use Case.
"},{"location":"walk-through/Ch-WalkthroughSetup/#install-postman-optional","title":"Install Postman (optional)","text":"You can follow this walkthrough making HTTP calls from the command-line with a tool like curl
, but it's easier if you use a graphical user interface tool designed for exercising REST APIs. For that we like to use Postman. You can download the native Postman app for your operating system.
Note
Example curl
commands will be provided with the walk through so that you can run this walkthrough without Postman.
Alert
It is assumed that for the purposes of this walk through demonstration
localhost
. If this is not the case, substitute your hostname for localhost.<Back Next>
"},{"location":"walk-through/Ch-WalkthroughUseCase/","title":"Example Use Case","text":"In order to explore EdgeX, its services and APIs and to generally understand how it works, it helps to see EdgeX under the context of a real use case. While you exercise the APIs under a hypothetical situation in order to demonstrate how EdgeX works, the use case is very much a valid example of how EdgeX can be used to collect data from devices and actuate control of the sensed environment it monitors. People (and animal) counting camera technology as highlighted in this walk through does exist and has been connected to EdgeX before.
"},{"location":"walk-through/Ch-WalkthroughUseCase/#object-counting-camera","title":"Object Counting Camera","text":"Suppose you had a new device that you wanted to connect to EdgeX. The device was a camera that took a picture and then had an on-board chip that analyzed the picture and reported the number of humans and canines (dogs) it saw.
How often the camera takes a picture and reports its findings can be configured. In fact, the camera device could be sent two actuation commands - that is sent two requests for which it must respond and do something. You could send a request to set its time, in seconds, between picture snapshots (and then calculating the number of humans and dogs it finds in that resulting image). You could also request it to set the scan depth, in feet, of the camera - that is set how far out the camera looks. The farther out it looks, the less accurate the count of humans and dogs becomes, so this is something the manufacturer wants to allow the user to set based on use case needs.
"},{"location":"walk-through/Ch-WalkthroughUseCase/#edgex-device-representation","title":"EdgeX Device Representation","text":"In EdgeX, the camera must be represented by a Device
. Each Device
is managed by a device service. The device service communicates with the underlying hardware - in this case the camera - in the protocol of choice for that Device
. The device service collects the data from the devices it manages and passes that data into the rest of EdgeX.
Note
A device service will, by default, publish data into a message bus which can be subscribed to by core data and/or application services. You'll learn more about these later in this walkthrough. Alternately, a device service can send data directly to core data.
In this case, the device service would be collecting the count of humans and dogs that the camera sees. The device service also serves to translate the request for actuation from EdgeX and the rest of the world into protocol requests that the physical device would understand. So in this example, the device service would take requests to set the duration between snapshots and to set the scan depth and translate those requests into protocol commands that the camera understood.
Exactly how this camera physically connects to the host machine running EdgeX and how the device service works under the covers to communicate with the camera Device is immaterial to the point of this demonstration.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"EdgeX Foundry is a vendor-neutral open source project hosted by The Linux Foundation. EdgeX Foundry builds a common open framework for IoT edge computing. At the heart of the project is an interoperability framework hosted within a full hardware- and OS-agnostic reference software platform to enable an ecosystem of plug-and-play components that unifies the marketplace and accelerates the deployment of IoT solutions.
Docker Quick StartJump in to EdgeX Foundry by running locally with Docker containers.
Snap Quick StartJump in to EdgeX Foundry by running Snaps.
Build & Run NativelyBuild EdgeX and run it natively on your OS.
Build a Device ServiceBuild a custom device service to connect to your sensor or device.
Build an Application ServiceBuild or configure a new application service to get data to the cloud, database, enterprise application or other external system.
Running in Hybrid ModeHow to run a service you are working on natively and then run the rest of EdgeX with Docker containers
"},{"location":"V3TopLevelMigration/","title":"V3 Migration Guide","text":"EdgeX 3.0
Many backward-breaking changes occurred in the EdgeX 3.0 (Minnesota) release, which may require some migration depending on your use case.
This section describes how to migrate from V2 to V3 at a high level and refers the reader to the appropriate detail documents. The areas to consider for migrating are:
Service configuration is one of the big changes for EdgeX V3
"},{"location":"V3TopLevelMigration/#configuration-provider","title":"Configuration Provider","text":"If you have customized any EdgeX service's configuration (core, support, device, etc.) via the Configuration Provider (Consul), those customization will need to be re-applied to those services' configuration or the common configuration in the Configuration Provider once the V3 versions have started and pushed their configuration into the Configuration Provider. The V3 services now use v3
in the Configuration Provider path rather than 2.0
. The folder structure in the Configuration Provider has been flattened so all services are at the same level. See the Configuration File section below for details on migrating configuration.
Example Configuration Provider paths for V3
.../kv/edgex/v3/core-common-config-bootstrapper\n.../kv/edgex/v3/core-data/\n.../kv/edgex/v3/device-virtual/\n.../kv/edgex/v3/app-rules-engine/\n
The same applies for custom device and application services once they have been migrated following the guides referenced in the Custom Device Service and Custom Applications Service sections below.
Warning
If the Configuration Provider data is not cleared prior to running the V3 services, the V2 configuration will remain and needlessly take up memory. The configuration data in the Configuration Provider can be cleared by deleting the .../edgex/
node with the curl command below prior to starting EdgeX 3.0.
curl --request DELETE http://localhost:8500/v1/kv/edgex?recurse=true\n
"},{"location":"V3TopLevelMigration/#configuration-file","title":"Configuration File","text":"If you have customized the service configuration files for any EdgeX service (core, support, device, etc.) that configuration will need to be migrated.
The two biggest changes to the service configuration files are:
See V3 Migration of Common Configuration for the details on migrating configuration common to all EdgeX services.
The tool here can be used to convert your customized service configuration file from TOML to YAML. This should be done once all the common configuration has been removed.
The following links provide the configuration migration specifics for the individual EdgeX services:
If you have custom environment overrides for configuration impacted by the V3 changes, you will also need to migrate your overrides to use the new name or value, depending on what has changed. Refer to the links above and/or below for details on migrating the common and/or service-specific configuration to determine whether your overrides require migrating.
Note
When using the Configuration Provider, the environment overrides for common configuration are applied to the core-common-config-bootstrapper service. They no longer work when applied to the individual services as the common configuration settings no longer exist in each service's private configuration.
"},{"location":"V3TopLevelMigration/#custom-compose-file","title":"Custom Compose File","text":"The compose files for V3 have many changes from their V2 counter parts. If you have customized a V2 compose file to add additional services and/or add or modify configuration overrides, it is highly recommended that you start with the appropriate V3 compose file and re-add your customizations. It is very likely that the sections for your additional services will need to be migrated to have the proper environment overrides. Best approach is to use one of the V3 service sections that closest matches your service as a template.
The latest V3 compose files can be found here: Compose Files
"},{"location":"V3TopLevelMigration/#compose-builder","title":"Compose Builder","text":"If the additional service(s) in your custom compose file are EdgeX released device or app services, it is highly recommended that you use the Compose Builder to regenerate your custom compose file.
The latest V3 Compose Builder can be found here: Compose Builder Readme
"},{"location":"V3TopLevelMigration/#command-line-options","title":"Command Line Options","text":"The following command-line options and corresponding environment variables have be renamed for consistency
-c/--confdir
is replaced by -cd/--configDir
EDGEX_CONF_DIR
environment variable is replaced by EDGEX_CONFIG_DIR
-f/--file
is replaced by -cf/--configFile
EDGEX_CONFIG_FILE
has not changed-cp/ --configProvider
has not changedEDGEX_CONFIGURATION_PROVIDER
environment variable is replaced by EDGEX_CONFIG_PROVIDER
If your solution uses any of the renamed options or environment variables you will need to make the appropriate changes to use the new names.
See Command Line Options page for more details on the above options and the Command Line Overrides section for more details on the above environment variables
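If you have shell scripts or CI jobs that still export the old variable names, a small check like the Go sketch below can flag them during migration. The old-to-new mapping comes directly from the rename list above; everything else is illustrative.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Deprecated V2 names mapped to their V3 replacements, per the list above.
	renamed := map[string]string{
		"EDGEX_CONF_DIR":               "EDGEX_CONFIG_DIR",
		"EDGEX_CONFIGURATION_PROVIDER": "EDGEX_CONFIG_PROVIDER",
	}
	for oldName, newName := range renamed {
		if value, isSet := os.LookupEnv(oldName); isSet {
			fmt.Printf("%s is deprecated; set %s=%s instead\n", oldName, newName, value)
		}
	}
}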
"},{"location":"V3TopLevelMigration/#database","title":"Database","text":"There currently is no migration path for the data stored in the database. If possible, the database should be cleared prior to starting V3 EdgeX. This will allow the database to be V3 compliant from the start. See Clearing Redis Database section below for details on how to clear the Redis database.
The following sections describe what you need to be aware of for the different services that create data in the database.
"},{"location":"V3TopLevelMigration/#core-data","title":"Core Data","text":"The Event/Reading data stored by Core Data is considered transient and of little value once it has become old. The V3 versions of these data collections have minimal changes from their V2 counter parts.
"},{"location":"V3TopLevelMigration/#api-change","title":"API Change","text":"/event/{profileName}/{deviceName}/{sourceName}
to /event/{serviceName}/{profileName}/{deviceName}/{sourceName}
See Core Data API Reference for complete details.
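For illustration only, a client that previously built the V2 event URL could construct the new V3 path as sketched below. The host and port are assumptions for a default local Core Data instance, the example names are hypothetical, and the event payload itself is omitted.

package main

import "fmt"

func main() {
	// The V3 path adds the device service name ahead of the profile, device and source names.
	serviceName := "device-virtual" // assumed example values
	profileName := "Random-Integer-Device"
	deviceName := "Random-Integer-Device"
	sourceName := "Int64"

	v3Path := fmt.Sprintf("http://localhost:59880/api/v3/event/%s/%s/%s/%s",
		serviceName, profileName, deviceName, sourceName)
	fmt.Println(v3Path)
}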
"},{"location":"V3TopLevelMigration/#reading","title":"Reading","text":"tags
field in reading.The field that has changed in V3 is the apiVersion
which is now set to v3
.
Most of the data stored by Core Metadata will be recreated when the V3 versions of the Device Services start-up. The statically declared devices will automatically be created and device discovery will find and add existing devices. Any device profiles, devices, provision watchers created manually via the V2 REST APIs will have to be recreated using the V3 REST API. Any manually-applied AdministrativeState
settings will also need to be re-applied.
Add/ Update/ Get device
LastConnected
, LastReported
and UpdateLastConnected
from device modelAdd/ Update/ Get deviceprofile
optional
field in ResourcePropertiesmask
, shift
, scale
, base
, offset
, maximum
and minimum
from string
to number
in ResourcePropertiesGet UOM
Add/ Get/ Update ProvisionWatcher
DiscoveredDevice
; such as profileName
, Device adminState
, and autoEvents
.DiscoveredDevice
object to allow any additional or customized data.adminState
now. The Device adminState
is moved into the DiscoveredDevice
object.Add/ Update Device
notify
which is never usedtags
and properties
See Core Metadata API Reference for complete details.
"},{"location":"V3TopLevelMigration/#core-command","title":"Core Command","text":""},{"location":"V3TopLevelMigration/#api-change_2","title":"API Change","text":"ds-pushevent
and ds-returnevent
to use bool value, true
or false
, instead of yes
or no
See Core Command API Reference for complete details.
"},{"location":"V3TopLevelMigration/#support-notifications","title":"Support Notifications","text":"Any Subscriptions
created via the V2 REST API will have to be recreated using the V3 REST API. The Notification
and Transmission
collections will be empty until new notifications are sent using EdgeX V3
authmethod
to support-scheduler actions DTO, which indicates how to authenticate the outbound URL. Use NONE
when running in non-secure mode and JWT
when running in secure mode.See Support Scheduler API Reference for complete details.
The statically declared Interval
and IntervalAction
will be created automatically. Any Interval
and/or IntervalAction
created via the V2 REST API will have to be recreated using the V3 REST API. If you have created a custom configuration with additional statically declared Interval
s and IntervalActions
see the Configuration File section under Customized Configuration below.
Application services use the database only when the Store and Forward capability is enabled. If you do not use this capability you can skip this section. This data collection only has data when that data could not be exported. It is recommended not to upgrade to V3 while the Store and Forward data collection is not empty or you are certain the data is no longer needed. You can determine if the Store and Forward data collection is empty by setting the Application Service's log level to DEBUG
and look for the following message which is logged every RetryInterval
:
Example
msg=\" 0 stored data items found for retrying\"\n
Note
The RetryInterval
is in the app-services
section of common configuration. Changing it there will apply to all Application Services that have the Store and Forward capability enabled.
When running EdgeX in Docker the simplest way to clear the database is to remove the db-data
volume after stopping the V2 EdgeX services.
docker compose -f <compose-file> down\ndocker volume rm $(docker volume ls -q | grep db-data)\n
Now when the V3 EdgeX services are started the database will be cleared of the old v2 data.
"},{"location":"V3TopLevelMigration/#snaps","title":"Snaps","text":"Because there are no tools to migrate EdgeX configuration and database, it's not possible to update the edgexfoundry snap from a V2 version to a V3 version. You must remove the V2 snap first, and then install a V3 version of the snap (available from the 3.0 track in the Snap Store). This will result in starting fresh with EdgeX V3 and all V2 data removed.
"},{"location":"V3TopLevelMigration/#local","title":"Local","text":"If you are running EdgeX locally, i.e. not in Docker or snaps and in non-secure mode you can use the Redis CLI to clear the database. The CLI would have been installed when you installed Redis locally. Run the following command to clear the database:
redis-cli FLUSHDB\n
This will not work if running EdgeX in secure mode since you will not have the randomly generated Redis password unless you created an Admin password when you installed Redis.
"},{"location":"V3TopLevelMigration/#custom-device-service","title":"Custom Device Service","text":"If you have custom Device Services they will need to be migrated to the V3 version of the Device SDK. See Device Service V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#custom-device-profile","title":"Custom Device Profile","text":"If you have custom V2 Device Profile(s) for one of the EdgeX Device Services they will need to be migrated to the V3 version of Device Profiles. See Device Service V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#custom-pre-defined-device","title":"Custom Pre-Defined Device","text":"If you have custom V2 Pre-Defined Device(s) for one of the EdgeX Device Services they will need to be migrated to the V3 version of Pre-Defined Devices. See Device Service V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#custom-applications-service","title":"Custom Applications Service","text":"If you have custom Application Services they will need to be migrated to the V3 version of the App Functions SDK. See Application Services V3 Migration Guide for complete details.
"},{"location":"V3TopLevelMigration/#security","title":"Security","text":"If you have an add-on services running in secure mode you will need to use the new names of the environment variables in EdgeX V3. See Security Services V3 Migration Guide for more details.
"},{"location":"V3TopLevelMigration/#api-gateway-configuration","title":"API Gateway configuration","text":"The API gateway has changed in EdgeX V3. See Security Services V3 Migration Guide for more details.
"},{"location":"V3TopLevelMigration/#authenticated-rest-apis","title":"Authenticated REST APIs","text":"When security is enable, all V3 EdgeX services REST APIs require a JWT authorization token. See Security Services V3 Migration Guide for more details.
"},{"location":"V3TopLevelMigration/#ekuiper","title":"eKuiper","text":""},{"location":"V3TopLevelMigration/#rules","title":"Rules","text":""},{"location":"V3TopLevelMigration/#rest-action","title":"Rest Action","text":""},{"location":"V3TopLevelMigration/#none-secure-mode","title":"None Secure Mode","text":"If running EdgeX in none secure mode and you have rules with rest
action that references an EdgeX service, the endpoint API version will need to be changed from v2 to v3
Example migration of rest
action with EdgeX endpoint
V2:
\"actions\": [\n{\n\"rest\": {\n\"url\": \"http://edgex-core-command:59882/api/v2/device/name/Random-Integer-Device/Int64\", ...\n}\n}\n]\n
\u200b V3:
\"actions\": [\n{\n\"rest\": {\n\"url\": \"http://edgex-core-command:59882/api/v3/device/name/Random-Integer-Device/Int64\", ...\n}\n}\n]\n
"},{"location":"V3TopLevelMigration/#secure-mode","title":"Secure Mode","text":"If running EdgeX in secure mode and you have rules with rest
action that reference an EdgeX Core Command you will need to convert the rule to use Command via External MQTT. See eKuiper documentation here for more details. This is due to the new microservice authorization on all EdgeX services' endpoints requiring a JWT token which eKuiper doesn't have.
Note
This approach requires an external MQTT broker to send the command requests. The default EdgeX compose files do not include an MQTT Broker. This broker is expected to be external to EdgeX.
"},{"location":"about/","title":"About","text":"EdgeX Foundry is an open source, vendor neutral, flexible, interoperable, software platform at the edge of the network, that interacts with the physical world of devices, sensors, actuators, and other IoT objects. In simple terms, EdgeX is edge middleware - serving between physical sensing and actuating \"things\" and our information technology (IT) systems.
The EdgeX platform enables and encourages the rapidly growing community of IoT solution providers to work together in an ecosystem of interoperable components to reduce uncertainty, accelerate time to market, and facilitate scale.
By bringing this much-needed interoperability, EdgeX makes it easier to monitor physical world items, send instructions to them, collect data from them, move the data across the fog up to the cloud where it may be stored, aggregated, analyzed, and turned into information, actuated, and acted upon. So EdgeX enables data to travel northwards towards the cloud or enterprise and back to devices, sensors, and actuators.
The initiative is aligned around a common goal: the simplification and standardization of the foundation for tiered edge computing architectures in the IoT market while still enabling the ecosystem to provide significant value-added differentiation.
If you don't need further description and want to immediately use EdgeX Foundry use this link: Getting Started Guide
"},{"location":"about/#edgex-foundry-use-cases","title":"EdgeX Foundry Use Cases","text":"Originally built to support industrial IoT needs, EdgeX today is used in a variety of use cases to include:
EdgeX Foundry was conceived with the following tenets guiding the overall architecture:
EdgeX Foundry must be platform agnostic with regard to
EdgeX Foundry must be extremely flexible
EdgeX Foundry should provide \"reference implementation\" services but encourages best of breed solutions
EdgeX Foundry must provide for store and forward capability
EdgeX Foundry must support and facilitate \"intelligence\" moving closer to the edge in order to address
EdgeX Foundry must support brown and green device/sensor field deployments
EdgeX Foundry must be secure and easily managed
EdgeX was originally built by Dell to run on its IoT gateways. While EdgeX can and does run on gateways, its platform agnostic nature and micro service architecture enables tiered distributed deployments. In other words, a single instance of EdgeX\u2019s micro services can be distributed across several host platforms. The host platform for one or many EdgeX micro services is called a node. This allows EdgeX to leverage compute, storage, and network resources wherever they live on the edge.
Its loosely-coupled architecture enables distribution across nodes to enable tiered edge computing. For example, thing communicating services could run on a programmable logic controller (PLC), a gateway, or be embedded in smarter sensors while other EdgeX services are deployed on networked servers. The scope of a deployment could therefore include embedded sensors, controllers, edge gateways, servers and cloud systems.
EdgeX micro services can be deployed across an array of compute nodes to maximize resources while at the same time position more processing intelligence closer to the physical edge. The number and the function of particular micro services deployed on a given node depends on the use case and capability of the hardware and infrastructure.
"},{"location":"about/#apache-2-license","title":"Apache 2 License","text":"EdgeX is distributed under Apache 2 License backed by the Apache Foundation. Apache 2 licensing is very friendly (\u201cpermissive\u201d) to open and commercial interests. It allows users to use the software for any purpose. It allows users to distribute, modify or even fork the code base without seeking permission from the founding project. It allows users to change or extend the code base without having to contribute back to the founding project. It even allows users to build commercial products without concerns for profit sharing or royalties to go back to the Linux Foundation or open source project organization.
"},{"location":"about/#edgex-foundry-service-layers","title":"EdgeX Foundry Service Layers","text":"EdgeX Foundry is a collection of open source micro services. These micro services are organized into 4 service layers, and 2 underlying augmenting system services. The Service Layers traverse from the edge of the physical realm (from the Device Services Layer), to the edge of the information realm (that of the Application Services Layer), with the Core and Supporting Services Layers at the center.
The 4 Service Layers of EdgeX Foundry are as follows:
The 2 underlying System Services of EdgeX Foundry are as follows:
Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where most of the innate knowledge of what \u201cthings\u201d are connected, what data is flowing through, and how EdgeX is configured resides in an EdgeX instance. Core consists of the following micro services:
Core services provide intermediary communications between the things and the IT systems.
"},{"location":"about/#supporting-services-layer","title":"Supporting Services Layer","text":"The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Normal software application duties such as scheduler, and data clean up (also known as scrubbing in EdgeX) are performed by micro services in the supporting services layer.
These services often require some amount of core services in order to function. In all cases, supporting services can be considered optional \u2013 that is, they can be left out of an EdgeX deployment depending on use case needs and system resources.
Supporting services include:
Application services are the means to extract, process/transform and send sensed data from EdgeX to an endpoint or process of your choice. EdgeX today offers application service examples to send data to many of the major cloud providers (Amazon IoT Hub, Google IoT Core, Azure IoT Hub, IBM Watson IoT\u2026), to MQTT(s) topics, and HTTP(s) REST endpoints.
Application services are based on the idea of a \"functions pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event messages) in the order specified. The first function in a pipeline is a trigger. A trigger begins the functions pipeline execution. A trigger, for example, is something like a message landing in a message queue. Each function then acts on the message. Common functions include filtering, transformation (i.e. to XML or JSON), compression, and encryption functions. The function pipeline ends when the message has gone through all the functions and is set to a sink. Putting the resulting message into an MQTT topic to be sent to Azure or AWS is an example of a sink completing an application service.
"},{"location":"about/#device-services-layer","title":"Device Services Layer","text":"Device services connect \u201cthings\u201d \u2013 that is sensors and devices \u2013 into the rest of EdgeX.
Device services are the edge connectors interacting with the \"things\" that include, but are not limited to: alarm systems, heating and air conditioning systems in homes and office buildings, lights, machines in any industry, irrigation systems, drones, currently automated transit such as some rail systems, currently automated factories, and appliances in your home. In the future, this may include driverless cars and trucks, traffic signals, fully automated fast food facilities, fully automated self-serve grocery stores, devices taking medical readings from patients, etc.
Device services may service one or a number of things or devices (sensor, actuator, etc.) at one time. A device that a device service manages, could be something other than a simple, single, physical device. The device could be another gateway (and all of that gateway's devices), a device manager, a device aggregator that acts as a device, or collection of devices, to EdgeX Foundry.
The device service communicates with the devices, sensors, actuators, and other IoT objects through protocols native to each device object. The device service converts the data produced and communicated by the IoT object into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry.
EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, MQTT, etc.
"},{"location":"about/#system-services-layer","title":"System Services Layer","text":"Security Infrastructure
Security elements of EdgeX Foundry protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Based on the fact that EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules.
There are two major EdgeX security components.
System Management
System Management facilities provide the central point of contact for external management systems to start/stop/restart EdgeX services, get the status/health of a service, or get metrics on the EdgeX services (such as memory usage) so that the EdgeX services can be monitored.
"},{"location":"about/#software-development-kits-sdks","title":"Software Development Kits (SDKs)","text":"Two types of SDKs are provided by EdgeX to assist in creating north and south side services \u2013 specifically to create application services and device services. SDKs for both the north and south side services make connecting new things or new cloud/enterprise systems easier by providing developers all the scaffolding code that takes care of the basic operations of the service. Thereby allowing developers to focus on specifics of their connectivity to the south or north side object without worrying about all the raw plumbing of a micro service.
SDKs are language specific; meaning an SDK is written to create services in a particular programming language. Today, EdgeX offers the following SDKs:
EdgeX\u2019s primary job is to collect data from sensors and devices and make that data available to north side applications and systems. Data is collected from a sensor by a device service that speaks the protocol of that device. Example: a Modbus device service would communicate in Modbus to get a pressure reading from a Modbus pump. The device service translates the sensor data into an EdgeX event object. The device service can then either:
put the event object on a message bus (which may be implemented via Redis Streams or MQTT). Subscribers to the event message on the message bus can be application services or core data or both (see step 1.1 below).
send the event object to the core data service via REST communications (see step 1.2).
When core data receives the event (either via message bus or REST), it persists the sensor data in the local edge database. EdgeX uses Redis as our persistence store. There is an abstraction in place to allow you to use another database (which has allowed other databases to be used in the past). Persistence is not required and can be turned off. Data is persisted in EdgeX at the edge for two basics reasons:
When core data receives event objects from the device service via REST, it will put sensor data events on a message topic destined for application services. Redis Pub/Sub is used as the messaging infrastructure by default (step 2). MQTT or NATS (opt-in during build) can also be used as the messaging infrastructure between core data and the application services.
The application service transforms the data as needed and pushes the data to an endpoint. It can also filter, enrich, compress, encrypt or perform other functions on the event before sending it to the endpoint (step 3). The endpoint could be an HTTP/S endpoint, an MQTT topic, a cloud system (cloud topic), etc.
"},{"location":"about/#edge-analytics-and-actuation","title":"Edge Analytics and Actuation","text":"In edge computing, simply collecting sensor data is only part of the job of an edge platform like EdgeX. Another important job of an edge platform is to be able to:
Why edge analytics? Local analytics are important for two reasons:
Local analytics allows systems to operate independently, at least for some stretches of time. For example: a shipping container\u2019s cooling system must be able to make decisions locally without the benefit of Internet connectivity for long periods of time when the ship is at sea. Local analytics also allow a system to act quickly in a low latent fashion when critical to system operations. As an extreme case, imagine that your car\u2019s airbag fired on the basis of data being sent to the cloud and analyzed for collisions. Your car has local analytics to prevent such a potentially slow and error prone delivery of the safety actuation in your automobile.
EdgeX is built to act locally on data it collects from the edge. In other words, events are processed by local analytics and can be used to trigger action back down on a sensor/device.
Just as application services prepare data for consumption by north side cloud systems or applications, application services can process and get EdgeX events (and the sensor data they contain) to any analytics package (see step 4). By default, EdgeX ships with a simple rules engine (the default EdgeX rules engine is eKuiper \u2013 an open source rules engine and now a sister project in LF Edge). Your own analytics package (or ML agent) could replace or augment the local rules engine.
The analytic package can explore the sensor event data and make a decision to trigger actuation of a device. For example, it could check that the pressure reading of an engine is greater than 60 PSI. When such a rule is determined to be true, the analytic package calls on the core command service to trigger some action, like \u201copen a valve\u201d on some controllable device (see step 5).
The core command service gets the actuation request and determines which device it needs to act on with the request; then calling on the owning device service to do the actuation (see step 6). Core command allows developers to put additional security measures or checks in place before actuating.
The device service receives the request for actuation, translates that into a protocol specific request and forwards the request to the desired device (see step 7).
"},{"location":"about/#project-release-cadence","title":"Project Release Cadence","text":"Typically, EdgeX releases twice a year; once in the spring and once in the fall. Bug fix releases may occur more often. Each EdgeX release has a code name. The code name follows an alphabetic pattern similar to Android (code names sequentially follow the alphabet).
The code name of each release is named after some geographical location in the world. The honor of naming an EdgeX release is given to a community member deemed to have contributed significantly to the project. A release also has a version number. The release version follows sematic versioning to indicate the release is major or minor in scope. Major releases typically contain significant new features and functionality and are not always backward compatible with prior releases. Minor releases are backward compatible and usually contain bug fixes and fewer new features. See the project Wiki for more information on releases, versions and patches.
Release Schedule Version Barcelona Oct 2017 0.5.0 California Jun 2017 0.6.0 Delhi Oct 2018 0.7.0 Edinburgh Jul 2019 1.0.0 Fuji Nov 2019 1.1.0 Geneva May 2020 1.2.0 Hanoi November 2020 1.3.0 Ireland Spring 2021 2.0.0 Jakarta Fall 2021 2.1.0 Kamukura Spring 2022 TBD Levski Fall 2022 TBDNote: minor releases of the Device Services and Application Services (along with their associated SDKs) can be release independently. Graphical User Interface, the command line interface (CLI) and other tools can be released independently.
EdgeX community members convene in a meeting right at the time of a release to plan the next release and roadmap future releases.
See the Project Wiki for more detailed information on releases and roadmap.
"},{"location":"about/#edgex-history-and-naming","title":"EdgeX History and Naming","text":"EdgeX Foundry began as a project chartered by Dell IoT Marketing and developed by the Dell Client Office of the CTO as an incubation project called Project Fuse in July 2015. It was initially created to run as the IoT software application on Dell\u2019s introductory line of IoT gateways. Dell entered the project into open source through the Linux Foundation on April 24, 2017. EdgeX was formally announced and demonstrated at Hanover Messe 2017. Hanover Messe is one of the world's largest industrial trade fairs. At the fair, the Linux Foundation also announced the association of 50 founding member organizations \u2013 the EdgeX ecosystem \u2013 to help further the project and the goals of creating a universal edge platform.
The name \u2018foundry\u2019 was used to draw parallels to Cloud Foundry. EdgeX Foundry is meant to be a foundry for solutions at the edge just like Cloud Foundry is a foundry for solutions in the cloud. Cloud Foundry was originated by VMWare (Dell Technologies is a major shareholder of VMWare - recall that Dell Technologies was the original creator of EdgeX). The \u2018X\u2019 in EdgeX represents the transformational aspects of the platform and allows the project name to be trademarked and to be used in efforts such as certification and certification marks.
The EdgeX Foundry Logo represents the nature of its role as transformation engine between the physical OT world and the digital IT world.
The EdgeX community selected the octopus as the mascot or \u201cspirit animal\u201d of the project at its inception. Its eight arms and the suckers on the arms represent the sensors. The sensors bring the data into the octopus. Actually, the octopus has nine brains in a way. It has millions of neurons running down each arm; functioning as mini-brains in each of those arms. The arms of the octopus serve as \u201clocal analytics\u201d like that offered by EdgeX. The mascot is affectionately called \u201cEdgey\u201d by the community.
"},{"location":"api/Ch-APIIntroduction/","title":"Introduction","text":"Each of the EdgeX services (core, supporting, management, device and application) implement a RESTful API. This section provides details about each service's API. You will see there is a common set of API's that all services implement, which are:
Each Edgex Service's RESTful API is documented via Swagger. A link is provided to the swagger document in the service specific documentation.
Also included in this API Reference are a couple 3rd party services (Configuration/Registry and Rules Engine). These services do not implement the above common APIs and don't not have swagger documentation. Links are provided to their appropriate documentation.
See the left side navigation for complete list of services to access their API Reference.
"},{"location":"api/applications/Ch-APIAppFunctionsSDK/","title":"Application Services","text":"The App Functions SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a functions pipeline. This functions pipeline processes messages received by the configured trigger. See Application Functions SDK for more details on this SDK.
The App Functions SDK provides a RESTful API that all Application Services inherit from the SDK.
"},{"location":"api/applications/Ch-APIAppFunctionsSDK/#swagger","title":"Swagger","text":""},{"location":"api/applications/Ch-APIRulesEngine/","title":"Rules Engine","text":"EdgeX Foundry Rules Engine Microservice receives data from the instance of App Service Configurable running the rules-engine
profile (aka app-rules-engine) via the EdgeX MessageBus. EdgeX uses eKuiper
for the rules engine, which is a separate LF Edge project. See the eKuiper Website for more details on this rules engine.
eKuiper's documentation
"},{"location":"api/core/Ch-APICoreCommand/","title":"Core Command","text":"EdgeX Foundry's Command microservice is a conduit for other services to trigger action on devices and sensors through their managing Device Services. See Core Command for more details about this service.
The service provides an API to get the list of commands that can be issued for all devices or a single device. Commands are divided into two groups for each device:
EdgeX uses the 3rd party Consul microservice as the implementations for Configuration and Registry. The RESTful APIs are provided by Consul directly, and several communities supply Consul client libraries for different programming languages, including Go (official), Python, Java, PHP, Scala, Erlang/OTP, Ruby, Node.js, and C#.
For the client libraries of different languages, please refer to the list on this page:
https://developer.hashicorp.com/consul/api-docs/libraries-and-sdks
"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#configuration-management","title":"Configuration Management","text":"For the current API documentation, please refer to the official Consul web site:
https://developer.hashicorp.com/consul/api-docs/kv
"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#service-registry","title":"Service Registry","text":"For the current API documentation, please refer to the official Consul web site:
https://developer.hashicorp.com/consul/api-docs/catalog https://developer.hashicorp.com/consul/api-docs/agent https://developer.hashicorp.com/consul/api-docs/agent/check https://developer.hashicorp.com/consul/api-docs/health
Service Registration
While each microservice is starting up, it will connect to Consul to register its endpoint information, including microservice ID, address, port number, and health checking method. After that, other microservices can locate its URL from Consul, and Consul has the ability to monitor its health status. The RESTful API of registration is described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#register-service
Service Deregistration
Before microservices shut down, they have to deregister themselves from Consul. The RESTful API of deregistration is described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#deregister-service
Service Discovery
Service Discovery feature allows client micro services to query the endpoint information of a particular microservice by its microservice IDor list all available services registered in Consul. The RESTful API of querying service by microservice IDis described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#get-local-service-health-by-id
The RESTful API of listing all available services is described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/service#list-services
Health Checking
Health checking is a critical feature that prevents using services that are unhealthy. Consul provides a variety of methods to check the health of services, including Script + Interval, HTTP + Interval, TCP + Interval, Time to Live (TTL), and Docker + Interval. The detailed introduction and examples of each checking methods are described on the following Consul page:
https://developer.hashicorp.com/consul/api-docs/agent/check#list-checks
The health checks should be established during service registration. Please see the paragraph on this page of Service Registration section.
"},{"location":"api/core/Ch-APICoreConfigurationAndRegistry/#consul-ui","title":"Consul UI","text":"Consul has UI which allows you to view the health of registered services and view/edit services' individual configuration. Learn more about the UI on the following Consul page:
https://learn.hashicorp.com/tutorials/consul/get-started-explore-the-ui
"},{"location":"api/core/Ch-APICoreData/","title":"Core Data","text":"EdgeX Foundry Core Data microservice includes the Events/Readings database collected from devices /sensors and APIs to expose this database to other services. Its APIs to provide access to Add, Query and Delete Events/Readings. See Core Data for more details about this service.
"},{"location":"api/core/Ch-APICoreData/#swagger","title":"Swagger","text":""},{"location":"api/core/Ch-APICoreMetadata/","title":"Core Metadata","text":"The Core Metadata microservice includes the device/sensor metadata database and APIs to expose this database to other services. In particular, the device provisioning service deposits and manages device metadata through this service's API. See Core Metadata for more details about this service.
"},{"location":"api/core/Ch-APICoreMetadata/#swagger","title":"Swagger","text":""},{"location":"api/devices/Ch-APIDeviceSDK/","title":"Device Services","text":"The EdgeX Foundry Device Service Software Development Kit (SDK) takes the Developer through the step-by-step process to create an EdgeX Foundry Device Service microservice. See Device Service SDK for more details on this SDK.
The Device Service SDK provides a RESTful API that all Device Services inherit from the SDK.
"},{"location":"api/devices/Ch-APIDeviceSDK/#swagger","title":"Swagger","text":""},{"location":"api/support/Ch-APISupportNotifications/","title":"Support Notifications","text":"When a person or a system needs to be informed of something discovered on the node by another microservice on the node, EdgeX Foundry's Support Notifications microservice delivers that information. Examples of Alerts and Notifications that other services might need to broadcast include sensor data detected outside of certain parameters, usually detected by a Rules Engine service, or a system or service malfunction usually detected by system management services. See Support Notifications for more details about this service.
"},{"location":"api/support/Ch-APISupportNotifications/#swagger","title":"Swagger","text":""},{"location":"api/support/Ch-APISupportScheduler/","title":"Support Scheduler","text":"EdgeX Foundry's Support Scheduler microservice to schedule actions to occur on specific intervals. See Support Scheduler for more details about this service.
"},{"location":"api/support/Ch-APISupportScheduler/#swagger","title":"Swagger","text":""},{"location":"design/Process/","title":"Use Cases and Design Process","text":"This document describes the EdgeX use case driven requirements engineering and design process.
Approved by consent of the TSC on 2022-07-13
Supersedes the processes documented on the EdgeX Wiki
"},{"location":"design/Process/#use-case-driven-approach-to-requirements-and-design","title":"Use Case Driven Approach to Requirements and Design","text":"Designing an architecture is a very time consuming task. It is best to start that with a solid foundation. The obvious goal is to design an architecture that satisfies the functional requirements, while being secure, flexible, and robust. Requirements are very important factors when designing a system. They should be derived from established, validated, and most importantly, written use cases. To avoid feature creep, the architecture should focus on requirements that are backed by multiple use cases and in the meantime try to remain extensible.
The following figure outlines the EdgeX process around use cases, requirements capture, and architectural design.
"},{"location":"design/Process/#use-cases-and-requirements","title":"Use Cases and Requirements","text":"In any software system, new needs of the software are encountered on a regular basis. Any need that is more than a request to fix a bug or make a minor addition/change to the software should be added as feature requests (on Github) and supported by written use cases. The use cases should be documented in an EdgeX Use Case Record (UCR). UCRs must be reviewed by domain experts and approved by the TSC per the process documented here.
"},{"location":"design/Process/#ucr-template","title":"UCR template","text":"UCRs should be submitted as pull requests against the UCR area of edgex-docs. Use the current UCR template to help create the UCR document.
"},{"location":"design/Process/#ucr-review-and-approval-process","title":"UCR Review and Approval Process","text":"The community can submit UCR. The use cases describe the use case, target users, data, hardware, privacy and security considerations. Each use case should also include a list of functional requirements, the list of existing tools (that satisfy those requirements) and gaps. Use cases and requirements may freely overlap. Submissions get peer reviewed by domain experts and TSC. The TSC approves UCR and allows design work to be conducted based on the requirements. They can be updated to address shortcomings and technological advancements. Once a stable implementation is available addressing all the requirements, the record gets classified as \"supported\".
"},{"location":"design/Process/#designs","title":"Designs","text":"Issues and new requirements lead to design decisions. Design decisions are also made on a regular, if not daily, basis. Some of these decisions are big and impactful to all parts of the system. Other decisions are less significant but still important for everyone to know and understand.
EdgeX has two places to record design decisions.
Note: ADRs should also be documented on the project board with a link to the ADR in edgex-docs in the project board card.
"},{"location":"design/Process/#when-to-use-an-adr","title":"When to use an ADR","text":"\"Significant architectural decisions\" are deemed those that:
Impact more than one EdgeX service and often impact the entire system (such as the definition of a data transfer object used through the system, of a feature that must be supported by all services).\nRequire a lot of manpower (more than two people working over the course of a release or more) to implement the feature outlined in the ADR.\nRequires implementation to be accomplished over multiple releases (either due to the complexity of the feature or dependencies).\n
ADRs must be proceeded by one or more approved UCRs in order to be approved by the TSC - allowing for the design to be implemented in the EdgeX software.
"},{"location":"design/Process/#adr-template","title":"ADR template","text":"ADRs should be submitted as pull requests against the ADR area of edgex-docs. Use the latest current ADR template to help create the ADR document.
"},{"location":"design/Process/#adr-review-and-approval-process","title":"ADR Review and Approval Process","text":"Designs are created to address one or more requirements across one or more use cases. The design would include architecture details as well as references to pre-approved use cases and requirements. The TSC review the proposed design from a technical perspective. Approved designs get added to the EdgeX archive as \"approved\" records. They may get \"deprecated\" before implementation if another design supersedes it or if the requirements become obsolete over time. Designs may also get demoted if experimental implementations prove that they are not suitable (e.g. due to security, performance, dependency deprecation, feasibility). The design, implementation, verification cycles can repeat many times before resulting in a stable release.
"},{"location":"design/Process/#project-board-cards-and-issues","title":"Project Board Cards and Issues","text":"All project design/architectural design decisions captured on the Design Decisions project board will be created as either a:
Issue: for any design decision that will require code and a PR will be submitted against the issue.\nCard: for any design decision that is not itself going to result in code or may need to be broken down into multiple issues (which can be referenced on the card).\n
The template for project board cards documenting each decision is:
When/Where: date of the decision and place where the decision was made (such as TSC meeting, working group meeting, etc.). This section is required.\nDecision Summary: quick write-up on the decision. This section is required.\nNotes/Considerations: any alternatives discussed, any impacts to other decisions or considerations to be considered in the future (which would negate the decision). This section is optional.\n\nRelevant links: link to the meeting recording (if available). Link to ADR if relevant. Link to PRs or Issues if relevant. Required if available.\n
Note there is a Template column on the project board with a single card that specifies this same structure.
"},{"location":"design/Process/#project-board-columns","title":"Project Board Columns","text":"The Design Decisions project board will be permanent and never archived or deleted. For each release, a new column named for that release will be created to hold the decisions (in the form of cards or issues) for that release.
The release columns may be \"frozen\" at the end of a release, but should never be deleted so that all design decisions can be retained for the life of the project.
"},{"location":"design/Process/#ownership-and-cardissue-creation","title":"Ownership and Card/Issue Creation","text":"The TSC chair, vice-chair and product manager will have overall responsibility for the Design Decision project board. These people will also be responsible for capturing any decisions from TSC meetings or the Monthly Architect\u2019s Meeting as cards/issues on the board.
Work Group chairs are responsible for adding new design decision cards/issues that come for their work group or related meetings.
"},{"location":"design/TOC/","title":"Use Cases and Design Records","text":""},{"location":"design/TOC/#use-case-records-ucrs","title":"Use Case Records (UCRs)","text":"Note
UCRs are listed in alphabetical order by title.
Name/Link Short Description Bring Your Own Vault Use Case for bringing your own Vault Common Configuration Use Case for having Common configuration used by all EdgeX services Core Data Retention and Persistent Cap Use Case for capping readings in Core Data Device Parent-Child Relationships Use Case for Device Parent-Child Relationships Extending Device Data Use Case for Extending of Device Data by Application Services Provision Watch via Device Metadata Use Case for Provision Watching via Additional Device Metadata Record and Replay Use Case for Recording and Replaying event/readings System Events for Devices Use Case for System Events for Device add/update/delete Microservice Authentication Use Case for Microservice Authentication URIs for files Use Case for loading service files from URIs"},{"location":"design/TOC/#architectural-design-records-adrs","title":"Architectural Design Records (ADRs)","text":"Note
ADRs are listed in chronological order by sequence number in title.
Name/Link Short Description 0001 Registry Refactor Separate out Registry and Configuration APIs 0002 Array Datatypes Allow Arrays to be held in Readings 0003 V2 API Principles Principles and Goals of V2 API Design 0004 Feature Flags Feature Flag Implementation 0005 Service Self Config Init Service Self Config Init & Config Seed Removal 0006 Metrics Collection Collection of service telemetry data 0007 Release Automation Overview of Release Automation Flow for EdgeX 0008 Secret Distribution Creation and Distribution of Secrets 0009 Secure Bootstrapping Secure Bootstrapping of EdgeX 0011 Device Service REST API The REST API for Device Services in EdgeX v2.x 0012 Device Service Filters Device Service event/reading filters 0013 Device Service Events via Message Bus Device Services send Events via Message Bus 0014 Secret Provider for All Secret Provider for All EdgeX Services 0015 Encryption between microservices Details conditions under which TLS is or is not used 0016 Container Image Guidelines Documents best practices for security of docker images 0017 Securing access to Consul Access control and authorization strategy for Consul 0018 Service Registry Service registry usage for EdgeX services 0019 EdgeX-CLI V2 EdgeX-CLI V2 Implementation 0020 Delay start services (SPIFFE/SPIRE) Secret store tokens for delayed start services 0021 Device Profile Changes Rules on device profile modifications 0022 Unit of Measure Unit of Measure 0023 North South Messaging Provide for messaging from north side systems through command down to device services 0024 System Events System Events (aka Control Plane Events) published to the MessageBus 0025 Record and Replay Record data from various devices and play data back without devices present 0026 Common Configuration Separate out the common configuration setting into a single source for all the services 0027 URIs for Files Add capability to load service files from remote locations using URIs 0028 Microservice communication security Microservice communication security / authentication (token-based)"},{"location":"design/adr/","title":"Architecture Decision Records Folder","text":"This folder contains the EdgeX Foundry architectural decision records (ADR).
At the root of this folder are decisions that are relevant to multiple parts of the project (aka. cross cutting concerns). Sub folders under the ADR folder contain decisions relevant to the specific area of the project and essentially set up along working group lines (security, core, application, etc.).
"},{"location":"design/adr/#naming-and-formatting","title":"Naming and Formatting","text":"ADR documents should follow the RFC (request for comments) naming standard. Specifically, approved ADRs should have a sequentially increasing integer (or serial number) and then the architectural design topic as file names (sequence_number-My-Topic.md). Example: 0001-Separate-Configuration-Interface. The sequence is a global sequence for all EdgeX ADR. Per RFC and Michael Nygard suggestions the makeup of the ADR document should generally include:
EdgeX ADRs should use the template.md file available in this directory.
"},{"location":"design/adr/#ownership","title":"Ownership","text":"EdgeX WG chairman own the sub folder and included documents associated to their work group. The EdgeX TSC chair/vice chair are responsible for the root level, cross cutting concern documents.
"},{"location":"design/adr/#table-of-contents","title":"Table of Contents","text":"A README with a table of contents for current documents is located here. Document authors are asked to keep the TOC updated with each new document entry.
Legacy designs have their own Table of Contents and are located here.
"},{"location":"design/adr/0001-Registy-Refactor/","title":"Registry Refactoring Design","text":"Approved
"},{"location":"design/adr/0001-Registy-Refactor/#context","title":"Context","text":"Currently the Registry Client
in go-mod-registry
module provides Service Configuration and Service Registration functionality. The goal of this design is to refactor the go-mod-registry
module for separation of concerns. The Service Registry functionality will stay in the go-mod-registry
module and the Service Configuration functionality will be separated out into a new go-mod-configuration
module. This allows for implementations for deferent providers for each, another aspect of separation of concerns.
An aspect of using the current Registry Client
is \"Where do the services get the Registry Provider
connection information?\" Currently all services either pull this connection information from the local configuration file or from the edgex_registry
environment variable. Device Services also have the option to specify this connection information on the command line. With the refactoring for separation of concerns, this issue changes to \"Where do the services get the Configuration Provider
connection information?\"
There have been concerns voiced by some in the EdgeX community that storing this Configuration Provider
connection information in the configuration which ultimately is provided by that provider is not the right design.
This design proposes that all services will use the command line option approach with the ability to override with an environment variable. The Configuration Provider
information will not be stored in each service's local configuration file. The edgex_registry
environment variable will be deprecated. The Registry Provider
connection information will continue to be stored in each service's configuration either locally or from theConfiguration Provider
same as all other EdgeX Client and Database connection information.
The new -cp/-configProvider
command line option will be added to each service which will have a value specified using the format {type}.{protocol}://{host}:{port}
e.g. consul.http://localhost:8500
. This new command line option will be overridden by the edgex_configuration_provider
environment variable when it is set. This environment variable's value has the same format as the command line option value.
If no value is provided to the -cp/-configProvider
option, i.e. just -cp
, and no environment variable override is specified, the default value of consul.http://localhost:8500
will be used.
If -cp/-configProvider
is not used and no environment variable override is specified, the local configuration file is used, as it is now.
All services will log the Configuration Provider
connection information that is used.
The existing -r/-registry
command line option will be retained as a Boolean flag to indicate to use the Registry.
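As a rough illustration of the precedence described above (not the actual bootstrap code), the following sketch shows how a service might resolve the Configuration Provider value from the -cp/-configProvider option, the edgex_configuration_provider environment variable override, and the default; the flag and variable names come from this design, everything else is hypothetical.

package main

import "fmt"

const defaultConfigProvider = "consul.http://localhost:8500"

// resolveConfigProvider returns the Configuration Provider value to use, or an
// empty string to indicate that the local configuration file should be used.
func resolveConfigProvider(cpFlagUsed bool, cpFlagValue, envOverride string) string {
	if envOverride != "" {
		return envOverride // edgex_configuration_provider overrides the command line option
	}
	if cpFlagUsed {
		if cpFlagValue != "" {
			return cpFlagValue // explicit -cp/-configProvider value
		}
		return defaultConfigProvider // just -cp with no value
	}
	return "" // neither the option nor the override: use the local configuration file
}

func main() {
	fmt.Println(resolveConfigProvider(true, "consul.http://edgex-core-consul:8500", "")) // explicit value
	fmt.Println(resolveConfigProvider(true, "", ""))                                     // default value
	fmt.Println(resolveConfigProvider(false, "", ""))                                    // "" -> local configuration file
}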
All services in the edgex-go mono repo use the new common bootstrap functionality. The plan is to move this code to a go module for the Device Service and App Functions SDKs to also use. The current bootstrap modules pkg/bootstrap/configuration/registry.go
and pkg/bootstrap/container/registry.go
will be refactored to use the new Configuration Client
and be renamed appropriately. New bootstrap modules will be created for using the revised version of Registry Client
. The current use of useRegistry
and registryClient
for service configuration will be changed to appropriate names for using the new Configuration Client
. The current use of useRegistry
and registryClient
for service registration will be retained. Calls to the new Unregister() API will be added to the shutdown code for all services.
The conf-seed
service will have similar changes for specifying the Configuration Provider
connection information since it doesn't use the common bootstrap package. Beyond that it will have minor changes for switching to using the Configuration Client
interface, which will just be imports and appropriate name refactoring.
Since the Configuration Provider
connection information will no longer be in the service's configuration struct, the config
endpoint processing will be modified to add the Configuration Provider
connection information to the resulting JSON created from the service's configuration.
The following is the current Registry Client
Interface
type Client interface {\nRegister() error\nHasConfiguration() (bool, error)\nPutConfigurationToml(configuration *toml.Tree, overwrite bool) error\nPutConfiguration(configStruct interface{}, overwrite bool) error\nGetConfiguration(configStruct interface{}) (interface{}, error)\nWatchForChanges(updateChannel chan<- interface{}, errorChannel chan<- error, configuration interface{}, waitKey string)\nIsAlive() bool\nConfigurationValueExists(name string) (bool, error)\nGetConfigurationValue(name string) ([]byte, error)\nPutConfigurationValue(name string, value []byte) error\nGetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)\nIsServiceAvailable(serviceId string) error\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client","title":"New Configuration Client","text":"This following is the new Configuration Client
Interface which contains the Service Configuration specific portion from the above current Registry Client
.
type Client interface {\nHasConfiguration() (bool, error)\nPutConfigurationFromToml(configuration *toml.Tree, overwrite bool) error\nPutConfiguration(configStruct interface{}, overwrite bool) error\nGetConfiguration(configStruct interface{}) (interface{}, error)\nWatchForChanges(updateChannel chan<- interface{}, errorChannel chan<- error,\nconfiguration interface{}, waitKey string)\nIsAlive() bool\nConfigurationValueExists(name string) (bool, error)\nGetConfigurationValue(name string) ([]byte, error)\nPutConfigurationValue(name string, value []byte) error\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#revised-registry-client","title":"Revised Registry Client","text":"This following is the revised Registry Client
Interface, which contains the Service Registry specific portion from the above current Registry Client
. The UnRegister()
API has been added per issue #20
type Client interface {\nRegister() error\nUnRegister() error\nIsAlive() bool\nGetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)\nIsServiceAvailable(serviceId string) error\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#client-configuration-structs","title":"Client Configuration Structs","text":""},{"location":"design/adr/0001-Registy-Refactor/#current-registry-client-config","title":"Current Registry Client Config","text":"The following is the current struct
used to configure the current Registry Client
type Config struct {\nProtocol string\nHost string\nPort int\nType string\nStem string\nServiceKey string\nServiceHost string\nServicePort int\nServiceProtocol string\nCheckRoute string\nCheckInterval string\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#new-configuration-client-config","title":"New Configuration Client Config","text":"The following is the new struct
that will be used to configure the new Configuration Client
from the command line option or environment variable values. The Service Registry portion has been removed from the above existing Registry Client Config
type Config struct {\nProtocol string\nHost string\nPort int\nType string\nBasePath string\nServiceKey string\n}\n
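As an illustrative sketch only (not the go-mod-configuration implementation), the following shows how a value in the documented {type}.{protocol}://{host}:{port} format, e.g. consul.http://localhost:8500, could be parsed into fields like those of the new Configuration Client Config above; the parse function and its error handling are assumptions.

package main

import (
	"fmt"
	"net/url"
	"strconv"
	"strings"
)

// Config mirrors the connection-related fields of the new Configuration Client Config.
type Config struct {
	Protocol string
	Host     string
	Port     int
	Type     string
}

// parseConfigProvider splits a {type}.{protocol}://{host}:{port} value into its parts.
func parseConfigProvider(value string) (Config, error) {
	parts := strings.SplitN(value, ".", 2)
	if len(parts) != 2 {
		return Config{}, fmt.Errorf("expected {type}.{protocol}://{host}:{port}, got %q", value)
	}
	u, err := url.Parse(parts[1])
	if err != nil {
		return Config{}, err
	}
	port, err := strconv.Atoi(u.Port())
	if err != nil {
		return Config{}, err
	}
	return Config{Type: parts[0], Protocol: u.Scheme, Host: u.Hostname(), Port: port}, nil
}

func main() {
	cfg, err := parseConfigProvider("consul.http://localhost:8500")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // {Protocol:http Host:localhost Port:8500 Type:consul}
}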
"},{"location":"design/adr/0001-Registy-Refactor/#new-registry-client-config","title":"New Registry Client Config","text":"The following is the revised struct
that will be used to configure the new Registry Client
from the information in the service's configuration. This is mostly unchanged from the existing Registry Client Config
, except that the Stem
for configuration has been removed
type Config struct {\nProtocol string\nHost string\nPort int\nType string\nServiceKey string\nServiceHost string\nServicePort int\nServiceProtocol string\nCheckRoute string\nCheckInterval string\n}\n
"},{"location":"design/adr/0001-Registy-Refactor/#provider-implementations","title":"Provider Implementations","text":"The current Consul
implementation of the Registry Client
will be split up into implementations for the new Configuration Client
in the new go-mod-configuration
module and the revised Registry Client
in the existing go-mod-registry
module.
It was decided to move forward with the above design
After initial ADR was approved, it was decided to retain the -r/--registry
command-line flag and not add the Enabled
field in the Registry provider configuration.
Once the refactoring of go-mod-registry and go-mod-configuration are complete, they will need to be integrated into the new go-mod-bootstrap. Part of this integration will be the Command line option changes above. At this point the edgex-go services will be integrated with the new Registry
and Configuration
providers. The App Services SDK
and Device Services SDK
will then need to integrate go-mod-bootstrap to take advantage of these new providers.
Registry Abstraction - Decouple EdgeX services from Consul (Previous design)
"},{"location":"design/adr/0004-Feature-Flags/","title":"Feature Flag Proposal","text":""},{"location":"design/adr/0004-Feature-Flags/#status","title":"Status","text":"Accepted
"},{"location":"design/adr/0004-Feature-Flags/#context","title":"Context","text":"Out of the proposal for releasing on time, the community suggested that we take a closer look at feature-flags.
Feature-flags are typically intended for users of an application to turn on or off new or unused features. This gives users more control to adopt a feature set at their own pace \u2013 i.e. disabling store and forward in App Functions SDK without breaking backward compatibility.
It can also be used to indicate to developers the features that are more often used than others and can provide valuable feedback to enhance and continue a given feature. To gain that insight into the use of any given feature, we would require not only instrumentation of the code but a central location in the cloud (i.e. a TIG stack) for the telemetry to be ingested and, in turn, reported in order to provide the feedback to the developers. This becomes infeasible primarily because of the cloud infrastructure costs, privacy concerns, and other unforeseen legal reasons for sending \u201cUsage Metrics\u201d of an EdgeX installation back to a central entity such as the Linux Foundation, among many others. Without the valuable feedback loop, feature-flags don\u2019t provide much value on their own and they certainly don\u2019t assist in increasing velocity to help us deliver on time.
Putting aside one of the major value propositions listed above, feasibility of a feature flag \u201cmodule\u201d was still evaluated. The simplest approach would be to leverage configuration following a certain format such as FF_[NewFeatureName]=true/false. This is similar to what is done today. Turning on/off security is an example, turning on/off the registry is another. Expanding this further with a module could offer standardization of controlling a given feature such as featurepkg.Register(\u201cMyNewFeature\u201d)
or featurepkg.IsOn(\u201cMyNewFeature\u201d)
. However, this really is just adding complexity on top of the underlying configuration that is already implemented. If we were to consider doing something like this, it lends itself to central management of features within the EdgeX framework\u2014either as its own service or possibly added as part of the SMA. This could help address concerns around feature dependencies and compatibility (for example, Feature A on Service X requires Feature B and Feature C on Service Y). Continuing down this path starts to beget a fairly large impact to EdgeX for value that cannot be fully realized.
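For illustration only, the following is the kind of thin wrapper over configuration that the context above describes (FF_[NewFeatureName] keys and a featurepkg helper); the decision below is explicitly not to build such a module, and the package/function names here are taken from the text, with environment variables standing in for service configuration.

package featurepkg

import (
	"os"
	"strconv"
)

// IsOn reports whether the hypothetical feature flag FF_<name> is set to a
// true value (environment variables stand in for configuration in this sketch).
func IsOn(name string) bool {
	v, ok := os.LookupEnv("FF_" + name)
	if !ok {
		return false
	}
	on, err := strconv.ParseBool(v)
	return err == nil && on
}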
The community should NOT pursue a full-fledged feature flag implementation either homegrown or off-the-shelf.
However, developers should be encouraged to build features with a holistic perspective and to consider leveraging configuration options to turn them on/off. In other words, a feature that compiles, works under common scenarios, and does not impact any other functionality should be encouraged, even if it is not yet fully tested against edge cases.
"},{"location":"design/adr/0004-Feature-Flags/#consequences","title":"Consequences","text":"Allows more focus on the many more competing priorities for this release.
Minimal impact to development cycles and release schedule
"},{"location":"design/adr/0005-Service-Self-Config/","title":"Service Self Config Init & Config Seed Removal","text":""},{"location":"design/adr/0005-Service-Self-Config/#status","title":"Status","text":"approved - TSC vote on 3/25/20 for Geneva release
NOTE: this ADR does not address high availability considerations and concerns. EdgeX, in general, has a number of unanswered questions with regard to HA architecture and this design adds to those considerations.
"},{"location":"design/adr/0005-Service-Self-Config/#context","title":"Context","text":"Since its debut, EdgeX has had a configuration seed service (config-seed) that, on start of EdgeX, deposits configuration for all the services into Consul (our configuration/registry service). For development purposes, or on resource constrained platforms, EdgeX can be run without Consul with services simply reading configuration from the filesystem.
While this process has nominally worked for several releases of EdgeX, there have always been some issues with this extra initialization process (config-seed), not least of which are: - race conditions on the part of the services, as they bootstrap, coming up before the config-seed completes its deposit of configuration into Consul - how to deal with "overrides" such as environment-variable-provided configuration overrides, as the override is often specific to a service but has to be in place for config-seed in order to take effect - the need for an additional service that is only there for init and then dies (confusing to users)
NOTE - for historical purposes, it should be noted that config-seed only writes configuration into the configuration/registry service (Consul) once on the first start of EdgeX. On subsequent starts of EdgeX, config-seed checks to see if it has already populated the configuration/registry service and will not rewrite configuration again (unless the --overwrite flag is used).
The design/architectural proposal, therefore, is: - removal of the config-seed service (removing cmd/config-seed from the edgex-go repository) - have each EdgeX micro service \"self seed\" - that is seed Consul with their own required configuration on bootstrap of the service. Details of that bootstrapping process are below.
"},{"location":"design/adr/0005-Service-Self-Config/#command-line-options","title":"Command Line Options","text":"All EdgeX services support a common set of command-line options, some combination of which are required on startup for a service to interact with the rest of EdgeX. Command line options are not set by any configuration. Command line options include:
consul.
- for example: -cp=consul.http://localhost:8500
)The distinction of command line options versus configuration will be important later in this ADR.
Two command line options (-o for overwrite and -r for registry) are not overridable by environmental variables.
NOTES: The --overwrite command line option should be used sparingly and with expert knowledge of EdgeX; in particular, knowledge of how it operates and where/how it gets its configuration on restarts, etc. Ordinarily, --overwrite is provided as a means to support development needs. Permanent use of --overwrite in production environments is highly discouraged.
"},{"location":"design/adr/0005-Service-Self-Config/#configuration-initialization","title":"Configuration Initialization","text":"Each service has (or shall have if not providing it already) a local configuration file. The service may use the local configuration file on initialization of the service (aka bootstrap of the service) depending on command line options and environmental variables (see below) provided at startup.
Using a configuration provider
When the configuration provider is specified, the service will call on the configuration provider (Consul) and check if the top-level (root) namespace for the service exists. If configuration at the top-level (root) namespace exists, it indicates that the service has already populated its configuration into the configuration provider in a prior startup.
If the service finds the top-level (root) namespace is already populated with configuration information it will then read that configuration information from the configuration provider under namespace for that service (and ignore what is in the local configuration file).
If the service finds the top-level (root) namespace is not populated with configuration information, it will read its local configuration file and populate the configuration provider (under the namespace for the service) with configuration read from the local configuration file.
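A hedged sketch of this self-seeding flow, written against the new Configuration Client interface from ADR 0001 (HasConfiguration, GetConfiguration, PutConfigurationFromToml), is shown below; the ServiceConfig struct, the local file path, and the seedOrLoad helper are hypothetical stand-ins, not the actual bootstrap code.

package bootstrap

import (
	"fmt"

	"github.com/pelletier/go-toml"
)

// ServiceConfig is a hypothetical stand-in for a service's configuration struct.
type ServiceConfig struct {
	Service struct {
		Host string
		Port int
	}
}

// Client is the subset of the new Configuration Client interface used here.
type Client interface {
	HasConfiguration() (bool, error)
	GetConfiguration(configStruct interface{}) (interface{}, error)
	PutConfigurationFromToml(configuration *toml.Tree, overwrite bool) error
}

// seedOrLoad reads configuration from the provider when the service's namespace
// is already populated, otherwise it seeds the provider from the local file.
func seedOrLoad(cc Client, cfg *ServiceConfig, localTomlPath string) error {
	hasConfig, err := cc.HasConfiguration()
	if err != nil {
		return err
	}
	if hasConfig {
		// A prior startup already populated the provider: use it and ignore the local file.
		raw, err := cc.GetConfiguration(cfg)
		if err != nil {
			return err
		}
		populated, ok := raw.(*ServiceConfig)
		if !ok {
			return fmt.Errorf("unexpected configuration type from provider")
		}
		*cfg = *populated
		return nil
	}
	// First startup: seed the provider from the local configuration file.
	tree, err := toml.LoadFile(localTomlPath)
	if err != nil {
		return err
	}
	return cc.PutConfigurationFromToml(tree, false)
}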
A configuration provider can be specified with a command line argument (the -cp / --configProvider) or environment variable (the EDGEX_CONFIGURATION_PROVIDER environmental variable which overrides the command line argument).
NOTE: the environmental variables are typically uppercase but there have been inconsistencies in environmental variable casing (example: edgex_registry). This should be considered and made consistent in a future major release.
Using the local configuration file
When a configuration provider isn't specified, the service just uses the configuration in its local configuration file. That is, the service uses the configuration in the file associated with the profile, config filename, and config file directory command line options or environment variables. In this case, the service does not contact the configuration service (Consul) for any configuration information.
NOTE: As the services now self seed and deployment specific changes can be made via environment overrides, it will no longer be necessary to have a Docker profile configuration file in each of the service directories (example: https://github.com/edgexfoundry/edgex-go/blob/master/cmd/core-data/res/docker/configuration.toml). See Consequences below. It will still be possible for users to use the profile mechanism to specify a Docker configuration, but it will no longer be required and not the recommended approach to providing Docker container specific configuration.
"},{"location":"design/adr/0005-Service-Self-Config/#overrides","title":"Overrides","text":"Environment variables used to override configuration always take precedence whether configuration is being sourced locally or read from the config provider/Consul.
Note - this means that a configuration value that is being overridden by an environment variable will always be the source of truth, even if the same configuration is changed directly in Consul.
The name of the environmental variable must match the path names in Consul.
NOTES: - Environment variable overrides remove the need to change the "docker" profile in the res/docker/configuration.toml files - Allowing removal of 50% of the existing configuration.toml files. - The override rules in EdgeX between environment variables and command line options may be counter intuitive compared to other systems. There appears to be no standard practice. Indeed, web searching "Reddit & Starting Fights Env Variables vs Command Line Args" will lay out the prevailing differences. - Environment variables used for configuration overrides are named by prepending the configuration element with the configuration section inclusive of sub-path, where the sub-path's "."s are replaced with underscores. These configuration environment variable overrides must be specified using camel case. Here are two examples:
Registry_Host for\n[Registry]\nHost = 'localhost'\n\nClients_CoreData_Host for\n[Clients]\n [Clients.CoreData]\n Host = 'localhost'\n
- Going forward, environment variables that override command line options should be all uppercase. All values overridden get logged (indicating which configuration value or op param and the new value).
"},{"location":"design/adr/0005-Service-Self-Config/#decision","title":"Decision","text":"These features have been implemented (with some minor changes to be done) for consideration here: https://github.com/edgexfoundry/go-mod-bootstrap/compare/master...lenny-intel:SelfSeed2. This code branch will be removed once this ADR is approved and implemented on master.
The implementation for self-seeding services and environmental overrides is already implemented (for Fuji) per this document in the application services and device services (and instituted in the SDKs of each).
"},{"location":"design/adr/0005-Service-Self-Config/#backward-compatibility","title":"Backward compatibility","text":"Several aspects of this ADR contain backward compatibility issues for the device service and application service SDKs. Therefore, for the upcoming minor release, the following guidelines and expections are added to provide for backward compatibility.
Since earlier versions of the device service SDKs accepted a URI for --registry, if one is specified on the command line, use the given URI as the address of the configuration provider. If both --configProvider and --registry specify URIs, then the service should log an error and exit.
If a configProvider URI isn't specified, but --registry (w/out a URI) is specified, then the service will use the Registry provider information from its local configuration file for both configuration and registry providers.
Add it back and use its value as if it were EDGEX_CONFIGURATION_PROVIDER, and enable use of the registry with the same settings in the URL. Default to http as it is in Fuji.
"},{"location":"design/adr/0005-Service-Self-Config/#consequences","title":"Consequences","text":"There are still high availability concerns that need to be considered and not covered in this ADR at this time.
# all common shared environment variables defined here:\nx-common-env-variables: &common-variables\n EDGEX_SECURITY_SECRET_STORE: \"false\"\n EDGEX_CONFIGURATION_PROVIDER: consul.http://edgex-core-consul:8500\n Clients_CoreData_Host: edgex-core-data\n Clients_Logging_Host: edgex-support-logging\n Logging_EnableRemote: \"true\"\n
Approved - Original proposal 10/24/2020; approved by the TSC on 3/2/22
Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include:
Control plane events (CPE) are defined as events
that occur within an EdgeX instance. Examples of CPE include:
CPE should not be confused with core data Events. Core data Events represent a collection (one or more) of sensor/device readings. Core data Events represent sensing of some measured state of the physical world (temperature, vibration, etc.). CPE represents the detection of some happening inside of the EdgeX software.
This ADR outlines **metrics (or telemetry)** collection and handling.
Note
This ADR initially incorporated metrics collection and control plane event processing. The EdgeX architects felt the scope of the design was too large to cover under one ADR. Control plane event processing will be covered under a separate ADR in the future.
"},{"location":"design/adr/0006-Metrics-Collection/#context","title":"Context","text":"System Management services (SMA and executors) currently provide a limited set of \u201cmetrics\u201d to requesting clients (3rd party applications and systems external to EdgeX). Namely, it provides requesting clients with service CPU and memory usage; both metrics about the resource utilization of the service (the executable) itself versus metrics that are about what is happening inside of the service. Arguably, the current system management metrics can be provided by the container engine and orchestration tools (example: by Docker engine) or by the underlying OS tooling.
Info
The SMA has been deprecated (since the Ireland release) and will be removed in a future, yet-to-be-named release.
Going forward, users of EdgeX will want to have more insights \u2013 that is, more metrics telemetry \u2013 on what is happening directly in the services and the tasks that they are performing. In other words, users of EdgeX will want more telemetry on service activities to include:
Metric (or telemetry) data is defined as the count or rate of some action, resource, or circumstance in the EdgeX instance or specific service. Examples of metrics include:
The collection and dissemination of metric data will require internal service level instrumentation (relevant to that service) to capture and send data about relevant EdgeX operations. EdgeX does not currently offer any service instrumentation.
"},{"location":"design/adr/0006-Metrics-Collection/#metric-use","title":"Metric Use","text":"As a first step in implementation of metrics data, EdgeX will make metric data available to other subscribing 3rd party applications and systems, but will not necessarily consume or use this information itself.
In the future, EdgeX may consume its own metric data. For example, EdgeX may, in the future, use a metric on the number of EdgeX events being sent to core data (or app services) as the means to throttle back device data collection.
In the future, EdgeX application services may optionally subscribe to a service's metrics message bus (by attaching to the appropriate message pipe for that service), allowing additional filtering, transformation, and endpoint control of metric data from that service. At the point where this feature is supported, consideration would need to be made as to whether all events (sensor reading messages and metric messages) go through the same application services.
At this time, EdgeX will not persist the metric data (except as it may be retained as part of a message bus subsystem such as in an MQTT broker). Consumers of metric data are responsible for persisting the data if needed, but this is external to EdgeX. Persistence of metric information may be considered in the future based on requirements and adopter demand for such a feature.
In general, EdgeX metrics are meant to provide internal services and external applications and systems better information about what is happening \"inside\" EdgeX services and the associated devices with which it communicates.
"},{"location":"design/adr/0006-Metrics-Collection/#requirements","title":"Requirements","text":"Writable
area. When a user wishes to change the configuration dynamically (such as turning on/off a metric), then Consul's UI can be used to change it.on
or off
- in other words providing configuration that determines what metrics are collected and reported by default.off
(the default setting) the service does not report the metric. When a metric is turned on
the service collects and sends the metric to the designated message topic.Info
Initially, it was proposed that metrics be associated with a \"level\" and allow metrics to be turned on or off by level (like levels associated to log messages in logging). The level of metrics data seems arbitrary at this time and considered too complex for initial implementation. This may be reconsidered in a future release and based on new requirements/use cases.
It was also proposed to categorize or label metrics - essentially allowing grouping of various metrics. This would allow groups of metrics to be turned on or off, and allow metrics to be organized per the group when reporting. At this time, this feature is also considered beyond the scope of the initial implementation and to be reconsidered in a future release based on requirements/use case needs.
It was also proposed that each service offer a REST API to provide metrics collection information (such as which metrics were being collected) and the ability to turn the collection on or off dynamically. This is deemed out of scope for the first implementation and may be brought back if there are use case requirements / demand for it.
"},{"location":"design/adr/0006-Metrics-Collection/#requested-metrics","title":"Requested Metrics","text":"The following is a list of example metrics requested by the EdgeX community and adopters for various service areas. Again, metrics would generally be collected and pushed to the message topic in some configured interval (example: 1/5/15 minutes or other defined interval). This is just a sample of metrics thought relevant by each work group. It may not reflect the metrics supported by the implementation. The exact metrics collected by each service will be determined by the service implementers (or SDK implementers in the case of the app functions and device service SDKs).
"},{"location":"design/adr/0006-Metrics-Collection/#general","title":"General","text":"The following metrics apply to all (or most) services.
Note
It is envisioned that there may be additional specific metrics for each device service. For example, the ONVIF camera device service may report number of times camera tampering was detected.
"},{"location":"design/adr/0006-Metrics-Collection/#security","title":"Security","text":"Security metrics may be more difficult to ascertain as they are cross service metrics. Given the nature of this design (on a per service basis), global security metrics may be out of scope or security metrics collection has to be copied into each service (leading to lots of duplicate code for now). Also, true threat detection based on metrics may be a feature best provided by 3rd party based on particular threats and security profile needs.
Metric data will be collected and cached by each service. At designated times (kicked off by configurable schedule), the service will collect telemetry data from the cache and push it to a designated message bus topic.
"},{"location":"design/adr/0006-Metrics-Collection/#metrics-messaging","title":"Metrics Messaging","text":"Cached metric data, at the designated time, will be marshaled into a message and pushed to the pre-configured message bus topic.
Each metric message consists of several key/value pairs: - a required name (the name of the metric) such as service-uptime - a required value which is the telemetry value collected such as 120 as the number of hours the service has been up. - a required timestamp is the time (in Epoch timestamp/milliseconds format) at which the data was collected (similar in nature to the origin of sensed data). - an optional collection (array) of tags. The tags are sets of key/value pairs of strings that provide amplifying information about the telemetry. Tags may include: - originating service name - unit of measure associated with the telemetry value - value type of the value - additional values when the metric is more than just one value (example: when using a histogram, it would include min, max, mean and sum values)
The metric name must be unique for that service. Because some metrics are reported from multiple services (such as service uptime), the name is not required to be unique across all services.
All information (keys, values, tags, etc.) is in string format and placed in a JSON array within the message body. Here are some example representations:
Example metric message body with a single value
{\"name\":\"service-up\", \"value\":\"120\", \"timestamp\":\"1602168089665570000\", \"tags\":{\"service\":\"coredata\",\"uom\":\"days\",\"type\":\"int64\"}}\n
Example metric message body with multiple values
{\"name\":\"api-requests\", \"value\":\"24\", \"timestamp\":\"1602168089665570001\", \"tags\":{\"service\":\"coredata\",\"uom\":\"count\",\"type\":\"int64\", \"mean\":\"0.0665\", \"rate1\":\"0.111\", \"rate5\":\"0.150\",\"rate15\":\"0.111\"}}\n
Info
The key or metric name must be unique when using go-metrics as it requires the metric name to be unique per the registry. Metrics are considered immutable.
"},{"location":"design/adr/0006-Metrics-Collection/#configuration","title":"Configuration","text":"Configuration, not unlike that provided in core data or any device service, will specify the message bus type and locations where the metrics messages should be sent. In fact, the message bus configuration will use (or reuse if the service is already using the message bus) the common message bus configuration as defined below.
Common configuration for each service for message queue configuration (inclusive of metrics):
[MessageQueue]\nProtocol = 'redis' ## or 'tcp'\nHost = 'localhost'\nPort = 5573\nType = 'redis' ## or 'mqtt'\nPublishTopicPrefix = \"edgex/events/core\" # standard and existing core or device topic for publishing \n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\n# Client Identifiers\nClientId = \"device-virtual\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\n
Additional configuration must be provided in each service to provide metrics / telemetry specific configuration. This area of the configuration will likely be different for each type of service.
Additional metrics collection configuration to be provided include:
off
and on
. All are false by default. The list of metrics can and likely will be different per service. The keys in this list are the metric name. True and false are used for on
and off
values.[service-name]/[metric-name]
will be appended per metric (allowing subscribers to filter by service or metric name)These metrics configuration options will be defined in the Writable
area of configuration.toml
so as to allow for dynamic changes to the configuration (when using Consul). Specifically, the [Writable].[Writable.Telemetry]
area will dictate metrics collection configuration like this:
[[Writable]]\n[[Writable.Telemetry]]\nInterval = \"30s\"\nPublishTopicPrefix = \"edgex/telemetry\" # /<service-name>/<metric-name> will be added to this Publish Topic prefix\n#available metrics listed here. All metrics should be listed off (or false) by default\nservice-up = false\napi-requests = false\n
Info
It was discussed that in future EdgeX releases, services may want separate message bus connections. For example one for sensor data and one for metrics telemetry data. This would allow the QoS and other settings of the message bus connection to be different. This would allow sensor data collection, for example, to be messaged with a higher QoS than that of metrics. As an alternate approach, we could modify go-mod-messaging to allow setting QoS per topic (and thereby avoid multiple connections). For the initial release of this feature, the service will use the same connection (and therefore configuration) for metrics telemetry as well as sensor data.
"},{"location":"design/adr/0006-Metrics-Collection/#library-support","title":"Library Support","text":"Each service will now need go-mod-messaging support (for GoLang services and the equivalent for C services). Each service would determine when and what metrics to collect and push to the message bus, but will use a common library chosen for each EdgeX language supported (Go or C currently)
Use of go-metrics (a GoLang library to publish application metrics) would allow EdgeX to utilize (versus construct) a library utilized by over 7 thousand projects. It provides the means to capture various types of metrics in a registry (a sophisticated map). The metrics can then be published (reported
) to a number of well-known systems such as InfluxDB, Graphite, DataDog, and Syslog. go-metrics is a Go library ported from the original Java package https://github.com/dropwizard/metrics.
A similar package would need to be selected (or created) for C. Per the Core WG meeting of 2/24/22 - it is important to provide an implementation that is the same in Go or C. The adopter of EdgeX should not see a difference in whether the metrics/telemetry is collected by a C or Go service. Configuration of metrics in a C or Go service should have the same structure. The C based metrics collection mechanism in C services (specifically as provided for in our C device service SDK) may operate differently \"under the covers\" but its configuration and resulting metrics messages on the EdgeX message bus must be formatted/organized the same.
**Considerations in the use of go-metrics**
**Community questions about go-metrics** (per the Monthly Architect's meeting of 9/20/21):
As an alternative to go-metrics, there is another library called OpenCensus. This is a multi-language metrics library, including Go and C++. This library is more feature rich. OpenCensus is also roughly 5x the size of the go-metrics library.
"},{"location":"design/adr/0006-Metrics-Collection/#additional-open-questions","title":"Additional Open Questions","text":"Writable
configuration and allow Consul to be the means to change the configuration (dynamically). If an adopter chooses not to use Consul, then the configuration with regard to metrics collection, as with all configuration in this circumstance, would be static. If an external API need is requested in the future (such as from an external UI or tool), a REST API may be added. See older versions of this PR for ideas on implementation in this case.reporters
that come with go-metrics that allow for data to be taken directly from go-metrics and pushed to an intermediary for Prometheus and other monitoring/telemetry platforms as referenced above. These capabilities may not be very well supported and are beyond the scope of this EdgeX ADR. However, even without reporters
, it was felt a relatively straightforward exercise (on the part of the adopter) to create an application that listens to the EdgeX metrics message bus and makes that data available via pull REST API for Prometheus if desired.The go-metrics package offers the following types of metrics collection:
g := metrics.NewGauge()\ng.Update(42) // set the value to 42\ng.Update(10) // now set the value to 10\nfmt.Println(g.Value()) // print out the current value in the gauge = 10\n
c := metrics.NewCounter()\nc.Inc(1) // add one to the current counter\nc.Inc(10) // add 10 to the current counter, making it 11\nc.Dec(5) // decrement the counter by 5, making it 6 \nfmt.Println(c.Count()) // print out the current count of the counter = 6\n
m := metrics.NewMeter()\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nm.Mark(1) // add one to the current meter value\ntime.Sleep(15 * time.Second) // allow some time to go by\nfmt.Println(m.Count()) // prints 4\nfmt.Println(m.Rate1()) // prints 0.11075889086811593\nfmt.Println(m.Rate5()) // prints 0.1755318374350548\nfmt.Println(m.Rate15()) // prints 0.19136522498856992\nfmt.Println(m.RateMean()) //prints 0.06665062941438574\n
h := metrics.NewHistogram(metrics.NewUniformSample(4))\nh.Update(10)\nh.Update(20)\nh.Update(30)\nh.Update(40)\nfmt.Println((h.Max())) // prints 40\nfmt.Println(h.Min()) // prints 10\nfmt.Println(h.Mean()) // prints 25\nfmt.Println(h.Count()) // prints 4\nfmt.Println(h.Percentile(0.25)) //prints 12.5\nfmt.Println(h.Variance()) //prints 125\nfmt.Println(h.Sample()) //prints &{4 {0 0} 4 [10 20 30 40]}\n
t := metrics.NewTimer()\nt.Update(10)\ntime.Sleep(15 * time.Second)\nt.Update(20)\ntime.Sleep(15 * time.Second)\nt.Update(30)\ntime.Sleep(15 * time.Second)\nt.Update(40)\ntime.Sleep(15 * time.Second)\nfmt.Println((t.Max())) // prints 40\nfmt.Println(t.Min()) // prints 10\nfmt.Println(t.Mean()) // prints 25\nfmt.Println(t.Count()) // prints 4\nfmt.Println(t.Sum()) // prints 100\nfmt.Println(t.Percentile(0.25)) //prints 12.5\nfmt.Println(t.Variance()) //prints 125\nfmt.Println(t.Rate1()) // prints 0.1116017821771607\nfmt.Println(t.Rate5()) // prints 0.1755821073441404\nfmt.Println(t.Rate15()) // prints 0.1913711954736821\nfmt.Println(t.RateMean()) //prints 0.06665773963998162\n
Note
The go-metrics package does offer some variants of these like the GaugeFloat64 to hold 64 bit floats.
"},{"location":"design/adr/0006-Metrics-Collection/#consequences","title":"Consequences","text":"Possible standards for implementation
Approved (by TSC vote on 3/25/21)
"},{"location":"design/adr/0018-Service-Registry/#context","title":"Context","text":"An EdgeX system may be run with an optional service registry, the use of which (see the related ADR 0001-Registry-Refactor [1]) can be controlled on a per-service basis via the -r/-registry
commmand line options. For the purposes of this ADR, a base assumption is that the registry has been enabled for all services. The default service registry used by EdgeX is Consul [2] from Hashicorp. Consul is also the default configuration provider for EdgeX.
This ADR is meant to address the current usage of the registry by EdgeX services, and in particular whether the EdgeX services are using the registry to determine the location of peer services vs. using static per-service configuration. The reason this is being investigated is that there has been a proposal that EdgeX do away with the registry functionality, as the current implementation is not considered secure, due to the current configuration of Consul as used by the latest version of EdgeX (Hanoi/1.3.0).
According to the original Service Name Design document (v6) [3] written during the California (0.6) release of EdgeX, all EdgeX Foundry microservices should be able to accomplish the following tasks:
The purpose of this design is to ensure that services themselves advertise their location to the rest of the system by first self- registering. Most service registries (including Consul) implement some sort of health check mechanism. If a service is failing one or more health checks, the registry will stop reporting its availability when queried.
Note - the design specifically excludes device services from this service lookup, as Core Metadata maintains a persistent store of DeviceService objects which provide service location for device services.
"},{"location":"design/adr/0018-Service-Registry/#existing-behavior","title":"Existing Behavior","text":"This section documents the existing behavior in the Hanoi (1.3.x) version of EdgeX.
"},{"location":"design/adr/0018-Service-Registry/#device-services","title":"Device Services","text":"Device Virtual's behavior was first tested using the edgexfoundry snap (which is configured to always use the registry) by doing the following:
$ sudo snap install edgexfoundry $ cp /var/snap/edgexfoundry/current/config/device-virtual/res/configuration.toml .
I edited the file, removing the [Client.Data]
section completely and copied the file back into place. Next I enabled device-virtual while monitoring the journal output.
$ sudo cp configuration.toml /var/snap/edgexfoundry/current/config/device-virtual/res/\n$ sudo snap set edgexfoundry device-virtual=on\n
The following error was seen in the journal:
level=INFO app=device-virtual source=httpserver.go:94 msg=\"Web server starting (0.0.0.0:49990)\"\nerror: fatal error; Host setting for Core Data client not configured\n
Next I followed the same steps, but instead of completely removing the client, I instead set the client ports to invalid values. In this case the service logged the following errors and exited:
level=ERROR app=device-virtual source=service.go:149 msg=\"DeviceServicForName failed: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\"\nlevel=ERROR app=device-virtual source=init.go:45 msg=\"Couldn't register to metadata service: Get \\\"http://localhost:3112/api/v1/deviceservice/name/device-virtual\\\": dial tcp 127.0.0.1:3112: connect: connection refused\\n\"\n
Note - in order to run this second test, the easiest way to do so is to remove and reinstall the snap vs. manually wiping out device-virtual's configuration in Consul. I could have also stopped the service, modified the configuration directly in Consul, and restarted the service.
"},{"location":"design/adr/0018-Service-Registry/#registry-client-interface-usage","title":"Registry Client Interface Usage","text":"Next the service's usage of the go-mod-registry Client
interface was examined:
type Client interface {\n // Registers the current service with Registry for discover and health check\n Register() error\n\n // Un-registers the current service with Registry for discover and health check\n Unregister() error\n\n // Simply checks if Registry is up and running at the configured URL\n IsAlive() bool\n\n // Gets the service endpoint information for the target ID from the Registry\n GetServiceEndpoint(serviceId string) (types.ServiceEndpoint, error)\n\n // Checks with the Registry if the target service is available, i.e. registered and healthy\n IsServiceAvailable(serviceId string) (bool, error)\n}\n
"},{"location":"design/adr/0018-Service-Registry/#summary","title":"Summary","text":"If a device service is started with the registry flag set:
IsServiceAvailable
) on startup. Regardless of the registry setting, the Go SDK always sources the addresses of its dependent services from the Client* configuration stanzas.The same approach was used for Core and Support services (i.e. reviewing the usage of go-mod-bootstrap's Client
interface), and ironically, the SMA seems to be the only service in edgex-go that actually queries the registry for service location:
./internal/system/agent/getconfig/executor.go: ep, err := e.registryClient.GetServiceEndpoint(serviceName)\n./internal/system/agent/direct/metrics.go: e, err := m.registryClient.GetServiceEndpoint(serviceName)\n
In summary, other than the SMA's configuration and metrics logic, the Core and Support services behave in the same manner as device-sdk-go.
Note - the SMA also has a longstanding issue #2486 where it continuousy logs errors if one (or more) of the Support Services are not running. As described in the issue, this could be avoided if the SMA used the registry to determine if the services were actually available. See related issue #1662 ('Look at Driving \"Default Services List\" via Configuration').
"},{"location":"design/adr/0018-Service-Registry/#security-proxy-setup","title":"Security Proxy Setup","text":"The security-proxy-setup service also relies on static service address configuration to configure the server routes for each of the services accessible through the API Gateway (aka Kong). Although it uses the same TOML-based client config keys as the other services, these configuration values are only ever read from the security-proxy-setup's local configuration.toml file, as the security services have never supported using our configuration provider (aka Consul).
Note - Another point worth mentioning with respect to security services is that in the Geneva and Hanoi releases the service health checks registered by the services (and the associated IsServiceAvailable
method) are used to orchestrate the ordered startup of the security services via a set of Consul scripts. This additional orchestration is only performed when EdgeX is deployed via docker, and is slated to to be removed as part of the Ireland release.
After a bit of research reaching as far back as the California (0.6.1) release of EdgeX, I've managed to piece together why the current implementation works the way it does. This history focues solely on the core and support services.
The California release of EdgeX was released in June of 2018 and was the first to include services written using Go. This version of EdgeX as well as versions through the Fuji release all relied on a bootstrapping service called core-config-seed which was responsible for seeding the configuration of all of the core and support services into Consul prior to any of the services being started.
This release actually preceded usage of TOML for configuration files, and instead just used a flat key/value format, with keys converted from legacy Java property names (e.g. meta.db.device.url ) to Camel[Pascal]/Case (e.g. MetaDeviceServiceURL).
I chose the config key mentioned above on purpose:
MetaDeviceURL = \"http://edgex-core-metadata:48081/api/v1/device\"\n
Not only did this config key provide the address of core metadata, it also provided the path of a specific REST endpoint. In later releases of EdgeX, the address of the service and the specific endpoint paths were de-coupled. Instead of following the Service Name design (which was finalized two months earlier), the initial implementation followed the legacy Java implementation and initialized its service clients for each required REST endpoint (belonging to another EdgeX service) directly from the associated *URL config key read from Consul (if enabled) or directly from the configuration file.
The shared client initialization code also created an Endpoint monitor goroutine and passed it a go channel channel used by the service to receive updates to the REST API endpoint URL. This monitor goroutine effectively polled Consul every 15s (this became configurable in later versions) for the client's service address and if a change was detected, would write the updated endpoint URL to the given channel, effectively ensuring that the service started using the new URL.
It wasn't till late in the Geneva development cycle that I noticed log messages which made me aware of the fact that every one of our services was making a REST call to check the address of a service endpoint every 15s, for every REST endpoint it used! An issue was filed (https://github.com/edgexfoundry/edgex-go/issues/2594), and the client monitoring was removed as part of the Geneva 1.2.1 release.
"},{"location":"design/adr/0018-Service-Registry/#problem-statement","title":"Problem Statement","text":"The fundamental problem with the existing implementations (as decribed above), is that there is too much duplication of configuration across services. For instance, Core Data's service port can easily be changed by passing the environment variable SERVICE_PORT to the service on startup. This overrides the configuration read from the configuration provider, and will cause Core Data to listen on the new port, however it has no impact on any services which use Core Data, as the client config for each is read from the configuration provider (excluding security-proxy-setup).
This means in order to change a service port, environment variable overrides (e.g. CLIENTS_COREDARA_PORT) need to set for every client service as well as security-proxy-setup (if required).
"},{"location":"design/adr/0018-Service-Registry/#decision","title":"Decision","text":"Update the core, support, and security-proxy-setup services to use go-mod-registry's Client.GetServiceEndpoint
method (if started with the --registry
option) to determine (a) if a service dependency is available and (b) use the returned address information to initialize client endpoints (or setup the correct route in the case of proxy-setup). The same changes also need to be applied to the App Functions SDK and Go Device SDK, with only minor changes required in the C Device SDK (see previous commments re: the current implementation).
Note - this design only works if service registration occurs before the service initializes its clients. For instance, Core Data and Core Metadata both depend on the other, and thus if both defer service registration till after client initialization, neither will be able to successfully lookup the address of the other service.
"},{"location":"design/adr/0018-Service-Registry/#consquences","title":"Consquences","text":"One impact of this decision is that since the security-proxy-setup service currently runs before any of the core and support services are started, it would not be possible to implement this proposal without also modifying the service to use a lazy initialization of the API Gateway's routes. As such, the implementation of this ADR will require more design work with respect to security-proxy-setup. Some of the issues include:
--registry
command-line support to security-proxy-setup).Route
entries use service-keys instead of arbitrary names (e.g. (Route.core-data
vs. Route.CoreData
).Approved by TSC Vote on 4/28/22
"},{"location":"design/adr/0023-North-South-Messaging/#context-and-proposed-design","title":"Context and Proposed Design","text":"Today, data flowing from sensors/devices (the \u201csouthside\u201d) through EdgeX to enterprise applications, databases and cloud-based systems (the \u201cnorthside\u201d) can be accomplished via REST or Message bus. That is, sensor or device data collected by a device service can be sent via REST or message bus to core data. Core data then relays the data to application services via message bus, but the sensor data can also be sent directly from device services to application services via message bus (bypassing core data). The message bus is implemented via Redis Pub/Sub (default) or via MQTT. From the application services, data can be sent to northside endpoints in any number of ways \u2013 including via MQTT.
So, in summary, data can be collected from a sensor or device and be sent from the southside to the northside entirely using message bus technology when desired.
Today, communications from a 3rd party system (enterprise application, cloud application, etc.) to EdgeX in order to acuate a device or get the latest information from a sensor is accomplished via REST. The 3rd party system makes a REST call of the command service which then relays a request to a device service also using REST. There is no built in means to make a message-based request of EdgeX or the devices/sensors it manages. Note, these REST calls are optionally made via the API Gateway in order to provide access control.
In a future release of EdgeX, there is a desire to allow 3rd party systems to make requests of the southside via message bus. Specifically, a 3rd party system will send a command request to the command service via external message broker. The command service would then relay the request via message bus to the managing device service via one of the allowed internal message bus implementations (which could be MQTT or Redis Pub/Sub today). The device service would use the message to trigger action on the device/sensor as it does when it receives a REST request, and respond via message bus back to the command service. In turn, the command service would relay the response to the 3rd party system via external message bus.
In summary, this ADR proposes that the core command service adds support for an external MQTT connection (in the same manner that app services provide an external MQTT connection), which will allow it to act as a bridge between the internal message bus (implemented via either MQTT or Redis Pub/Sub) and external MQTT message bus.
Note
For the purposes of this initial north-to-south message bus communications, external 3rd party communications to the command service will be limited to use of MQTT.
"},{"location":"design/adr/0023-North-South-Messaging/#core-command-as-message-bus-bridge","title":"Core Command as Message Bus Bridge","text":"The core command service will serve as the EdgeX entry point for external, north-to-south message bus requests to the south side.
3rd party systems should not be granted access to the EdgeX internal message bus. Therefore, in order to implement north to south communications via message bus (specifically MQTT), the command service needs to take messages from the 3rd party or external MQTT topics and pass them internally onto the EdgeX internal message bus where they can eventually be routed to the device services and then on to the devices/sensors (southside).
In reverse, response messages from the southside will also be sent through the internal EdgeX message bus to the command service where they can then be bridged to the external MQTT topics and respond to the 3rd party system requester.
Note
Note that eKuiper is allowed access directly to the internal EdgeX message bus. This is a special circumstance of 3rd party external system communication as eKuiper is a sister project that is deemed the EdgeX reference implementation rules engine. In future releases of EdgeX, even eKuiper may be routed through an external to internal message bus bridge for better decoupling and security.
"},{"location":"design/adr/0023-North-South-Messaging/#message-bus-subscriptions-and-publishing","title":"Message Bus Subscriptions and Publishing","text":"The command service will require the means to publish messages to device services via the EdgeX message bus (internal message bus). It would use the messaging client (go-mod-messaging) to create a new MessageClient, connect to the message bus, and publish to designated request message topics (see topic configuration below).
The command service will also need to connect to the EdgeX message bus (internal message bus) in order to receive responses from the device services after a request by message bus has been made. Again, core command will use the go-mod-messaging MessageClient to subscribe and receive response messages from the device services.
In a similar fashion, device services will need to both subscribe and publish to the EdgeX message bus (internal message bus) to get command requests and push back any responses to the command service. Go lang device services will, like the command service, use the go-mod-messaging module and MessagingClient to get command requests and send command responses to and from the EdgeX message bus. C based device services will use a C alternative to subscribe and publish to the EdgeX message bus (internal message bus). Note, device services already use go-mod-messaging when publishing events/readings to the message bus (internal message bus).
The command service will also need to subscribe to 3rd party MQTT topics (external message bus) in order to get command requests from the 3rd party system. The command service will then relay command requests on to the appropriate device service via the internal message bus (forming the message bus to message bus bridge). Likewise, the command service will accept responses from the device services on the EdgeX message bus (internal message bus) and then publish responses to the 3rd party system via the 3rd party MQTT topics (external message bus).
"},{"location":"design/adr/0023-North-South-Messaging/#command-queries-via-command-service","title":"Command Queries via Command Service","text":"Today, 3rd party systems can make a REST call of core command to get the possible commands that can be executed. There are two query REST API endpoints: /device/all (to get the commands for all devices) and device/name/{name} (to get the commands for a specific device by name).
It stands to reason that if a 3rd party system wants to send commands via messaging, it would also want to get an understanding of what commands are available via messaging. For this reason, the core command service will also allow message requests to get all commands or to get the commands for a particular device by name. In other words, the core command service must support command \"queries\" via messaging just as it supports command requests via messaging.
In the case of command queries, the REST responses include the actual REST command endpoints. For example, the REST query would return core command paths, urls and parameters used to construct REST command requests (as shown in the example below).
\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"get\": true,\n\"path\": \"/api/v2/device/name/testDevice1/command/coolingpoint1\",\n\"url\": \"http://localhost:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n}\n]\n}\n]\n
When using messaging to make the \"queries\" the response message must return information about how to pass a message to the appropriate topic to make the command request. Therefore, the query response when using messaging would include something like the following:
\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint/get\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n} ]\n},\n{\n\"name\": \"coolingpoint1\",\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint1/set\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n} ]\n}\n]\n
Note
Per Core WG meeting of 4/7/22 - the JSON above serves as a general example. The implementation will have to address get/set (or read/write) differentiation, but this is considered an implementation detail to be resolved by the developers.
Note
The query response does not contain a URL since it is assumed that the broker address must already be known in order to make the query.
"},{"location":"design/adr/0023-North-South-Messaging/#message-structure","title":"Message Structure","text":"In REST based command requests (and responses), the HTTP request line contains important information such as the path or target of the request, and the HTTP method type (indicating a GET or PUT request). The HTTP status line provides the information such as the response code (ex: 200 for OK). The body or payload of the HTTP message contains the request details (such as parameters to a device PUT call) or response information (such as events and associated readings from a GET call).
Since most message bus protocols lack a generic message header mechanism (as in HTTP), providing request/response metadata is accomplished by defining a message
envelope object associated with each request/response. Therefore, messages described in this ADR must provide JSON envelope
and payload
objects for each request/response.
The message topic names act like the HTTP paths and methods in REST requests. That is, the topic names specify the device receiver of any command request as paths do in the HTTP requests.
"},{"location":"design/adr/0023-North-South-Messaging/#message-envelope","title":"Message Envelope","text":"The messages defined in this ADR are JSON formatted requests and responses that share a common base structure. The outer most JSON object represents the message envelope
, which is used to convey metadata about request/response (e.g. a correlation identifier which will be added to any relayed request message as well as the response message envelope so that the 3rd party system will know to associate the responses to the original request).
Note
A Correlation ID (see this article for a more detailed description) is a unique value that is added to every request and response involved in a transaction, which could include multiple requests/responses between one or more microservices. It's not meant to correlate requests to responses; it's meant to label every message involved in a potentially multi-request transaction.
A Request ID should be an identifier returned on the response to a request (providing traceability between single request/response).
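To make the distinction concrete, below is a small illustrative sketch of stamping a new request with both identifiers, assuming the github.com/google/uuid package; the struct and function names are examples only, not part of the ADR.

package main

import (
    "fmt"

    "github.com/google/uuid"
)

type envelope struct {
    CorrelationID string `json:"Correlation-ID"` // shared by every message in the transaction
    RequestID     string `json:"requestId"`      // unique per request, echoed on its response
    APIVersion    string `json:"apiVersion"`
}

func newRequestEnvelope(correlationID string) envelope {
    if correlationID == "" {
        // First message in the transaction: start a new correlation ID.
        correlationID = uuid.New().String()
    }
    return envelope{
        CorrelationID: correlationID,
        RequestID:     uuid.New().String(), // always new for each request
        APIVersion:    "v2",
    }
}

func main() {
    first := newRequestEnvelope("")
    relayed := newRequestEnvelope(first.CorrelationID) // a relayed request keeps the correlation ID
    fmt.Println(first.CorrelationID == relayed.CorrelationID, first.RequestID == relayed.RequestID)
}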
The envelope
will also contain the API version (something provided in the HTTP path when using REST).
Command requests in HTTP may also contain ds-pushevent and ds-returnevent query parameters (for GET commands). These will be optionally provided key/value pairs represented in the message envelope
's query parameters (and optionally allow for other parameters in the future).
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"API\":\"V2\",\n\"queryParams\": {\n\"ds-pushevent\":\"true\",\n\"ds-returnevent\":\"true\",\n}\n...\n}\n
Note
As with REST requests, if the ds-returnevent query parameter was set to false
, then a message envelope would be returned, but with no payload, as there would be no events to return.
The request message payload
to the command service and those relayed to the device service would mimic their HTTP/REST request body alternatives. The payload
provides details needed in executing the command at the south side.
In the example GET and PUT messages below, note the envelope
wraps or encases the message payload
. The payload may be empty (as is typical of GET requests).
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"queryParams\": {\n\"ds-pushevent\":\"true\",\n\"ds-returnevent\":\"true\",\n}\n}\n\n{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"payload\": {\n\"AHU-TargetTemperature\": \"28.5\",\n\"AHU-TargetBand\": \"4.0\",\n\"AHU-TargetHumidity\": {\n\"Accuracy\": \"0.2-0.3% RH\",\n\"Value\": 59\n}\n}\n}\n
Note
The payload could be empty and is therefore optional in the message structure, as exemplified in the first example above.
The response message payload
would contain the response from the south side, which is typically EdgeX event/reading objects (in the case of GET requests) but would also include any error message details.
Example response messages for a GET and PUT request are shown below. Again, note that the message envelope
wraps the response payload
.
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"errorCode\": 0,\n\"payload\": {\n\"event\": {\n\"apiVersion\": \"v2\",\n\"id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n\"deviceName\": \"string\",\n\"profileName\": \"string\",\n\"created\": 0,\n\"origin\": 0,\n\"readings\": [\n\"string\"\n],\n\"tags\": {\n\"Gateway-id\": \"HoustonStore-000123\",\n\"Latitude\": \"29.630771\",\n\"Longitude\": \"-95.377603\"\n}\n}\n}\n}\n\n{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"errorCode\": 1,\n\"payload\": {\n\"message\": \"string\"\n}\n}\n
Note
Get command responses may include CBOR data. The message envelope (which has a content type indicator) will indicate whether the payload is CBOR or JSON. The same message envelope content type indicator that is used in REST communications will be used in these message bus communications.
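A sketch of how a subscriber might branch on that content type indicator is shown below. The use of the github.com/fxamacker/cbor package for CBOR decoding is an assumption for illustration; the actual CBOR handling in EdgeX services may differ.

package main

import (
    "encoding/json"
    "fmt"

    "github.com/fxamacker/cbor/v2"
)

// decodePayload decodes a response payload based on the envelope's content type indicator.
func decodePayload(contentType string, payload []byte) (map[string]interface{}, error) {
    result := map[string]interface{}{}
    switch contentType {
    case "application/cbor":
        if err := cbor.Unmarshal(payload, &result); err != nil {
            return nil, fmt.Errorf("cbor decode failed: %w", err)
        }
    case "application/json":
        if err := json.Unmarshal(payload, &result); err != nil {
            return nil, fmt.Errorf("json decode failed: %w", err)
        }
    default:
        return nil, fmt.Errorf("unsupported content type %q", contentType)
    }
    return result, nil
}

func main() {
    out, err := decodePayload("application/json", []byte(`{"errorCode":0}`))
    fmt.Println(out, err)
}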
Alert
Open discussions per working group meetings and reviews...
One open item is whether to keep the numeric errorCode in the response or to use a simple error
boolean and then have the message indicate the error condition. The request message payload
to query the command service would mimic their HTTP/REST request body alternatives. The payload
provides details needed in executing the command at the south side.
In the example query to get all commands below, note the envelope
wraps or encases the message payload
. The payload will be empty. The query parameters will include the offset and limit (as per the REST counterparts).
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"queryParams\": {\n\"offset\":0,\n\"limit\":20,\n}\n}\n\nIn the example query to get commands for a specific device by name, the device name would be in the topic, so the query message would be without information (and removed from the message as queryParams will be optional).\n\n{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n}\n
The response message payload
for queries would contain the information necessary to make a message-based command request.
An example response message is shown below. Again, note that the message envelope
wraps the response payload
.
{\n\"Correlation-ID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"apiVersion\": \"v2\",\n\"requestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"errorCode\": 0,\n\"payload\": {\n\"apiVersion\": \"v2\",\n\"deviceCoreCommands\": [\n{\n\"deviceName\": \"testDevice1\",\n\"profileName\": \"testProfile\",\n\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"get\": true,\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint1/get\",\n\"url\": \"broker.address:1883\",\n\"parameters\": [\n{\n\"resourceName\": \"resource1\",\n\"valueType\": \"Int32\"\n}\n]\n}\n]\n},\n{\n\"deviceName\": \"testDevice1\",\n\"profileName\": \"testProfile\",\n\"coreCommands\": [\n{\n\"name\": \"coolingpoint1\",\n\"set\": true,\n\"topic\": \"/edgex/command/request/testDevice1/coolingpoint1/set\",\n\"url\": \"broker.address:1883\",\n\"parameters\": [\n{\n\"resourceName\": \"resource5\",\n\"valueType\": \"String\"\n},\n{\n\"resourceName\": \"resource6\",\n\"valueType\": \"Bool\"\n}\n]\n}\n]\n}\n]\n}\n}\n
"},{"location":"design/adr/0023-North-South-Messaging/#topic-naming","title":"Topic Naming","text":""},{"location":"design/adr/0023-North-South-Messaging/#3rd-party-system-topics","title":"3rd party system topics","text":"The 3rd party system or application must publish command requests messages to an EdgeX specified MQTT topic (external message bus) and subscribe to responses from the same. Messages topics should follow the following pattern:
/edgex/command/request/<device-name>/<command-name>/<method>
/edgex/command/response/#
For queries, the following topics are used - Publishing query command request topic: /edgex/commandquery/request
- Subscribing query command response topic: /edgex/commandquery/response
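For illustration, here is a sketch of what a 3rd party requester might look like using the Eclipse Paho Go MQTT client. The broker address, client ID, topics and payload are assumptions taken from the examples in this ADR, not a prescribed client implementation.

package main

import (
    "fmt"
    "time"

    mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
    opts := mqtt.NewClientOptions().
        AddBroker("tcp://localhost:1883").
        SetClientID("third-party-requester")
    client := mqtt.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }

    // Listen for any command responses first.
    if token := client.Subscribe("/edgex/command/response/#", 0, func(_ mqtt.Client, msg mqtt.Message) {
        fmt.Printf("response on %s: %s\n", msg.Topic(), msg.Payload())
    }); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }

    // Publish a GET request for a device command; the envelope mirrors the examples earlier in this ADR.
    request := `{"Correlation-ID":"14a42ea6-c394-41c3-8bcd-a29b9f5e6835","apiVersion":"v2","requestId":"e6e8a2f4-eb14-4649-9e2b-175247911369"}`
    client.Publish("/edgex/command/request/testDevice1/coolingpoint1/get", 0, false, request).Wait()

    time.Sleep(5 * time.Second) // crude wait for the asynchronous response
    client.Disconnect(250)
}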
The command service must subscribe to the request topics on the 3rd party MQTT broker (external message bus) to get command requests, publish those requests to a topic on the EdgeX message bus (internal message bus) to send them to a device service, subscribe to response messages on topics from device services (internal), and then publish response messages to a topic on the 3rd party MQTT broker (external). Message topics for the command service follow this standard:
edgex/command/request/#
edgex/command/request/<device-service>/<device-name>/<command-name>/<method>
edgex/command/response/#
edgex/command/response/<device-name>/<command-name>/<method>
For queries, the following topics are used:
edgex/commandquery/request
edgex/commandquery/response
The device services must subscribe to the EdgeX command request topic (internal message bus) and publish response messages to an EdgeX command response topic. The following naming standard will be applied to these topic names:
edgex/command/request/#
edgex/command/response/<device-service>/<device-name>/<command-name>/<method>
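The sketch below illustrates how a device service might pull the routing information out of an incoming command request topic. The topic layout comes from the naming standard above; the helper names and simplified error handling are illustrative only.

package main

import (
    "fmt"
    "strings"
)

type commandRoute struct {
    DeviceService string
    DeviceName    string
    CommandName   string
    Method        string
}

// parseCommandTopic expects edgex/command/request/<device-service>/<device-name>/<command-name>/<method>
func parseCommandTopic(topic string) (commandRoute, error) {
    parts := strings.Split(strings.Trim(topic, "/"), "/")
    if len(parts) != 7 || parts[0] != "edgex" || parts[1] != "command" || parts[2] != "request" {
        return commandRoute{}, fmt.Errorf("unexpected command request topic: %s", topic)
    }
    return commandRoute{
        DeviceService: parts[3],
        DeviceName:    parts[4],
        CommandName:   parts[5],
        Method:        parts[6],
    }, nil
}

func main() {
    route, err := parseCommandTopic("edgex/command/request/device-mqtt/testDevice1/coolingpoint1/set")
    fmt.Println(route, err)
}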
Both the EdgeX command service and the device services must contain configuration needed to connect to and publish/subscribe to messages from topics on the EdgeX message bus (internal). This includes configuration to access the message bus when secure or insecure.
The command service must also be provided configuration to connect to the 3rd party MQTT broker's topics (external). Because the communications may be done in a secure or insecure fashion, the core command service will need to be provided access to the 3rd party MQTT broker (external).
Similar to EdgeX application services, the command service will have access to an external MQTT broker to get command requests and send 3rd parties a response. This will require the command service to have two message queue configuration settings (internal and external).
"},{"location":"design/adr/0023-North-South-Messaging/#command-service-configuration","title":"command service configuration","text":"Example command service configuration is provided below.
[MessageQueue]\n[InternalMessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nRequestTopicPrefix = \"edgex/command/request/\" # for publishing requests to the device service; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\nResponseTopic = \"edgex/command/response/#\" # for subscribing to device service responses\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\n[ExternalMQTT]\nProtocol = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nRequestCommandTopic = \"edgex/command/request/#\" # for subscribing to 3rd party command requests\nResponseCommandTopicPrefix = \"edgex/command/response/\" # for publishing responses back to 3rd party systems; <device-name>/<command-name>/<method> will be added to this publish topic prefix\nRequestQueryTopic = \"edgex/commandquery/request\"\nResponseQueryTopic = \"edgex/commandquery/response\"\n
Note
Core command contains no MessageQueue configuration today. This is all additive/new configuration and therefore backward compatible with EdgeX 2.x implementations.
"},{"location":"design/adr/0023-North-South-Messaging/#device-service-configuration","title":"device service configuration","text":"Example device service configuration is provided below.
[MessageQueue]\n## already existing message queue configuration (for sending events/readings to the message bus)\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\nPublishTopicPrefix = \"edgex/events/device\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\n# Client Identifiers\nClientId = \"device-rest\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\n\n## new configuration to allow device services to also communicate via message bus with core command\nCommandRequestTopic = \"edgex/command/request/#\" # subscribing for inbound command requests\nCommandResponseTopicPrefix = \"edgex/command/response/\" # publishing outbound command responses; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\n
Note
Most of the device service configuration above already exists, since device services already communicate with the message bus to publish events/readings. The last two lines are added to allow device services to subscribe and publish command messages from/to the message bus.
"},{"location":"design/adr/0023-North-South-Messaging/#edgex-service-internal-message-bus-requests","title":"EdgeX Service (Internal) Message Bus Requests","text":"Application services (or other EdgeX services in the future) may want to also use message communications to make command requests. Application services make command requests today via REST.
In order to support this, the following need to be added:
The command service will also need an internal request topic and internal response topic prefix configuration to allow internal EdgeX services to make command requests (and query requests).
[MessageQueue]\n[InternalMessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nRequestTopicPrefix = \"edgex/command/request/\" # for publishing requests to the device service; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\nResponseTopic = \"edgex/command/response/#\" # for subscribing to device service responses\nInternalRequestCommandTopic = \"/command/request/#\" # for subscribing to internal command requests\nInternalResponseCommandTopicPrefix = \"/command/response/\" # for publishing responses back to internal services; <device-name>/<command-name>/<method> will be added to this publish topic prefix\nInternalRequestQueryTopic = \"/commandquery/request\"\nInternalResponseQueryTopic = \"/commandquery/response\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\n
A new command message client will need to be created to allow internal services (app services in this instance) to conveniently use the message bus communications with core command. The Clients configuration will also be expanded to include the corresponding topics and a UseMessageBus
flag that enables the new messaging-based CommandClient to be created. Example client configuration would look something like the following:
[Clients]\n[Clients.core-command]\nUseMessageBus = true\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nCommandRequestTopicPrefix = \"/command/request\" # /<device-name>/<command-name>/<method> will be added to this publish topic prefix\nCommandResponseTopic = \"/command/response/#\"\nCommandQueryRequestTopic = \"/commandquery/request\"\nCommandQueryResponseTopic = \"/commandquery/response\"\n
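Since the ADR only states that such a messaging-based CommandClient will need to be created, the following is a purely hypothetical sketch of what its interface could look like; every name and signature here is an assumption, not an actual API.

package commandclient

import "context"

// CommandClient hides whether command requests travel over REST or the message bus.
type CommandClient interface {
    // AllDeviceCoreCommands mirrors the GET /device/all query.
    AllDeviceCoreCommands(ctx context.Context, offset, limit int) ([]byte, error)
    // DeviceCoreCommandsByDeviceName mirrors the GET /device/name/{name} query.
    DeviceCoreCommandsByDeviceName(ctx context.Context, deviceName string) ([]byte, error)
    // IssueGetCommandByName publishes a get request and waits for the correlated response.
    IssueGetCommandByName(ctx context.Context, deviceName, commandName string, dsPushEvent, dsReturnEvent bool) ([]byte, error)
    // IssueSetCommandByName publishes a set request with the given settings payload.
    IssueSetCommandByName(ctx context.Context, deviceName, commandName string, settings map[string]string) error
}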
Do we need separate topics for all the devices or would one on the device service suffice?
Would clients (non EdgeX services and applications) want to get a list of available commands via message (instead of calling REST)?
Dynamic configuration of the message subscription is not a user friendly operation today (requiring configuration changes).
Is it acceptable for more than one response to be published by the device service on the same correlation ID? E.g., send back \"Acknowledged\", then \"Scheduled\", then \"Starting\", then \"Done\" statuses?
Would it make sense to echo the command name into the response, as a reality check?
Would sending/receiving binary data (e.g. CBOR) be supported in this north-south message implementation?
Use of the message bus communications (by the non-EdgeX 3rd party service or application) would bypass the API Gateway.
Note a number of open questions in the Message Structure section that still need to be addressed.
Alert
Per TSC meeting of 4/27/22 - the discussion around error response was reopened. There is still some polite disagreement as to whether to keep the error response simple (as documented in this ADR) or to offer errorCode enumerations similar to HTTP response codes for common problems. As part of this discussion, the question is whether the error code enumerations should be exactly those of the HTTP response codes (400, 404, 423, 500, etc.) or more generic (i.e., non-HTTP) response error codes unique to this implementation.
The resolution to this question was to explore some options at implementation time. The use of an enumeration (HTTP or other) can be explored during development and options brought forth via PR.
Info
This ADR does not handle securing the message bus communications between services. This need is to be covered universally in an upcoming ADR.
"},{"location":"design/adr/0023-North-South-Messaging/#future-considerations","title":"Future Considerations","text":"System Events, aka Control Plane Events (CPE), are new to EdgeX. This ADR addresses the System Events for Devices use case with an extensible design that can address other System Event use cases that may be identified in the future. This extensible design approach and the fact that System Events are produced and consumed by different EdgeX services makes it architecturally significant warranting this ADR.
"},{"location":"design/adr/0024-system-events/#proposed-design","title":"Proposed Design","text":"To address the System Events for Devices use case, Core Metadata will publish a new SystemEvent
DTO to the EdgeX MessageBus when a device is added, updated or deleted. Consumers of these System Events will subscribe to the MessageBus to receive the new SystemEvent
DTO.
This new SystemEvent
DTO will contain the following data describing the System Event:
ObjectValue
in Reading
DTO
Note
As defined, this DTO should suffice for future System Event use cases.
"},{"location":"design/adr/0024-system-events/#messagebus","title":"MessageBus","text":"Services that publish System Events (Core Metadata) must connect to the EdgeX MessageBus and have MessageBus configuration similar to that of Core Data's here. This design assumes that Core Metadata will have this capability and configuration due to planned implementation of Service Metrics.
The PublishTopicPrefix
property in Core Metadata's MessageQueue
configuration will be used for System Events and set to edgex/system-event
.
The new SystemEvent
DTO will be published to a multi-level topic allowing subscribers to filter by topic. The format of this topic for System Events will be:
\u200b {PublishTopicPrefix}/{source}/{type}/{action}
where
{source}
= Publisher of the System Event, i.e. core-metadata
{type}
= Type of System Event, i.e. device
{action}
= The Action that triggered the System Event, i.e. add
Specific use cases may add additional levels as needed. The Device System Events use case will add the following levels
{owner}
= Owner of the data for the System Event, i.e. device-onvif-camera as the device owner
{profile}
= Device profile associated with the Device, i.e. onvif-camera
Example - System Event subscription topics
edgex/system-event/# - All system events\nedgex/system-event/core-metadata/# - only system events from Core Metadata\nedgex/system-event/core-metadata/device/# - only device system events from Core Metadata\nedgex/system-event/core-metadata/device/add/device-onvif-camera/# - only add device system events for device-onvif-camera\nedgex/system-event/core-metadata/device/#/#/onvif-camera - only device system events for devices created for the onvif-camera device profile\n
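A short sketch of composing the System Event publish topic from the levels defined above is shown below; the values are the examples used in this ADR.

package main

import (
    "fmt"
    "strings"
)

// systemEventTopic joins the topic levels described above into the full publish topic.
func systemEventTopic(prefix, source, eventType, action, owner, profile string) string {
    return strings.Join([]string{prefix, source, eventType, action, owner, profile}, "/")
}

func main() {
    topic := systemEventTopic("edgex/system-event", "core-metadata", "device", "add",
        "device-onvif-camera", "onvif-camera")
    fmt.Println(topic)
    // edgex/system-event/core-metadata/device/add/device-onvif-camera/onvif-camera
}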
"},{"location":"design/adr/0024-system-events/#consumers","title":"Consumers","text":"Consumers of Device System Events will likely be custom application services as described in System Events for Devices . No changes are required to the App Functions SDK since it already supports processing of different types via the Target Type capability. Developers of custom application services that consume System Events will need to do the following:
&dtos.SystemEvent{}
when creating an instance of ApplicationService
using the NewAppServiceWithTargetType factory function.
SystemEvent
DTO and process it accordingly. Similar to how the ToLineProtocol pipeline function expects the Metric DTO.
SystemEvent
DTO will be added to this repository.
This design will satisfy the System Events for Devices use case as well as possibly other future System Event use cases.
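Below is a sketch of a custom application service that sets its TargetType to the SystemEvent DTO, following the steps above. The module paths, function names and DTO fields reflect the App Functions SDK and core contracts as best understood for this release line and should be treated as assumptions, not the authoritative example.

package main

import (
    "fmt"
    "os"

    "github.com/edgexfoundry/app-functions-sdk-go/v2/pkg"
    "github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces"
    "github.com/edgexfoundry/go-mod-core-contracts/v2/dtos"
)

func main() {
    // TargetType tells the SDK to unmarshal incoming messages into SystemEvent DTOs.
    service, ok := pkg.NewAppServiceWithTargetType("app-system-event-consumer", &dtos.SystemEvent{})
    if !ok {
        os.Exit(1)
    }

    if err := service.SetDefaultFunctionsPipeline(processSystemEvent); err != nil {
        service.LoggingClient().Errorf("failed to set pipeline: %v", err)
        os.Exit(1)
    }

    if err := service.MakeItRun(); err != nil {
        service.LoggingClient().Errorf("service stopped: %v", err)
        os.Exit(1)
    }
}

// processSystemEvent is the pipeline function that receives each SystemEvent DTO.
func processSystemEvent(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {
    systemEvent, castOk := data.(*dtos.SystemEvent)
    if !castOk {
        return false, fmt.Errorf("expected SystemEvent, got %T", data)
    }
    ctx.LoggingClient().Infof("received %s %s system event from %s",
        systemEvent.Action, systemEvent.Type, systemEvent.Source)
    return false, nil
}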
"},{"location":"design/adr/0024-system-events/#other-related-adrs","title":"Other Related ADRs","text":"This ADR describes the architecture of the new common configuration capability which impacts all services. Requirements for this new capability are described in the above referenced UCR. This is deemed architecturally significant due to the cross-cutting impacts.
"},{"location":"design/adr/0026-Common%20Configuration/#current-design","title":"Current Design","text":"The following flow chart demonstrates the bootstrapping of each services' configuration in the current Levski release.
"},{"location":"design/adr/0026-Common%20Configuration/#proposed-design","title":"Proposed Design","text":"The configuration settings that are common to all services will be partitioned out into a separate common configuration source. This common configuration source will be pushed into the Configuration Provider by the new core-common-config-bootstrapper
service.
During bootstrapping, each service will either load the common configuration from the Configuration Provider or via URI to some endpoint that provides the common configuration. Each service will have additional private configuration, which may override and/or extend the common configuration.
An additional common configuration setting must be present to indicate all other common settings have been pushed to the Configuration Provider. This setting is stored last and the services must wait for this setting to be present prior to pulling the common settings.
Environment overrides are only applied when configuration is loaded from file. The overridden values are pushed into the Configuration Provider, when used.
"},{"location":"design/adr/0026-Common%20Configuration/#common-config-bootstrapping","title":"Common Config Bootstrapping","text":"The following flow chart demonstrates the bootstrapping (seeding) of the common configuration when using the Configuration Provider.
"},{"location":"design/adr/0026-Common%20Configuration/#service-configuration-bootstrapping","title":"Service Configuration Bootstrapping","text":"The following flow chart demonstrates the bootstrapping of each services' configuration with this new common configuration capability.
"},{"location":"design/adr/0026-Common%20Configuration/#secret-store-configuration","title":"Secret Store Configuration","text":"As part of this design, the Secret Store configuration is being removed from the service configuration (common and private). This is so the Secret Provider can be instantiated prior to processing the service's configuration which may require the Secret Provider. The Secret Store configuration will now be a combination of default values and environment variable overrides. These environment variables will be the same as the ones that are currently used to override the configuration.
"},{"location":"design/adr/0026-Common%20Configuration/#specifying-the-common-configuration-location","title":"Specifying the Common Configuration location","text":"If the -cp/--configProvider
command line option is used, the service will default to pulling the common configuration from a standard path in the Configuration Provider. i.e. edgex/3.0/common/
The -cp/--configProvider
option assumes the usage of the core-common-config-bootstrapper service and cannot be used with the -cc/--commonConfig
option.
The new -cc/--commonConfig
command line option will be added for all services. This option will take the URI that specifies where the common configuration is pulled when not using the Configuration Provider. Authentication will be limited to basic-auth
. In addition, a new environment override variable EDGEX_COMMON_CONFIG
will be added which allows overriding this new command line option.
If the -cp/--configProvider
option is not specified and the -cc/--commonConfig
option is not specified, then the service will start using solely the private configuration. In this scenario, any information in the common configuration must be added to the service's private configuration. The individual bootstrap handlers will need to be enhanced to detect an empty configuration for robust error messaging.
-cp/--configProvider
command line option or the EDGEX_CONFIG_PROVIDER
environment variable.
-cc/--commonConfig
command line option or the EDGEX_COMMON_CONFIG
environment variable may be specified using The Writable sections in common and in private configurations will be watched for changes when using the Configuration Provider. When changes to the common Writable are processed, each changed setting must be checked to see if the setting exists in the service's private section. The change will be ignored if the setting exists in the service's private section. This is so that the service's private overrides are always retained.
Changes to the service's private Writable section will be processed as is done currently.
"},{"location":"design/adr/0026-Common%20Configuration/#common-application-and-device-service-settings","title":"Common Application and Device service settings","text":"Any settings that are common to all Application Services and/or to all Device Services will be included in the single common configuration source. These settings will be ignored by services that don't use them when marshaled into the service's configuration struct.
"},{"location":"design/adr/0026-Common%20Configuration/#example-configuration-files","title":"Example Configuration Files","text":""},{"location":"design/adr/0026-Common%20Configuration/#common-configuration_1","title":"Common Configuration","text":"[Writable]\nLogLevel = \"INFO\"\n[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.DB]\npath = \"redisdb\"\n[Writable.InsecureSecrets.DB.Secrets]\nusername = \"\"\npassword = \"\"\n[Writable.Telemetry]\nInterval = \"30s\"\nPublishTopicPrefix = \"edgex/telemetry\" # /<service-name>/<metric-name> will be added to this Publish Topic prefix\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\n# Device SDK Common Service Metrics\nEventsSent = false\nReadingsSent = false\n# App SDK Common Service Metrics\nMessagesReceived = false\nInvalidMessagesReceived = false\nPipelineMessagesProcessed = false # Pipeline IDs are added as the tag for the metric for each pipeline defined\nPipelineMessageProcessingTime = false # Pipeline IDs are added as the tag for the metric for each pipeline defined\nPipelineProcessingErrors = false # Pipeline IDs are added as the tag for the metric for each pipeline defined\nHttpExportSize = false # Single metric used for all HTTP Exports\nMqttExportSize = false # BrokerAddress and Topic are added as the tag for this metric for each MqttExport defined \n# Common Security Service Metrics\nSecuritySecretsRequested = false\nSecuritySecretsStored = false\nSecurityConsulTokensRequested = false\nSecurityConsulTokenDuration = false\n[Writable.Telemetry.Tags] # Contains the service level tags to be attached to all the service's metrics\n# Gateway=\"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only chnage existing value, not added new ones.\n\n# Device Service specifc common Writable configuration\n[Writable.Reading]\nReadingUnits = true\n\n# Application Service specifc common Writable configuration\n[Writable.StoreAndForward]\nEnabled = false\nRetryInterval = \"5m\"\nMaxRetryCount = 10\n\n[Service]\nHealthCheckInterval = \"10s\"\nHost = \"localhost\"\nServerBindAddr = \"\" # Leave blank so default to Host value unless different value is needed.\nMaxResultCount = 1024\nMaxRequestSize = 0 # Not curently used. 
Defines the maximum size of http request body in bytes\nRequestTimeout = \"5s\"\n[Service.CORSConfiguration]\nEnableCORS = false\nCORSAllowCredentials = false\nCORSAllowedOrigin = \"https://localhost\"\nCORSAllowedMethods = \"GET, POST, PUT, PATCH, DELETE\"\nCORSAllowedHeaders = \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\"\nCORSExposeHeaders = \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\"\nCORSMaxAge = 3600\n\n[Registry]\nHost = \"localhost\"\nPort = 8500\nType = \"consul\"\n\n[Databases]\n[Databases.Primary]\nHost = \"localhost\"\nPort = 6379\nTimeout = 5000\nType = \"redisdb\"\n\n[MessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\"\nPort = 6379\nType = \"redis\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\"\n# Additional Default NATS Specific options that need to be here to enable evnironment variable overrides of them\nFormat = \"nats\"\nRetryOnFailedConnect = \"true\"\nQueueGroup = \"\"\nDurable = \"\"\nAutoProvision = \"true\"\nDeliver = \"new\"\nDefaultPubRetryAttempts = \"2\"\nSubject = \"edgex/#\" # Required for NATS Jetstram only for stream autoprovsioning\n\n# Device Service specifc common configuration\n[Device]\nDataTransform = true\nMaxCmdOps = 128\nMaxCmdValueLen = 256\nProfilesDir = \"./res/profiles\"\nDevicesDir = \"./res/devices\"\nEnableAsyncReadings = true\nAsyncBufferSize = 16\nLabels = []\nUseMessageBus = true\n[Device.Discovery]\nEnabled = false\nInterval = \"30s\"\n\n# Application Service specifc common configuration \n[Trigger]\nType=\"edgex-messagebus\"\n[Trigger.EdgexMessageBus]\nType = \"redis\"\n[Trigger.EdgexMessageBus.SubscribeHost]\nHost = \"localhost\"\nPort = 6379\nProtocol = \"redis\"\n[Trigger.EdgexMessageBus.PublishHost]\nHost = \"localhost\"\nPort = 6379\nProtocol = \"redis\"\n[Trigger.EdgexMessageBus.Optional]\nauthmode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nsecretname = \"redisdb\"\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\"\n# Default NATS Specific options that need to be here to enable environment variable overrides of them\nFormat = \"nats\"\nRetryOnFailedConnect = \"true\"\nQueueGroup = \"\"\nDurable = \"\"\nAutoProvision = \"true\"\nDeliver = \"new\"\nDefaultPubRetryAttempts = \"2\"\nSubject = \"edgex/#\" # Required for NATS JetStream only for stream auto provisioning\n
"},{"location":"design/adr/0026-Common%20Configuration/#core-data-private-configuration","title":"Core Data Private Configuration","text":"MaxEventSize = 25000 # Defines the maximum event size in kilobytes\n\n[Writable]\nPersistData = true\n[Writable.Telemetry]\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\n# Core Data Service Metrics\nEventsPersisted = false\nReadingsPersisted = false\n[Service]\nPort = 59880\nStartupMsg = \"This is the Core Data Microservice\"\n\n[Clients] # Core data no longer dependent on \"Client\" services. Other services will have thier specific clients here\n\n[Databases]\n[Databases.Primary]\nName = \"coredata\"\n\n[MessageQueue]\nPublishTopicPrefix = \"edgex/events/core\" # /<device-profile-name>/<device-name> will be added to this Publish Topic prefix\nSubscribeEnabled = true\nSubscribeTopic = \"edgex/events/device/#\" # required for subscribing to Events from MessageBus\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\nClientId =\"core-data\"\n
"},{"location":"design/adr/0026-Common%20Configuration/#app-rfid-llrp-inventory-private-configuration","title":"App RFID LLRP Inventory Private Configuration","text":"[Service]\nPort = 59711\nStartupMsg = \"RFID LLRP Inventory Service\"\n\n[Clients]\n[Clients.core-data]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59880\n\n[Clients.core-metadata]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59881\n\n[Clients.core-command]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59882\n\n[Trigger]\nType=\"edgex-messagebus\"\n[Trigger.EdgexMessageBus]\n[Trigger.EdgexMessageBus.SubscribeHost]\nSubscribeTopics=\"edgex/events/#/#/#/ROAccessReport,edgex/events/#/#/#/ReaderEventNotification\"\n[Trigger.EdgexMessageBus.PublishHost]\nPublishTopic=\"edgex/events/device/{profilename}/{devicename}/{sourcename}\" # publish to same topic format the Device Services use\n[Trigger.EdgexMessageBus.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\nClientId =\"app-rfid-llrp-inventory\"\n\n[AppCustom]\n# Every device(reader) + antenna port represents a tag location and can be assigned an alias\n# such as Freezer, Backroom etc. to give more meaning to the data. The default alias set by\n# the application has a format of <deviceName>_<antennaId> e.g. Reader-10-EF-25_1 where\n# Reader-10-EF-25 is the deviceName and 1 is the antennaId.\n# See also: https://github.com/edgexfoundry/app-rfid-llrp-inventory#setting-the-aliases\n#\n# In order to override an alias, set the default alias as the key, and the new alias as the value you want, such as:\n# Reader-10-EF-25_1 = \"Freezer\"\n# Reader-10-EF-25_2 = \"Backroom\"\n[AppCustom.Aliases]\n\n# See: https://github.com/edgexfoundry/app-rfid-llrp-inventory#configuration\n[AppCustom.AppSettings]\nDeviceServiceName = \"device-rfid-llrp\"\nAdjustLastReadOnByOrigin = true\nDepartedThresholdSeconds = 600\nDepartedCheckIntervalSeconds = 30\nAgeOutHours = 336\nMobilityProfileThreshold = 6.0\nMobilityProfileHoldoffMillis = 500.0\nMobilityProfileSlope = -0.008\n
"},{"location":"design/adr/0026-Common%20Configuration/#device-mqtt-private-configuration","title":"Device MQTT Private Configuration","text":"MaxEventSize = 0 # value 0 unlimit the maximum event size that can be sent to message bus or core-data\n\n[Writable]\n# InsecureSecrets are required for when Redis is used for message bus\n[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.MQTT]\npath = \"credentials\"\n[Writable.InsecureSecrets.MQTT.Secrets]\nusername = \"\"\npassword = \"\"\n\n[Service]\nPort = 59982\nStartupMsg = \"device mqtt started\"\n\n[Clients]\n[Clients.core-data]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59880\n\n[Clients.core-metadata]\nProtocol = \"http\"\nHost = \"localhost\"\nPort = 59881\n\n[MessageQueue]\nPublishTopicPrefix = \"edgex/events/device\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n[MessageQueue.Optional]\n# Default MQTT & NATS Specific options that need to be here to enable environment variable overrides of them\nClientId = \"device-mqtt\"\n[MessageQueue.Topics]\nCommandRequestTopic = \"edgex/device/command/request/device-mqtt/#\" # subscribing for inbound command requests\nCommandResponseTopicPrefix = \"edgex/device/command/response\" # publishing outbound command responses; <device-service>/<device-name>/<command-name>/<method> will be added to this publish topic prefix\n\n[MQTTBrokerInfo]\nSchema = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nQos = 0\nKeepAlive = 3600\nClientId = \"device-mqtt\"\n\nCredentialsRetryTime = 120 # Seconds\nCredentialsRetryWait = 1 # Seconds\nConnEstablishingRetry = 10\nConnRetryWaitTime = 5\n\n# AuthMode is the MQTT broker authentication mechanism. Currently, \"none\" and \"usernamepassword\" is the only AuthMode supported by this service, and the secret keys are \"username\" and \"password\".\nAuthMode = \"none\"\nCredentialsPath = \"credentials\"\n\n# Comment out/remove when using multi-level topics\nIncomingTopic = \"DataTopic\"\nResponseTopic = \"ResponseTopic\"\nUseTopicLevels = false\n\n# Uncomment to use multi-level topics\n# IncomingTopic = \"incoming/data/#\"\n# ResponseTopic = \"command/response/#\"\n# UseTopicLevels = true\n\n[MQTTBrokerInfo.Writable]\n# ResponseFetchInterval specifies the retry interval(milliseconds) to fetch the command response from the MQTT broker\nResponseFetchInterval = 500\n
"},{"location":"design/adr/0026-Common%20Configuration/#modules-and-services-impacted","title":"Modules and Services Impacted","text":"The following modules and services are impacted:
Currently, in the Levski and earlier releases services can only load configuration, units of measurements, device profiles, device definitions, provision watches, etc. from the local file system. As outlined in the reference UCR, there is a need to be able to load files from a remote locations using URIs to specify the locations.
"},{"location":"design/adr/0027-URIs%20for%20Files/#proposed-design","title":"Proposed Design","text":"This ADR proposes a new helper function for loading files be added to go-mod-bootstrap
. This function will provide the logic for loading a file either from local file system (as is today) or from a remote location. As stated in the UCR, only HTTP and HTTPS URIs will be supported. For HTTPS, certificate validation will be performed using the system's built-in trust anchors. The docker images for all services will have the CA certs installed as is done here in App Service Configurable's Dockerfile.
While not recommended, users will be able to specify username-password (<username>:<password>@
) in the URI in plain text. While this is ok network wise when using HTTPS, it isn't good practice to have these credentials specified in configuration or other service files where the URI is specified.
Example plain text username-password
in URI located in configuration
[UoM]\nUoMFile = \"https://myuser:mypassword@example.com/uom.yaml\"\n
"},{"location":"design/adr/0027-URIs%20for%20Files/#secure-credentials","title":"Secure Credentials","text":"In order to provide a secure way for users to specify credentials, the edgexSecretName
query parameter can be specified on the URI. This parameter specifies a Secret Name from the service's Secret Store where the credentials reside and will be processed by the new helper function.
Example URI with edgexSecretName
query parameter
[UoM]\nUoMFile = \"https://example.com/uom.yaml?edgexSecretName=mySecretName\"\n
The type of authentication as well as the credentials will be contained in the secret data specified by the Secret Name. Only one type of authentication will be supported initially, which is httpheader
. The httpheader
type will accommodate various forms of authorization placed in the header. Others types can be added in the future when need is determined.
Note
Digest Auth will not be supported at this time. It can be added in the future based on feedback indicating its need.
When httpheader
is specified as the type in the secret data, the header name and contents from the secret data will be placed in the HTTP header.
Example secret data - Basic Auth
using httpheader
type=httpheader\nheadername=Authorization\nheadercontents=Basic bXl1c2VyOm15cGFzc3dvcmQ=\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\nAuthorization: Basic bXl1c2VyOm15cGFzc3dvcmQ=\n
Example secret data - API-Key
using httpheader
type=httpheader\nheadername=X-API-KEY\nheadercontents=abcdef12345\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\nX-API-KEY: abcdef12345\n
Example secret data - Bearer
using httpheader
type=httpheader\nheadername=Authorization\nheadercontents=Bearer eyJhbGciO...\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\nAuthorization: Bearer eyJhbGciO...\n
All Services will be impacted for enabling the loading the common configuration and private configuration files using URIs. This will be handled in go-mod-bootstrap's
processing of the -cc/--commonConfig
and -cf/--configFile
command line flags.
Core Metadata's loading of the UOM file will be adjusted to use the new file load function.
Device Service's loading of device profiles, device definitions and provision watchers files will be adjusted to load an index file specified by a URI in place of the configured folder name. The contents of the index file will be used to load the individual files by URI by appending the filenames to the original URI. Any authentication specified in the original URI will be used in the subsequent URIs.
Example DevicesDir configuration in service configuration
[Device]\n...\nProfilesDir = \"./res/profiles\"\nDevicesDir = \"http://example.com/devices/index.json\"\nProvisionWatchersDir = \"./res/provisionwatchers\"\n...\n
Example Device Index file http://example.com/devices/index.json
[\n\"device1.yaml\", \"device2.yaml\"\n]\n
Example resulting device file URIs from above example
http://example.com/devices/device1.yaml\nhttp://example.com/devices/device2.yaml\n
Other files (existing or future) not listed above may also be candidates for using this new URI capability. Those listed above are the most impactful for deployment at scale.
Implement as designed above
"},{"location":"design/adr/0027-URIs%20for%20Files/#other-related-adrs","title":"Other Related ADRs","text":"Approved
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#context","title":"Context","text":"Currently EdgeX Events are sent from Device Services via HTTP to Core Data, which then puts the Events on the MessageBus after optionally persisting them to the database. This ADR details how Device Services will send EdgeX Events to other services via the EdgeX MessageBus.
Note: Though this design is centered on device services, it does have cross cutting impacts with other EdgeX services and modules
Note: This ADR is dependent on the Secret Provider for All to provide the secrets for secure Message Bus connections.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#decision","title":"Decision","text":""},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#which-message-bus-implementations","title":"Which Message Bus implementations?","text":"Multiple Device Services may need to be publishing Events to the MessageBus concurrently. ZMQ
will not be a valid option if multiple Device Services are configured to publish. This is because ZMQ
only allows for a single publisher. ZMQ
will still be valid if only one Device Service is publishing Events. The MQTT
and Redis Streams
are valid options to use when multiple Device Services are required, as they both support multiple publishers. These are the only other implementations currently available for Go services. The C base device services do not yet have a MessageBus implementation. See the C Device SDK below for details.
Note: Documentation will need to be clear when ZMQ
can be used and when it can not be used.
The Go Device SDK will take advantage of the existing go-mod-messaging
module to enable use of the EdgeX MessageBus. A new bootstrap handler will be created which initializes the MessageBus client based on configuration. See Configuration section below for details. The Go Device SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.
The C Device SDK will implement its own MessageBus abstraction similar to the one in go-mod-messaging
. The first implementation type (MQTT or Redis Streams) is TBD. Using this abstraction allows for future implementations to be added when use cases warrant the additional implementations. As with the Go SDK, the C SDK will be enhanced to optionally publish Events to the MessageBus anywhere it currently POSTs Events to Core Data. This publish vs POST option will be controlled by configuration with publish as the default. See Configuration section below for details.
With this design, Events will be sent directly to Application Services w/o going through Core Data and thus will not be persisted unless changes are made to Core Data. To allow Events to optionally continue to be persisted, Core Data will become an additional or secondary (and optional) subscriber for the Events from the MessageBus. The Events will be persisted when they are received. Core Data will also retain the ability to receive Events via HTTP, persist them and publish them to the MessageBus as is done today. This allows for the flexibility to have some device services to be configured to POST Events and some to be configured to publish Events while we transition the Device Services to all have the capability to publishing Events. In the future, once this new Publish
approach has been proven, we may decide to remove POSTing Events to Core Data from the Device SDKs.
The existing PersistData
setting will be ignored by the code path subscribing to Events since the only reason to do this is to persist the Events.
There is a race condition for Marked As Pushed
when Core Data is persisting Events received from the MessageBus. Core Data may not have finished persisting an Event before the Application Service has processed the Event and requested the Event be Marked As Pushed
. It was decided to remove Mark as Pushed
capability and just rely on time based scrubbing of old Events.
As this development will be part of the Ireland release all Events published to the MessageBus will use the V2 Event DTO. This is already implemented in Core Data for the V2 AddEvent API.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#validation","title":"Validation","text":"Services receiving the Event DTO from the MessageBus will log validation errors and stop processing the Event.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#message-envelope","title":"Message Envelope","text":"EdgeX Go Services currently uses a custom Message Envelope for all data that is published to the MessageBus. This envelope wraps the data with metadata, which is ContentType
(JSON or CBOR), Correlation-Id
and the obsolete Checksum
. The Checksum
is used when the data is CBOR encoded to identify the Event in V1 API to be mark it as pushed. This checksum is no longer needed as the V2 Event DTO requires the ID be set by the Device Services which will always be used in the V2 API to mark the Events as pushed. The Message Envelope will be updated to remove this property.
The C SDK will recreate this Message Envelope.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services","title":"Application Services","text":"As part of the V2 API consumption work in Ireland the App Services SDK will be changed to expect to receive V2 Event DTOs rather than the V1 Event model. It will also be updated to no longer expect or use the Checksum
currently on the Message Envelope. Note these changes must occur for the V2 consumption and are not directly tied to this effort.
The App Service SDK will be enhanced for the secure MessageBus connection described below. See Secure Connections for details
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus-topics","title":"MessageBus Topics","text":"Note: The change recommended here is not required for this design, but it provides a good opportunity to adopt it.
Currently Core Data publishes Events to the simple events
topic. All Application Services running receive every Event published, whether they want them or not. The Events can be filtered out using the FilterByDeviceName
or FilterByResourceName
pipeline functions, but the Application Services still receives every Event and process all the Events to some extent. This could cause load issues in a deployment with many devices and large volume of Events from various devices or a very verbose device that the Application Services is not interested in.
Note: The current FilterByDeviceName
is only good if the device name is known statically and the only instance of the device defined by the DeviceProfileName
. What we really need is FilterByDeviceProfileName
which allows multiple instances of a device to be filtered for, rather than a single instance as it it now. The V2 API will be adding DeviceProfileName
to the Events, so in Ireland this filter will be possible.
Pub/Sub systems have advanced topic schema, which we can take advantage of from Application Services to filter for just the Events the Application Service actual wants. Publishers of Events must add the DeviceProfileName
, DeviceName
and SourceName
to the topic in the form edgex/events/<device-profile-name>/<device-name>/<source-name>
. The SourceName
is the Resource
or Command
name used to create the Event. This allows Application Services to filter for just the Events from the device(s) it wants by only subscribing to those DeviceProfileNames
or the specific DeviceNames
or just the specific SourceNames
Example subscribe topics if above schema is used:
Int16
device resource from devices created from the Random-Integer-Device device profile. HVACValues
device command from devices created from the Modbus-Device device profile.The MessageBus abstraction allows for multiple subscriptions, so an Application Service could specify to receive data from multiple specific device profiles or devices by creating multiple subscriptions. i.e. edgex/Events/Random-Integer-Device/#
and edgex/Events/Random-Boolean-Device/#
. Currently the App SDK only allows for a single subscription topic to be configured, but that could easily be expanded to handle a list of subscriptions. See Configuration section below for details.
Core Data's existing publishing of Events would also need to be changed to use this new topic schema. One challenge with this is Core Data doesn't currently know the DeviceProfileName
or DeviceName
when it receives a CBOR encoded event. This is because it doesn't decode the Event until after it has published it to the MessageBus. Also, Core Data doesn't know of SourceName
at all. The V2 API will be enhanced to change the AddEvent endpoint from /event
to /event/{profile}/{device}/{source}
so that DeviceProfileName
, DeviceName
, and SourceName
are always know no matter how the request is encoded.
This new topic approach will be enabled via each publisher's PublishTopic
having the DeviceProfileName
, DeviceName
and SourceName
added to the configured PublishTopicPrefix
PublishTopicPrefix = \"edgex/events\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n
See Configuration section below for details.
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#configuration","title":"Configuration","text":""},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#device-services","title":"Device Services","text":"All Device services will have the following additional configuration to allow connecting and publishing to the MessageBus. As describe above in the MessageBus Topics section, the PublishTopic
will include the DeviceProfileName
and DeviceName
.
A MessageQueue section will be added, which is similar to that used in Core Data today, but with PublishTopicPrefix
instead of Topic
.To enable secure connections, the Username
& Password
have been replaced with ClientAuth & SecretPath
, See Secure Connections section below for details. The added Enabled
property controls whether the Device Service publishes to the MessageBus or POSTs to Core Data.
[MessageQueue]\nEnabled = true\nProtocol = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nType = \"mqtt\"\nPublishTopicPrefix = \"edgex/events\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\n# Client Identifiers\nClientId =\"<device service key>\"\n# Connection information\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\nClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert`\nSecretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#core-data","title":"Core Data","text":"Core data will also require additional configuration to be able to subscribe to receive Events from the MessageBus. As describe above in the MessageBus Topics section, the PublishTopicPrefix
will have DeviceProfileName
and DeviceName
added to create the actual Public Topic.
The MessageQueue
section will be changed so that the Topic
property changes to PublishTopicPrefix
and SubscribeEnabled
and SubscribeTopic
will be added. As with device services configuration, the Username
& Password
have been replaced with ClientAuth
& SecretPath
for secure connections. See Secure Connections section below for details. In addition, the Boolean SubscribeEnabled
property will be used to control if the service subscribes to Events from the MessageBus or not.
[MessageQueue]\nProtocol = \"tcp\"\nHost = \"localhost\"\nPort = 1883\nType = \"mqtt\"\nPublishTopicPrefix = \"edgex/events\" # /<device-profile-name>/<device-name>/<source-name> will be added to this Publish Topic prefix\nSubscribeEnabled = true\nSubscribeTopic = \"edgex/events/#\"\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\n# Client Identifiers\nClientId =\"edgex-core-data\"\n# Connection information\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\nClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert`\nSecretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#application-services_1","title":"Application Services","text":""},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#messagebus","title":"[MessageBus]","text":"Similar to above, the Application Services MessageBus
configuration will change to allow for secure connection to the MessageBus. The Username
& Password
have been replaced with ClientAuth
& SecretPath
for secure connections. See Secure Connections section below for details.
[MessageBus.Optional]\n# MQTT Specific options\n# Client Identifiers\nClientId =\"<app service key>\"\n# Connection information\nQos = \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\" # Only used if Cert/Key file or Cert/Key PEMblock are specified\nClientAuth = \"none\" # Valid values are: `none`, `usernamepassword` or `clientcert`\nSecretpath = \"messagebus\" # Path in secret store used if ClientAuth not `none`\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#binding","title":"[Binding]","text":"The Binding
configuration section will require changes for the subscribe topics scheme described in the MessageBus Topics section above to filter for Events from specific device profiles or devices. SubscribeTopic
will change from a string property containing a single topic to the SubscribeTopics
string property containing a comma-separated list of topics. This retains the flexibility for the property to be a single topic with the #
wildcard so the Application Service receives all Events as it does today.
Receive only Events from the Random-Integer-Device
and Random-Boolean-Device
profiles
[Binding]\nType=\"messagebus\"\nSubscribeTopics=\"edgex/events/Random-Integer-Device, edgex/events/Random-Boolean-Device\"\n
Receive only Events from the Random-Integer-Device1 device
from the Random-Integer-Device
profile [Binding]\nType=\"messagebus\"\nSubscribeTopics=\"edgex/events/Random-Integer-Device/Random-Integer-Device1\"\n
or receive all Events:
[Binding]\nType=\"messagebus\"\nSubscribeTopics=\"edgex/events/#\"\n
"},{"location":"design/adr/013-Device-Service-Events-Message-Bus/#secure-connections","title":"Secure Connections","text":"As stated earlier, this ADR is dependent on the Secret Provider for All ADR to provide a common Secret Provider for all Edgex Services to access their secrets. Once this is available, the MessageBus connection can be secured via the following configurable client authentications modes which follows similar implementation for secure MQTT Export and secure MQTT Trigger used in Application Services.
Secret Provider
using the configured SecretPath
. How the secrets are injected into the Secret Provider
is out of scope for this ADR and covered in the Secret Provider for All ADR.
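As a sketch only (the options struct and helper name are hypothetical, and the secret key names follow examples later in this document), the client authentication modes described above could be applied roughly as follows, assuming a Secret Provider that exposes a GetSecrets(path) API as defined in the Secret Provider for All ADR:
// applyClientAuth is an illustrative sketch, not actual service code.\ntype mqttOptions struct {\nUsername, Password, ClientCert, ClientKey string\n}\n\n// secretGetter is the subset of the Secret Provider API this sketch relies on.\ntype secretGetter interface {\nGetSecrets(path string, keys ...string) (map[string]string, error)\n}\n\nfunc applyClientAuth(provider secretGetter, clientAuth, secretPath string, opts *mqttOptions) error {\nif clientAuth == \"none\" {\nreturn nil // unauthenticated connection\n}\nsecrets, err := provider.GetSecrets(secretPath) // e.g. \"messagebus\"\nif err != nil {\nreturn err\n}\nswitch clientAuth {\ncase \"usernamepassword\":\nopts.Username, opts.Password = secrets[\"username\"], secrets[\"password\"]\ncase \"clientcert\":\nopts.ClientCert, opts.ClientKey = secrets[\"clientcert\"], secrets[\"clientkey\"]\n}\nreturn nil\n}\n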
ZMQ
or Redis Streams
then there must be an MQTT Broker running when a C Device service is in use and configured to publish to the MessageBus. Because the publish topics include the DeviceProfileName
and DeviceName
, the V2 API must restrict the characters used in device names to those allowed in a topic. An issue for the V2 API already exists for restricting the allowable characters to RFC 3986, which will suffice. InsecureSecrets
SecretProvider
reside?Approved
"},{"location":"design/adr/014-Secret-Provider-For-All/#context","title":"Context","text":"This ADR defines the new SecretProvider
abstraction that will be used by all EdgeX services, including Device Services. The Secret Provider is used by services to retrieve secrets from the Secret Store. The Secret Store, in secure mode, is currently Vault. In non-secure mode it is configuration in some form, i.e. DatabaseInfo
configuration or InsecureSecrets
configuration for Application Services.
The Secret Provider abstraction defined in this ADR is based on the Secret Provider abstraction implementations in the Application Functions SDK (App SDK) for Application Services and the one in go-mod-bootstrap (Bootstrap) used by the Core, Support & Security services in edgex-go. Device Services do not currently use secure secrets. The App SDK implementation was initially based on the Bootstrap implementation.
The similarities and differences between these implementations are:
SecretClient
from go-mod-secretsSecretClient
based on the SecretStore
configuration(s)GetDatabaseCredentials
APICredentialsProvider
& CertificateProvider
) while the App SDK's use a single interface (SecretProvider
) for the abstraction GetCertificateKeyPair
API, which the App SDK's does notInitialize
API (Bootstrap's initialization is done by the bootstrap handler)StoreSecrets
API GetSecrets
APIInsecureSecretsUpdated
APISecretsLastUpdated
APISecretClient
for the Application Service instance's exclusive secrets.StoreSecrets
& GetSecrets
APIsSecretClient
is considered the shared client for secrets that all Application Service instances share. It is only used by the GetDatabaseCredentials
APIInsecureSecrets
A secret is a collection of key/value pairs stored in a SecretStore
at a specified path whose values are sensitive in nature. Redis database credentials are an example of a Secret
which contains the username
and password
key/values stored at the redisdb
path.
Service Exclusive secrets are those that are exclusive to the instance of the running service. An example of exclusive secrets are the HTTP Auth tokens used by two running instances of app-service-configurable (http-export) which export different device Events to different endpoints with different Auth tokens in the HTTP headers. Service Exclusive secrets are seeded by POSTing the secrets to the /api/vX/secrets
endpoint on the running instance of each Application Service.
Service Shared secrets are those that all instances of a class of service, such as Application Services, share. Think of Core Data as its own class of service. An example of shared secrets is the database credentials for the single database instance for Store and Forward data that all Application Services may need to access. Another example is the database credentials for each instance of Core Data. It is shared, but only one instance of Core Data is currently ever run. Service Shared secrets are seeded by security-secretstore-setup using static configuration for static secrets for known services. Currently database credentials are the only shared secrets. In the future we may have Message Bus credentials as shared secrets, but these will be truly shared secrets for all services to securely connect to the Message Bus, not just shared between instances of a service.
Application Services currently have the ability to configure SecretStores
for Service Exclusive and/or Service Shared secrets depending on their needs.
Known Services are those identified in the static configuration by security-secretstore-setup
These currently are Core Data, Core Metadata, Support Notifications, Support Scheduler and Application Service (class)
Unknown Services are those not known in the static configuration that become known when added to the Docker compose file or Snap.
Application Service (instance) are examples of these services.
Service exclusive SecretStore
can be created for these services by adding the services' unique name , i.e. appservice-http-export, to the EDGEX_ADD_SECRETSTORE_TOKENS
environment variable for security-secretstore-setup
EDGEX_ADD_SECRETSTORE_TOKENS: \"appservice-http-export, appservice-mqtt-export\"\n
This creates an exclusive secret store token for each service listed. The name provided for each service must be used in the service's SecretStore
configuration and Docker volume mount (if applicable). Typically the configuration is set via environment overrides or is already in an existing configuration profile (http-export profile for app-service-configurable).
Example docker-compose file entries:
environment:\n...\nSecretStoreExclusive_Path: \"/v1/secret/edgex/appservice-http-export/\"\nTokenFile: \"/tmp/edgex/secrets/appservice-http-export/secrets-token.json\"\n\nvolumes:\n...\n- /tmp/edgex/secrets/appservice-http-export:/tmp/edgex/secrets/appservice-http-export:ro,z\n
Database credentials are currently the only secrets of this type
Runtime Secrets are those not known in the static configuration and that become known during run time. These secrets are seeded at run time via the Application Services /api/vX/secrets
endpoint
type CredentialsProvider interface {\nGetDatabaseCredentials(database config.Database) (config.Credentials, error)\n}\n
and
type CertificateProvider interface {\nGetCertificateKeyPair(path string) (config.CertKeyPair, error)\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods","title":"Factory and bootstrap handler methods","text":"type SecretProvider struct {\nsecretClient pkg.SecretClient\n}\n\nfunc NewSecret() *SecretProvider {\nreturn &SecretProvider{}\n}\n\nfunc (s *SecretProvider) BootstrapHandler(\nctx context.Context,\n_ *sync.WaitGroup,\nstartupTimer startup.Timer,\ndic *di.Container) bool {\n...\nIntializes the SecretClient and adds it to the DIC for both interfaces.\n...\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#app-sdks-current-implementation","title":"App SDK's current implementation","text":""},{"location":"design/adr/014-Secret-Provider-For-All/#interface","title":"Interface","text":"type SecretProvider interface {\nInitialize(_ context.Context) bool\nStoreSecrets(path string, secrets map[string]string) error\nGetSecrets(path string, _ ...string) (map[string]string, error)\nGetDatabaseCredentials(database db.DatabaseInfo) (common.Credentials, error)\nInsecureSecretsUpdated()\nSecretsLastUpdated() time.Time\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#factory-and-bootstrap-handler-methods_1","title":"Factory and bootstrap handler methods","text":"type SecretProviderImpl struct {\nSharedSecretClient pkg.SecretClient\nExclusiveSecretClient pkg.SecretClient\nsecretsCache map[string]map[string]string // secret's path, key, value\nconfiguration *common.ConfigurationStruct\ncacheMuxtex *sync.Mutex\nloggingClient logger.LoggingClient\n//used to track when secrets have last been retrieved\nLastUpdated time.Time\n}\n\nfunc NewSecretProvider(\nloggingClient logger.LoggingClient, configuration *common.ConfigurationStruct) *SecretProviderImpl {\nsp := &SecretProviderImpl{\nsecretsCache: make(map[string]map[string]string),\ncacheMuxtex: &sync.Mutex{},\nconfiguration: configuration,\nloggingClient: loggingClient,\nLastUpdated: time.Now(),\n}\n\nreturn sp\n}\n
type Secrets struct {\n}\n\nfunc NewSecrets() *Secrets {\nreturn &Secrets{}\n}\n\nfunc (_ *Secrets) BootstrapHandler(\nctx context.Context,\n_ *sync.WaitGroup,\nstartupTimer startup.Timer,\ndic *di.Container) bool {\n...\nCreates a SecretProvider with NewSecretProvider, calls Initialize() and adds it to the DIC\n...\n}\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#secret-store-for-non-secure-mode","title":"Secret Store for non-secure mode","text":"Both Bootstrap's and App SDK's implementation use the DatabaseInfo
configuration for GetDatabaseCredentials
API in non-secure mode. The App SDK only uses it, for backward compatibility, if the database credentials are not found in the new InsecureSecrets
configuration section. For Ireland it was planned to only use the new InsecureSecrets
configuration section in non-secure mode.
Note: Redis credentials are blank
in non-secure mode
Core Data
[Databases]\n[Databases.Primary]\nHost = \"localhost\"\nName = \"coredata\"\nUsername = \"\"\nPassword = \"\"\nPort = 6379\nTimeout = 5000\nType = \"redisdb\"\n
Application Services
[Database]\nType = \"redisdb\"\nHost = \"localhost\"\nPort = 6379\nUsername = \"\"\nPassword = \"\"\nTimeout = \"30s\"\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#insecuresecrets-configuration","title":"InsecureSecrets Configuration","text":"The App SDK defines a new Writable
configuration section called InsecureSecrets
. This structure mimics that of the secure SecretStore
when EDGEX_SECURITY_SECRET_STORE
environment variable is set to false
. Having the InsecureSecrets
in the Writable
section allows for the secrets to be updated without restarting the service. Some minor processing must occur when the InsecureSecrets
section is updated. This is to call the InsecureSecretsUpdated
API. This API simply sets the time the secrets were last updated. The SecretsLastUpdated
API returns this timestamp so pipeline functions that use credentials for exporting know if their client needs to be recreated with new credentials, i.e. MQTT export.
type WritableInfo struct {\nLogLevel string\n...\nInsecureSecrets InsecureSecrets\n}\n\ntype InsecureSecrets map[string]InsecureSecretsInfo\n\ntype InsecureSecretsInfo struct {\nPath string\nSecrets map[string]string\n}\n
[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.DB]\npath = \"redisdb\"\n[Writable.InsecureSecrets.DB.Secrets]\nusername = \"\"\npassword = \"\"\n[Writable.InsecureSecrets.mqtt]\npath = \"mqtt\"\n[Writable.InsecureSecrets.mqtt.Secrets]\nusername = \"\"\npassword = \"\"\ncacert = \"\"\nclientcert = \"\"\nclientkey = \"\"\n
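As an illustration of the pattern described above (the exporter type and its helper are hypothetical), a pipeline function that exports over MQTT could compare SecretsLastUpdated() against the time it last built its client and rebuild the client whenever the InsecureSecrets have changed on-the-fly:
// Sketch only: mqttSender and recreateClient are hypothetical; time is from the standard library.\ntype mqttSender struct {\nclientLastCreated time.Time\n}\n\nfunc (s *mqttSender) recreateClient(username, password string) {\n// hypothetical: rebuild the exporter's MQTT client with the new credentials\n}\n\nfunc (s *mqttSender) ensureClient(provider SecretProvider) error {\nif provider.SecretsLastUpdated().After(s.clientLastCreated) {\nsecrets, err := provider.GetSecrets(\"mqtt\") // path from [Writable.InsecureSecrets.mqtt]\nif err != nil {\nreturn err\n}\ns.recreateClient(secrets[\"username\"], secrets[\"password\"])\ns.clientLastCreated = time.Now()\n}\nreturn nil\n}\n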
"},{"location":"design/adr/014-Secret-Provider-For-All/#decision","title":"Decision","text":"The new SecretProvider
abstraction defined by this ADR is a combination of the two implementations described above in the Existing Implementations section.
To simplify the SecretProvider
abstraction, we need to reduce to using only exclusive SecretStores
. This allows all the APIs to deal with a single SecretClient
, rather than the split-up approach we currently have in Application Services. This requires that the current Application Service shared secrets (database credentials) be copied into each Application Service's exclusive SecretStore
when it is created.
The challenge is how do we seed static secrets for unknown services when they become known. As described above in the Known and Unknown Services section above, services currently identify themselves for exclusive SecretStore
creation via the EDGEX_ADD_SECRETSTORE_TOKENS
environment variable on security-secretstore-setup. This environment variable simply takes a comma separated list of service names.
EDGEX_ADD_SECRETSTORE_TOKENS: \"<service-name1>,<service-name2>\"\n
If we expanded this to add an optional list of static secret identifiers for each service, i.e. appservice/redisdb
, the exclusive store could also be seeded with a copy of static shared secrets. In this case the Redis database credentials for the Application Services' shared database. The environment variable name will change to ADD_SECRETSTORE
now that it is more than just tokens.
ADD_SECRETSTORE: \"app-service-xyz[appservice/redisdb]\"\n
Note: The secret identifier here is the short path to the secret in the existing appservice SecretStore
. In the above example this expands to the full path of /secret/edgex/appservice/redisdb
The above example results in the Redis credentials being copied into app-service-xyz's SecretStore
at /secret/edgex/app-service-xyz/redis
.
Similar approach could be taken for Message Bus credentials where a common SecretStore
is created with the Message Bus credentials saved. The services request the credentials are copied into their exclusive SecretStore
using common/messagebus
as the secret identifier.
Full specification for the environment variable's value is a comma separated list of service entries defined as:
<service-name1>[optional list of static secret IDs separated by ;],<service-name2>[optional list of static secret IDs separated by ;],...\n
Example with one service specifying IDs for static secrets and one without static secrets
ADD_SECRETSTORE: \"appservice-xyz[appservice/redisdb; common/messagebus], appservice-http-export\"\n
When the ADD_SECRETSTORE
environment variable is processed to create these SecretStores
, it will copy the specified saved secrets from the initial SecretStore
into the service's SecretStore
. This all depends on the completion of database or other credential bootstrapping and the secrets having been stored prior to the environment variable being processed. security-secretstore-setup will need to be refactored to ensure this sequencing.
The following will be the new SecretProvider
abstraction interface used by all EdgeX services
type SecretProvider interface {\n// Stores new secrets into the service's exclusive SecretStore at the specified path.\nStoreSecrets(path string, secrets map[string]string) error\n// Retrieves secrets from the service's exclusive SecretStore at the specified path.\nGetSecrets(path string, _ ...string) (map[string]string, error)\n// Sets the secrets lastupdated time to current time. \nSecretsUpdated()\n// Returns the secrets last updated time\nSecretsLastUpdated() time.Time\n}\n
Note: The GetDatabaseCredentials
and GetCertificateKeyPair
APIs have been removed. These are no longer needed since insecure database credentials will no longer be stored in the DatabaseInfo
configuration and certificate key pairs are secrets like any others. This allows these secrets to be retrieved via the GetSecrets
API.
The factory method and bootstrap handler will follow that currently in the Bootstrap implementation with some tweaks. Rather than putting the two split interfaces into the DIC, it will put just the single interface instance into the DIC. See details in the Interfaces and factory methods section above under Existing Implementations.
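For example (a sketch only; the credential key names follow the redisdb Secret described earlier and the wrapper function is hypothetical), a service would now obtain its database credentials through GetSecrets rather than the removed GetDatabaseCredentials:
// getDatabaseCredentials is an illustrative wrapper, not part of the SecretProvider interface.\nfunc getDatabaseCredentials(provider SecretProvider) (username, password string, err error) {\nsecrets, err := provider.GetSecrets(\"redisdb\")\nif err != nil {\nreturn \"\", \"\", err\n}\nreturn secrets[\"username\"], secrets[\"password\"], nil\n}\n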
"},{"location":"design/adr/014-Secret-Provider-For-All/#caching-of-secrets","title":"Caching of Secrets","text":"Secrets will be cached as they are currently in the Application Service implementation
"},{"location":"design/adr/014-Secret-Provider-For-All/#insecure-secrets","title":"Insecure Secrets","text":"Insecure Secrets will be handled as they are currently in the Application Service implementation. DatabaseInfo
configuration will no longer be an option for storing the insecure database credentials. They will be stored in the InsecureSecrets
configuration only.
[Writable.InsecureSecrets]\n[Writable.InsecureSecrets.DB]\npath = \"redisdb\"\n[Writable.InsecureSecrets.DB.Secrets]\nusername = \"\"\npassword = \"\"\n
"},{"location":"design/adr/014-Secret-Provider-For-All/#handling-on-the-fly-changes-to-insecuresecrets","title":"Handling on-the-fly changes to InsecureSecrets
","text":"All services will need to handle the special processing when InsecureSecrets
are changed on-the-fly via Consul. Since this will now be a common configuration item in Writable
it can be handled in go-mod-bootstrap
along with existing log level processing. This special processing will be taken from App SDK.
Proper mock of the SecretProvider
interface will be created with Mockery
to be used in unit tests. Current mock in App SDK is hand written rather then generated with Mockery
.
SecretProvider
reside?","text":""},{"location":"design/adr/014-Secret-Provider-For-All/#go-services","title":"Go Services","text":"The final decision to make is where will this new SecretProvider
abstraction reside? Originally is was assumed that it would reside in go-mod-secrets
, which seems logical. If we were to attempt this with the implementation including the bootstrap handler, go-mod-secrets
would have a dependency on go-mod-bootstrap
which will likely create a circular dependency.
Refactoring the existing implementation in go-mod-bootstrap
and have it reside there now seems to be the best choice.
The C Device SDK will implement the same SecretProvider
abstraction, InsecureSercets configuration and the underling SecretStore
client.
Writable.InsecureSecrets
section added to their configurationInsecureSecrets
definition will be moved from App SDK to go-mod-bootstrapSecretStore
configuration section will be added to all Device ServicesSecretProvider
interface from the DIC in place of current usage of the GetDatabaseCredentials
and GetCertificateKeyPair
interfaces.GetDatabaseCredentials
and GetCertificateKeyPair
will be replaced with calls to GetSecrets
API and appropriate processing of the returned secrets will be added. GetSecrets
API in place of the GetDatabaseCredentials
APISecretProvider
bootstrap handlerSecretStoreExclusive
configuration and just use the existing SecretStore
configurationSecretStore
requires stopping and restarting all the services. This is because security-secretstore-setup has completed but not stopped. If it is rerun without stopping the other services, their tokens and static secrets will have changed. The planned refactor of security-secretstore-setup
will attempt to resolve this. This design involves creating a new Application Service that is responsible for the requirements in the above referenced UCR. This document is created as a means of formal design review.
"},{"location":"design/adr/application/0025-Record-and-Replay/#proposed-design","title":"Proposed Design","text":"A new Application Service will be created with a RESTful API to handle the Record, Replay, Export and Import capabilities. An Application Service has been chosen since the Record capability requires a service that can connect to the MessageBus and consume Events over a long period of time (just like other App Services). The service will not create or start a Functions Pipeline on start-up as normally done in Application Services. It will wait until the Record request has been received. Once the recording is complete the Functions Pipeline will be stopped.
Note
Application Services do not receive data when the Functions Pipelines are stopped.
"},{"location":"design/adr/application/0025-Record-and-Replay/#record-endpoint","title":"Record Endpoint","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#post","title":"POST","text":"This POST
API will start recording data as specified in the request Data Transfer Object (DTO) defined below. The request handler will validate the DTO and then create a new Functions Pipeline and Start the Functions Pipeline to process incoming data. An error is retuned if a recording is already in progress.
The Functions Pipeline will contain the following pipeline functions in the following order
The async function receiving the data will first stop the Functions Pipeline and then save the data for later replay and/or export. It will also determine the list of unique Device Profile and Device Names from the data and store them along side the recorded data. Since app services can receive Events out of order per their timestamps, the saved Event data must be sorted by the Event timestamps. All data will saved in in-memory storage.
Note
Starting a new recording will overwrite any previous recorded data.
"},{"location":"design/adr/application/0025-Record-and-Replay/#record-request-dto","title":"Record Request DTO","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#duration","title":"Duration","text":"Time duration in which to record data. Required if Event Limit is not specified.
"},{"location":"design/adr/application/0025-Record-and-Replay/#event-limit","title":"Event Limit","text":"Maximum number Events
to record. Required if Duration is not specified
Optional list of Device Profile Names to filter for
"},{"location":"design/adr/application/0025-Record-and-Replay/#include-device-names","title":"Include Device Names","text":"Optional list of Device Names to filter for
"},{"location":"design/adr/application/0025-Record-and-Replay/#exclude-device-profile-names","title":"Exclude Device Profile Names","text":"Optional list of Device Profile Names to filter out
"},{"location":"design/adr/application/0025-Record-and-Replay/#exclude-device-names","title":"Exclude Device Names","text":"Optional list of Device Names to filter out
"},{"location":"design/adr/application/0025-Record-and-Replay/#delete","title":"DELETE","text":"The DELETE
API will cancel current in progress recording. An error is returned if a recording is not in progress.
This GET
API will return the status of Record. If Record is not active the status will be for the last Record session that was run. The API response will be the following DTO:
Boolean indicating if Record is in progress or not.
"},{"location":"design/adr/application/0025-Record-and-Replay/#event-count","title":"Event Count","text":"Count of Events that have been captured. 0 if not running and no past Record has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#duration_1","title":"Duration","text":"Duration that the recording has been active. 0 if not running and no past Record has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#replay-endpoint","title":"Replay endpoint","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#post_1","title":"POST","text":"This POST
API will start replaying the recorded data as specified in the request Data Transfer Object (DTO) defined below. An error is retuned is there is already a replay session in progress. The request handler will validate the DTO and that the appropriate Device Profiles and Devices from the data exist. It will then start an async Go function to handle the replay so the request doesn't timeout on long replays.
The replay async Go function will use the Background Publishing capability to send the recorded Events to the EdgeX MessageBus using the same publish topic scheme used by Device Services, which is edgex/events/device/<device-profile-name>/<device-name>/<source-name>
. The App SDK has the Publish Topic Placeholders capability built-in to facilitate this. The data for these topics is available from the Event DTO. The timestamps in the Events and Readings published will be set to the current date/time. This requires a copy be made of the Event/Readings as they are published in order to not corrupt the original data.
Once the first event is published the replay function will calculate the wait time to use before sending the next Event from the recorded data. This will be based on the time difference from the original timestamp of the previous event published and the timestamp of the next event multiplied by the inverse of the Replay Rate
specified in the request DTO.
Examples - Replay Rate wait time calculation
Delta time between original Events is 800ms Replay rate is 2.0 (100% faster) making wait time 400ms (800ms * (1 / 2.0)) Replay rate is 0.5 (100% slower) making wait time 1600ms (800ms * (1 / 0.5))
The replay function will repeat publishing the recorded data per the Repeat Count
in from the DTO.
Required rate at which to replay the data compared to the rate the data was recorded. Float value greater than 0 where 1 is the same rate, less than 1 is slower rate and greater than 1 is faster rate than the rate the data was recorded.
"},{"location":"design/adr/application/0025-Record-and-Replay/#repeat-count","title":"Repeat Count","text":"Optional count of number of times to repeat the replay. Defaults to 1 if not specified or is set to 0.
"},{"location":"design/adr/application/0025-Record-and-Replay/#delete_1","title":"DELETE","text":"This DELETE
API will cancel current in progress replay. An error is returned if a replay is not in progress.
This GET
API will return the status of Replay. If Replay is not active the status will be for the last Replay that was run. The API response will be the following DTO:
Boolean indicating if a Replay is in progress or not
"},{"location":"design/adr/application/0025-Record-and-Replay/#event-count_1","title":"Event Count","text":"Count of Events that have been replayed. 0 if not running and no past Replay has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#duration_2","title":"Duration","text":"Duration that the Replay has been active. 0 if not running and no past Replay has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#repeat-count_1","title":"Repeat Count","text":"Count of repeats. Value indicates the Replay in progress or competed. 0 if not running and no past Replay has been run.
"},{"location":"design/adr/application/0025-Record-and-Replay/#download-endpoint-export","title":"Download endpoint (Export)","text":""},{"location":"design/adr/application/0025-Record-and-Replay/#get_2","title":"GET","text":"This GET
API will request that the previously recorded data be exported as a file download. It will accept an optional query parameter to specify compression (NONE, ZIP or GZIP). An error is returned if no data has been recorded or invalid compression type requested.
The file content will be the Recorded Data DTO as define below. The request handler will build the DTO described below by extracting the recorded Events
from in-memory storage, pulling the referenced Device Profiles
and Devices
from Core Metadata using the names from in-memory storage. The file extension used will be .json
, .zip
or .gzip
depending on the compression selected.
List of Events
(with Readings
) that were recorded
List of Device Profiles
(complete profiles) that are referenced in the recorded Events
List of Device defintions
that are referenced in the recorded Events
This POST
API will upload previously exported recorded data file. It will accept an optional Boolean query parameter to specify to not overwrite existing Device Profiles and/or Devices if they already exist. Default is to overwrite existing with those captured with the recorded data.
The request handler will receive the file as a Recorded Data DTO described above and detect if it is compressed and un-compress the contents if needed before un-marshaling the JSON into the DTO. The compression will be determined based the Content-Encoding
from the request header. The Event
data from the DTO will then be saved to the in-memory storage along with the Device Profile and Device Names. The Device Profiles
and Devices
will be pushed to Core Metadata if they don't exist or if overwrite is enabled.
Note
Import will overwrite any previous recorded data.
"},{"location":"design/adr/application/0025-Record-and-Replay/#considerations","title":"Considerations","text":"Implement this design as outlined above using a RESTful API and in-memory storage
"},{"location":"design/adr/application/0025-Record-and-Replay/#other-related-adrs","title":"Other Related ADRs","text":"Accepted by EdgeX Foundry working groups as of Core Working Group meeting 16-Jan-2020
Note
This ADR was written pre-Geneva with an assumption that the V2 APIs would be available in Geneva. In actuality, the full V2 APIs will be delivered in the Ireland release (Spring 2020)
"},{"location":"design/adr/core/0003-V2-API-Principles/#context","title":"Context","text":"A redesign of the EdgeX Foundry API is proposed for the Geneva release. This is understood by the community to warrant a 2.0 release that will not be backward compatible. The goal is to rework the API using solid principles that will allow for extension over the course of several release cycles, avoiding the necessity of yet another major release version in a short period of time.
Briefly, this effort grew from the acknowledgement that the current models used to facilitate requests and responses via the EdgeX Foundry API were legacy definitions that were once used as internal representations of state within the EdgeX services themselves. Thus if you want to add or update a device, you populate a full device model rather than a specific Add/UpdateDeviceRequest. Currently, your request model has the same definition, and thus validation constraints, as the response model because they are one and the same! It is desirable to separate and be specific about what is required for a given request, as well as its state validity, and the bare minimum that must be returned within a response.
Following from that central need, other considerations have been used when designing this proposed API. These will be enumerated and briefly explained below.
1.) Transport-agnostic Define the request/response data transfer objects (DTO) in a manner whereby they can be used independent of transport. For example, although an OpenAPI doc is implicitly coupled to HTTP/REST, define the DTOs in such a way that they could also be used if the platform were to evolve to a pub/sub architecture.
2.) Support partial updates via PATCH Given a request to, for example, update a device the user should be able to update only some properties of the device. Previously this would require an endpoint for each individual property to be updated since the \"update device\" endpoint, facilitated by a PUT, would perform a complete replacement of the device's data. If you only wanted to update the LastConnected timestamp, then a separate endpoint for that property was required. We will leverage PATCH in order to update an entity and only those properties populated on the request will be considered. Properties that are missing or left blank will not be touched.
3.) Support multiple requests at once Endpoints for the addition or updating of data (POST/PATCH) should accept multiple requests at once. If it were desirable to add or update multiple devices with one request, for example, the API should facilitate this.
4.) Support multiple correlated responses at once Following from #3 above, each request sent to the endpoint must result in a corresponding response. In the case of HTTP/REST, this means if four requests are sent to a POST operation, the return payload will have four responses. Each response must expose a \"code\" property containing a numeric result for what occurred. These could be equivalent to HTTP status codes, for example. So while the overall call might succeed, one or more of the child requests may not have. It is up to the caller to examine each response and handle accordingly.
In order to correlate each response to its original request, each request must be assigned its own ID (in GUID format). The caller can then tie a response to an individual request and handle the result accordingly, or otherwise track that a response to a given request was not received.
5.) Use of 207 HTTP Status (Multi-Result) In the case where an endpoint can support multiple responses, the returned HTTP code from a REST API will be 207 (Multi-status)
6.) Each service should provide a \"batch\" request endpoint In addition to use-case specific endpoints that you'd find in any REST API, each service should provide a \"batch\" endpoint that can take any kind of request. This is a generic endpoint that allows you to group requests of different types within a single call. For example, instead of having to call two endpoints to get two jobs done, you can call a single endpoint passing the specific requests and have them routed appropriately within the service. Also, when considering agnostic transport, the batch endpoint would allow for the definition and handling of \"GET\" equivalent DTOs which are now implicit in the format of a URL.
7.) GET endpoints returning a list of items must support pagination URL parameters must be supported for every GET endpoint to support pagination. These parameters should indicate the current page of results and the number of results on a page.
"},{"location":"design/adr/core/0003-V2-API-Principles/#decision","title":"Decision","text":"Commnunity has accepted the reasoning for the new API and the design principles outlined above. The approach will be to gradually implement the V2 API side-by-side with the current V1 APIs. We believe it will take more than a single release cycle to implement the new specification. Releases of that occur prior to the V2 API implementation completion will continue to be major versioned as 1.x. Subsequent to completion, releases will be major versioned as 2.x.
"},{"location":"design/adr/core/0003-V2-API-Principles/#consequences","title":"Consequences","text":"Approved (by TSC vote on 10/6/21)
"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#context","title":"Context","text":"This ADR presents a technical plan for creation of a 2.0 version of edgex-cli which supports the new V2 REST APIs developed as part of the Ireland release of EdgeX.
"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#existing-behavior","title":"Existing Behavior","text":"The latest version of edgex-cli (1.0.1) only supports the V1 REST APIs and thus cannot be used with V2 releases of EdgeX.
As the edgex-cli was developed organically over time, the current implementation has a number of bugs mostly involving a lack of consistent behavior, especially with respect to formatting of output.
Other issues with the existing client include:
The original Hanoi V1 client was created by a team at VMWare which is no longer participating in the project. Canonical will lead the development of the Ireland/Jakarta V2 client.
"},{"location":"design/adr/core/0019-EdgeX-CLI-V2/#decision","title":"Decision","text":"-d
, --debug
show additional output for debugging purposes (e.g. REST URL, request JSON, \u2026). This command-line arg will replace -v, --verbose and will no longer trigger output of the response JSON (see -j, --json). -j
, --json
output the raw JSON response returned by the EdgeX REST API and nothing else. This output mode is used for script-based usage of the client. --version
output the version of the client and if available, the version of EdgeX installed on the system (using the version of the metadata data service) Restructure the Go code hierarchy to follow the most recent recommended guidelines. For instance /cmd should just contain the main application for the project, not an implementation for each command - that should be in /internal/cmd
Take full advantage of the features of the underlying command-line library, Cobra, such as tab-completion of commands.
Allow overlap of command names across services by supporting an argument to specify the service to use: -m/--metadata
, -c/--command
, -n/--notification
, -s/--scheduler
or --data
(which is the default). Examples:
edgex-cli ping --data
edgex-cli ping -m
edgex-cli version -c
Implement all required V2 endpoints for core services
Core Command - edgex-cli command
read | write | list
Core Data - edgex-cli event
add | count | list | rm | scrub**
- edgex-cli reading
count | list
Metadata - edgex-cli device
add | adminstate | list | operstate | rm | update
- edgex-cli deviceprofile
add | list | rm | update
- edgex-cli deviceservice
add | list | rm | update
- edgex-cli provisionwatcher
add | list | rm | update
Support Notifications - edgex-cli notification
add | list | rm
- edgex-cli subscription
add | list | rm
Support Scheduler - edgex-cli interval
add | list | rm | update
**Common endpoints in all services**\n- **`edgex-cli version`**\n- **`edgex-cli ping`**\n- **`edgex-cli metrics`**\n- **`edgex-cli status`**\n\nThe commands will support arguments as appropriate. For instance:\n- `event list` using `/event/all` to return all events\n- `event list --device {name}` using `/event/device/name/{name}` to return the events sourced from the specified device.\n
Currently, some commands default to always displaying GUIDs in objects when they're not really needed. Change this so that by default GUIDs aren't displayed, but add a flag which causes them to be displayed.
scrub may not work with Redis being secured by default. That might also apply to the top-level db
command (used to wipe the entire db). If so, then the commands will be disabled in secure mode, but permitted in non-secure mode.
Have built-in defaults with port numbers for all core services and allow overrides, avoiding the need for static configuration file or configuration provider.
(Stretch) implement a -o
/--output
argument which could be used to customize the pretty-printed objects (i.e. non-JSON).
(Stretch) Implement support for use of the client via the API Gateway, including being able to connect to a remote EdgeX instance. This might require updates in go-mod-core-contracts.
** Approved ** By TSC Vote on 2/14/22
Please see a prior PR on this topic that detailed much of the debate and context on this issue. For clarity and simplicity, that PR was closed in favor of this simpler ADR.
"},{"location":"design/adr/core/0021-Device-Profile-Changes/#context","title":"Context","text":"While the device profile has always been the way to describe a device/sensor and template its communications to the rest of the EdgeX platform, over the course of EdgeX evolution there have been changes in what could change in a profile (often based on its associations to other EdgeX objects). This document is meant to address the issue of change surrounding device profiles in EdgeX going forward \u2013 specifically when can a device profile (or its sub-elements such as device resources) be added, modified or removed.
"},{"location":"design/adr/core/0021-Device-Profile-Changes/#summary-of-device-profile-rules","title":"Summary of Device Profile Rules","text":"These rules will be implemented in core metadata on device profile API calls.
The following APIs would be added to the metadata REST service in order to meet the design specified above.
Some adopters may not view event/reading data as ephemeral or short lived. These adopters may choose not to allow device profiles to be modified or removed when associated to an event or reading. For this reason, two new configuration options, in the [Writable.ProfileChange]
section, will be added to metadata configuration that are used to reject modifications or deletions.
When either of these config settings are set to true, metadata would accordingly reject changes to or removal of profiles (note: metadata will not check that there are actually events or readings - or any object - associated to the device profile when these are set to true. It simply rejects all modification or deletes to device profiles with the assumption that there could be events, readings or other objects associated and which need to be preserved).
"},{"location":"design/adr/core/0021-Device-Profile-Changes/#consequencesconsiderations","title":"Consequences/Considerations","text":"In order to allow device profiles to be updated or removed even when associated to an EdgeX event/reading, a new property needs to be added to the reading object.
ReadingUnits
(set true by default) will allow adopters to indicate they do not want units to be added to the readings (for cases where there is a concern about the number of readings and the extra data of adding units).ReadingUnits
configuration option will be added to the [Writable.Reading]
section of device services (and addressed in the device service SDKs).Approved by TSC Vote on 3/16/2022
This ADR began under a different ADR pull request. The prior ADR recommended a UoM per device resource and just allowed for the association of an arbitrary set of unit of measure references against the resource. However, it did not include any specific units of measure or validation of those units against the actual profiles (and ultimately the associated readings). See the previous UoM ADR for details and prior debate.
Implementation: to be determined, but could be as soon as Kamakura (Spring 2022).
"},{"location":"design/adr/core/0022-UoM/#context","title":"Context","text":"Unit of measurement (UoM) is defined as \"a standard amount of a physical quantity, such as length, mass, energy, etc, specified multiples of which are used to express magnitudes of that physical quantity\". In EdgeX, data collected from sensors are physical quantities which should be associated to some unit of measure to express magnitude of that physical quantity. For example, if EdgeX collected a temperature reading from a thermostat as 45
, the user of that sensor reading would want to know if the unit of measure for the 45
quantity was expressed in Celsius, Fahrenheit or even the Kelvin scale.
Since the founding of the project, there has been consensus that a unit of measure should be associated to any sensor or metric quantity collected by EdgeX. Also since the founding of the project, a unit of measure has therefore been specified (directly or indirectly) to each device resource (found in device profiles) and associated values collected as part of readings.
The unit of measure was, however, in all cases just a string reference to some arbitrary unit (which may or may not be in a UoM standard) to be interpreted by the consumer of EdgeX data. The reporting sensor/device or programmer of the device service could choose what UoM string was associated to the device resources (and readings produced by the device service) as the unit of measure for any piece of data. Per the temperature example above, the unit of measure could have been \"F\" or \"C\", \"Celsius\" or \"Fahrenheit\", or any other representation. In other words, the associated unit of measure for all data in EdgeX was left to agreement and interpretation by the data provider/producer and EdgeX data consumer.
There are various specifications and standards around unit of measure. Specifically, there are several options to choose from as it relates to the exchange of data in electronic communications - and units of measure associated in that exchange. As examples, two big competing standards around EDI (electronic data exchange) that both have associated unit of measure codes are:
The Unified Code for Units of Measure provides an alternative list (not a standard) that is used by various organizations like OSGI and the Eclipse Foundation.
While standards exist, use by various open source projects (especially IoT/edge projects) is inconsistent and haphazard. Groups like oneM2M seem to define their own selection of units in specifications per vertical (home for example) while Kura doesn't even appear to use the UoM JSR (a Java related unit of measure specification for Java applications like Kura).
"},{"location":"design/adr/core/0022-UoM/#decision","title":"Decision","text":"It would be speculative and inappropriate for EdgeX to select a unit of measure standard which is not widely adopted in the industry or choose a static unit of measure list that is incomplete with regard to possible IoT / edge use case needs. At this time, there does not appear to be a single and unequivocal standard for units of measure that encompasses all EdgeX related use cases (now and in the future).
Therefore, EdgeX chooses not to select or adopt a unit of measure specification, standard, or code list to apply across the platform. Instead, EdgeX adopters will be allowed to optionally specify which unit of measure specification, standard, or unit of measure code list they would like used in their instance(s) of EdgeX.
"},{"location":"design/adr/core/0022-UoM/#specifying-the-units-of-measure","title":"Specifying the Units of Measure","text":"Units of measure allowed by the instance of EdgeX will be specified in a configuration file (in YAML format called uom.yaml
by default). Note: the UoM configuration is a separate configuration YAML file (separate from the metadata service configuration file - configuration.yaml
).
EdgeX 3.0
For EdgeX 3.0 the UoM definition file is changed to YAML instead of TOML format.
The units of measure in the configuration file can be attributed, optionally, to a specification, document, or other UoM definition source. The source
only helps provide the location of documentation about the origins and details of the units specified for the reader, but it will not be used or checked by EdgeX. An optional default source can be provided at the top level configuration (as shown in the examples below) so that other sources are only needed when there are specific units used that are not found in the default source.
The units of measure can be categorized for better organization and to allow for different sources to be specified for different units. The categories are defined by the YAML section names (the UoM dot labels).
Sample YAML unit of measure configuration
Source: reference to source for all UoM if not specified below\nUnits:\ntemperature:\nSource: www.weather.com\nValues:\n- C\n- F\n- K\nweights:\nSource: www.usa.gov/federal-agencies/weights-and-measures-division\nValues:\n- lbs\n- ounces\n- kilos\n- grams\n
"},{"location":"design/adr/core/0022-UoM/#specifying-the-uom-file-location","title":"Specifying the UoM File Location","text":"The location of the UoM file will be specified in core metadata's configuration (currently in res/configuration.yaml
) - see example A below.
Example Metadata Configuration - location of of the UoM configuration file
Writable:\nUoM:\nValidation: false ## false (meaning off) by default\n\n## in the non-writable area - example file specified to units of measure\nUoM:\nUoMFile: ./res/uom.yaml # the UoMFile location can be either absolute or relative path location\n
The location of the UoM file should point to an accessible file (relative to application executable or absolute path). The file must be something that the service can reach (ex: in shared volume, volume mount, etc.) in order to allow for the adopter to provide the units of measure independently during configuration/setup of the EdgeX instance without requiring a build of the metadata service or a reconstruction of the Docker image/container.
Info
In future versions, multiple UoM definition files might be specified. This may help the organization of the units in the future.
Note
The environmental overrides can be used to specify and override the location of the UoM configuration file.
Info
It was discussed that the file location could be done via URI and even allow for HTTP, HTTPS or other protocol access of the file. For this first implementation, it was decided (per Monthly Architect's meeting of 2/28/22) to only allow for a simple file path reference (relative or absolute). Future implementation can consider URI use.
"},{"location":"design/adr/core/0022-UoM/#specifying-validation-on-or-off","title":"Specifying Validation on or off","text":"Additionally, in metadata's configuration, a configuration option for unit of measure validation being on
or off
will be provided (note Validation
in both example above). The location of the UoM file is static, but the ability to turn validation on/off is dynamic and therefore in the writable area of configuration. For backward compatibility, validation will be off by default.
Note
on
and off
are specified by boolean values true
and false
in the configuration file.
Core metadata will read the units of measure from its configuration file. Like all configuration information, this data will be stored in the configuration service (Consul today) on initial startup of the core metadata service.
When validation is turned on
(Writable.UoM.validation is set to true), all device profile units
(in device resource, device properties) will be validated against the list of units of measure by core metadata. In other words, when a device profile is created or updated or when a device resource is added or updated via the core metadata API, the units specified in the device resource's units
field (see resource example below) will be checked against the valid list of UoM provided via core metadata configuration. If the units
value matches any one of the configuration units of measure, then the device resource is considered valid - allowing the create or update operation to continue.
If the units
value does not match any one of the configuration units of measure, then the device profile or device resource operation (create or update) is rejected (error code 500 is returned) and an appropriate error message is returned in the response to the caller of the core metadata API.
Note
Importantly (as discussed in Core WG 2/17/22), the units
field on a profile is and shall remain optional. If the units
field is not specified in the device profile, then it is assumed that the device resource does not have well defined units of measure. In other words, core metadata will not fail a profile with no units
field specified on a device resource.
In the example device resource below, core metadata would check that C
is in the list of units of measure in the configuration.
deviceResources:\n-\nname: \"RoomTemperature\"\nisHidden: false\ndescription: \"Room Temperature x10 \u00b0C (Read Only)\"\nattributes:\n{ primaryTable: \"INPUT_REGISTERS\", startingAddress: 3, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: 0.1\nunits: \"C\" ## core metadata checks this value against its list of valid units of measure\n
By checking the units
property of the device resources (on creation or updates of the device profile or create/update of the device resources), and rejecting any additions or changes that include non-valid units of measure, then we can be assured that all readings created by the device service will contain valid units by default (assuming that validation of the units of measure is always on) or that the units are inconsequential (when the units
field is not specified for a device resource). This means, the units in a reading do not need to be validated separately.
Based on discussion in the Core WG meeting of 2/3/22, it was decided that without validation and some valid list of actual UoM, the ADR was just adding metadata to the profile and thus did not even rise to the level of \"significant\" architectural decision. It was further felt that in order to really provide any value to adopters and to get adherence to their chosen units of measure, EdgeX had to allow for a valid list of units of measure to be specified and be used to check profile units - but in a way that is easy to configure/provide without having to rebuild a service for example. If the units of measure were defined just in the standard configuration file, it would make it hard to change this list in deployments.
This new UoM ADR is the result of that discussion. In general, it specifies, through adopter provided configuration, the exact unit of measures that are allowed for the EdgeX instance and any optional reference (such as a specification) where those units are defined. It does so through a separate core metadata configuration file making it easier to change.
"},{"location":"design/adr/core/0022-UoM/#use-of-senml","title":"Use of SenML","text":"SenML was suggested as a specification (currently a proposed standard) from which EdgeX may draw some guidance or inspiration with regard to unit of measure representation in \"simple sensor measurements and device parameters.\"
In fact, SenML defines a simple data model (in JSON, CBOR, XML, EXI) for the exchange of what EdgeX would call readings. A JSON example is below:
[{\"n\":\"urn:dev:ow:10e2073a01080063\",\"u\":\"Cel\",\"v\":23.1}]\n
In the example above, the array (what EdgeX would consider a collection of readings) has a single SenML Record with a measurement for a sensor named \"urn:dev:ow:10e2073a01080063\" with a current value of 23.1 for degrees measured in Celsius (Cel) unit of measure. However, SenML suggests the use of short names for the keys in most cases, but long names could be used. In which case, the JSON SenML reading would look like the following:
[{\"Name\":\"urn:dev:ow:10e2073a01080063\",\"Unit\":\"Cel\",\"Value\":23.1}]\n
In this way, the parallels to EdgeX model are, by accident, uncanny - at least in the JSON instance. SenML goes to much more depth to provide extensions and more definitions around measurements. But at its base, the EdgeX format is not unlike SenML and could easily be aligned with SenML in the future (or allow for an application service to export in SenML with an additional function fairly easily and if there were demand).
However, on the basis of \"unit of measure\", SenML is actually light on details. With regard to UoM, the SenML specification only says:
Quote
If the Record has no Unit, the Base Unit is used as the Unit. Having no Unit and no Base Unit is allowed; any information that may be required about units applicable to the value then needs to be provided by the application context.
A SenML Units Registry provides for a list of unit symbols (the \"SenML Units registry\"). This list could be used as one of the sources for EdgeX UoM definition.
SenML should be examined for future versions of EdgeX with regard to data model, but its relevance to unit of measure is believed to be minimal at this time.
"},{"location":"design/adr/core/0022-UoM/#future-considerationsadditionsimprovements","title":"Future Considerations/Additions/Improvements","text":"In the future, validation may be turned on
or off
per device service; allowing the decision to validate units of measure to be accomplished on a service or even allow the device service to validate/not validate based on particular devices.
In the future, additional criteria may be added to the unit of measure information to all for more specific (or allowing more granularity) validation. For example, the category of units of measure could be specified in a device resource so that a profile's units are validated against specific sources or collections of unit of measure.
Use of URI to specify the unit of measures file was discussed. This would be novel with regard to providing EdgeX information. Per core working group of 2/17/22 and then again at the monthly architect's meeting of 2/28/22, we may look to use a URI to specify a configuration file to specify UoM in the future. Indeed, URIs may be used (an EdgeX 3.0 consideration) to point to device profiles, configuration files, and other information in the future. This would even allow multiple EdgeX instances to use the same configuration or profile (multiple EdgeX instances using the same URI to use a shared profile for example). However, it was deemed scope creep and too much to do for this first iteration.
Initially, this ADR allowed for the UoM to also, or alternately, be defined in the standard metadata service configuration file (`configuration.yaml`). During the Core WG meeting of 3/3/22, it was decided to simplify the design and strictly limit UoM to a separate configuration file. If future use cases or adopters request inline definition, this can be implemented in a future release.
"},{"location":"design/adr/core/0022-UoM/#consequences","title":"Consequences","text":"Approved
"},{"location":"design/adr/device-service/0002-Array-Datatypes/#context","title":"Context","text":"The current data model does not directly provide for devices which provide array data. Small fixed-length arrays may be handled by defining multiple device resources - one for each element - and aggregating them via a resource command. Other array data may be passed using the Binary type. Neither of these approaches is ideal: the binary data is opaque and any service processing it would need specific knowledge to do so, and aggregation presents the device service implementation with a multiple-read request that could in many cases be better handled by a single request.
This design adds arrays of primitives to the range of supported types in EdgeX. It comprises an extension of the DeviceProfile model, and an update to the definition of Reading.
"},{"location":"design/adr/device-service/0002-Array-Datatypes/#decision","title":"Decision","text":""},{"location":"design/adr/device-service/0002-Array-Datatypes/#deviceprofile-extension","title":"DeviceProfile extension","text":"The permitted values of the Type
field in PropertyValue
are extended to include: \"BoolArray\", \"Uint8Array\", \"Uint16Array\", \"Uint32Array\", \"Uint64Array\", \"Int8Array\", \"Int16Array\", \"Int32Array\", \"Int64Array\", \"Float32Array\", \"Float64Array\"
In the API (v1 and v2), Reading.Value
is a string representation of the data. If this is maintained, the representation for Array types will follow the JSON array syntax, ie [\"value1\", \"value2\", ...]
Any service which processes Readings will need to be reworked to account for the new Reading type.
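As an illustration of the consequence for consuming services, the sketch below parses a Reading.Value that uses the JSON array syntax into native integers; the helper is hypothetical, not part of any SDK:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// parseInt32Array decodes a Reading.Value that uses the JSON array syntax
// described above (e.g. ["1", "34", "-5"]) into native integers.
func parseInt32Array(value string) ([]int32, error) {
	var elements []string
	if err := json.Unmarshal([]byte(value), &elements); err != nil {
		return nil, err
	}
	result := make([]int32, 0, len(elements))
	for _, e := range elements {
		n, err := strconv.ParseInt(e, 10, 32)
		if err != nil {
			return nil, err
		}
		result = append(result, int32(n))
	}
	return result, nil
}

func main() {
	values, err := parseInt32Array(`["1", "34", "-5"]`)
	fmt.Println(values, err) // [1 34 -5] <nil>
}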
"},{"location":"design/adr/device-service/0002-Array-Datatypes/#device-service-considerations","title":"Device Service considerations","text":"The API used for interfacing between device SDKs and devices service implementations contains a local representation of reading values. This will need to be updated in line with the changes outlined here. For C, this will involve an extension of the existing union type. For Go, additional fields may be added to the CommandValue
structure.
Processing of numeric data in the device service, ie offset
, scale
etc will not be applied to the values in an array.
Approved
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#context","title":"Context","text":"This ADR details the REST API to be provided by Device Service implementations in EdgeX version 2.x. As such, it supercedes the equivalent sections of the earlier \"Device Service Functional Requirements\" document. These requirements should be implemented as far as possible within the Device Service SDKs, but they also apply to any Device Service implementation.
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#decision","title":"Decision","text":""},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#common-endpoints","title":"Common endpoints","text":"The DS should provide the REST endpoints that are expected of all EdgeX microservices, specifically:
config, metrics, ping, version

It should also provide callback endpoints for metadata updates:

| Endpoint | Methods |
| --- | --- |
| callback/device | PUT and POST |
| callback/device/name/{name} | DELETE |
| callback/profile | PUT |
| callback/watcher | PUT and POST |
| callback/watcher/name/{name} | DELETE |

| Parameter | Meaning |
| --- | --- |
| {name} | the name of the device or watcher |

These endpoints are used by the Core Metadata service to inform the device service of metadata updates. Endpoints are defined for each of the objects of interest to a device service, ie Devices, Device Profiles and Provision Watchers. On receipt of calls to these endpoints the device service should update its internal state accordingly. Note that the device service does not need to be informed of the creation or deletion of device profiles, as these operations may only occur where no devices are associated with the profile. To avoid stale profile entries the device service should delete a profile from its cache when the last device using it is deleted.
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#object-deletion","title":"Object deletion","text":"When an object is deleted, the Metadata service makes a DELETE
request to the relevant callback/{type}/name/{name} endpoint.
When an object is created or updated, the Metadata service makes a POST
or PUT
request respectively to the relevant callback/{type} endpoint. The payload of the request is the new or updated object, ie one of the Device, DeviceProfile or ProvisionWatcher DTOs.
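A minimal sketch of what such a callback consumer could look like on the device service side, assuming a reduced stand-in for the Device DTO and a simplified route (the real SDKs already implement this handling):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
)

// Device is an illustrative, reduced stand-in for the Device DTO.
type Device struct {
	Name        string `json:"name"`
	ProfileName string `json:"profileName"`
	AdminState  string `json:"adminState"`
}

var (
	mu    sync.Mutex
	cache = map[string]Device{}
)

// deviceCallback updates the device service's internal cache on POST/PUT callbacks.
func deviceCallback(w http.ResponseWriter, r *http.Request) {
	var d Device
	if err := json.NewDecoder(r.Body).Decode(&d); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	mu.Lock()
	cache[d.Name] = d // add or update the cached device
	mu.Unlock()
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/api/v2/callback/device", deviceCallback)
	fmt.Println(http.ListenAndServe(":59999", nil)) // placeholder port
}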
device/name/{name}/{command} GET and PUT

| Parameter | Meaning |
| --- | --- |
| {name} | the name of the device |
| {command} | the command name |

The command specified must match a deviceCommand or deviceResource name in the device's profile
body (for PUT
): An application/json
SettingRequest, which is a set of key/value pairs where the keys are valid deviceResource names, and the values provide the command argument for that resource. Example: {\"AHU-TargetTemperature\": \"28.5\", \"AHU-TargetBand\": \"4.0\"}
response body: A successful GET
operation will return a JSON-encoded EventResponse object, which contains one or more Readings. Example: {\"apiVersion\":\"v2\",\"deviceName\":\"Gyro\",\"origin\":1592405201763915855,\"readings\":[{\"deviceName\":\"Gyro\",\"name\":\"Xrotation\",\"value\":\"124\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Yrotation\",\"value\":\"-54\",\"origin\":1592405201763915855,\"valueType\":\"int32\"},{\"deviceName\":\"Gyro\",\"name\":\"Zrotation\",\"value\":\"122\",\"origin\":1592405201763915855,\"valueType\":\"int32\"}]}
This endpoint is used for obtaining readings from a device, and for writing settings to a device.
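For illustration, a settings write using the SettingRequest example above might look like the following; the device name, command name, host, and port are placeholders:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Illustrative only: host, port and API prefix depend on the deployment,
	// and AHU-Thermostat / AHU-TargetSettings are hypothetical names.
	url := "http://localhost:59999/api/v2/device/name/AHU-Thermostat/AHU-TargetSettings"
	payload := []byte(`{"AHU-TargetTemperature": "28.5", "AHU-TargetBand": "4.0"}`)

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(payload))
	if err != nil {
		fmt.Println(err)
		return
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}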
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#data-formats","title":"Data formats","text":"The values obtained when readings are taken, or used to make settings, are expressed as strings.
| Type | EdgeX types | Representation |
| --- | --- | --- |
| Boolean | Bool | \"true\" or \"false\" |
| Integer | Uint8-Uint64, Int8-Int64 | Numeric string, eg \"-132\" |
| Float | Float32, Float64 | Decimal with exponent, eg \"1.234e-5\" |
| String | String | string |
| Binary | Bytes | octet array |
| Array | BoolArray, Uint8Array-Uint64Array, Int8Array-Int64Array, Float32Array, Float64Array | JSON Array, eg [\"1\", \"34\", \"-5\"] |

Notes:
- The presence of a Binary reading will cause the entire Event to be encoded using CBOR rather than JSON
- Arrays of String and Binary data are not supported
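The following sketch shows one way native values could be rendered into the string representations in the table above; the helper names are illustrative and not part of any SDK:

package main

import (
	"fmt"
	"strconv"
)

func formatBool(b bool) string     { return strconv.FormatBool(b) }               // "true" / "false"
func formatInt(i int64) string     { return strconv.FormatInt(i, 10) }            // e.g. "-132"
func formatFloat(f float64) string { return strconv.FormatFloat(f, 'e', -1, 64) } // e.g. "1.234e-05"

func formatInt32Array(a []int32) string {
	out := "["
	for i, v := range a {
		if i > 0 {
			out += ", "
		}
		out += `"` + strconv.FormatInt(int64(v), 10) + `"`
	}
	return out + "]" // e.g. ["1", "34", "-5"]
}

func main() {
	fmt.Println(formatBool(true), formatInt(-132), formatFloat(1.234e-5))
	fmt.Println(formatInt32Array([]int32{1, 34, -5}))
}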
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#readings-and-events","title":"Readings and Events","text":"A Reading represents a value obtained from a deviceResource. It contains the following fields
| Field name | Description |
| --- | --- |
| deviceName | The name of the device |
| profileName | The name of the Profile describing the Device |
| resourceName | The name of the deviceResource |
| origin | A timestamp indicating when the reading was taken |
| value | The reading value |
| valueType | The type of the data |

Or for binary Readings, the following fields:

| Field name | Description |
| --- | --- |
| deviceName | The name of the device |
| profileName | The name of the Profile describing the Device |
| resourceName | The name of the deviceResource |
| origin | A timestamp indicating when the reading was taken |
| binaryValue | The reading value |
| mediaType | The MIME type of the data |

An Event represents the result of a GET
command. If the command names a deviceResource, the Event will contain a single Reading. If the command names a deviceCommand, the Event will contain as many Readings as there are deviceResources listed in the deviceCommand.
The fields of an Event are as follows:
| Field name | Description |
| --- | --- |
| deviceName | The name of the Device from which the Readings are taken |
| profileName | The name of the Profile describing the Device |
| origin | The time at which the Event was created |
| readings | An array of Readings |

"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#query-parameters","title":"Query Parameters","text":"Calls to the device endpoints may include a Query String in the URL. This may be used to pass parameters relating to the request to the device service. Individual device services may define their own parameters to control specific behaviors. Parameters beginning with the prefix ds-
are reserved to the Device SDKs and the following parameters are defined for GET requests:
| Parameter | Valid values | Default | Meaning |
| --- | --- | --- | --- |
| ds-pushevent | \"true\" or \"false\" | \"false\" | If set to true, a successful GET will result in an event being pushed to the EdgeX system |
| ds-returnevent | \"true\" or \"false\" | \"true\" | If set to false, there will be no Event returned in the http response |

EdgeX 3.0

The valid values of ds-pushevent and ds-returnevent were changed to true/false instead of yes/no in EdgeX 3.0.
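As an illustration of the query parameters, a caller could combine them to push an Event into EdgeX without returning it in the response; the host, port, and API prefix below are placeholders, and the device/resource names are taken from the earlier example:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder address; the device service's actual host/port vary by deployment.
	url := "http://localhost:59999/api/v2/device/name/Gyro/Xrotation" +
		"?ds-pushevent=true&ds-returnevent=false"

	resp, err := http.Get(url) // with ds-returnevent=false the response should carry no Event
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}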
A Device in EdgeX has two states associated with it: the Administrative state and the Operational state. The Administrative state may be set to LOCKED
(normally UNLOCKED
) to block access to the device for administrative reasons. The Operational state may be set to DOWN
(normally UP
) to indicate that the device is not currently working. In either case access to the device via this endpoint will be denied and HTTP 423 (\"Locked\") will be returned.
A number of simple data transformations may be defined in the deviceResource. The table below shows these transformations in the order in which they are applied to outgoing data, ie Readings. The transformations are inverted and applied in reverse order for incoming data.
| Transform | Applicable reading types | Effect |
| --- | --- | --- |
| mask | Integers | The reading is masked (bitwise-and operation) with the specified value. |
| shift | Integers | The reading is bit-shifted by the specified value. Positive values indicate right-shift, negative for left. |
| base | Integers and Floats | The reading is replaced by the specified value raised to the power of the reading. |
| scale | Integers and Floats | The reading is multiplied by the specified value. |
| offset | Integers and Floats | The reading is increased by the specified value. |

The operation of the mask transform on incoming data (a setting) is that the value to be set on the resource is the existing value bitwise-anded with the complement of the mask, bitwise-ored with the value specified in the request.
ie, new-value = (current-value & !mask) | request-value
The combination of mask and shift can therefore be used to access data contained in a subdivision of an octet.
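A small sketch of the mask behaviour described above, in both directions (the helper names are illustrative):

package main

import "fmt"

// outgoingMask is applied to a reading on the way out of the device service.
func outgoingMask(reading, mask uint32) uint32 {
	return reading & mask
}

// incomingMask computes the value to write back for a setting:
// new-value = (current-value & ^mask) | request-value
func incomingMask(current, mask, request uint32) uint32 {
	return (current &^ mask) | request
}

func main() {
	// Extract the low nibble of an octet using a mask (optionally combined with shift).
	fmt.Printf("0x%02X\n", outgoingMask(0xA7, 0x0F)) // 0x07
	// Update only the masked bits, preserving the rest of the register.
	fmt.Printf("0x%02X\n", incomingMask(0xA7, 0x0F, 0x03)) // 0xA3
}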
It is possible that following the application of the specified transformations, a value may exceed the range that may be represented by its type. Should this occur on a set operation, a suitable error should be logged and returned, along with the Bad Request
http code 400. If it occurs as part of a get operation, the Reading's value should be set to the String \"overflow\"
and its valueType to String
.
Assertions are another attribute in a device resource's PropertyValue, which specifies a string against which the reading value is compared. If the comparison fails, the http request returns a string of the form \"Assertion failed for device resource: <resource name>, with value: <value>\"; this also has a side-effect of setting the device operatingstate to DISABLED
. A 500 status code is also returned. Note that the error response and status code should be returned regardless of the ds-returnevent
setting.
Assertions are also checked where an event is being generated due to an AutoEvent, or asynchronous readings are pushed. In these cases if the assertion is triggered, an error should be logged and the operating state should be set as above.
Assertions are not checked for settings, only for readings.
Mappings may be defined in a deviceCommand. These allow Readings of string type to be remapped. Mappings are applied after assertions are checked, and are the final transformation before Readings are created. Mappings are also applied, but in reverse, to settings (PUT
request data).
Each Device has as part of its metadata a timestamp named lastConnected
; this indicates the most recent occasion on which the device was successfully interacted with. The device service should update this timestamp every time a GET or PUT operation succeeds, unless it has been configured not to do so (eg for performance reasons).
POST
A call to this endpoint triggers the device discovery process, if enabled. See Discovery Design for details.
"},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#consequences","title":"Consequences","text":""},{"location":"design/adr/device-service/0011-DeviceService-Rest-API/#changes-from-v1x-api","title":"Changes from v1.x API","text":"GET
requests take parameters controlling what is to be done with resulting Events, and the default behavior does not send the Event to core-data

OpenAPI definition of v2 API : https://github.com/edgexfoundry/device-sdk-go/blob/master/openapi/v2/device-sdk.yaml
Device Service Functional Requirements (Geneva) : https://wiki.edgexfoundry.org/download/attachments/329488/edgex-device-service-requirements-v11.pdf?version=1&modificationDate=1591621033000&api=v2
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/","title":"Device Service Filters","text":""},{"location":"design/adr/device-service/0012-DeviceService-Filters/#status","title":"Status","text":"** Approved ** (by TSC vote on 3/15/21)
In EdgeX today, sensor/device data collected can be \"filtered\" by application services before being exported or sent to some north side application or system. Built-in application service functions (available through the app services SDK) allow EdgeX event/reading objects to be filtered by device name or by device ResourceName. That is, event/readings can be filtered by:
There are potentially two places where \"filtering\" in a device service could be useful.
Event/Reading
objects and pushes those to core data). A sensor data filter would allow the device service to essentially ignore some of the raw sensed data. This would allow for some device service optimization in that the device service would not have to perform type transformations and creation of event/reading objects if the data can be eliminated at this early stage. This first level filtering would, if put in place, likely occur in the code where the read command is handled by the ProtocolDriver
.Event/Reading
objects, there is a desire to filter some of the Readings
based on the Reading
values or Reading
name (which is the device ResourceName) or some combination of value and name.At this time, this design only addresses the need for the second filter (Reading Filter). At the time of this writing, no applicable use case has yet to be defined to warrant the Sensor Data Filter.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#reading-filters","title":"Reading Filters","text":"Reading filters will allow, not unlike application service filter functions today, to have Readings
in an Event
to be removed if:
the value was outside or inside some range, or the value was greater than, less than or equal to some value
Reading
value (numeric) of a Reading
outside a specified range (min/max) described in the service configuration. Thus avoiding sending in outlier or jittery data Readings
that could negatively effect analytics.Reading
value (numeric) equal to or near (with in some specified range) the last reading. This allows a device service to reduce sending in Event/Readings
that do not represent any significant change. This differs from the already implemented onChangeOnly in that it is filtering Readings
within a specified degree of change. Note: this feature would require caching of readings which has not fully been implemented in the SDK. The existing mechanism for autoevents
provides a partial cache. Added for future reference, but this feature would not be accomplished in the initial implementation; requiring extra design work on caching to be implemented.the value was the same as some or not the same as some specified value or values (for strings, boolean and other non-numeric values)
temperature
or humidity
as example device resources.Unlike application services, there is not a need to filter on a device name (or identifier). Simply disable the device in the device service if all Event/Readings
are to be stopped for the device.
In the case that all Readings
of an Event
are filtered, it is assumed the entire Event
is deemed to be worthless and not sent to core data by the device service. If only some Readings
from and Event
are filtered, the Event
minus the filtered Readings
would be sent to core data.
The filter behaves the same whether the collection of Readings
and Events
is triggered by a scheduled collection of data from the underlying sensor/device or triggered by a command request (as from the command service). Therefore, the call for a command request still results in a successful status code and a return of no results (or partial results) if the filter causes all or some of the readings to be removed.
A new function interface shall be defined that, when implemented, performs a Reading Filter operation. A ReadingFilter function would take a parameter (an Event
containing readings), check whether the Readings
of the Event
match on the filtering configuration (see below) and if they do then remove them from the Event
. The ReadingFilter function would return the Event
object (minus filtered Readings
) or nil
if the Event
held no more Readings
. Pseudo code for the generic function is provided below. The results returned will include a boolean to indicate whether any Reading
objects were removed from the Event
(allowing the receiver to know if some were filtered from the original list).
func (f Filter) ReadingFilter(lc logger.LoggingClient, event *models.Event) (*models.Event, error, bool) {
    // Depending on the implementation: filter values in/out of a range, >, <, =, same,
    // not same, from a particular name (device resource), etc.
    // The final bool indicates whether any Readings were filtered from the Event.
    remaining := applyFilterCriteria(f, event.Readings) // pseudo step: Readings that survive the filter
    if len(remaining) == 0 {
        return nil, nil, true // every Reading was filtered; drop the whole Event
    }
    removed := len(remaining) < len(event.Readings)
    event.Readings = remaining
    return event, nil, removed
}
Based on current needs/use cases, implementations of the function interface could include the following filter functions:
func (f Filter) FilterByValue(lc logger.LoggingClient, event *models.Event) (*models.Event, error, bool) {}

func (f Filter) FilterByResourceNamesMatch(lc logger.LoggingClient, event *models.Event) (*models.Event, error, bool) {}
Note
The app functions SDK comes with FilterByDeviceName
and FilterByResourceName
functions today. The FilterByResourceName would behave similarly to FilterByResourceNameMatch.
The Filter structure houses the configuration parameters for which the filter functions work and filter on.
Note
The app functions SDK uses a fairly simple Filter structure.
type Filter struct {
    FilterValues []string
    FilterOut    bool
}
Given the collection of filter operations (in range, out of range, equal or not equal), the following structure is proposed:
type Filter struct {
    FilterValues       []string
    TargetResourceName string
    FilterOp           string // enum of in (in range inclusive), out (outside a range exclusive), eq (equal) or ne (not equal)
}
Examples use of the Filter structure to specify filtering:
Filter{FilterValues: []string{"10", "20"}, TargetResourceName: "Int64", FilterOp: "in"}      // filter for those Int64 readings with values between 10-20 inclusive
Filter{FilterValues: []string{"10", "20"}, TargetResourceName: "Int64", FilterOp: "out"}     // filter for those Int64 readings with values outside of 10-20
Filter{FilterValues: []string{"8", "10", "12"}, TargetResourceName: "Int64", FilterOp: "eq"} // filter for those Int64 readings with values of 8, 10, or 12
Filter{FilterValues: []string{"8", "10"}, TargetResourceName: "Int64", FilterOp: "ne"}       // filter for those Int64 readings with values not equal to 8 or 10
Filter{FilterValues: []string{"Int32", "Int64"}, FilterOp: "eq"} // to be used with FilterByResourceNamesMatch; filter for resource names of Int32 or Int64
Filter{FilterValues: []string{"Int32"}, FilterOp: "ne"}          // to be used with FilterByResourceNamesMatch; filter for resource names not equal to (excluding) Int32
A NewFilter function creates, initializes and returns a new instance of the filter based on the configuration provided.
func NewReadingNameFilter(filterValues []string, targetResourceName string, filterOp string) Filter {
    return Filter{FilterValues: filterValues, TargetResourceName: targetResourceName, FilterOp: filterOp}
}
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#sharing-filter-functions","title":"Sharing filter functions","text":"If one were to explore the filtering functions in the app functions SDK filter.go (both FilterByDeviceName
and FilterByValueDescriptor
), the filters operate on the Event
model object and return the same objects (Event
or nil). Ideally, since both app services and device services generally share the same interface model (from go-mod-core-contracts
), it would be desirable to share the same filter functions between SDKs and associated services.
Decisions on how to do this in Go - whether by shared module for example - is left as a future release design and implementation task - and as the need for common filter functions across device services and application services are identified in use cases. C needs are likely to be handled in the SDK directly.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#additional-design-considerations","title":"Additional Design Considerations","text":"As Device Services do not have the concept of a functions pipeline like application services do, consideration must be given as to how and where to:
At this time, custom filters will not be supported as the custom filters would not be known by the SDK and therefore could not be specified in configuration. This is consistent with the app functions SDK and filtering.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#function-inflection-point","title":"Function Inflection Point","text":"It is precisely after the convert to Event/Reading
objects (after the async readings are assembled into events) and before returning that result in common.SendEvent
(in utils.go) function that the device service should invoke the required filter functions. In the existing V1 implementation of the device-sdk-go, commands, async readings, and auto-events all call the function common.SendEvent()
. Note: V2 implementation will require some re-evaluation of this inflection point. Where possible, the implementation should locate a single point of inflection. In the C SDK, it is likely that the filters will be called before conversion to Event/Reading objects - they will operate on commandresult objects (equivalent to CommandValues).
The order in which functions are called is important when more than one filter is provided. The order that functions are called should be reflected in the order listed in the configuration of the filters.
Events containing binary values (event.HasBinaryValue) will not be filtered. Future releases may include binary value filters.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#setting-filter-function-and-configuration","title":"Setting Filter Function and Configuration","text":"When filter functions are shared (or appear to be doing the same type of work) between SDKs, the configuration of the similar filter functions should also look similar. The app functions SDK configuration model for filters should therefore be followed.
While device services do not have pipelines, the inclusion and configuration of filters for device services should have a similar look (to provide symmetry with app services). The configuration has to provide the functions required and parameters to make the functions work - even though the association to a pipeline is not required. Below is the common app service configuration as it relates to filters:
[Writable.Pipeline]
ExecutionOrder = "FilterByDeviceName, TransformToXML, SetOutputData"

[Writable.Pipeline.Functions.FilterByDeviceName]
  [Writable.Pipeline.Functions.FilterByDeviceName.Parameters]
  DeviceNames = "Random-Float-Device,Random-Integer-Device"
  FilterOut = "false"
Suggested and hypothetical configuration for the device service reading filters should look something like that below.
[Writable.Filters]
# filter readings where resource name equals Int32
ExecutionOrder = "FilterByResourceNamesMatch, FilterByValue"

[Writable.Filters.Functions.FilterByResourceNamesMatch]
  [Writable.Filters.Functions.FilterByResourceNamesMatch.Parameters]
  FilterValues = "Int32"
  FilterOp = "eq"

# filter readings where the resource name is Int64 and the values are between 10 and 20
[Writable.Filters.Functions.FilterByValue]
  [Writable.Filters.Functions.FilterByValue.Parameters]
  TargetResourceName = "Int64"
  FilterValues = ["10", "20"]
  FilterOp = "in"
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#decision","title":"Decision","text":"To be determined
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#consequences","title":"Consequences","text":"This design does not take into account potential changes found with the V2 API.
"},{"location":"design/adr/device-service/0012-DeviceService-Filters/#references","title":"References","text":""},{"location":"design/adr/devops/0007-Release-Automation/","title":"Release Automation","text":""},{"location":"design/adr/devops/0007-Release-Automation/#status","title":"Status","text":"Approved by TSC 04/08/2020
"},{"location":"design/adr/devops/0007-Release-Automation/#context","title":"Context","text":"EdgeX Foundry is a framework composed of microservices to ease development of IoT/Edge solutions. With the framework getting richer, project growth, the number of artifacts to be released has increased. This proposal outlines a method for automating the release process for the base artifacts.
"},{"location":"design/adr/devops/0007-Release-Automation/#requirements","title":"Requirements","text":""},{"location":"design/adr/devops/0007-Release-Automation/#release-artifact-definition","title":"Release Artifact Definition","text":"For the scope of Hanoi release artifact types are defined as:
This list is likely to expand in future releases.
*The building and publishing of snaps was removed from community scope in September 2020 and is managed outside the community by Canonical.
"},{"location":"design/adr/devops/0007-Release-Automation/#general-requirements","title":"General Requirements","text":"As the EdgeX Release Czar I gathered the following requirements for automating this part of the release.
The code that will manage the release automation for EdgeX Foundry will live in a repository called cd-management
. This repository will have a branch named release
that will track the releases of artifacts off the main
branch of the EdgeX Foundry repositories.
EdgeX Foundry has this idea of multiple release streams that basically coincide with different named branches in GitHub. For the majority of the main releases we will be targeting those off the main
branch. In our cd-management
repository we will have a release
branch that will track the main
branches EdgeX repositories. In the future we will mark a specific release for long term support (LTS). When this happens we will have to branch off main
in the EdgeX repositories and create a separate release stream for the LTS. The suggestion at that point will be to branch off the release
branch in cd-management
as well and use this new release branch to track the LTS branches in the EdgeX repositories.
Go modules, Application and Device SDKs only release a GitHub tag as their release. Go modules, Application and Device SDKs are set up to automatically increment a developmental version tag on each merge to main
. (IE: 1.0.0-dev.1 -> 1.0.0-dev.2)
The release automation for Go Modules, Device and Application SDKs is used to set the final release version git tag. (IE: 1.0.0-dev.X -> 1.0.0) For each release, the Go Modules, Device and Application SDK repositories will be tagged with the release version.
"},{"location":"design/adr/devops/0007-Release-Automation/#core-services-including-security-and-system-management-services-application-services-device-services-and-supporting-docker-images","title":"Core Services (Including Security and System Management services), Application Services, Device Services and Supporting Docker Images","text":""},{"location":"design/adr/devops/0007-Release-Automation/#during-development_1","title":"During Development","text":"For the Core Services, Application Services, Device Services and Supporting Docker Images we release Github tags and docker images. On every merge to the main
branch we will do the following; increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2), stage docker images in our Nexus repository (docker.staging).
The release automation will need to do the following:
For supporting release assets (e.g. edgex-cli) we release GitHub tags on every merge to the main
branch. For every merge to main
we will do the following; increment a developmental version tag on GitHub, (IE: 1.0.0-dev.1 -> 1.0.0-dev.2) and store the build artifacts in our Nexus repository.
For EdgeX releases the release automation will set the final release version by creating a git tag (e.g. 1.0.0-dev.X -> 1.0.0) and produce a Github Release containing the binary assets targeted for release.
"},{"location":"design/adr/devops/0010-Release-Artifacts/","title":"Release Artifacts","text":""},{"location":"design/adr/devops/0010-Release-Artifacts/#status","title":"Status","text":"Approved
"},{"location":"design/adr/devops/0010-Release-Artifacts/#context","title":"Context","text":"During the Geneva release of EdgeX Foundry the DevOps WG transformed the CI/CD process with new Jenkins pipeline functionality. After this new functionality was added we also started adding release automation. This new automation is outlined in ADR 0007 Release Automation. However, in ADR 0007 Release Automation only two release artifact types are outlined. This document is meant to be a living document to try to outlines all currently supported artifacts associated with an EdgeX Foundry release, and should be updated if/when this list changes.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#release-artifact-types","title":"Release Artifact Types","text":""},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-images","title":"Docker Images","text":"Tied to Code Release? Yes
Docker images are released for every named release of EdgeX Foundry. During development the community releases images to the docker.staging
repository in Nexus. At the time of release we promote the last tested image from docker.staging
to docker.release
. In addition to that we will publish the docker image on DockerHub.
Retention Policy: 90 days since last download
Contains: Docker images that are not expected to be released. This contains images to optimize the builds in the CI infrastructure. The definitions of these docker images can be found in the edgexfoundry/ci-build-images Github repository.
Docker Tags Used: Version, Latest
"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerstaging","title":"docker.staging","text":"Retention Policy: 180 days since last download
Contains: Docker images built for potential release and testing purposes during development.
Docker Tags Used: Version (ie: v1.x), Release Branch (master, fuji, etc), Latest
"},{"location":"design/adr/devops/0010-Release-Artifacts/#dockerrelease","title":"docker.release","text":"Retention Policy: No automatic removal. Requires TSC approval to remove images from this repository.
Contains: Officially released docker images for EdgeX.
Docker Tags Used:\u2022Version (ie: v1.x), Latest
Nexus Cleanup Policies Reference
"},{"location":"design/adr/devops/0010-Release-Artifacts/#docker-compose-files","title":"Docker Compose Files","text":"Tied to Code Release? Yes
Docker compose files are released alongside the docker images for every release of EdgeX Foundry. During development the community maintains compose files a folder named nightly-build
. These compose files are meant to be used by our testing frameworks. At the time of release the community makes compose files for that release in a folder matching it's name. (ie: geneva
)
Tied to Code Release? No
After Docker images are published to DockerHub, automation should be run to update the image Overviews and Descriptions of the necessary images. This automation is located in the edgex-docker-hub-documentation
branch of the cd-management repository. In preparation for the release the community makes changes to the Overview and Description metadata as appropriate. The Release Czar will coordinate the execution of the automation near the release time.
Tied to Code Release? No
EdgeX Foundry releases a set of documentation for our project at http://docs.edgexfoundry.org. This page is a Github page that is managed by the edgex/foundry/edgex-docs Github repository. As a community we make our best effort to keep these docs up to date. On this page we are also versioning the docs with the semantic versions of the named releases. As a community we try to version our documentation site shortly after the official release date but documentation changes are addressed as we find them throughout the release cycle.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#github-tags","title":"GitHub Tags","text":"Tied to Code Release? Yes, for the final semantic version
Github tags are used to track the releases of EdgeX Foundry. During development the tags are incremented automatically for each commit using a development suffix (ie: v1.1.1-dev.1
-> v1.1.1-dev.2
). At the time of release we release a tag with the final semantic version (ie: v1.1.1
).
Tied to Code Release? Yes
The building of snaps was removed from community scope in September 2020 but are still available on the snapcraft store.
Canonical publishes daily arm64 and amd64 releases of the following snaps to latest/edge in the Snap Store. These builds take place on the Canonical Launchpad platform and use the latest code from the master branch of each EdgeX repository, versioned using the latest git tag.
edgexfoundry edgex-app-service-configurable edgex-device-camera edgex-device-rest edgex-device-modbus edgex-device-mqtt edgex-device-grove edgex-cli (work-in-progress) Note - this list may expand over time.
At code freeze the edgexfoundry snap revision in the edge channel is promoted to latest/beta and $TRACK/beta. Publishing to beta will trigger the Canonical checkbox automated tests, which include tests on a variety of hardware hosted by Canonical.
When the project tags a release of any of the snaps listed above, the resulting snap revision is first promoted from the edge channel to latest/candidate and $TRACK/candidate. Canonical tests this revision, and if all looks good, releases to latest/stable and $TRACK/stable.
Canonical may also publish updates to the EdgeX snaps after release to address high/critical bugs and CVEs (common vulnerabilities and exposures).
Note - in the above descriptions, $TRACK corresponds to the named release tracks (e.g. fuji, geneva, hanoi, ...) which are created for every major/minor release of EdgeX Foundry.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#swaggerhub-api-docs","title":"SwaggerHub API Docs","text":"Tied to Code Release? No
In addition to our documentation site EdgeX foundry also releases our API specifications on Swaggerhub.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#testing-framework","title":"Testing Framework","text":"Tied to Code Release? Yes
The EdgeX Foundry community has a set of tests we maintain to do regression testing during development this framework is tracking the master
branch of the components of EdgeX. At the time of release we will update the testing frameworks to point at the released Github tags and add a version tag to the testing frameworks themselves. This creates a snapshot of testing framework at the time of release for validation of the official release.
Tied to Code Release? Yes
GitHub release functionality is utilized on some repositories to release binary artifacts/assets (e.g. zip/tar files). These are versioned with the semantic version and found on the repository's GitHub Release page under 'Assets'.
"},{"location":"design/adr/devops/0010-Release-Artifacts/#known-build-dependencies-for-edgex-foundry","title":"Known Build Dependencies for EdgeX Foundry","text":"There are some internal build dependencies within the EdgeX Foundry organization. When building artifacts for validation or a release you will need to take into the account the build dependencies to make sure you build them in the correct order.
This document is meant to be a living document of all the release artifacts of EdgeX Foundry. With this ADR we would have a good understanding on what needs to be released and when they are released. Without this document this information will remain tribal knowledge within the community.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/","title":"Creation and Distribution of Secrets","text":""},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#status","title":"Status","text":"Approved
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#context","title":"Context","text":"This ADR seeks to clarify and prioritize the secret handling approach taken by EdgeX.
EdgeX microservices need a number of secrets to be created and distributed in order to create a functional, secure system. Among these secrets are:
There is a lack of consistency on how secrets are created and distributed to EdgeX microservices, and when developers need to add new components to the system, it is unclear on what the preferred approach should be.
This document assumes a threat model wherein the EdgeX services are sandboxed (such as in a snap or a container) and the host system is trusted, and all services running in a single snap share a trust boundary.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#terms","title":"Terms","text":"The following terms will be helpful for understading the subsequent discussion:
While EdgeX implements a sophisticated secret handling mechanism, that mechanism itself requires secrets. For example, every microservice that talks to Vault must have its own unique secret to authenticate: Vault itself cannot be used to distribute these secrets. SECRETSLOC fulfills the role that the non-routable instance data IP address, 169.254.169.254, fulfills in the public cloud: delivery of bootstrapping secrets. As EdgeX does not have a hypervisor nor virtual machines for this purpose, a protected file system path is used instead.
SECRETSLOC is implementation-dependent. A desirable feature of SECRETSLOC would be that data written here is kept in RAM and is not persisted to storage media. This property is not achieveable in all circumstances.
For Docker, a list of suggested paths--in preference order--is:
/run/edgex/secrets
(a tmpfs
volume on a Linux host)/tmp/edgex/secrets
(a temporary file area on Linux and MacOS hosts)For snaps, a list of suggested paths-in preference order--is: * /run/snap.
$SNAP_NAME/
(a tmpfs
volume on a Linux host) * $SNAP_DATA/secrets
(a snap-specific persistent data area) * TBD (a content interface that allows for sharing of secrets from the core snap)
A survey on the existing EdgeX secrets reveals the following appoaches.
A designation of \"compliant\" means that the current implementation is aligned with the recommended practices documented in the next section. A designation of \"non-compliant\" means that the current implementation uses an implemention mechanism outside of the recommended practices documented in the next section. A \"non-compliant\" implementation is a candidate for refactoring to bring the implementation into conformance with the recommended practices.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#system-managed-secrets","title":"System-managed secrets","text":"Snaps: PKI generated by standalone utility every cold start of the framework. Deployed to SECRETSLOC. (Compliant.)
Secret store master password
Snaps: Stored in $SNAP_DATA/config/security-secrets-setup/res
. (Non-compliant.)
Secret store per-service authentication tokens
Snaps: Distribution via SECRETSLOC, generated every cold start of the framework. (Compliant.)
Postgres superuser password
Snaps: Generated at snap install time via \"apg\" (\"automatic password generator\") tool, installed into Postgres, cached to $SNAP_DATA/config/postgres/kongpw
(non-compliant), and passed to Kong via $KONG_PG_PASSWORD
.
MongoDB service account passwords
Snaps: Direct consumption from secret store. (Compliant.)
Redis authentication password
Snaps: Server--staged to $SNAP_DATA/secrets/edgex-redis/redis5-password
and injected via command line. (Non-compliant.). Clients--direct consumption from secret store. (Compliant.)
Kong client authentication tokens
Note: in the current implementation, Consul is being operated as a public service. Consul will be a subject of a future \"bootstrapping ADR\" due to its role in serivce location.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#user-managed-secrets","title":"User-managed secrets","text":"User-managed secrets functionality is provided by app-functions-sdk-go
.
If security is enabled, secrets are retrieved from Vault. If security is disabled, secrets are retreived from the configuration provider. If the configuration provider is not available, secrets are read from the underlying .toml
. It is taken as granted in this ADR that secrets originating in the configuration provider or from .toml
configuration files are not secret. The fallback mechanism is provided as a convienience to the developer, who would otherwise have to litter their code with \"if (isSecurityEnabled())\" logic leading to implementation inconsistencies.
The central database credential is supplied by GetDatabaseCredentials()
and returns the database credential assigned to app-service-configurable
. If security is enabled, database credentials are retreived using the standard flow. If security is disabled, secrets are retreived from the configuration provider from a special section called [Writable.InsecureSecrets]
. If not found there, the configuration provider is searched for credentials stored in the legacy [Databases.Primary]
section using the Username
and Password
keys.
Each user application has its own exclusive-use area of the secret store that is accessed via GetSecrets()
. If security is enabled, secret requests are passed along to go-mod-secrets
using an application-specific access token. If security is disabled, secret requets are made to the configuration provider from the [Writable.InsecureSecrets]
section. There is no fallback configuration location.
As user-managed secrets have no framework support for initialization, a special StoreSecrets()
method is made available to the application for the application to initialize its own secrets. This method is only available in security-enabled mode.
No changes to user-managed secrets are being proposed in this ADR.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#decision","title":"Decision","text":""},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#creation-of-secrets","title":"Creation of secrets","text":"Management of hardware-bound secrets is platform-specific and out-of-scope for the EdgeX framework. EdgeX open source will contain only the necessary hooks to integrate platform-specific functionality.
For software-managed secrets, the system of reference of secrets in EdgeX is the EdgeX secret store. The EdgeX secret store provides for encryption of secrets at rest. This term means that if a secret is replicated, the EdgeX secret store is the authoritative source of truth of the secret. Whenever possible, the EdgeX secret store should also be the record of origin of a secret as well. This means creating secrets inside of the EdgeX secret store is preferable to importing an externally-created secret into the secret store. This can often be done for framework-managed secrets, but not possible for user-managed secrets.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#choosing-between-alternative-forms-of-secrets","title":"Choosing between alternative forms of secrets","text":"When given a choice between plain-text secrets and cryptographic keys, cryptographic keys should be preferred.
An example situation would be the introduction of an MQTT message broker. A broker may support both TLS client authentication as well as username/password authentication. In such a situation, TLS client authentication would be preferred:
TLS client authentication should not be used unless there is a capability to revoke a compromised certificate, such as by replacing the certificate authority, or providing a certificate revokation list to the server. If certificate revokation is not supported, plain-text secrets (such as username/password) should be used instead, as they are typically easier to revoke.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#distribution-and-consumption-of-secrets","title":"Distribution and consumption of secrets","text":""},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#prohibited-practices","title":"Prohibited practices","text":"Use of hard-coded secrets is an instance of CWE-798: Use of hard-coded credentials and is not allowed. A hard-coded secret is a secret that is the same across multiple EdgeX instances. Hard-coded secrets make devices susceptible to BORE (break-once-run-everywhere) attacks, where collections of machines can compromised by a single replicated secret. Specific cases where this is likely to come up are:
EdgeX is an open-source project. Any secret that is present in an EdgeX repository is public to the world, and therefore not a secret, by definition. Configuration files, such as .toml files, .json files, .yaml files (including docker-compose.yml
) are specific instances of this practice.
Binaries are usually not protected against confidentiality threats, and binaries can be easily reverse-engineered to find any secrets therein. Binaries included compile executables as well as Docker images.
"},{"location":"design/adr/security/0008-Secret-Creation-and-Distribution/#recommended-practices","title":"Recommended practices","text":"This approach is only possible for components that have native support for Hashicorp Vault. This includes any EdgeX service that links to go-mod-secrets.
For example, if secretClient is an instance of the go-mod-secrets secret store client:
secrets, err := secretClient.GetSecrets(\"myservice\", \"username\", \"password\")\n
The above code will retrieve the username
and password
properties of the myservice
secret.
Environment variables are part of a process' environment block and are mapped into a process' memory. In this scenario, an intermediary makes a connection to the secret store to fetch a secret, store it into an environment variable, and then launches a target executable, thereby passing the secret in-memory to the target process.
Existing examples of this functionality include vaultenv, envconsul, or env-aws-params. These tools authenticate to a remote network service, inject secrets into the process environment, and then exec's a replacment process that inherits the secret-enriched enviornment block.
There are a few potential risks with this approach:
Environment-variable-sniffing malware (introduced by compromised 3rd party libaries) is a proven attack method.
Dynamic injection of secret into container-scoped tmpfs
volume
An example of this approach is consul-template. This approach is useful when a secret is required to be in a configuration file and cannot be passed via an environment variable or directly consumed from a secret store.
This option is the most widely supported secret distribution mechanism by container orchestrators.
EdgeX supports runtime environments such as standard Docker and snaps that have no built-in secret management features.
Generic Docker does not have a built-in secrets mechanism. Manual configuration of a SECRETSLOC should utilize either a host file file system path or a Docker volume.
Snaps also do not have a built-in secrets mechanism. The options for SECRETSLOC are limited to designated snap-writable directories.
For comparison:
Docker Swarm: Swarm swarm mode is not officially supported by the EdgeX project. Docker Swarm secrets are shared via the /run/secrets
volume, which is a Linux tmpfs
volume created on the host and shared with the container. For an example of Docker Swarm secrets, see the docker-compose secrets stanza. Secrets distributed in this manner become part of the RaftDB, and thus it becomes necessary to enable swarm autolock mode, which prevents the Raft database encryption key from being stored plaintext on disk. Swarm secrets have an additional limitation in that they are not mutable at runtime.
Kubernetes: Kubernetes is not officially supported by the EdgeX project. Kubernetes also supports the secrets volume approach, though the secrets volume can be mounted anywhere in the container namespace. For an example of Kubernetes secrets volumes, see the Kubernetes secrets documentation. Secrets distributed in this manner become part of the etcd
database, and thus it becomes necessary to specify a KMS provider for data encryption to prevent etcd
from storing plaintext versions of secrets.
As the existing implementation is not fully-compliant with this ADR, significant scope will be added to current and future EdgeX releases in order to bring the project into compliance.
List of needed improvements:
security-secrets-setup
utility.All: Investigate hardware protection of cached Consul and Vault PKI secret keys. (Vault cannot unseal its own TLS certificate.)
Special case: Bring-your-own external Kong certificate and key
The Kong external certificate and key is already stored in Vault, however, additional metadata is needed to signal whether these are auto-generated or manually-installed. A manually-installed certificate and key would not be overwritten by the framework bringup logic. Installing a custom certificate and key can then be implemented by overwriting the system-generated ones and setting a flag indicating that they were manually-installed.
Secret store master password
All: Enable hooks for hardware protection of secret store master password.
Secret store per-service authentication tokens
No changes required.
Postgres superuser password
Cache in Vault and inject into Kong using environment variable injection.
MongoDB service account passwords
No changes required.
Redis(v5) authentication password
No changes on client side.
Redis(v6) passwords (v6 adds multiple user support)
No changes on client side (each service accesses its own credential)
Kong authentication tokens
** Approved **
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#context","title":"Context","text":"Docker-compose, the tool used by EdgeX to manage its Docker-based stack, lags in its support for initialization logic.
Docker-compose v2.x used to have a depends_on / condition
directive that would test a service's HEALTHCHECK and block startup until the service was \"healthy\". Unfortunately, this feature was removed in 3.x docker-compose. (This feature is also unsuppported in swarm mode as well.)
Snaps have an explicit install phase and Kubernetes PODs have optional init containers. In other frameworks, initialization is allowed to run to completion prior to application components being started in production mode. This functionality does not exist in Docker nor docker-compose.
The current lack of an initialization phase is a blocking issue for implementing microservice communication security, as critical EdgeX core components that are involved with microservice communication (specifically Consul) are being brought up in an insecure configuration. (Consul's insecure configuration is will be addressed in a separate ADR.)
Activities that are best done in the initialization phase include the following:
Workarounds when an installation phase is not present include:
EdgeX does not have a manual installation flow, and uses a combination of the last three approaches.
The objective of this ADR is to define a framework for Docker-based initialization logic in EdgeX. This will enable the removal of certain hard-coded secrets in EdgeX and enable certain components (such as Consul) to be started in a secure configuration. These improvement are necessary pre-requisites to implementing microservice communication security.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#history","title":"History","text":"In previous releases, container startup sequencing has been primarily been driven by Consul service health checks backed healthcheck endpoints of particular services or by sentinel files placed in the file system when certain intialization milestones are reached.
The implementation has been plagued by several issues:
Sentinel files are not cleaned up if the framework fails or is shut down. Invalid state left over from previous instantiations of the framework causes difficult-to-resolve race conditions. (Implementation of this ADR will try to remove as many as possible, focusing on those that are used to gate startup. Some use of sentinel files may still be required to indicate completion of initialization steps so that they are not re-done if there is no API-based mechanism to determine if such initialization has been completed.)
Consul healh checks are reported in a difficult-to-parse JSON structure, which has lead to the creation of specialized tools that are insensitive to libc implementations used by different container images.
Consul is being used not only for service health, but for service location and configuration as well. The requirement to synchronize framework startup for the purpose of securely initializing Consul means that a non-Consul mechanism must be used to stage-gate EdgeX initialization.
This last point is the primary motivator of this ADR.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#decision","title":"Decision","text":""},{"location":"design/adr/security/0009-Secure-Bootstrapping/#stage-gate-mechanism","title":"Stage-gate mechanism","text":"The stage-gate mechanism must work in the following environments:
Startup sequencing will be driven by two primary mechanisms:
Use of entrypoint scripts to:
Block on stage-gate and service dependencies
The bootstrap container will inject entrypoint scripts into the other containers in the case where EdgeX is directly consuming an upstream container. Docker will automatically retry restarting containers if its entrypoint script is missing.
Use of TCP sockets for startup sequencing is commonly used in Docker environments. Due to its popularlity, there are several existing tools for this, including wait-for-it, dockerize, and wait-for. The TCP mechanism is portable across platforms and will work in distributed multi-node scenarios.
At least three new ports will be added to EdgeX for sequencing purposes:
bootstrap
port. This port will be opened once first-time initialization has been completed.tokens_ready
port. This port signals that secret-store tokens have been provisioned and are valid.ready_to_run
port. This port will be opened once stateful services have completed initialization and it is safe for the majority of EdgeX core services to start.The stateless EdgeX services should block on ready_to_run
port.
The following diagram shows the \"as-is\" startup flow.
There are several components being removed via activity unrelated with this ADR. These proposed edits are shown to reduce clutter in the TO-BE diagram. * secrets-setup is being eliminated through a separate ADR to eliminate TLS for single-node usage. * kong-migrations is being combined with the kong service via an entrypoint script. * bootstrap-redis will be incorporated into the Redis entrypoint script to set the Redis password before Redis starts to fix the time delay before a Redis password is set.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#to-be-startup-flow","title":"\"To-be\" startup flow","text":"The following diagram shows the \"to-be\" startup flow. Note that the bootstrap flows are always processed, but can be short-circuited.
Another difference to note in the \"to-be\" diagram is that the Vault dependency on Consul is reversed in order to provide better security.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#new-bootstraprtr-container","title":"New Bootstrap/RTR container","text":"The purpose of this new container is to:
bootstrap
semaphoreready_to_run
semaphore (these are the stateful components, such as databases, which block while waiting for secret store tokens to be provisioned)ready_to_run
semaphoreThis ADR is expected to yield the following benefits after completion of the related engineering tasks:
Introduction of a new container into the startup flow (but other containers are eliminated or combined).
Expanded scope and responsibility of entrypoint scripts, which must not only block component startup, but now must also configure a component for secure operation.
In this scenario, instead of a service waiting on a TCP-socket semaphore created by another service, services would open a socket and wait for a coordinator/controller to issue a \"go\" command.
This solution was not chosen for several reasons:
In this scenario, the system management agent is responsible for bringing up the EdgeX framework. Since the system management agent has access to the Docker socket, it has the ability to start services in a prescribed order, and as a management agent, has knowledge about the desired state of the framework.
This solution was not chosen for several reasons:
This alternative would create a mega-install container that has locally installed versions of critical components needed for bootstrapping, such as Vault, Consul, PostgreSQL, and others.
A sequential script would start each component in turn, initializing each to run in a secure configuration, and then shut them all down again.
The same stage-gate mechanism would be used to block startup of these same components, but Docker would start them in production configuration.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#manual-secret-provisioning","title":"Manual secret provisioning","text":"A typical cloud-based microservice architecture typically has a manual provisioning step. This step would include activities such as configuring Vault, installing a database schema, setting up database service account passwords, and seeding initial secrets such as PKI private keys that have been generated offline (possibly requiring several days of lead time). A cloud team may have weeks or months to prepare for this event, and it might take the greater part of a day.
In contrast, EdgeX up to this point has been a \"turnkey\" middleware framework: it can be deployed with the same ease as an application, such as via a docker-compose file, or via a snap install. This means that most of the secret provisioning must be automated and the provisioning logic must be built into the framework in some way. The proposals presented in this ADR are compatible with the continuance of this functionality.
"},{"location":"design/adr/security/0009-Secure-Bootstrapping/#references","title":"References","text":"** Approved **
"},{"location":"design/adr/security/0015-in-cluster-tls/#context","title":"Context","text":"This ADR seeks to define the EdgeX direction on using encryption to secure \"in-cluster\" EdgeX communications, that is, internal microservice-to-microservice communication.
This ADR will seek to clarify the EdgeX direction in several aspects with regard to:
This ADR will be used to triage EdgeX feature requests in this space.
"},{"location":"design/adr/security/0015-in-cluster-tls/#background","title":"Background","text":""},{"location":"design/adr/security/0015-in-cluster-tls/#why-encrypt","title":"Why encrypt?","text":"Why consider encryption in the first place? Simple. Encryption helps with the following problems:
Client authentication of servers. The client knows that it is talking to the correct server. This is typically achieved using TLS server certificates that the client checks against a trusted root certificate authority. Since the client is not in charge of network routing, TLS server authentication provides a good assurance that the requests are being routed to the correct server.
Server authentication of clients. The server knows the identity of the client that has connected to it. There are a variety of mechanisms to achieve this, such as usernames and passwords, tokens, claims, et cetera, but the mechanism under consideration by this ADR is TLS client authentication using TLS client certificates.
Confidentiality of messages exchanged between services. Confidentiality is needed to protect authentication data flowing between communicating microservices as well as to protect the message payloads if they contain nonpublic data. TLS provides communication channel confidentiality.
Integrity of messages exchanged between services. Integrity is needed to ensure that messages between communicating microservices are not maliciously altered, such as inserting or deleting data in the middle of the exchange. TLS provides communication channel integrity.
A microservice architecture normally strives for all of the above protections.
Besides TLS, there are other mechanisms that can be used to provide some of the above properties. For example, IPSec tunnels provide confidentity, integrity, and authentication of the hosts (network-level protection). SSH tunnels provide confidentiality, integrity, and authentication of the tunnel endpoints (also network-level protection). TLS, however, is preferred, because it operates in-process at the application level and provides better point-to-point security.
"},{"location":"design/adr/security/0015-in-cluster-tls/#why-to-not-encrypt","title":"Why to not encrypt?","text":"In the case of TLS communications, microservices depend on an asymmetric private key to prove their identity. To be of value, this private key must be kept secret. Applications typically depend on process-level isolation and/or file system protections for the private key. Moreover, interprocess communication using sockets is mediated by the operating system kernel. An attacker running at the privilege of the operating system has the ability to compromise TLS protections, such as by substituting a private key or certificate authority of their choice, accessing the unencrypted data in process memory, or intercepting the network communications that flow through the kernel. Therefore, within a single host, TLS protections may slow down an attacker, but are not likely to stop them. Additionally, use of TLS requires management of additional security assets in the form of TLS private keys.
Microservice communication across hosts, however, is vulnerable to interception, and must be protected via some mechanism such as, but not limited to: IPSec or SSH tunnels, encrypted overlay networks, service mesh middlewares, or application-level TLS.
Another reason to not encrypt is that TLS adds overhead to microservice communication in the form of additional network round-trips when opening connections and the cost of performing cryptographic public key and symmetric key operations.
"},{"location":"design/adr/security/0015-in-cluster-tls/#decision","title":"Decision","text":"At this time, EdgeX is primarily a single-node IoT application framework. Should this position change, this ADR should be revisited. Based on the single-node assumption:
This ADR if approved would close the following issues as will-not-fix.
It would also close https://github.com/edgexfoundry/edgex-go/issues/1925 as there is no current need for TLS as a mutual authentication strategy.
"},{"location":"design/adr/security/0015-in-cluster-tls/#alternatives","title":"Alternatives","text":""},{"location":"design/adr/security/0015-in-cluster-tls/#encrypted-overlay-networks","title":"Encrypted overlay networks","text":"Encrypted overlay networks provide varying protection based on the product used. Some can only encrypt data, such as an IPsec tunnel. Some can encrypt and provide for network microsegmentation, such as Docker Swarm networks with encryption enabled. Some can encrypt and enforce network policy such as restrictions on ingress traffic or restrictions on egress traffic.
"},{"location":"design/adr/security/0015-in-cluster-tls/#service-mesh-middleware","title":"Service mesh middleware","text":"Service mesh middleware is an alternative that should be investigated if EdgeX decides to fully support a Kubernetes-based deployment using distributed Kubernetes pods.
A service mesh typically achieves most of the objectives of securing microservice communication by intercepting microservice communications and imposing a configuration-driven policy that typically includes confidentiality and integrity protection.
These middlewares typically rely on the Kubernetes pod construct and are difficult to support for non-Kubernetes deployments.
"},{"location":"design/adr/security/0015-in-cluster-tls/#edgex-public-key-infrastructure","title":"EdgeX public key infrastructure","text":"An EdgeX public key infrastructure that is natively supported by the architecture should be considered if EdgeX decides to support an out-of-box distributed deployment on non-Kubernetes platforms.
Native support of TLS requires a significant amount of glue logic, and exceeds the available resources in the security working group to implement this strategy. The following text outlines a proposed strategy for supporting native TLS in the EdgeX framework:
EdgeX will use Hashicorp Vault to secure the EdgeX PKI, through the use of the Vault PKI secrets engine. Vault will be configured with a root CA at initialization time, and a Vault-based sub-CA for dynamic generation of TLS leaf certificates. The root CA will be restricted to be used only by the Vault root token.
EdgeX microservices that are based on third-party containers require special support unless they can talk natively to Vault for their secrets. Certain tools, such as those mentioned in the \"Creation and Distribution of Secrets\" ADR (envconsul
, consul-template
, and others) can be used to facilitate third-party container integration. These services are:
Consul: Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container.
Vault: As Vault's database is encrypted, Vault cannot natively bootstrap its own TLS certificate. Requires TLS certificate to be injected into container and its location set in a configuration file.
PostgreSQL: Requires TLS certificate to be injected into '$PGDATA' (default: /var/lib/postgresql/data
) which is where the writable database files are kept.
Kong (admin): Requires an environment variable to be set to secure the admin port with TLS, with a TLS certificate injected into the container.
Kong (external): Requires a bring-your-own (BYO) external certificate, or as a fallback, a default one should be generated using a configurable external hostname. (The Kong ACME plugin could possibly be used to automate this process.)
Redis (v6): Requires TLS certificate set by configuration file or command line, with a TLS certificate injected into the container.
Mosquitto: Requires TLS certificate set by configuration file, with a TLS certificate injected into the container.
Additionally, every EdgeX microservice consumer will require access to the root CA for certificate verification purposes, and every EdgeX microservice server will need a TLS leaf certificate and private key.
Note that Vault bootstrapping its own PKI is tricky and not natively supported by Vault. Expect that a non-trivial amount of effort will need to be put into starting Vault in non-secure mode to create the CA hierarchy and a TLS certificate for Vault itself, and then restarting Vault in a TLS-enabled configuration. Periodic certificate rotation is a non-trivial challenge as well.
The Vault bootstrapping flow would look something like this:
There are no current plans for mutual auth TLS. Supporting mutual auth TLS would require creation of a separate PKI hierarchy for generation of TLS client certificates and glue logic to persist the certificates in the service's key-value secret store and provide them when connecting to other EdgeX services.
"},{"location":"design/adr/security/0016-docker-image-guidelines/","title":"Docker image guidelines","text":""},{"location":"design/adr/security/0016-docker-image-guidelines/#status","title":"Status","text":"Approved
"},{"location":"design/adr/security/0016-docker-image-guidelines/#context","title":"Context","text":"When deploying the EdgeX Docker containers some security measures are recommended to ensure the integrity of the software stack.
"},{"location":"design/adr/security/0016-docker-image-guidelines/#decision","title":"Decision","text":"When deploying Docker images, the following flags should be set for heightened security.
no-new-privileges
option in their Docker compose file (example below). More details about this flag can be found here. This follows Rule #4 for Docker security found here.security_opt:\n - \"no-new-privileges:true\"\n
NOTE: Alternatively, an AppArmor security profile can be used to isolate the Docker container. More details about AppArmor profiles can be found here.
security_opt: [ \"apparmor:unconfined\" ]\n
--user=<userid>
or -u=<userid>
option in their Docker compose file (example below). More details about this flag can be found here. This follows Rule #2 for Docker security found here.services:\n device-virtual:\n image: ${REPOSITORY}/docker-device-virtual-go${ARCH}:${DEVICE_VIRTUAL_VERSION}\nuser: $CONTAINER-PORT:$CONTAINER-PORT # user option using an unprivileged user\n ports:\n - \"127.0.0.1:49990:49990\"\ncontainer_name: edgex-device-virtual\n hostname: edgex-device-virtual\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-virtual\n depends_on:\n - consul\n - data\n - metadata\n
NOTE: exception Sometimes containers will require root access to perform their functions. For example, the System Management Agent requires root access to control other Docker containers. In this case you would allow it to run as the default root user.
resource limits
should be set for each container. More details about resource limits
can be found here. This follows Rule #7 for Docker security found here.services:\n device-virtual:\n image: ${REPOSITORY}/docker-device-virtual-go${ARCH}:${DEVICE_VIRTUAL_VERSION}\nuser: 4000:4000 # user option using an unprivileged user\n ports:\n - \"127.0.0.1:49990:49990\"\ncontainer_name: edgex-device-virtual\n hostname: edgex-device-virtual\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-virtual\n depends_on:\n - consul\n - data\n - metadata\n deploy: # Deployment resource limits\n resources:\n limits:\n cpus: '0.001'\nmemory: 50M\n reservations:\n cpus: '0.0001'\nmemory: 20M\n
--read_only
flag should be set. More details about this flag can be found here. This follows Rule #8 for Docker security found here. device-rest:\n image: ${REPOSITORY}/docker-device-rest-go${ARCH}:${DEVICE_REST_VERSION}\nports:\n - \"127.0.0.1:49986:49986\"\ncontainer_name: edgex-device-rest\n hostname: edgex-device-rest\n read_only: true # read_only option\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-rest\n depends_on:\n - data\n - command\n
NOTE: exception If a container is required to have write permission to function, then this flag will not work. For example, Vault needs to run setcap in order to lock pages in memory. In this case the --read_only
flag will not be used.
NOTE: Volumes If writing persistent data is required then a volume can be used. A volume can be attached to the container in the following way
device-rest:\n image: ${REPOSITORY}/docker-device-rest-go${ARCH}:${DEVICE_REST_VERSION}\nports:\n - \"127.0.0.1:49986:49986\"\ncontainer_name: edgex-device-rest\n hostname: edgex-device-rest\n read_only: true # read_only option\n networks:\n - edgex-network\n env_file:\n - common.env\n environment:\n SERVICE_HOST: edgex-device-rest\n depends_on:\n - data\n - command\n volumes:\n - consul-config:/consul/config:z\n
NOTE: alternatives If writing non-persistent data is required (ex. a config file) then a temporary filesystem mount can be used to accomplish this goal while still enforcing --read_only
. Mounting a tmpfs
in Docker gives the container a temporary location in the host systems memory to modify files. This location will be removed once the container is stopped. More details about tmpfs
can be found here
For additional Docker security rules and guidelines, please check the Docker security cheat sheet.
"},{"location":"design/adr/security/0016-docker-image-guidelines/#consequences","title":"Consequences","text":"Create a more secure Docker environment
"},{"location":"design/adr/security/0016-docker-image-guidelines/#references","title":"References","text":"** Approved **
"},{"location":"design/adr/security/0017-consul-security/#context","title":"Context","text":"This ADR defines the motiviation and approach used to secure access to the Consul component in the EdgeX architecture for security-enabled configurations only. Non-secure configuations continue to use Consul in anonymous read-write mode. As this Consul security feature requires Vault to function, if EDGEX_SECURITY_SECRET_STORE=false
and Vault is not present, the legacy behavior (unauthenticated Consul access) will be preserved.
Consul provides several services for the EdgeX architecture:
Use of the services provided by Consul is optional on a service-by-service basis. Use of the registry is controlled by the -r
or --registry
flag provided to an EdgeX service. Use of mutable configuration data is controlled by the -cp
or --configProvider
flag provided to an EdgeX service. When Consul is enabled as a configuration provider, the configuration.toml
is parsed into individual settings and seeded into the Consul key-value store on the first start of a service. Configuration reads and writes are then done to Consul if it is specified as the configuration provider, otherwise the static configuration.toml
is used. Writes to the [Writable]
section in Consul trigger per-service callbacks notifying the application of the changed data. Updates to non-[Writable]
sections are parsed only once at startup and require a service restart to take effect.
Since configuration data can affect the runtime behavior of services, compensating controls must be introduced in order to mitigate the risks introduced by moving configuration from a static file to an HTTP-accessible service with mutable state.
The current practice is that Consul is exposed via unencrypted HTTP in anonymous read/write mode to all processes and EdgeX services running on the host machine.
"},{"location":"design/adr/security/0017-consul-security/#decision","title":"Decision","text":"Consul will be configured with access control list (ACL) functionality enabled, and each EdgeX service will utilize a Consul access token to authenticate to Consul. Consul access tokens will be requested from the Vault Consul secrets engine (to avoid introducing additional bootstrapping secrets).
DNS will be disabled via configuration as it is not used in EdgeX.
Consul Access Via API Gateway
In security enabled EdgeX, the API gateway will be configured to proxy the Consul service over the /consul
path, using the request-transformer
plugin to add the global management token to incoming requests via the X-Consul-Token
HTTP header. Thus, ability to access remote APIs also grants the ability to modify Consul's key-value store. At this time, service access via API gateway is all-or-nothing, but this does not preclude future fine-grained authorization at the API gateway layer to specific microservices, including Consul.
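The sketch below is a hedged illustration (not EdgeX source code) of how a Consul ACL token is presented on a key-value read; the Consul address, key path, and token placeholder are illustrative only, and in EdgeX the token would come from the Vault Consul secrets engine as described above.

```go
// Read a key from Consul's KV store, authenticating with an ACL token.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The key path below is illustrative; EdgeX seeds per-service configuration keys into Consul.
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:8500/v1/kv/edgex/example-service/Writable/LogLevel", nil)
	if err != nil {
		panic(err)
	}
	// Consul ACL token, normally obtained from the Vault Consul secrets engine.
	req.Header.Set("X-Consul-Token", "<acl-token>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```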
Proxying of the Consul UI is problematic and there is no current solution, which would involve proper balancing of the externally-visible URL, the path-stripping effect (or not) of the proxy, Consul's ui_content_path
, and UI authentication (the request-transformer
does not work on the UI).
Full implementation of this ADR will deny Consul access to all existing Consul clients. To limit the impacts of the change, deployment will take place in phases. Phase 1 is basic plumbing work and leaves Consul configured in a permissive mode and thus is not a breaking change. Phase 2 will affect the APIs of Go modules and will change the default policy to \"deny\", both of which are breaking changes. Phase 3 is a refinement of access control; presuming the existing services are \"well-behaved\", that is, they do not access configuration of other services, Phase 3 will not introduce any breaking changes on top of the Phase 2 breaking changes.
"},{"location":"design/adr/security/0017-consul-security/#phase-1-completed-in-ireland-release","title":"Phase 1 (completed in Ireland release)","text":"ready_to_run
signal.)/acl/token/self
).Mitigations:
** Approved ** via TSC vote on 2021-12-14
"},{"location":"design/adr/security/0020-spiffe/#context","title":"Context","text":"In security-enabled EdgeX, there is a component called security-secretstore-setup
that seeds authentication tokens for Hashicorp Vault--EdgeX's secret store--into directories reserved for each EdgeX microservice. The implementation is provided by a sub-component, security-file-token-provider
, that works off of a static configuration file (token-config.json
) that configures known EdgeX services, and an environment variable that lists additional services that require tokens. The token provider creates a unique token for each service and attaches a custom policy to each token that limits token access in a manner that partitions the secret store's namespace.
The current solution has some problematic aspects:
These tokens have an initial TTL of one hour (1h) and become invalid if not used and renewed within that time period. It is not possible to delay the start of EdgeX services until a later time (that is, greater than the default token TTL), as they will not be able to connect to the EdgeX secret store to obtain required secrets.
Transmission of the authentication token requires one or more shared file systems between the service and security-secretstore-setup
. In the Docker implementation, this shared file system is constructed by bind-mounting a host-based directory to multiple containers. The snap implementation is similar, utilizing a content-interface between snaps. In a Kubernetes implementation limited to a single worker node, a CSI storage driver that provided RWO volumes would suffice.
The current approach cannot support distributed services without an underlying distributed file system to distribute tokens, such as GlusterFS, running across the participating nodes. For Kubernetes, the requirement would be a remote shared file system persistent volume (RWX volume).
EdgeX will create a new service, security-spiffe-token-provider
. This service will be a mutual-auth TLS service that exchanges a SPIFFE X.509 SVID for a secret store token.
A SPIFFE identifier is a URI of the format spiffe://trust domain/workload identifier
. For example: spiffe://edgexfoundry.org/service/core-data
. A SPIFFE Verifiable Identity Document (SVID) is a cryptographically-signed version of a SPIFFE ID, typically an X.509 certificate with the SPIFFE ID encoded into the subjectAltName
certificate extension, or a JSON web token (encoded into the sub
claim). The EdgeX implementation will use a naming convention on the path component, such as the above, in order to be able to extract the requesting service from the SPIFFE ID.
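A minimal sketch of that naming convention, assuming the path form spiffe://edgexfoundry.org/service/&lt;service-key&gt; shown above (the helper function name is illustrative):

```go
// Extract the requesting service key from a SPIFFE ID path.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func serviceKeyFromSPIFFEID(id string) (string, error) {
	u, err := url.Parse(id)
	if err != nil || u.Scheme != "spiffe" {
		return "", fmt.Errorf("not a SPIFFE ID: %q", id)
	}
	// The path is expected to look like /service/core-data.
	parts := strings.Split(strings.TrimPrefix(u.Path, "/"), "/")
	if len(parts) != 2 || parts[0] != "service" {
		return "", fmt.Errorf("unexpected SPIFFE ID path: %q", u.Path)
	}
	return parts[1], nil
}

func main() {
	key, err := serviceKeyFromSPIFFEID("spiffe://edgexfoundry.org/service/core-data")
	fmt.Println(key, err) // core-data <nil>
}
```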
The SPIFFE token provider will take three parameters:
An X.509 SVID used in mutual-auth TLS for the token provider and the service to cross-authenticate.
The requested service key. If blank, the service key will default to the service name encoded in the SVID. If the service name follows the pattern device-(name)
, then the service key must follow the format device-(name)
or device-name-*
. If the service name is app-service-configurable
, then the service key must follow the format app-*
. (This is an accommodation for the Unix workload attester not being able to distinguish workloads that are launched using the same executable binary. Custom app services that support multiple instances won't be supported unless they name the executable the same as the standard app service binary or modify this logic.)
A list of \"known secret\" identifiers that will allow new services to request database passwords or other \"known secrets\" to be seeded into their service's partition in the secret store.
The go-mod-secrets
module will be modified to enable a new mode whereby a secret store token is obtained by:
Obtaining an X.509 SVID by contacting a local SPIFFE agent's workload API on a local Unix domain socket (a minimal sketch of this step is shown after this list).
Connecting to the security-spiffe-token-provider
service using the X.509 SVID to request a secret store token.
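The sketch below illustrates the first step, assuming the go-spiffe v2 workload API client and an illustrative agent socket path; the real socket path is configurable, as noted later in this ADR.

```go
// Fetch an X.509 SVID from the local SPIFFE agent over its Unix domain socket.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// The socket path below is an assumption for illustration only.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///tmp/edgex/spiffe/agent.sock"))
	if err != nil {
		panic(err)
	}
	fmt.Println("obtained SVID for", svid.ID)
}
```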
The SPIFFE authentication mode will be an opt-in feature.
The SPIFFE implementation will be user-replaceable; specifically, the workload API socket will be configurable, as well as the parsing of the SPIFFE ID. Reasons for doing so might include: changing the name of the trust domain in the SPIFFE ID, or moving the SPIFFE server out of the edge.
This feature is estimated to be a \"large\" or \"extra large\" effort that could be implemented in a single release cycle.
"},{"location":"design/adr/security/0020-spiffe/#technical-architecture","title":"Technical Architecture","text":"The work flow is as follows:
token generate
and shared to the EdgeX secrets volume.security-secret-store-setup
initializes it and creates an admin token for security-spiffe-token-provider
to use.security-spiffe-token-provider
service is started. It obtains an SVID from the SPIFFE agent and uses it as a TLS server certificate.security-spiffe-token-provider
service. The EdgeX microservice uses the trust bundle as a server CA to verify the TLS certificate of the remote service.security-spiffe-token-provider
verifies the SVID using the trust bundle as client CA to verify the client, extracts the service key, and issues an appropriate Vault service token.The server uses a workload registration Unix domain socket that allows authorization entries to be added to the authorization database. This socket is protected by Unix file system permissions to control who is allowed to add entries to the database.
In this proposal, a subcommand will be added to the EdgeX secrets-config
utility to simplify the process of registering new services that uses the registration socket above.
The agent uses a workload attestation Unix domain socket that is open to the world. This socket is shared via a snap content-interface or via a shared host bind mount for Docker. There is one agent per node.
"},{"location":"design/adr/security/0020-spiffe/#trust-bundle","title":"Trust Bundle","text":"SVID's must be traceable back to a known issuing authority (certificate authority) to determine their validity.
In the proposed implementation, we will generate a CA on first boot and store it persistently. This root CA will be distributed as the trust bundle. The SPIFFE server will then generate a rotating sub-CA for issuing SVIDs, and the issued SVID will include both the leaf certificate and the intermediate certificate.
This implementation differs from the default implementation, which uses a transient CA that is rotated periodically and that keeps a log of past CA's. The default implementation is not suitable because only the Kubernetes reference implementation of the SPIRE server has a notification hook that is invoked when the CA is rotated. CA rotation would just result in issuing of SVIDs that are not trusted by microservices that received only the initial CA.
The SPIFFE implementation is replaceable. The user is free to replace this default implementation, potentially with a cloud-based SPIFFE server and a cloud-based CA.
"},{"location":"design/adr/security/0020-spiffe/#workload-authorization","title":"Workload Authorization","text":"Workloads are authenticated by connecting to the spiffe-agent
via a Unix domain socket, which is capable of identifying the process ID of the remote client. The process ID is fed into one of the following workload attesters, which gather additional metadata about the caller:
docker:label:com.docker.compose.service:edgex-core-data
where the service label is the key value in the services
section of the docker-compose.yml
. It is also possible to refer to labels built into the container image.
Workloads are authorized via an authorization database connected to the SPIFFE server. Supported databases are SQLite (default), PostgreSQL, and MySQL. Due to startup ordering issues, SQLite will be used. (Disclaimer: SQlite, according for the Turtle book is intended for development and test only. We will use SQlite anyway because because Redis is not supported.)
The only service that needs to be seeded to the database as this time is security-spiffe-token-provier
. For example:
spire-server entry create -parentID \"${local_agent_svid}\" -dns edgex-spiffe-token-provider -spiffeID \"${svid_service_base}/edgex-spiffe-token-provider\" -selector \"docker:label:com.docker.compose.service:edgex-spiffe-token-provider\"\n
The above command associates a SPIFFE ID with a selector, in this case, a container label, and configures a DNS subjectAltName in the X.509 certificate for server-side TLS.
A snap-based installation of EdgeX would use a unix:path
or unix:sha256
selector instead.
There are two extension mechanims for authorization additional workloads:
spire-server entry create
commands for each additional service.edgex-secrets-config
utility (that will wrap the spire-server entry create
command) for ad-hoc authorization of new services.The authorization database is persistent across reboots.
"},{"location":"design/adr/security/0020-spiffe/#consequences","title":"Consequences","text":"This proposal will require addition of several new, optional, EdgeX microservices:
security-spiffe-token-provider
, running on the main nodespiffe-agent
, running on the main node and each remote nodespiffe-server
, running on the main nodespiffe-config
, a one-shot service running on the main nodeNote that like Vault, the recommended SPIFFE configuration is to run the SPIFFE server on a dedicated node. If this is a concern, bring your own SPIFFE implementation.
Minor changes will be needed to security-secretstore-setup
to preserve the token-creating-token used by security-file-token-provider
so that it can be used by security-spiffe-token-provider
.
The startup flow of the framework will be adjusted as follows:
spiffe-server
spiffe-config
(can be combined with spifee-server
)spiffe-agent
security-spiffe-token-provider
There is no direct dependency between spiffe-server
and any other microservice. security-spiffe-token-provider
requires an SVID from spiffe-agent
and a Vault admin token.
None of these new services will be proxied via the API gateway.
In the future, this mechanism may become the default secret store distribution mechanism, as it eliminates several secrets volumes used to share secrets between security-secretstore-setup
and various EdgeX microservices.
The EdgeX automation will only configure the SPIFEE agent on the main node. Additional nodes can be manually added by the operator by obtaining a join token from the main node and using it to bootstrap a remote node.
SPIFFE/SPIRE has native support for Kubernetes and can distribute the trust bundle via a Kubernetes ConfigMap to more easily enable distributed scenarios, removing a major roadblock to usage of EdgeX in a Kubernetes environment.
"},{"location":"design/adr/security/0020-spiffe/#footprint","title":"Footprint","text":"NOTE: This data is limited by the fact that the pre-built SPIRE reference binaries are compiled with CGO enabled.
"},{"location":"design/adr/security/0020-spiffe/#spire-server","title":"SPIRE Server","text":" 69 MB executable, dynamically linked\n 151 MB inside of a Debian-slim container\n 30 MB memory usage, as container\n
"},{"location":"design/adr/security/0020-spiffe/#spire-agent","title":"SPIRE Agent","text":" 33 MB executable, dynamically linked\n 114 MB inside of a Debian-slim container\n 64 MB memory usage, as container\n
"},{"location":"design/adr/security/0020-spiffe/#spiffe-base-secret-store-token-provider","title":"SPIFFE-base Secret Store Token Provider","text":"The following is the minimum size:
> 6 MB executable (likely much larger)\n > 29 MB memory usage, as container\n
"},{"location":"design/adr/security/0020-spiffe/#limitations","title":"Limitations","text":"The following are known limitations with this proposal:
The capabilities enabled by this solution would only be enabled on Linux platforms. SIFFE/SPIRE Agent is not available for native Windows and pre-built binaries are only avaiable for Linux. (It is unclear as to whether other *nix'es are supported.)
The capabilities enabled by this solution would only be supported for Go-based services. The SPIFFE API's are implemented in gRPC, which is only ported to C#, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, and Ruby. Notably, the C language is not supported, and the only other EdgeX supported language is Go.
That default TTL of an x.509 SVID is one hour. As such, all SVID consumers must be capable of auto-renewal of SVIDs on both the client and server side.
Leave C-SDK device services behind. In this option, C device services would be unable to participate in the delayed-start services architecture.
Fork a grpc-c library. Forking a grpc-c library and rehabilitating it is one option. There is at least one grpc-c library that has been proven to work, but it requires additional features to make it compatible with the SPIRE workload agent. However, the project is extremely large and it is unlikely that EdgeX is big enough to carry the project. Available libraries include:
https://github.com/lixiangyun/grpc-c
This library is several years out-of-date, does not compile on current Linux distributions without some rework, and does not pass per-request metadata tags. Proved to work via manual patching. Not supportable.
https://github.com/Juniper/grpc-c
This library is serveral years out-of-date, also does not compile on current Linux distributiosn without some rework. Uses hard-coded Unix domain socket paths. May support per-request metadata tags, but did not test. Not supportable.
https://github.com/HewlettPackard/c-spiffe
This library is yet untested. Rather than a gRPC library, this library implements the workload API client directly. Ultimately, this library also wraps the gRPC C++ library, and statically links to it. There is no benefit to the EdgeX project to use this library as we can call the underlying library directly.
Hybrid device services. In this model, device services would always be written in Go, but in the case where linking to a C language library is required, CGO features would be used to invoke native C functions from golang. This option would commit the EdgeX project to a one-time investment to port the existing C device services to the new hybrid model. This option is the best choice if the long-term strategy is to end-of-life the C Device SDK.
Bridge. In this model, the C++ implementation to invoke the SPIFFE/SPIRE workload API would be hidden behind a dynamic shared library with C linkage. This would require minimal change to the existing C SDK. However, the resulting binaries would have be based on GLIBC vs MUSL in order to get dlopen()
support. This will also limit the choice of container base images for containerized services.
Modernize. In this model, the Device SDK would be rewritten either partially or in-full in C++. Under this model, the SPIFFE/SPIRE workload API could be accessed via a community-supported C++ GRPC SDK. There are many implementation options:
A \"C++ compilation-switch\" where the C SDK could be compiled in C-mode or C++-mode with enhanced functionality.
A C++ extension API. The original C SDK would remain as-is, but if compiling with __cplusplus
defined, additional API methods would be exposed. The SDK could thus be composed of a mixture of .c
files with C linkage and .cc
files with C++ linkage. The linker would ultimately determine whether or not the C++ runtime library needed to be linked in.
Native C++ device SDK with legacy C wrapper facade.
Compile existing code in C++ mode, with optional C++ facade.
If one of the following things were to happen, it would push this proposal \"over the edge\" from being an optional opt-in feature to a required standard feature for security:
The \"on-demand\" method of obtaining a secret store token is the default method of obtaining a token for non-core EdgeX services.
The \"on-demand\" method of obtaining a secret store token is the default method for all EdgeX services.
SPIFFE SVID's become the implementation mechanism for microservice-level authentication. (Not in scope for this ADR.)
Keeping these as separate executables clearly separates the on-demand secret store tokens feature as an optional service. It is possible to combine the services, but there would need to be a configuration switch in order to enable the SPIFFE feature. It would also increase the base executable size to include the extra logic.
"},{"location":"design/adr/security/0020-spiffe/#alternatives-regarding-spiffe-ca","title":"Alternatives regarding SPIFFE CA","text":""},{"location":"design/adr/security/0020-spiffe/#transient-ca-option","title":"Transient CA option","text":"The SPIFFE server can be configured with no \"upstream authority\" (certificate authority), and the server will periodically generate a new, transient CA, and keep a bounded history of previous CA's. A rotating trust bundle only practically works in a Kubernetes environment, since a configmap can be updated real-time. For everyone else, we need a static CA that can be pre-distributed to remote nodes. Thus, this solution was not chosen.
"},{"location":"design/adr/security/0020-spiffe/#vault-based-ca-option","title":"Vault-based CA option","text":"The SPIFFE server can be configured to make requests to a Hashicorp Vault PKI secrets engine to generate intermediate CA certificates for signing SVID's. This is an option for future integrations, but is omitted from this proposal due to the jump in implementation complexity and the desire that the current proposal be on add-on feature. The current implementation allows the SPIFFE server and Vault to be started simultaneously. Using a Vault-based CA would require a complex interlocking sequence of steps.
"},{"location":"design/adr/security/0020-spiffe/#references","title":"References","text":"The AS-IS Architecture figure below depicts the current state of microservice communication security prior to EdgeX 3.0, when security is enabled:
As shown in the diagram, many of the foundational services used by EdgeX Foundry have already been secured:
Communication with EdgeX's secret store, as implemented by Hashicorp Vault, is secured over a local HTTP socket with token-based authentication. An access control list limits access to the keyspace of the key value store.
Communication with EdgeX's service registry and configuration provider, as implemented by Hashicorp Consul, is secured over a local HTTP socket with token-based authentication, with the token being mediated by Hashicorp Vault. An access control list limits access to the keyspace of the configuration store.
Communication with EdgeX's default database, Redis, is secured using username/password authentication, with the password stored in Hashicorp Vault. An access control list limits the commands that clients are allowed to issue to the server.
External access to EdgeX microservices has also been secured. EdgeX microservices only bind to local ports, and are only exposed externally through a Kong API gateway. This gateway is configured to use TLS 1.3, using RS256 or ES256 JWT authentication (at the user's discretion). All external requests are filtered at the API gateway. URL rewriting is used to concentrate microservices on a single HTTP-accessible port.
Behind the proxy, it is not possible to verify Kong as the origin of local network traffic because mutual-auth TLS is not supported in the open source version of Kong. Although the Kong JWT plugin will set request headers on the backend request that identify the caller, there is no mechanism by which Kong can prove to a backend service that it was the component that performed the authentication step. Even though the original JWT passes through the proxy, the Kong authentication plugins do not expose token introspection endpoints that the backend service could use to check token validity independently.
The consequence of having an API gateway that performs all microservice authentication is that communication between EdgeX microservices running behind the API gateway are not authenticated in any way. EdgeX microservices are unable to distinguish malicious traffic that has evaded the API gateway from legitimate microservice traffic.
"},{"location":"design/adr/security/0028-authentication/#proposed-design","title":"Proposed Design","text":"This ADR proposes an implementation of the Microservice Authentication UCR that uses a token-based authentication mechanism.
This ADR proposes to relieve the Kong API gateway of its JWT management responsibility, and instead use Hashicorp Vault for this purpose, which is already used as EdgeX's secret store. This change requires minimal modification of existing clients written to perform JWT-based authentication at the Kong gateway: they simply use a Vault-issued JWT instead of a Kong-issued JWT or a self-issued JWT.
This ADR proposes a layered authentication scheme, with the reverse proxy performing an initial check for all external requests, and EdgeX services themselves authenticating all internal and external requests. There are three reasons for the layered approach:
Authentication at the proxy layer provides a choke point and policy enforcement points for incoming requests. By customizing the behavior of the proxy-auth component, it is possible to allow access to some URLs and deny access to other URLs based on arbitrary criteria, such as source IP address, JWT-based claims, or user identity and role mappings.
It means that individual microservices do not immediately need to implement fine-grained authorization to get the same effect as having custom policy enforcement at the proxy.
It provides defense-in-depth against microservice implementation bugs and other technical debt that might otherwise put EdgeX microservices at risk. Getting a known response to /core-data/api/v2/ping
as a result of an anonymous HTTP request would positively identify an EdgeX installation. Similarly, an adopter porting their custom services to EdgeX 3.0 without adding authentication hooks could be vulnerable to outside attacks that might be mitigated by the additional check at the proxy layer.
EdgeX microservices shall utilize Vault to assess JWT validity and an NGINX reverse proxy shall use the ngx_http_auth_request_module to delegate confirmation of JWT validity. TLS termination at the reverse proxy shall be enabled by default so as to be consistent with ADR 0015 - Encryption between microservices.
Behind the proxy, there are two major changes:
Every EdgeX service, when security is enabled, requires a JWT be passed as part of the HTTP request that is validated using Vault's token introspection endpoint, or manually validated based on published signature keys.
Every EdgeX service, when security is enabled, uses a Vault-supplied JWT to authenticate outgoing calls to peer EdgeX services. The original caller's identity may be passed through at the developers' discretion for microservice chaining scenarios.
The new TO-BE architecture is diagrammed in the following figure:
"},{"location":"design/adr/security/0028-authentication/#implementation-pre-requisites","title":"Implementation pre-requisites","text":"This ADR assumes a minor refactoring to the security bootstrapping components use the Vault identity API and one or more authentication engines to issue identity-based Vault tokens instead of raw Vault tokens. Affected services include, go-mod-secrets
(configure identity, issue and validate JWT's), security-secretstore-setup
, security-file-token-provider
, and security-spiffe-token-provider
.
This refactoring results in several benefits:
It de-privileges security-secretstore-setup
's use of Vault, which currently requires Vault \"sudo\" capability to issue raw Vault tokens. (This is a blocking issue for customers that want to bring their own Vault.)
An external user identity could be authenticated by an external service, such as Auth0. Alternatively, username/password or AppRole authentication could be used if an external source of identity is not available. This is viewed as beneficial, as downstream EdgeX deployments are already building their own similar integrations.
An internal service identity could be authenticated by a Kubernetes service account token. This could eliminate the requirement to pre-distribute Vault tokens to services via a shared filesystem volume, simplifying Kubernetes-based deployments of EdgeX.
As an added bonus, Vault supports longer JWT key sizes than the Kong JWT plugin.
Additionally, security-bootstrapper
will need to modified to not block on availability of Postgres before issuing the ready-to-run signal. (This change is already completed.)
The following list of changes is derived from the proof of concept implementation to actually effect the change (besides the prerequisite changes above):
Kong and Postgres is removed from compose files and snaps.
Add an NGINX reverse proxy with using the proxy auth module.
Create a new security-proxy-auth
service to check the incoming JWT for validity. (NGINX will be configured to delegate to this service for authentication checks. NGINX could also delegate to a minimal function like /api/v2/version, but the reason as to why the function was called wouldn't be as clear as having a separate authentication service.)
The security-proxy-setup
container remains, with the binary replaced with a small shell script to create a default TLS certificate and key.
The secrets-config
utility will create new users in Vault instead of Kong, and update TLS configuration for NGINX on disk instead of the Kong API.
Modifications to go-mod-core-contracts
to support an injectable authentication interface to add JWT's to outgoing HTTP requests.
Modifications to go-mod-bootstrap
to realize the go-mod-secrets
changes, create common JWT authentication handlers, and inject JWT authentication to the core-contracts clients.
Modifications to individual EdgeX services to authenticate selected routes (that is, every route except /api/v2/ping
, which remains anonymous).
Modifications to security-bootstrapper
to build an entrypoint script for NGINX and a default NGINX configuration.
Documentation updates.
Token-based authentication is flexible and works in a wide variety of use cases, but does not address issues of network security.
For scenarios where all EdgeX services are running on the same host, or there is an existing solution to network security already in place, such as an encrypted network overlay as might be found in some Kubernetes deployments of EdgeX, the token-based solution offers significant memory and disk savings over the Kong-based solution used in EdgeX releases prior to 3.0.
For scenarios where token-based authentication credentials can be exposed over a network, an authentication solution based on end-to-end encryption would be more appropriate.
"},{"location":"design/adr/security/0028-authentication/#considerations","title":"Considerations","text":""},{"location":"design/adr/security/0028-authentication/#size-and-space-impact-of-kong-postgres-versus-alternatives","title":"Size and Space Impact of Kong + Postgres Versus Alternatives","text":""},{"location":"design/adr/security/0028-authentication/#disk-space","title":"Disk space","text":"A savings of up to ~300 MB in docker images can be expected, depending on specific selection of container images used. (The POC implementation successfully used the smallest NGINX available, alpine-slim.)
Image Tag Image ID Age Size nginx alpine 2bc7edbc3cf2 6 days ago 40.7MB nginx alpine-slim c59097225492 6 days ago 11.5MB nginx latest 3f8a00f137a0 8 days ago 142MB kong 2.8 0affcb95d383 6 days ago 139MB postgres 13.8-alpine 551b13d106b4 4 months ago 213MB edgexfoundry/security-proxy-auth 0.0.0-dev b2ee5c21efba 8 days ago 16.2MBImage data collected on 2023-02-17.
"},{"location":"design/adr/security/0028-authentication/#memory","title":"Memory","text":"A memory savings of up to ~150 MB has been observed in the POC implementation upon initial startup of the framework.
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS cad71e71ab32 edgex-kong 0.03% 109.4MiB / 15.61GiB 0.68% 255kB / 263kB 0B / 69.6kB 2 9ab4de1e5448 edgex-kong-db 0.11% 64.51MiB / 15.61GiB 0.40% 232kB / 183kB 32.2MB / 53.9MB 18 ff1e97c16e55 edgex-nginx 0.00% 4.289MiB / 15.61GiB 0.03% 3.24kB / 248B 0B / 0B 5 42629157e65c edgex-proxy-auth 0.00% 6.258MiB / 15.61GiB 0.04% 22.9kB / 16.2kB 7.3MB / 0B 11"},{"location":"design/adr/security/0028-authentication/#alternative-using-kong-to-mediate-edgex-internal-microservice-interactions","title":"Alternative: Using Kong to Mediate EdgeX Internal Microservice Interactions","text":"One approach that is seen in some microservice architectures is to force all communication between microservices to go through the external API gateway. There are two problems with this approach:
In the typical EdgeX runtime environment, there is no mechanism to block direct microservice-to-microservice communication.
The external address of the API gateway may not be known to internal code, increasing implementation difficulty for the programmer.
Neither the JWT nor OAuth2 plugins offer a token introspection endpoint, though it would be possible to create a fake service that EdgeX microservices could call to validate a bearer token. Using the Kong Admin API to obtain a public key for JWT validation via database dump would be unnecessarily complex. Validation of an opaque OAuth2 token would require direct access to Kong's backend database and is also unnecessarily complex.
"},{"location":"design/adr/security/0028-authentication/#other-related-adrs","title":"Other Related ADRs","text":"Some device protocols allow for devices to be discovered automatically. A Device Service may include a capability for discovering devices and creating the corresponding Device objects within EdgeX. A framework for doing so will be implemented in the Device Service SDKs.
The discovery process will operate as follows:
A boolean configuration value Device/Discovery/Enabled
defaults to false. If this value is set true, and the DS implementation supports discovery, discovery is enabled.
The SDK will respond to POST requests on the the /discovery endpoint. No content is required in the request. This call will return one of the following codes:
In each of the failure cases a meaningful error message should be returned.
In the case where discovery is triggered, the discovery process will run in a new thread or goroutine, so that the REST call may return immediately.
An integer configuration value Device/Discovery/Interval
defaults to zero. If this value is set to a positive value, and discovery is enabled, the discovery process will be triggered at the specified interval (in seconds).
When discovery is triggered, the SDK calls the implementation function provided by the Device Service. This should perform whatever protocol-specific procedure is necessary to find devices, and pass these devices into the SDK by calling the SDK's filtered device addition function.
Note: The implementation should call back for every device found. The SDK is to take responsibility for filtering out devices which have already been added.
The information required for a found device is as follows:
The filtered device addition function will take as an argument a collection of structs containing the above data. An implementation may choose to make one call per discovered device, but implementors are encouraged to batch the devices if practical, as in future EdgeX versions it will be possible for the SDK to create all required new devices in a single call to core-metadata.
Rationale: An alternative design would have the implementation function return the collection of discovered devices to the SDK. Using a callback mechanism instead has the following advantages:
The filter criteria for discovered devices are represented by Provision Watchers. A Provision Watcher contains the following fields:
Identifiers
: A set of name-value pairs against which a new device's ProtocolProperties are matchedBlockingIdentifiers
: A further set of name-value pairs which are also matched against a new device's ProtocolPropertiesProfile
: The name of a DeviceProfile which should be assigned to new devices which pass this ProvisionWatcherAdminState
: The initial Administrative State for new devices which pass this ProvisionWatcherA candidate new device passes a ProvisionWatcher if all of the Identifiers
match, and none of the BlockingIdentifiers
.
For devices with multiple Device.Protocols
, each Device.Protocol
is considered separately. A pass (as described above) on any of the protocols results in the device being added.
The values specified in Identifiers
are regular expressions.
Note: If a discovered Device is manually removed from EdgeX, it will be necessary to adjust the ProvisionWatcher via which it was added, either by making the Identifiers
more specific or by adding BlockingIdentifiers
, otherwise the Device will be re-added the next time Discovery is initiated.
Note: ProvisionWatchers are stored in core-metadata. A facility for managing ProvisionWatchers is needed, eg edgex-cli
could be extended
This document sets out the required functionality of a Device SDK other than the implementation of its REST API (see ADR 0011) and the Dynamic Discovery mechanism (see Discovery).
This functionality is categorised into three areas - actions required at startup, configuration options to be supported, and support for push-style event generation.
"},{"location":"design/legacy-requirements/device-service/#startup","title":"Startup","text":"When the device service is started, in addition to any actions required to support functionality defined elsewhere, the SDK must:
The core-metadata service maintains an extent of device service registrations so that it may route requests relating to particular devices to the correct device service. The SDK should create (on first run) or update its record appropriately. Device service registrations contain the following fields:
Name
- the name of the device serviceDescription
- an optional brief description of the serviceLabels
- optional string labelsBaseAddress
- URL of the base of the service's REST APIThe default device service Name
is to be hardcoded into every device service implementation. A suffix may be added to this name at runtime by means of commandline option or environment variable. Service names must be unique in a particular EdgeX instance; the suffix mechanism allows for running multiple instances of a given device service.
The Description
and Labels
are configured in the [Service]
section of the device service configuration.
BaseAddress
may be constructed using the [Service]/Host
and [Service]/Port
entries in the device service configuration.
During startup the SDK must supply to the implementation that part of the service configuration which is specific to the implementation. This configuration is held in the Driver
section of the configuration file or registry.
The SDK must also supply a logging facility at this stage. This facility should by default emit logs locally (configurable to file or to stdout) but instead should use the optional logging service if the configuration element Logging/EnableRemote
is set true
. Note: the logging service is deprecated and support for it will be removed in EdgeX v2.0
The implementation on receipt of its configuration should perform any necessary initialization of its own. It may return an error in the event of unrecoverable problems, this should cause the service startup itself to fail.
"},{"location":"design/legacy-requirements/device-service/#configuration","title":"Configuration","text":"Configuration should be supported by the SDK, in accordance with ADR 0005
"},{"location":"design/legacy-requirements/device-service/#commandline-processing","title":"Commandline processing","text":"The SDK should handle commandline processing on behalf of the device service. In addition to the common EdgeX service options, the --instance
/ -i
flag should be supported. This specifies a suffix to append to the device service name.
The SDK should also handle environment variables. In addition to the common EdgeX variables, EDGEX_INSTANCE_NAME should, if set, override the --instance setting.
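As a hedged illustration of these two mechanisms (the binary name and the exact separator used in the resulting suffixed service name are assumptions; each SDK defines the details):
# Run a second instance of a device service by appending an instance suffix to its name
./device-modbus --instance 2

# Equivalent via environment variable; overrides --instance if both are set
EDGEX_INSTANCE_NAME=2 ./device-modbus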
The SDK should use (or for non-Go implementations, re-implement) the standard mechanisms for obtaining configuration from a file or registry.
The configuration parameters to be supported are:
"},{"location":"design/legacy-requirements/device-service/#service-section","title":"Service section","text":"Option Type Notes Host String This is the hostname to use when registering the service in core-metadata. As such it is used by other services to connect to the device service, and therefore must be resolvable by other services in the EdgeX deployment. Port Int Port on which to accept the device service's REST API. The assigned port for experimental / in-development device services is 49999. Timeout Int Time (in milliseconds) to wait between attempts to contact core-data and core-metadata when starting up. ConnectRetries Int Number of times to attempt to contact core-data and core-metadata when starting up. StartupMsg String Message to log on successful startup. CheckInterval String The checking interval to request if registering with Consul. Consul will ping the service at this interval to monitor its liveliness. ServerBindAddr String The interface on which the service's REST server should listen. By default the server is to listen on the interface to which theHost
option resolves. A value of 0.0.0.0
means listen on all available interfaces."},{"location":"design/legacy-requirements/device-service/#clients-section","title":"Clients section","text":"Defines the endpoints for other microservices in an EdgeX system. Not required when using Registry.
"},{"location":"design/legacy-requirements/device-service/#data","title":"Data","text":"Option Type Notes Host String Hostname on which to contact the core-data service. Port Int Port on which to contact the core-data service."},{"location":"design/legacy-requirements/device-service/#metadata","title":"Metadata","text":"Option Type Notes Host String Hostname on which to contact the core-metadata service. Port Int Port on which to contact the core-metadata service."},{"location":"design/legacy-requirements/device-service/#device-section","title":"Device section","text":"Option Type Notes DataTransform Bool For enabling/disabling transformations on data between the device and EdgeX. Defaults to true (enabled). Discovery/Enabled Bool For enabling/disabling device discovery. Defaults to true (enabled). Discovery/Interval Int Time between automatic discovery runs, in seconds. Defaults to zero (do not run discovery automatically). MaxCmdOps Int Defines the maximum number of resource operations that can be sent to the driver in a single command. MaxCmdResultLen Int Maximum string length for command results returned from the driver. UpdateLastConnected Bool If true, update the LastConnected attribute of a device whenever it is successfully accessed (read or write). Defaults to false."},{"location":"design/legacy-requirements/device-service/#logging-section","title":"Logging section","text":"Option Type Notes LogLevel String Sets the logging level. Available settings in order of increasing severity are:TRACE
, DEBUG
, INFO
, WARNING
, ERROR
."},{"location":"design/legacy-requirements/device-service/#driver-section","title":"Driver section","text":"This section is for options specific to the protocol driver. Any configuration specified here will be passed to the driver implementation during initialization.
"},{"location":"design/legacy-requirements/device-service/#push-events","title":"Push Events","text":"The SDK should implement methods for generating Events other than on receipt of device GET requests. The AutoEvent mechanism provides for generating Events at fixed intervals. The asynchronous event queue enables the device service to generate events at arbitrary times, according to implementation-specific logic.
"},{"location":"design/legacy-requirements/device-service/#autoevents","title":"AutoEvents","text":"Each device may have as part of its definition in Metadata a number of AutoEvents
associated with it. An AutoEvent
has the following fields:
The device SDK should schedule device readings from the implementation according to these AutoEvent
defininitions. It should use the same logic as it would if the readings were being requested via REST.
The SDK should provide a mechanism whereby the implementation may submit device readings at any time without blocking. This may be done in a manner appropriate to the implementation language, eg the Go SDK provides a channel on which readings may be pushed, the C SDK provides a function which submits readings to a workqueue.
"},{"location":"design/ucr/","title":"Use Case Records Folder","text":"This folder contains the EdgeX Foundry use case records (UCRs).
"},{"location":"design/ucr/#naming-and-formatting","title":"Naming and Formatting","text":"UCR documents should include the title in their file name as Use-Case-Title.md
. E
EdgeX UCRs should use the template.md file available in this directory.
"},{"location":"design/ucr/#table-of-contents","title":"Table of Contents","text":"A README with a table of contents for current documents is located here. Document authors are asked to keep the TOC updated with each new document entry.
Legacy requirements have their own Table of Contents and are located here.
"},{"location":"design/ucr/Bring-Your-Own-Vault/","title":"Bring Your Own Vault (BYOV) Use Case Requirements","text":""},{"location":"design/ucr/Bring-Your-Own-Vault/#submitters","title":"Submitters","text":"Any segments using EdgeX in secure mode (using Vault to secure EdgeX secrets) and wanting to incorporate their pre-existing or non-EdgeX Vault store.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#motivation","title":"Motivation","text":"Hashicorp Vault is a secure store to manage and protect sensitive (secret) data. Open-source Vault is used in EdgeX to secure any EdgeX micro service secrets (API keys, passwords, database credentials, service credentials, tokens, certificates etc.). The Vault secret store serves as the central repository to keep these secrets in an EdgeX deployment.
Vault provides a unified interface to any secret, while providing tight access control and multiple authentication mechanisms (token, LDAP, etc.). Additionally, Vault supports pluggable \"secrets engines\". EdgeX uses three secrets engines today: key-value secrets engine, Consul secrets engine, and identity secrets engine. EdgeX uses the Consul secrets engine to allow Vault to issue Consul access tokens to EdgeX microservices. See EdgeX Secret Store for more details.
Today, when the secret store is in place and used as the EdgeX secret store, EdgeX requires adopters to use a new instance of Vault provided by the deployment options offered by the EdgeX community (i.e. Docker Compose files, Kubernetes examples, Snaps, etc.). In other words, EdgeX must totally own the Vault install.
In some edge environments where EdgeX may run, Vault is already in place and could be shared by EdgeX. Additionally, adopters may find several applications running at the edge and want these applications to share a single instance of Vault. However, having an existing or new instance of Vault that EdgeX uses but does not instantiate and run (a concept the community has called \u201cbringing your own Vault\u201d) is not straightforward.
If an adopter wishes to use an instance of Vault that they stand up or pre-exists in their environment, the EdgeX project does not provide any guidance or recipe for how to do this. While technically possible, it would require a lot of work on the part of the adopter. See the original issue driving this requirement for a potential list of changes that would be required. In short, this is some tedious work and work that is not documented well (or in some cases at all). It would require an adopter to study the secretstore-setup code and rework or replace the secretstore-setup service with new code to use the existing Vault instance.
Therefore, the motivation for this EdgeX change is to make it easier to allow adopters to \u201cbring their own Vault\u201d instance and have EdgeX use that instance without any changes to the overall function of the EdgeX platform.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#target-users","title":"Target Users","text":"Any adopter that runs EdgeX in secure mode and with a pre-existing Vault or intention to share a Vault instance among edge applications.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#description","title":"Description","text":"Adopters running EdgeX in an environment that has (or will have) an existing Vault instance not setup by EdgeX:
There are no existing solutions for BYOV.
"},{"location":"design/ucr/Bring-Your-Own-Vault/#requirements","title":"Requirements","text":"The basic requirements are straightforward:
Currently the configuration for all the EdgeX services have many common settings. Most of these common settings have the same value for every service deployed in a single EdgeX based solution and possible across identical deployments of the same solution. The motivation for the UCR is to limit this redundancy by having common settings in one location which are then used across all EdgeX services.
"},{"location":"design/ucr/Common%20Configuration/#description","title":"Description","text":"See Common Configuration for complete list of common configuration sections. As stated above most of the values for these common settings are the same across all the EdgeX Services. Below are a couple examples.
Example - Common configuration - Service & Registry
[Service]\nHealthCheckInterval = \"10s\"\nHost = \"localhost\" <overriden in compose file for service specific>\nPort = <Service Specific>\nServerBindAddr = \"\" # Leave blank so default to Host value unless different value is needed.\nStartupMsg = <Service Specific>\nMaxResultCount = 1024\nMaxRequestSize = 0 # Not curently used. Defines the maximum size of http request body in bytes\nRequestTimeout = \"5s\"\n[Service.CORSConfiguration]\nEnableCORS = false\nCORSAllowCredentials = false\nCORSAllowedOrigin = \"https://localhost\"\nCORSAllowedMethods = \"GET, POST, PUT, PATCH, DELETE\"\nCORSAllowedHeaders = \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\"\nCORSExposeHeaders = \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\"\nCORSMaxAge = 3600\n
...
[Registry] Host = \"localhost\" Port = 8500 Type = \"consul\" ```
In the above example only the Port and StartupMsg settings have unique values for each EdgeX Service.
In the Levski release the additional common security metrics require all services must have the Writable.Telemetry and MessageQueue and sections.
Example - Common configuration - Writable.Telemetry and MessageQueue
...\n[Writable.Telemetry]\nInterval = \"30s\"\nPublishTopicPrefix = \"edgex/telemetry\" # /<service-name>/<metric-name> will be added to this Publish Topic prefix\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\n# Service Specifc Metrics\n<Service Specific metric name> = false\n...\n# Common Security Service Metrics\nSecuritySecretsRequested = false\nSecuritySecretsStored = false\nSecurityConsulTokensRequested = false\nSecurityConsulTokenDuration = false\n[Writable.Telemetry.Tags] # Contains the service level tags to be attached to all the service's metrics\n# Gateway=\"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only chnage existing value, not added new ones.\n...\n[MessageQueue]\nProtocol = \"redis\"\nHost = \"localhost\" <override in compose file same for every service>\nPort = 6379\nType = \"redis\"\nAuthMode = \"usernamepassword\" # required for redis messagebus (secure or insecure).\nSecretName = \"redisdb\"\nPublishTopicPrefix = <Service Specific>\nSubscribeEnabled = <Service Specific>\nSubscribeTopic = <Service Specific>\n[MessageQueue.Topics]\n<service specific name> = <Service specific value>\n...\n[MessageQueue.Optional]\n# Default MQTT Specific options that need to be here to enable evnironment variable overrides of them\nClientId = <Service Specific>\nQos = \"0\" # Quality of Sevice values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive = \"10\" # Seconds (must be 2 or greater)\nRetained = \"false\"\nAutoReconnect = \"true\"\nConnectTimeout = \"5\" # Seconds\nSkipCertVerify = \"false\"\n# Additional Default NATS Specific options that need to be here to enable evnironment variable overrides of them\nFormat = \"nats\"\nRetryOnFailedConnect = \"true\"\nQueueGroup = \"\"\nDurable = \"\"\nAutoProvision = \"true\"\nDeliver = \"new\"\nDefaultPubRetryAttempts = \"2\"\nSubject = \"edgex/#\" # Required for NATS Jetstram only for stream autoprovsioning\n
In the above example only the PublishTopicPrefix, SubscribeTopic, SubscribeEnabled, MessageQueue.Topics and ClientId settings have unique values to that of the default EdgeX deployment values.
Note
In Levski release App Services don't have the MessageQueue section and Core Command's is MessageQueue.Internal. These inconstancies will be rectified in EdgeX 3.0 so all EdgeX services have the same MessageQueue section specified in the same manner. Also in EdgeX 3.0, the PublishTopicPrefix and SubscribeTopic settings will be replaced by entries in MessageQueue.Topics.
There are other similar common sections not shown above. As can be seen from the two examples above there is much duplication of configuration settings across all the EdgeX services. This gives rise to the need to have all these common duplicate configuration settings in a single global source.
In addition to the above common settings, Application services and Device services have their own common configuration settings that may have the same values across deployed application or devices services. For Application services these are the Trigger, Writable.Telemetry.Metrics and Clients.core-metadata configuration sections. For Device services these are the Device, Clients and Writable.Telemetry.Metrics configuration sections.
"},{"location":"design/ucr/Common%20Configuration/#existing-solutions","title":"Existing solutions","text":"There are no existing solutions for global configuration that would apply to EdgeX since the current configuration implementation is specific to EdgeX. See 0005-Service-Self-Config for more details on current configuration design.
"},{"location":"design/ucr/Common%20Configuration/#requirements","title":"Requirements","text":""},{"location":"design/ucr/Common%20Configuration/#general","title":"General","text":"Services shall be able reference a common configuration in a manner that is flexible for use with and without the Configuration Provider
Services must be able to override any of the common configuration settings with private service specific configuration values
Example Core Data specific Writable.Telemetry and Service configuration settings in private configuration file
[Writable.Telemetry]\n[Writable.Telemetry.Metrics] # All service's metric names must be present in this list.\nEventsPersisted = false\nReadingsPersisted = false\n
...\n\n [Service]\n Port = 59880\n StartupMsg = \"This is the Core Data Microservice\"\n
Application services shall be able to load separate common configuration specific to Application services
Device services shall be able to load separate common configuration specific to Device services
Service shall have a common way to specify the common configurations to load.
Secret Store configuration shall no longer be part of the each services' standard configuration as it is needed prior to connecting to the Configuration Provider.
Jim White (IOTech Systems)
"},{"location":"design/ucr/Core-Data-Retention/#status","title":"Status","text":"Approved By TSC vote on 1/31/23
Per Architect's meeting of 2/1/23, it was decided that this requirement does not require and ADR (it is not architecturally significant and can be accomplished in core data revisions). Also note that the existing core data clean up in the scheduler service will remain and that it is up to the user to configure this such that it does not conflict with the core data clean up schedule (see other related issues below).
"},{"location":"design/ucr/Core-Data-Retention/#change-log","title":"Change Log","text":"Formerly referred to as Core Data Cache
"},{"location":"design/ucr/Core-Data-Retention/#market-segments","title":"Market Segments","text":"Any/All
"},{"location":"design/ucr/Core-Data-Retention/#motivation","title":"Motivation","text":"Reduction in the amount of data that is persisted at the edge. Reduction in the amount of data sent to the north. Reduction in the amount of data sent to edge analytics (rules engines, etc.).
"},{"location":"design/ucr/Core-Data-Retention/#target-users","title":"Target Users","text":"In cases where there is a need to store data at the edge and that data is subsequently sent to the \u201cnorth\u201d (cloud or enterprise systems, rules engines, AI/ML, etc.), there may be a need to keep (persist) only the latest readings. \u201cLatest\u201d should be configurable and defined by the user \u2013 allowing for a cap on the number of readings for a particular device resource. Queries of core data should also allow for requesting the \u201clatest\u201d N readings as well.
For example, as a temperature sensor may report the current temperature (the device resource) very frequently (say once every 5 seconds), that data may only be sent to other services or systems every minute. The user may wish to have only the last two readings persisted and subsequently sent north during the minute interval (batch and send). Thus, a retention cap is placed on core data for a certain number of readings.
"},{"location":"design/ucr/Core-Data-Retention/#existing-solutions","title":"Existing solutions","text":"Today, core data will persist all data sent to it. The scheduler can be used to \u201cclean\u201d older data (data collected with a timestamp exceeding a specific timeframe). However, there is no way to retain only X number or latest readings. Query methods do not, by default, provide a simple way to query for \u201clatest\u201d readings. On most core data query methods, one could set the limit parameter = 1 (or some other number) and thereby return the latest event or reading since the results are sorted based on origin.
"},{"location":"design/ucr/Core-Data-Retention/#requirements","title":"Requirements","text":"Per the Architect's meeting of 2/1/23, it was determined that this can be implemented in Core Data without the need for additional ADR write up. This feature shall be implemented such that it is off by default (meaning that core data retention will be as is without any cap as specified in the requirements above). The existing scheduler ability to clean older data in core data shall remain in place (with current defaults). It will be up to the user to turn this data retention feature on (setting the hard cap and purging interval) and it will also be up to the user to ensure the standard scheduled data clean up does not conflict with this new data retention feature.
"},{"location":"design/ucr/Core-Data-Retention/#references","title":"References","text":"This UCR describes Use Cases for new Device metadata for Parent to Child Relationships for a given Device.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#submitters","title":"Submitters","text":"Any that deploy EdgeX systems to manage multiple devices. In particular, Industrial Gateway systems that connect to multiple south-bound devices and provide their data to north-bound services.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#motivation","title":"Motivation","text":"It is frequently important to north-bound services to know the parent-child relationships of the devices found in an EdgeX system. This information is generally used for either protocol data constructs or for display purposes.
If not know or provided by the south-bound Device Service, this information might be added to the Device instance's metadata by the north-bound or analytics services, or by the user.
It is desirable that the means of conveying this information become standardized for those systems which provide and use it, so that application services can rely on it, hence proposing here that there be a common definition and usage of this metadata.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#target-users","title":"Target Users","text":"Some north-bound protocols and some UI designs present the system devices in a hierarchial manner, where it is necessary to know which devices are parents and which are their children.
These considerations are most important for gateways that are implemented with the EdgeX framework, since there are potentially many south-bound devices connected to a system.
Examples are * North-bound BACnet Service - where only one \"main\" device is present at the point of external connection (eg, UDP port 0xBAC0) and all other devices must be presented as \"virtually routed devices\" connected to that main \"virtual router\" device. * Azure IoT Hub - where the normal connection for IoT Plug and Play / Digital Twin is for a single device, and any other devices need to somehow fall under that device (eg, with Device Twin \"Modules\") * UI device presentation - where child devices can be shown grouped under their parent, often rolled up until they are expanded to show their data * Multi-tenant deployments of multi-point energy meters - where a main meter has up to 80 Branch Circuit Monitoring (BCM) points connected to it, each BCM modeled as a Device consisting of the same 6 or so energy channels (Device Resources), and each BCM is assigned to a particular tenant. Tenants will be given access to the data from their BCM point(s) but not those of other tenants. A gateway may connect more than one of these multi-point energy meters.
Since there are multiple similar uses for this relationship information on the north side, it is proposed to locate this relationship metadata in the Device object as accessed from core-metadata by all services, rather than to locate it in each north-bound service (which would be particularly problematic for the UI, which gets its data through REST APIs).
The south-bound Device Service that creates a Device is ideally the service which establishes this relationship data, though it is possible that it is unaware of the parent-child relationship. It should be permitted, therefore, for this relationship information to also be set by north-bound services (most likely the UI) and simply ignored by the south-bound Device Service.
It is also necessary to indicate which device is the \"main\" or \"publisher\" device (ie, the gateway device), as any devices without a configured relationship can be inferred to be children of that device.
It is frequently a pattern in data servers to \"walk the device tree\", starting with the main device, then recursively processing its direct child devices, and then the child devices (if any) of those devices, until all devices have been processed. This is normally part of the initialization of device data for a server, since the parent must be processed and initialized before its child devices. Consequently, there is a need for a means to answer the question \"What are the child devices (if any) of device x.y.z?\"; this is commonly done either with the device structure listing its children, or by providing a query that can answer this question.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#extensions-to-the-main-use-case","title":"Extensions to the main Use Case","text":"The Device structure in Eaton's legacy products indicated this parent-child relationship bidirectionally: each device indicated its parent device (if any) with one field, and its child devices (if any) with a list of IDs.
The Device structure in Eaton's cloud solution is a \"DeviceTree\", which is a recursive, hierarchial structure of the connected devices, starting with the \"publisher\" device and its first-level child devices.
There is the BACnet \"virtual routed devices\" model, but I would not recommend it, as it is too convoluted for this simple relationship.
The existing EdgeX UIs group devices by their Device Service, which is a good approach for simple devices without children of their own, but fails if those devices have child devices too.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#requirements","title":"Requirements","text":"Not a requirement: inheritance of device status via the parent-child relationship. Apparently this was a point over which past consideration of parent-child relationships in EdgeX foundered, but it seems complicated for independent services, and can generally be inferred by other services anyway.
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#other-related-issues","title":"Other Related Issues","text":"Use Case for Application Services Extending Device Data Extending Device Data later (./Extending-Device-Data.md) may be related, as, depending on its solution, it may have to indicate a different Device Relationship (\"Extends\").
"},{"location":"design/ucr/Device-Parent-Child-Relationships/#references","title":"References","text":"Azure IoT Edge Gateways and Child Devices
BACnet Virtual Devices: The full BACnet spec is paywalled by ASHRAE. But the relevant snippet is from Annex H, section H.1.1.2 Multiple \"Virtual\" BACnet Devices in a Single Physical Device:
A BACnet device is one that possesses a Device object and communicates using the procedures specified in this standard. In some instances, however, it may be desirable to model the activities of a physical building automation and control device through the use of more than one BACnet device. Each such device will be referred to as a virtual BACnet device. This can be accomplished by configuring the physical device to act as a router to one or more virtual BACnet networks. The idea is that each virtual BACnet device is associated with a unique DNET and DADR pair, i.e. a unique BACnet address. The physical device performs exactly as if it were a router between physical BACnet networks.
"},{"location":"design/ucr/Extending-Device-Data/","title":"Extending Device Data","text":""},{"location":"design/ucr/Extending-Device-Data/#extending-device-data","title":"Extending Device Data","text":"This UCR describes the Use Case for Extending of Device Data by Application Services for a given south-bound Device.
"},{"location":"design/ucr/Extending-Device-Data/#submitters","title":"Submitters","text":"Any that deploy EdgeX systems with analytics, utility, or north-bound microservices that add new Device Resources that are extensions of the original south-bound or service-based Device data.
"},{"location":"design/ucr/Extending-Device-Data/#motivation","title":"Motivation","text":"We find a consistent need as we design microservices for our industrial products: The new analytics, utility, and north-bound microservices almost always need to add Device Resources to manage their configuration, transforms, and status reporting. These Resources are usually needed on a per-Device basis (rather than just overall service configuration or status), which can be seen as extending (adding to) the data of the original south-bound devices.
Adding configuration and status via Resources that extend the original south-bound Device make this configuration and status data easily accessible and translatable to other Application Services and to the UI via REST; we think that this general solution is better than disparate solutions which add custom APIs in each Application Service to Get and Set this data.
What is needed is a common means of showing the relationship between these added Resources, their owning service, and the original south-bound Device Resources; that is, to indicate that these Resources \"extend\" the original Device data.
It is desirable that the means of conveying this information become standardized for those EdgeX microservices which provide and use it, hence proposing here that there be a common EdgeX way defined to do this.
"},{"location":"design/ucr/Extending-Device-Data/#target-users","title":"Target Users","text":"Picture the extremely simple case of a south-bound sensor device that just measures Temperature and Humidity and provides these as Device Resources. If we then add analytics and north-bound microservices: - A Trending service that needs Device Resources to indicate that Temperature and Humidity are trended for, eg, Minimum, Average, and Maximum over a 1 hour trend interval. - An Alarming service that needs Device Resources to describe the Alarm Rules used to monitor Temperature and Humidity, plus a device-level InAlarm status. - A Cloud service that reports not just the Temperature and Humidity but also their Trend configuration and Alarm Rule Resources. In addition, the Cloud service adds its own Resources to direct the Cadence with which this Device's data is reported.
Now scale this up to 100 such Temperature/Humidity sensors, and, if not using extended devices as described here, it would grow difficult to match all of the free-standing (unassociated) Resources to their original sensor data. And add the requirement that all these resources must be able to be seen and managed locally via REST or Message Bus, and potentially from north-bound services like Modbus/TCP, and from the Cloud (because everybody wants to control everything from the Cloud).
Furthermore, from the end user's point of view, the Trend configuration, Alarm Rules, and Cloud Cadence that are added for a given Device are all seen as aspects of the Temperature/Humidity Device, as is common for Digital Twin representations, and not as separated, free-standing entities. So there must be some means to relate the extended Device Resources to the original south-bound Device and its Device Resources.
"},{"location":"design/ucr/Extending-Device-Data/#existing-solutions","title":"Existing solutions","text":"In EdgeX today, Devices and their Resources such as those described in the last section can be added, but they are not seen as related to the south-bound Device or to each other, except perhaps by well-chosen Labels or Tags.
The existing south-bound Device Profiles could be extended to simply add the new Resources, but nothing connects these Resources to their owning Service (ie, so core-command could be used to manage them).
"},{"location":"design/ucr/Extending-Device-Data/#requirements","title":"Requirements","text":"Not a requirement: means of using or combining Resources from multiple south-bound Devices into one Extended Resource.
"},{"location":"design/ucr/Extending-Device-Data/#other-related-issues","title":"Other Related Issues","text":""},{"location":"design/ucr/Extending-Device-Data/#references","title":"References","text":""},{"location":"design/ucr/Microservice-Authentication/","title":"Microservice Authentication","text":""},{"location":"design/ucr/Microservice-Authentication/#microservice-authentication","title":"Microservice Authentication","text":""},{"location":"design/ucr/Microservice-Authentication/#submitters","title":"Submitters","text":"Modern cybersecurity standards for IoT require peer-to-peer authentication of software components. Representative IoT security standards make explicit reference to authentication of both human and non-human interactions between components:
CR 1.2 (Requirement): Components shall provide the capability to identify itself and authenticate with any other component (software application, embedded device, host device and network devices), according to ISA-62443-3-3 SR 1.2.
SR 1.2 (Requirement): The control system shall provide the capability to identify and authenticate all software processes and devices. This capability shall enforce such identification and authentication on all interfaces which provide access to the control system to support least privilege in accordance with applicable security policies and procedures.
PR.AC-1: Identities and credentials are issued, managed, verified, revoked, and audited for authorized devices, users, and processes.
"},{"location":"design/ucr/Microservice-Authentication/#target-users","title":"Target Users","text":"Microservice authentication provides the following benefits, which are potentially valuable to all of the listed target users:
Provides a defense against malware running on the device, as currently there is no mechanism to ensure that only authorized users or processes are allowed to invoke EdgeX services.
Provides greater auditability as to who initiated a particular action on the device.
Depending on implementation, may provide a way to revoke access that was previously granted, or allow customers to tie in to enterprise identity management systems.
For purposes of this UCR, microservice authentication implies that the receiving microservice has access to the identity of the caller and can write program logic based on that identity.
"},{"location":"design/ucr/Microservice-Authentication/#existing-solutions","title":"Existing solutions","text":"Microservice authentication is currently implemented around two primary vectors:
Initiator sends an identifier along with a request to the receiver. The identifier is cryptographically validated using a key trusted by the receiver, or the receiver asks a trusted third party to verify the identifier.
A benefit of token-based authentication schemes is identity delegation, whereby the identifier can be passed through a chain of calls to preserve the identity of the original initiator. The identifier can often be tunneled through other protocols. Another benefit of token-based authentication is that it flows easily through a web application firewall.
A drawback of token-based authentication is that due to MITM threats, token-based authentication over an unencrypted network is insecure. Another drawback of token-based authentication is that it is unidirectional: the receiver can authenticate the initiator, but not vice-versa.
End-to-end encryption implies that only the original sender and the final intended receiver ever see the unencrypted message contents. If a message is simply encrypted from process-to-process or machine-to-machine, where an intermediary can decrypt the message, even if the entire flow encrypted point-to-point, then the message is simply said to be \"encrypted in-transit.\" If the architecture of the system requires a server-based intermediary between two clients, then in a E2EE system, only the two communicating clients have access the unencrypted data.
"},{"location":"design/ucr/Microservice-Authentication/#requirements","title":"Requirements","text":"When an EdgeX service is running in secure mode, unauthenticated inbound requests shall be rejected.
When an EdgeX service is running in secure mode and initiating an outbound request to a peer EdgeX service, the outbound request shall be authenticated.
Authentication shall work in the context of bare-metal deployments, snap-based deployments, docker-based deployments, and Kubernetes-based deployments.
This UCR does not prescribe what layer in the software stack performs authentication.
"},{"location":"design/ucr/Microservice-Authentication/#other-related-issues","title":"Other Related Issues","text":"Including identity and access management in EdgeX system (edgex-go#3845): Expresses the desire to integrate human identity into the EdgeX system. The BSI presentation to EdgeX TSC also explicitly mentions Auth0 integration.
Investigate alternatives to Kong that have better platform support and use less memory (edgex-go#3747): Expresses the concern over the size of the Kong+Postgres implementation, and a desire to find something more efficient.
None
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/","title":"Provision Watch via Device Metadata","text":""},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#provision-watch-via-device-metadata","title":"Provision Watch via Device Metadata","text":"This UCR describes the Use Case for Provision Watching via Additional Device Metadata, beyond the protocol properties currently used exclusively for matching in Provision Watchers.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#submitters","title":"Submitters","text":"Any that deploy EdgeX systems with south-bound Device Services where Provisioning is dependent on device data discovered in devices, not just their protocol properties. Any that deploy EdgeX systems with analytics, utility, or north-bound microservices that must \"discover\" Devices added to the EdgeX core-metadata by south-bound Device Services.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#motivation","title":"Motivation","text":"The autodiscovery of Devices using Provision Watchers is a useful feature of Device Services; currently, the Provision Watcher implementation in the two Device SDKs uses only the protocol properties of a discovered Device to match against the \"identifiers\" specified in the Provision Watcher metadata. The implementations use regular expression matching against the \"identifiers\", and also filter out any Devices whose protocol properties match the \"blockingIdentifiers\" of the Provision Watcher metadata.
Provisioning for south-bound services today must have a strict knowledge of the devices that will be discovered, but some protocols (eg, BACnet) have discoverable device properties which can provide a further discrimination, for example, to use the device's modelName to determine which Device Profile should be applied to it. We would like that the metadata from the Device (not necessarily from core-metadata, but properties of the Device) can be selected to match for provisioning, and not limit the property names to a fixed set of properties.
We are finding that Hybrid App-Device Services later (./Hybrid-App-Device-Services.md) also want to use Provision Watchers, so that they can be configured at run-time to work with new Devices, but these do not need or want to match the protocol properties of a Device; instead, they want to match or exclude based on Device instance metadata properties such as the \"modelName\", \"profileName\", \"name\", and \"labels\".
This UCR describes the Use Case for using these additional properties for Provision Watching.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#target-users","title":"Target Users","text":"Application Services using the Device SDKs (ie, Hybrid App-Device Services) can take advantage of the Provision Watching feature and APIs to \"discover\" new EdgeX devices from the south-bound Device Services, match them to app-specific Device Profiles, and handle their data with analysis or transforms.
A south-bound Device Service may discover devices across a range of protocol properties, and those devices may need different Device Profiles depending upon metadata properties of the discovered devices, for example, the ModelName field of BACnet data. While the \"modelName\" is an obvious target, the Device Service may want to use other device metadata for Provisioning as well for inclusion or exclusion.
For another example, consider the case where each of three Hybrid App-Device Services (a Trending Service, an Alarm Monitoring Service, and a Cloud Service) want to handle the data originating in a south-bound Modbus service for any \"Watt-o-Meter\" (Model Name) Device. So each service is configured with a Provision Watcher that will try to match that \"modelName\", or else \"profileName\" of \"Watt-o-Meter-Modbus-Profile-01\", of devices discovered in core-metadata or shown as added via the control plane events and, if a match is found, add a new \"extended\" Device to each service using the appropriate Device Profile (eg, \"Watt-o-Meter-Trends-Profile-01\" for the Trending Service), and giving the new extended Device a name, for example based on the original and the service (eg, \"Meter-333-Trending\").
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#extensions","title":"Extensions","text":"Other Device metadata properties appear to be good candidates for a user to choose from: - name: The Device name may be good for regular expression matching, eg \"name\":\"Meter-*\" - labels: Since this one is free-form and open to the owner to add labels of their choosing, this one should be good for both matching and the exclusion list. Eg, if a Device had \"labels\": \"meter, basement, energy\", then it could be matched or excluded for \"labels\":\"basement\". - serial number: with regular expressions, this can be a powerful matching choice. - MAC address: similar to serial number for a specific range of vendor devices.
The Device Service which discovers the Device will probably want to permit specific metadata properties to be used.
"},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#existing-solutions","title":"Existing solutions","text":"In EdgeX today, as noted, Provision Watchers match only the protocol properties, using regular expression matching and excluding. The example given for the REST API is a good one:
\"identifiers\": {\n\"address\": \"localhost\",\n\"port\": \"3[0-9]{2}\"\n},\n
Note its use of regular expression matching for the port number. "},{"location":"design/ucr/Provision-Watch-via-Device-Metadata/#requirements","title":"Requirements","text":"Currently one must have physical devices and appropriate environment to produce real device data (Event/Readings) into an EdgeX solution for other EdgeX services (Core Data, App Services, eKuiper Rules Engine) to consume. This is often not the case when someone is developing/testing one of these consuming EdgeX services. A good example of this is the RFID LLRP Inventory App Service . Testing this service is dependent on the RFID LLRP Device Service, physical LLRP RFID readers, RFID Tags and environment where these are deployed. Having a way to record EdgeX the Event/Readings from an actual deployment that then could be replayed in development environment for testing would be very valuable.
Other potential uses are:
Target users have the need to be able to replay recorded EdgeX Event/Readings for functional, performance or reproducible testing. This UCR describes a new capability that allows user to first record Event/Readings from real devices in real-time and be able to replay the Event/Readings as if it was in real-time. The static device profile and device definition files at time of capture will need to be available and loaded at the time the captured data is replayed.
"},{"location":"design/ucr/Record-and-Replay/#existing-solutions","title":"Existing solutions","text":"There are simulators for some devices (i.e. Modbus), but there isn't an general solution to reproduce real device data into EdgeX without the physical devices being present. These simulators also don't have a way to produce a specific set of results in a timeline as do physical devices.
"},{"location":"design/ucr/Record-and-Replay/#requirements","title":"Requirements","text":""},{"location":"design/ucr/Record-and-Replay/#record","title":"Record","text":"Shall have capability to record device Event/Reading data as received from the Device Service(s).
Recorded Event/Reading timestamps shall be sufficient to determine the captured rate.
Recorded Events shall be sufficient to determine the major version of EdgeX used, i.e. ApiVersion
Record must allow the duration of capture and/or the max number of Readings to capture to be specified
Record must capture all Device definitions for devices referenced in the captured Event/Readings
Record must capture all Device Profiles for devices referenced in the captured Event/Readings
Record must have the ability to record data only from specified target devices or device profiles.
Shall have capability to export captured data for use at a later time or to send to other users
Exported data shall be in JSON format with option for it to be compressed (ZIP or GZIP).
Exported binary Event/Readings shall not be re-encoded in CBOR.
Shall have capability to import data that was previously recorded and exported
Import must validate that the current EdgeX version is compatible with the APIVersion captured in the recorded Events.
Import must add any captured Device definitions and Device Profiles to the system prior to play back.
Import shall have option to overwrite or use existing Device definitions and Device Profiles
Shall have capability to replay previously captured data
Replay must allow captured data to be replayed at captured, slower or faster speeds
Replay must adjust Event/Reading timestamps to be current time when published.
Replay must allow for the repeat replay option with number of times to repeat the captured data.
Replay must allow recorded data from multiple sources at the same time (this mimics more device services feeding EdgeX )
When a new camera device is added to the system only Core Metadata and the Device Service managing the new camera device are aware. There are use cases when other parts of the system need to know when a new device has been added to the system. This UCR will focus on the camera management use case which illustrates the need for these new System Events.
"},{"location":"design/ucr/System-Events-for-Devices/#target-users","title":"Target Users","text":"System Events (aka Control Plane Events - CPE) are events generated by the system when there are changes in part of the system that are important for other parts of the system to know about. This UCR will focus on the Device System Events use case as related to camera management. These Device System Events could be utilized by many other use cases in similar manner.
The new EdgeX USB and ONVIF camera Device Services (not yet released) implement auto provisioning which detects when a new camera device has been connected to USB or added to the network. New Device objects are created in Core Metadata for the new camera devices that have been auto provisioned. The auto provisioning also detects existing known camera devices, determines if there have been changes to the device details, such as IP address, and updates the Device object in Core Metadata with any changes. Device objects can also be manually deleted from Core Metadata once camera devices have been permanently disconnected.
A camera management application service needs to know when a new camera has been added so that it can initiate AI/ML processing on the stream from the new camera. The service also needs to know when an existing camera device has been updated so that it can make any needed adjustments such as restarting the AI/ML processing using the new IP address of the camera. Finally the service needs to know when an existing camera device has been removed so that it can stop the AI/ML processing for the removed camera.
"},{"location":"design/ucr/System-Events-for-Devices/#existing-solutions","title":"Existing solutions","text":"Parts of the system (i.e. application service) must poll Core Metadata for list of devices to determine if a device has been added, update or deleted. To do this it must keep its own list of Device objects to make these determinations.
As a temporary stop gap for the initial upcoming release of the new ONVIF camera device service, an enhancement was added which publishes an EdgeX Event/Reading when a new camera device has been added, updated or modified. The Reading contains the information about the event type and the device name. This is improper use of the EdgeX Event/Reading which is intended for readings from devices, not System Events. This feature in the ONVIF Camera Device Service will be removed once System Events for Devices are in place.
"},{"location":"design/ucr/System-Events-for-Devices/#requirements","title":"Requirements","text":"Subscription shall allow filtering for Device System Events for the following:
Device Service (i.e. only want Events for which the device is owned by device-onvif-camera)
Device Profile (i.e. only want Events for which the device is for a specific device profile)
Event Type, (i.e. only want Add events)
Each Device System Event must contain at a minimum the following, which is all that is needed to send a command to the device to get the stream URL or stop the AI/ML processing:
Event Type: Added, Updated or Deleted
Note
Other details about the device, if not present in the System Event, can be queried from Core Metadata using the Device Name
"},{"location":"design/ucr/System-Events-for-Devices/#other-related-issues","title":"Other Related Issues","text":"Deployment at scale, i.e. identical or almost identical deployments across many locations, would benefit from the ability to load service files from a central location. This would allow the maintainer to make changes once to a shared file and have them apply to all or a subset of deployments. The following are some EdgeX service files that would benefit for this capability:
Unit of Measure file used by Core Metadata
Service Configuration files
./res/configuration.toml
, but can be overridden via -cf/--configFile command line flag.Token Configuration file for Security File Token Provider
Device Profiles, Device Definition and Provision Watchers
These files can reside in a device services local file system and are pushed to Core Metadata the first time the service starts. Example here
These files are found by scanning the folders specified in configuration here
Note
These files are only pushed to Core Metadata the first time the device service is loaded. They are not currently re-pushed once they exist in Core Metadata even when the files have changed locally. Thus updating the files locally or in a shared location will not result in changing the contents of these files in Core Metadata. They still benefit from this capability during initial deployment and when new files are added.
Currently all files loaded by services are expected to be on the local file system, thus are duplicated many times when deploying at scale.
"},{"location":"design/ucr/URIs-for-Files/#target-users","title":"Target Users","text":"This UCR proposes to enhance loading of files in EdgeX by allowing the location of the file to be optionally specified as an URI.
"},{"location":"design/ucr/URIs-for-Files/#existing-solutions","title":"Existing solutions","text":"Loading shared files via a URI is not new in the software industry. Here is the Wiki page for Uniform Resource Identifier
"},{"location":"design/ucr/URIs-for-Files/#requirements","title":"Requirements","text":"username:password@
http
and https
schemes from the above spec shall be supported as well as plain paths
as is todayThe file
scheme shall not be supported as it doesn't allow for relative paths
The URI spec shall be extended to allow the specifying of EdgeX service secrets from the service's Secret Store in order to avoid credentials in plain text. Details on how are left to the ADR.
-cc/--commonConfig
flag can be a URI to a remote files. The implementation of this portion of the ADR is dependent on the UCR and following ADR.In addition to the examples listed in this section of the documentation, you will find other examples in the EdgeX Examples Repository.
The tabs below provide a listing (may be partial based on latest updates) for reference.
Application ServicesDeploymentDevice ServicesSecuritySee App Service Examples for a listing of custom and configurable application service examples.
Example Location Helm (Kubernetes) Github - examples, deployment Raspberry Pi 4 Github - examples, raspberry-pi-4 Example Location TBD Example Location security-enabled EdgeX Remote Device Service Github - examples, securityWarning
Not all the examples in the EdgeX Examples repository are available for all EdgeX releases. Check the documentation for details.
"},{"location":"examples/AppServiceExamples/","title":"App Service Examples","text":"The following is a list of examples we currently have available that demonstrate various ways that the Application Functions SDK or App Service Configurable can be used. All of the examples can be found here in the edgex-examples repo. They focus on how to leverage various built in provided functions as mentioned above as well as how to write your own in the case that the SDK does not provide what is needed.
Example Name Description Camera Management Utilizes the ONVIF and USB device services and demonstrates the management of these cameras and their integration with video inferencing Simple Filter XML Demonstrates Filtering of Events by Device names and transforming data to XML Simple Filter XML HTTP Same example as #1, but result published to HTTP Endpoint Simple Filter XML MQTT Same example as #1, but result published to MQTT Broker Simple CBOR Filter Demonstrates Filtering of Events by Resource names for Event that is CBOR encoded containing a binary reading Advanced Filter Convert Publish Demonstrates Filtering of Events by Resource names, custom function to convert the reading and them publish the modified Event back to the MessageBus under a different topic. Advanced Target Type Demonstrates use of custom Target Type and use of HTTP Trigger Cloud Export MQTT Demonstrates simple custom Cloud transform and exporting to Cloud MQTT Broker. Cloud Event Transform Demonstrates custom transforms that convert Event/Readings to and from Cloud Events Send Command Demonstrates sending commands to a Device via the Command Client. Secrets Demonstrates how to retrieve secrets from the service SecretStore Custom Trigger Demonstrates how to create and use a custom trigger NATS RPC Demonstrates how to create a synchronous request/reply trigger using NATS messaging Fledge Export Demonstrates custom conversion of Event/Reading to Fledge format and then exporting to Fledge service REST endpoint Influxdb Export Demonstrates custom conversion of Event/Reading to InfluxDB timeseries format and then exporting to InFluxDB via MQTT Json Logic Demonstrates using the built in JSONLogic Evaluate pipeline function IBM Export Profile Demonstrates a custom App Service Configurable profile for exporting to IBM Cloud"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/","title":"Command Devices with eKuiper Rules Engine","text":""},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#overview","title":"Overview","text":"This document describes how to actuate a device with rules trigger by the eKuiper rules engine. To make the example simple, the virtual device device-virtual is used as the actuated device. The eKuiper rules engine analyzes the data sent from device-virtual services, and then sends a command to virtual device based a rule firing in eKuiper based on that analysis. It should be noted that an application service is used to route core data through the rules engine.
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#use-case-scenarios","title":"Use Case Scenarios","text":"Rules will be created in eKuiper to watch for two circumstances:
Random-UnsignedInteger-Device
device (one of the default virtual device managed devices), and if a uint8
reading value is found larger than 20
in the event, then send a command to Random-Boolean-Device
device to start generating random numbers (specifically - set random generation bool to true).Random-Integer-Device
device (another of the default virtual device managed devices), and if the average for int8
reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device
device to stop generating random numbers (specifically - set random generation bool to false).These use case scenarios do not have any real business meaning, but easily demonstrate the features of EdgeX automatic actuation accomplished via the eKuiper rule engine.
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#prerequisite-knowledge","title":"Prerequisite Knowledge","text":"This document will not cover basic operations of EdgeX or LF Edge eKuiper. Readers should have basic knowledge of:
Make sure you read the EdgeX eKuiper Rule Engine Tutorial and successfully run eKuiper with EdgeX.
First create a stream that can consume streaming data from the EdgeX application service (rules engine profile). This step is not required if you already finished the EdgeX eKuiper Rule Engine Tutorial.
curl -X POST \\\nhttp://$ekuiper_docker:59720/streams \\\n-H 'Content-Type: application/json' \\\n-d '{\"sql\": \"create stream demo() WITH (FORMAT=\\\"JSON\\\", TYPE=\\\"edgex\\\")\"}'\n
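If you prefer to script this step, the same stream-creation request can be issued from a small Go program. This is a minimal sketch using only the standard library; it assumes eKuiper's REST API is reachable at localhost:59720 (substitute your own host, as with $ekuiper_docker in the curl example above).

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Same payload as the curl example: create the "demo" stream over the EdgeX message bus.
	payload := []byte(`{"sql": "create stream demo() WITH (FORMAT=\"JSON\", TYPE=\"edgex\")"}`)

	resp, err := http.Post("http://localhost:59720/streams", "application/json", bytes.NewBuffer(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("stream creation status:", resp.Status)
}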
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#get-and-test-the-command-url","title":"Get and Test the Command URL","text":"Since both use case scenario rules will send commands to the Random-Boolean-Device
virtual device, use the curl request below to get a list of available commands for this device.
curl http://127.0.0.1:59882/api/v3/device/name/Random-Boolean-Device | jq\n
It should print results like those below.
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"deviceCoreCommand\": {\n\"deviceName\": \"Random-Boolean-Device\",\n\"profileName\": \"Random-Boolean-Device\",\n\"coreCommands\": [\n{\n\"name\": \"WriteBoolValue\",\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Bool\",\n\"valueType\": \"Bool\"\n},\n{\n\"resourceName\": \"EnableRandomization_Bool\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"WriteBoolArrayValue\",\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/WriteBoolArrayValue\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"BoolArray\",\n\"valueType\": \"BoolArray\"\n},\n{\n\"resourceName\": \"EnableRandomization_BoolArray\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"Bool\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/Bool\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Bool\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"BoolArray\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Boolean-Device/BoolArray\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"BoolArray\",\n\"valueType\": \"BoolArray\"\n}\n]\n}\n]\n}\n}\n
From this output, look for the URL associated with the PUT
command (the first URL listed). This is the command eKuiper will use to call on the device. There are two parameters for this command:
- Bool: sets the value returned when other services want to get device data. This parameter is only used when EnableRandomization_Bool is set to false.
- EnableRandomization_Bool: enables or disables the randomized generation of bool values. If this value is set to true, the first parameter is ignored.
You can test calling this command with its parameters using curl as shown below.
curl -X PUT \\\nhttp://edgex-core-command:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue \\\n-H 'Content-Type: application/json' \\\n-d '{\"Bool\":\"true\", \"EnableRandomization_Bool\": \"true\"}'\n
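The same actuation can also be done programmatically. Below is a minimal Go sketch using only the standard library; it assumes core command is reachable at localhost:59882 (inside the Docker network the host would be edgex-core-command, as in the curl example above).

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Parameters accepted by the WriteBoolValue command of the virtual device.
	body := []byte(`{"Bool":"true", "EnableRandomization_Bool": "true"}`)

	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue",
		bytes.NewBuffer(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("command status:", resp.Status)
}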
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#create-rules","title":"Create rules","text":"Now that you have EdgeX and eKuiper running, the EdgeX stream defined, and you know the command to actuate Random-Boolean-Device
, it is time to build the eKuiper rules.
Again, the 1st rule is to monitor for events coming from the Random-UnsignedInteger-Device
device (one of the default virtual device managed devices), and if a uint8
reading value is found larger than 20
in the event, then send the command to Random-Boolean-Device
device to start generating random numbers (specifically - set random generation bool to true). Given the URL and parameters to the command, below is the curl command to declare the first rule in eKuiper.
curl -X POST \\\nhttp://$ekuiper_server:59720/rules \\\n-H 'Content-Type: application/json' \\\n-d '{\n \"id\": \"rule1\",\n \"sql\": \"SELECT uint8 FROM demo WHERE uint8 > 20\",\n \"actions\": [\n {\n \"rest\": {\n \"url\": \"http://edgex-core-command:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n \"method\": \"put\",\n \"dataTemplate\": \"{\\\"Bool\\\":\\\"true\\\", \\\"EnableRandomization_Bool\\\": \\\"true\\\"}\",\n \"sendSingle\": true\n }\n },\n {\n \"log\":{}\n }\n ]\n}'\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#the-second-rule","title":"The second rule","text":"The 2nd rule is to monitor for events coming from the Random-Integer-Device
device (another of the default virtual device managed devices), and if the average for int8
reading values (within 20 seconds) is larger than 0, then send a command to Random-Boolean-Device
device to stop generating random numbers (specifically - set random generation bool to false). Here is the curl request to set up the second rule in eKuiper. The same command URL is used since the same device action (Random-Boolean-Device's PUT bool command
) is being actuated, but with different parameters.
curl -X POST \\\nhttp://$ekuiper_server:59720/rules \\\n-H 'Content-Type: application/json' \\\n-d '{\n \"id\": \"rule2\",\n \"sql\": \"SELECT avg(int8) AS avg_int8 FROM demo WHERE int8 != nil GROUP BY TUMBLINGWINDOW(ss, 20) HAVING avg(int8) > 0\",\n \"actions\": [\n {\n \"rest\": {\n \"url\": \"http://edgex-core-command:59882/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n \"method\": \"put\",\n \"dataTemplate\": \"{\\\"Bool\\\":\\\"false\\\", \\\"EnableRandomization_Bool\\\": \\\"false\\\"}\",\n \"sendSingle\": true\n }\n },\n {\n \"log\":{}\n }\n ]\n}'\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#watch-the-ekuiper-logs","title":"Watch the eKuiper Logs","text":"Both rules are now created in eKuiper. eKuiper is busy analyzing the event data coming for the virtual devices looking for readings that match the rules you created. You can watch the edgex-kuiper container logs for the rule triggering and command execution.
docker logs edgex-kuiper\n
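In addition to the container logs, you can poll eKuiper's rule status endpoint to confirm a rule is running and see its metrics. A minimal Go sketch is shown below; the port 59720 and the rule id rule1 follow the examples above, and localhost is assumed for the eKuiper host.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Query the runtime status (metrics) of rule1; repeat for rule2 as needed.
	resp, err := http.Get("http://localhost:59720/rules/rule1/status")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}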
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#explore-the-results","title":"Explore the Results","text":"You can also explore the eKuiper analysis that caused the commands to be sent to the service. To see the the data from the analysis, use the SQL below to query eKuiper filtering data.
SELECT int8, \"true\" AS randomization FROM demo WHERE uint8 > 20\n
The output of the SQL should look similar to the results below.
[{\"int8\":-75, \"randomization\":\"true\"}]\n
"},{"location":"examples/Ch-CommandingDeviceThroughRulesEngine/#extended-reading","title":"Extended Reading","text":"Use these resources to learn more about the features of LF Edge eKuiper.
EdgeX - Levski Release
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview","title":"Overview","text":"In this example, we use a script to simulate a custom-defined MQTT device, instead of a real device. This provides a straight-forward way to test the device-mqtt features using an MQTT-broker.
Note
Multi-Level Topics move metadata (i.e., device name, command name, etc.) from the payload into the MQTT topics. Notice the sections marked with Using Multi-level Topic: for relevant input/output throughout this example.
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-the-custom-device-configuration","title":"Prepare the Custom Device Configuration","text":"In this section, we create folders that contain files required for deployment of a customized device configuration to work with the existing device service:
- custom-config\n |- devices\n |- my.custom.device.config.yaml\n |- profiles\n |- my.custom.device.profile.yml\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#device-configuration","title":"Device Configuration","text":"Use this configuration file to define devices and schedule jobs. device-mqtt generates a relative instance on start-up.
Create the device configuration file, named my.custom.device.config.yaml
, as shown below:
# Pre-define Devices\ndeviceList:\n- name: \"my-custom-device\"\nprofileName: \"my-custom-device-profile\"\ndescription: \"MQTT device is created for test purpose\"\nlabels: [ \"MQTT\", \"test\" ]\nprotocols:\nmqtt:\nCommandTopic: \"command/my-custom-device\"\nautoEvents:\n- interval: \"30s\"\nonChange: false\nsourceName: \"message\"\n
Note
CommandTopic
is used to publish the GET or SET command request
The DeviceProfile defines the device's values and operation method, which can be Read or Write.
Create a device profile, named my.custom.device.profile.yml
, with the following content:
name: \"my-custom-device-profile\"\nmanufacturer: \"iot\"\nmodel: \"MQTT-DEVICE\"\ndescription: \"Test device profile\"\nlabels:\n- \"mqtt\"\n- \"test\"\ndeviceResources:\n-\nname: randnum\nisHidden: true\ndescription: \"device random number\"\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\n-\nname: ping\nisHidden: true\ndescription: \"device awake\"\nproperties:\nvalueType: \"String\"\nreadWrite: \"R\"\n-\nname: message\nisHidden: false\ndescription: \"device message\"\nproperties:\nvalueType: \"String\"\nreadWrite: \"RW\"\n-\nname: json\nisHidden: false\ndescription: \"JSON message\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"RW\"\nmediaType: \"application/json\"\n\ndeviceCommands:\n-\nname: values\nreadWrite: \"R\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"randnum\" }\n- { deviceResource: \"ping\" }\n- { deviceResource: \"message\" }\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#prepare-docker-compose-file","title":"Prepare docker-compose file","text":"$ git clone git@github.com:edgexfoundry/edgex-compose.git\n$ cd edgex-compose\n$ git checkout main\n
Note
Use the main branch until levski is released.
$ cd compose-builder\n$ make gen ds-mqtt mqtt-broker no-secty ui\n
$ ls | grep 'docker-compose.yml'\ndocker-compose.yml\n
Create a docker-compose file docker-compose.override.yml
to extend the compose file generated by the compose-builder. In this file, we add the volume path and environment variables as shown below:
# docker-compose.override.yml\n\nversion: '3.7'\n\nservices:\ndevice-mqtt:\nenvironment:\nDEVICE_DEVICESDIR: /custom-config/devices\nDEVICE_PROFILESDIR: /custom-config/profiles\nvolumes:\n- /path/to/custom-config:/custom-config\n
Note
Replace the /path/to/custom-config
in the example with the correct path
Deploy EdgeX using the following commands:
$ cd edgex-compose/compose-builder\n$ docker compose pull\n$ docker compose -f docker-compose.yml -f docker-compose.override.yml up -d\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#using-a-mqtt-device-simulator","title":"Using a MQTT Device Simulator","text":""},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overview_1","title":"Overview","text":""},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#expected-behaviors","title":"Expected Behaviors","text":"Using the detailed script below as a simulator, there are three behaviors:
The simulator publishes the data to the MQTT broker with topic incoming/data/my-custom-device/values
and the message is similar to the following:
{\n\"randnum\" : 4161.3549,\n\"ping\" : \"pong\",\n\"message\" : \"Hello World\"\n}\n
Receive the reading request, then return the response.
The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/randnum/get/293d7a00-66e1-4374-ace0-07520103c95f
and message returned is similar to the following:
{\"randnum\":\"42.0\"}\n
The simulator returns the response to the MQTT broker, the topic is command/response/#
and the message is similar to the following:
{\"randnum\":\"4.20e+01\"}\n
Receive the set request, then change the device value.
The simulator receives the request from the MQTT broker, the topic is command/my-custom-device/testmessage/set/293d7a00-66e1-4374-ace0-07520103c95f
and the message is similar to the following:
{\"message\":\"test message...\"}\n
The simulator changes the device value and returns the response to the MQTT broker, the topic is command/response/#
and the message is similar to the following:
{\"message\":\"test message...\"}\n
To implement the simulated custom-defined MQTT device, create a JavaScript file named mock-device.js
, with the following content:
function getRandomFloat(min, max) {\nreturn Math.random() * (max - min) + min;\n}\n\nconst deviceName = \"my-custom-device\";\nlet message = \"test-message\";\nlet json = {\"name\" : \"My JSON\"};\n\n// DataSender sends async value to MQTT broker every 15 seconds\nschedule('*/15 * * * * *', ()=>{\nvar data = {};\ndata.randnum = getRandomFloat(25,29).toFixed(1);\ndata.ping = \"pong\"\ndata.message = \"Hello World\"\n\npublish( 'incoming/data/my-custom-device/values', JSON.stringify(data));\n});\n\n// CommandHandler receives commands and sends response to MQTT broker\n// 1. Receive the reading request, then return the response\n// 2. Receive the set request, then change the device value\nsubscribe( \"command/my-custom-device/#\" , (topic, val) => {\nconst words = topic.split('/');\nvar cmd = words[2];\nvar method = words[3];\nvar uuid = words[4];\nvar response = {};\nvar data = val;\n\nif (method == \"set\") {\nswitch(cmd) {\ncase \"message\":\nmessage = data[cmd];\nbreak;\ncase \"json\":\njson = data[cmd];\nbreak;\n}\n}else{\nswitch(cmd) {\ncase \"ping\":\nresponse.ping = \"pong\";\nbreak;\ncase \"message\":\nresponse.message = message;\nbreak;\ncase \"randnum\":\nresponse.randnum = 12.123;\nbreak;\ncase \"json\":\nresponse.json = json;\nbreak;\n}\n}\nvar sendTopic =\"command/response/\"+ uuid;\npublish( sendTopic, JSON.stringify(response));\n});\n
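As an aside, the same simulator behavior can be sketched in Go using the third-party Eclipse Paho MQTT client (github.com/eclipse/paho.mqtt.golang); any MQTT client library would work equally well. The broker address 127.0.0.1:1883 and the topics are taken from this example; everything else in the sketch is illustrative only. The JavaScript version above remains the one used in the rest of this example, so continue with the steps below to run it.

package main

import (
	"encoding/json"
	"fmt"
	"math/rand"
	"strings"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().AddBroker("tcp://127.0.0.1:1883").SetClientID("mock-device")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	message := "test-message"

	// CommandHandler: answer GET requests and apply SET requests.
	if token := client.Subscribe("command/my-custom-device/#", 0, func(c mqtt.Client, m mqtt.Message) {
		parts := strings.Split(m.Topic(), "/") // command/<device>/<cmd>/<method>/<uuid>
		if len(parts) < 5 {
			return
		}
		cmd, method, uuid := parts[2], parts[3], parts[4]
		response := map[string]interface{}{}
		if method == "set" {
			var req map[string]string
			_ = json.Unmarshal(m.Payload(), &req)
			if cmd == "message" {
				message = req["message"]
			}
			response[cmd] = message
		} else {
			switch cmd {
			case "ping":
				response["ping"] = "pong"
			case "message":
				response["message"] = message
			case "randnum":
				response["randnum"] = 12.123
			}
		}
		payload, _ := json.Marshal(response)
		c.Publish("command/response/"+uuid, 0, false, payload)
	}); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	// DataSender: publish async values every 15 seconds.
	for {
		data, _ := json.Marshal(map[string]interface{}{
			"randnum": 25 + rand.Float64()*4,
			"ping":    "pong",
			"message": "Hello World",
		})
		client.Publish("incoming/data/my-custom-device/values", 0, false, data)
		fmt.Println("published:", string(data))
		time.Sleep(15 * time.Second)
	}
}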
To run the device simulator, enter the commands shown below with the following changes: $ mv mock-device.js /path/to/mqtt-scripts\n$ docker run --rm --name=mqtt-scripts \\\n -v /path/to/mqtt-scripts:/scripts --network host \\\n dersimn/mqtt-scripts --dir /scripts\n
Note
Replace the /path/to/mqtt-scripts
in the example mv command with the correct path
Then the mqtt-scripts show logs as below:
2022-08-12 09:52:42.086 <info> mqtt-scripts 1.2.2 starting\n2022-08-12 09:52:42.227 <info> mqtt connected mqtt://127.0.0.1\n2022-08-12 09:52:42.733 <info> /scripts/mock-device.js loading\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-commands","title":"Execute Commands","text":"Now we're ready to run some commands.
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#find-executable-commands","title":"Find Executable Commands","text":"Use the following query to find executable commands:
$ curl http://localhost:59882/api/v3/device/all | json_pp\n\n{\n\"deviceCoreCommands\" : [\n{\n\"profileName\" : \"my-custom-device-profile\",\n\"coreCommands\" : [\n{\n\"name\" : \"values\",\n\"get\" : true,\n\"path\" : \"/api/v3/device/name/my-custom-device/values\",\n\"url\" : \"http://edgex-core-command:59882\",\n\"parameters\" : [\n{\n\"resourceName\" : \"randnum\",\n\"valueType\" : \"Float32\"\n},\n{\n\"resourceName\" : \"ping\",\n\"valueType\" : \"String\"\n},\n{\n\"valueType\" : \"String\",\n\"resourceName\" : \"message\"\n}\n]\n},\n{\n\"url\" : \"http://edgex-core-command:59882\",\n\"parameters\" : [\n{\n\"resourceName\" : \"message\",\n\"valueType\" : \"String\"\n}\n],\n\"name\" : \"message\",\n\"get\" : true,\n\"path\" : \"/api/v3/device/name/my-custom-device/message\",\n\"set\" : true\n},\n{\n\"name\": \"json\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/MQTT-test-device/json\",\n\"url\" : \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"json\",\n\"valueType\": \"Object\"\n}\n]\n}\n],\n\"deviceName\" : \"my-custom-device\"\n}\n],\n\"apiVersion\" : \"v2\",\n\"statusCode\" : 200\n}\n
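If your own application needs this information, the same endpoint can be queried and decoded with a few lines of Go. This is a minimal sketch; the struct models only the fields used in this example, and localhost:59882 is assumed for core command.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal subset of the core command response used in this example.
type coreCommandsResponse struct {
	DeviceCoreCommands []struct {
		DeviceName   string `json:"deviceName"`
		ProfileName  string `json:"profileName"`
		CoreCommands []struct {
			Name string `json:"name"`
			Get  bool   `json:"get"`
			Set  bool   `json:"set"`
			Path string `json:"path"`
			URL  string `json:"url"`
		} `json:"coreCommands"`
	} `json:"deviceCoreCommands"`
}

func main() {
	resp, err := http.Get("http://localhost:59882/api/v3/device/all")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result coreCommandsResponse
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}

	// Print every executable command for every device.
	for _, d := range result.DeviceCoreCommands {
		for _, c := range d.CoreCommands {
			fmt.Printf("%s: %s (get=%v set=%v)\n", d.DeviceName, c.Name, c.Get, c.Set)
		}
	}
}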
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-set-command","title":"Execute SET Command","text":"Execute a SET command according to the url and parameterNames, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v3/device/name/my-custom-device/message \\\n -H \"Content-Type:application/json\" -X PUT \\\n -d '{\"message\":\"Hello!\"}'\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#execute-get-command","title":"Execute GET Command","text":"Execute a GET command as follows:
$ curl http://localhost:59882/api/v3/device/name/my-custom-device/message | json_pp\n\n{\n\"apiVersion\":\"v2\",\n\"event\":{\n\"apiVersion\":\"v2\",\n\"deviceName\":\"my-custom-device\",\n\"id\":\"13164041-2e6c-4454-9bc3-8e8987e85311\",\n\"origin\":1660298227470009014,\n\"profileName\":\"my-custom-device-profile\",\n\"readings\":[\n{\n\"deviceName\":\"my-custom-device\",\n\"id\":\"c58e65b4-62f0-4e41-b368-645993ec0bfd\",\n\"origin\":1660298227470005426,\n\"profileName\":\"my-custom-device-profile\",\n\"resourceName\":\"message\",\n\"value\":\"Hello!\",\n\"valueType\":\"String\"\n}\n],\n\"sourceName\":\"message\"\n},\n\"statusCode\":200\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#schedule-job","title":"Schedule Job","text":"The schedule job is defined in the autoEvents
section of the device definition file:
autoEvents:\n- interval: \"30s\"\nonChange: false\nsourceName: \"message\"\n
After the service starts, query core-data's reading API. The results show that the service auto-executes the command every 30 secs, as shown below:
$ curl http://localhost:59880/api/v3/reading/resourceName/message | json_pp\n\n{\n\"statusCode\" : 200,\n\"readings\" : [\n{\n\"value\" : \"test-message\",\n\"id\" : \"e91b8ca6-c5c4-4509-bb61-bd4b09fe835c\",\n\"resourceName\" : \"message\",\n\"origin\" : 1624418361324331392,\n\"profileName\" : \"my-custom-device-profile\",\n\"deviceName\" : \"my-custom-device\",\n\"valueType\" : \"String\"\n},\n{\n\"resourceName\" : \"message\",\n\"value\" : \"test-message\",\n\"id\" : \"1da58cb7-2bf4-47f0-bbb8-9519797149a2\",\n\"deviceName\" : \"my-custom-device\",\n\"valueType\" : \"String\",\n\"profileName\" : \"my-custom-device-profile\",\n\"origin\" : 1624418330822988843\n},\n...\n],\n\"apiVersion\" : \"v2\"\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#async-device-reading","title":"Async Device Reading","text":"The device-mqtt
subscribes to a DataTopic
, which waits for the real device to send value to MQTT broker, then device-mqtt
parses the value and forwards it northbound.
The data format contains the following values:
The following results show that the mock device sent the reading every 15 secs:
$ curl http://localhost:59880/api/v3/reading/resourceName/randnum | json_pp\n\n{\n\"readings\" : [\n{\n\"origin\" : 1624418475007110946,\n\"valueType\" : \"Float32\",\n\"deviceName\" : \"my-custom-device\",\n\"id\" : \"9b3d337e-8a8a-4a6c-8018-b4908b57abb8\",\n\"resourceName\" : \"randnum\",\n\"profileName\" : \"my-custom-device-profile\",\n\"value\" : \"2.630000e+01\"\n},\n{\n\"deviceName\" : \"my-custom-device\",\n\"valueType\" : \"Float32\",\n\"id\" : \"06918cbb-ada0-4752-8877-0ef8488620f6\",\n\"origin\" : 1624418460007833720,\n\"profileName\" : \"my-custom-device-profile\",\n\"value\" : \"2.570000e+01\",\n\"resourceName\" : \"randnum\",\n},\n...\n],\n\"statusCode\" : 200,\n\"apiVersion\" : \"v2\"\n}\n
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#mqtt-device-service-configuration","title":"MQTT Device Service Configuration","text":"MQTT Device Service has the following configurations to implement the MQTT protocol.
Configuration Default Value Description MQTTBrokerInfo.Schema tcp The URL schema MQTTBrokerInfo.Host localhost The URL host MQTTBrokerInfo.Port 1883 The URL port MQTTBrokerInfo.Qos 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) MQTTBrokerInfo.KeepAlive 3600 Seconds between client pings when no data is flowing, to avoid the client being disconnected. Must be greater than 2 MQTTBrokerInfo.ClientId device-mqtt ClientId to connect to the broker with MQTTBrokerInfo.CredentialsRetryTime 120 The number of retries to get the credentials MQTTBrokerInfo.CredentialsRetryWait 1 The wait time (seconds) between retries to get the credentials MQTTBrokerInfo.ConnEstablishingRetry 10 The number of retries to establish the MQTT connection MQTTBrokerInfo.ConnRetryWaitTime 5 The wait time (seconds) between retries to establish the MQTT connection MQTTBrokerInfo.AuthMode none Indicates what to use when connecting to the broker. Must be one of \"none\", \"usernamepassword\" MQTTBrokerInfo.CredentialsPath credentials Name of the path in the secret provider to retrieve your secrets. Must be non-blank. MQTTBrokerInfo.IncomingTopic DataTopic (incoming/data/#) IncomingTopic is used to receive the async values MQTTBrokerInfo.ResponseTopic ResponseTopic (command/response/#) ResponseTopic is used to receive the command responses from the device MQTTBrokerInfo.UseTopicLevels false (true) Boolean setting to use multi-level topics MQTTBrokerInfo.Writable.ResponseFetchInterval 500 ResponseFetchInterval specifies the retry interval (milliseconds) to fetch the command response from the MQTT broker
Using Multi-level Topic: Remember to change the defaults in parentheses in the table above.
"},{"location":"examples/Ch-ExamplesAddingMQTTDevice/#overriding-with-environment-variables","title":"Overriding with Environment Variables","text":"The user can override any of the above configurations using environment:
variables to meet their requirement, for example:
# docker-compose.override.yml\n\nversion: '3.7'\n\nservices:\ndevice-mqtt:\nenvironment:\nMQTTBROKERINFO_CLIENTID: \"my-device-mqtt\"\nMQTTBROKERINFO_CONNRETRYWAITTIME: \"10\"\nMQTTBROKERINFO_USETOPICLEVELS: \"false\"\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/","title":"Modbus","text":"EdgeX - Ireland Release
This page describes how to connect Modbus devices to EdgeX. In this example, we simulate the temperature sensor instead of using a real device. This provides a straightforward way to test the device service features.
As described in issue #61, there is an important incompatible change starting with v2 (Ireland release). In the Device Profile attributes section, the startingAddress
becomes an integer data type and zero-based value. In v1, startingAddress
was a string data type and one-based value.
You can use any operating system that can install docker and docker-compose. In this example, we use Ubuntu to deploy EdgeX using docker.
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-device-simulator","title":"Modbus Device Simulator","text":"1.Download ModbusPal
Download the fixed version of ModbusPal from the https://sourceforge.net/p/modbuspal/discussion/899955/thread/72cf35ee/cd1f/attachment/ModbusPal.jar .
2.Install required lib:
sudo apt install librxtx-java\n
3.Startup the ModbusPal: sudo java -jar ModbusPal.jar\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#modbus-register-table","title":"Modbus Register Table","text":"You can find the available registers in the user manual.
Modbus TCP \u2013 Holding Registers
Address Name R/W Description 4000 ThermostatL R/W Lower alarm threshold 4001 ThermostatH R/W Upper alarm threshold 4002 Alarm mode R/W 1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher 4004 Temperature x10 R Temperature x 10 (np. 10,5 st.C to 105)"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#setup-modbuspal","title":"Setup ModbusPal","text":"To simulate the sensor, do the following:
Add registers according to the register table:
Add the ModbusPal support value auto-generator, which can bind to the registers:
Enable the value generator and click the Run
button.
The following sections describe how to complete the set up before starting the services. If you prefer to start the services and then add the device, see Set Up After Starting Services
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#create-a-custom-configuration-folder","title":"Create a Custom configuration folder","text":"Run the following command:
mkdir -p custom-config\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-profile","title":"Set Up Device Profile","text":"Run the following command to create your device profile:
cd custom-config\nnano temperature.profile.yml\n
Fill in the device profile according to the Modbus Register Table, as shown below:
name: \"Ethernet-Temperature-Sensor\"\nmanufacturer: \"Audon Electronics\"\nmodel: \"Temperature\"\nlabels:\n- \"Web\"\n- \"Modbus TCP\"\n- \"SNMP\"\ndescription: \"The NANO_TEMP is a Ethernet Thermometer measuring from -55\u00b0C to 125\u00b0C with a web interface and Modbus TCP communications.\"\n\ndeviceResources:\n-\nname: \"ThermostatL\"\nisHidden: true\ndescription: \"Lower alarm threshold of the temperature\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 3999, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"RW\"\nscale: 0.1\n-\nname: \"ThermostatH\"\nisHidden: true\ndescription: \"Upper alarm threshold of the temperature\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4000, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"RW\"\nscale: 0.1\n-\nname: \"AlarmMode\"\nisHidden: true\ndescription: \"1 - OFF (disabled), 2 - Lower, 3 - Higher, 4 - Lower or Higher\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4001 }\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"RW\"\n-\nname: \"Temperature\"\nisHidden: false\ndescription: \"Temperature x 10 (np. 10,5 st.C to 105)\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4003, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: 0.1\n\ndeviceCommands:\n-\nname: \"AlarmThreshold\"\nreadWrite: \"RW\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"ThermostatL\" }\n- { deviceResource: \"ThermostatH\" }\n-\nname: \"AlarmMode\"\nreadWrite: \"RW\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"AlarmMode\", mappings: { \"1\":\"OFF\",\"2\":\"Lower\",\"3\":\"Higher\",\"4\":\"Lower or Higher\"} }\n
In the Modbus protocol, we provide the following attributes: 1.primaryTable
: HOLDING_REGISTERS, INPUT_REGISTERS, COILS, DISCRETES_INPUT
2.startingAddress
This attribute defines the zero-based startingAddress in Modbus device. For example, the GET command requests data from the Modbus address 4004 to get the temperature data, so the starting register address should be 4003.
3.IS_BYTE_SWAP
, IS_WORD_SWAP
: To handle the different Modbus binary data order, we support Int32, Uint32, Float32 to do the swap operation before decoding the binary data.
For example: { primaryTable: \"INPUT_REGISTERS\", startingAddress: \"4\", isByteSwap: \"false\", isWordSwap: \"true\" }
4.RAW_TYPE
: This attribute defines the binary data read from the Modbus device, then we can use the value type to indicate the data type that the user wants to receive.
We only support Int16
, Int32
and Uint16
for rawType. The corresponding value type must be Float32
or Float64
. For example:
deviceResources:\n-\nname: \"Temperature\"\nisHidden: false\ndescription: \"Temperature x 10 (np. 10,5 st.C to 105)\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: 4003, rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: 0.1\n
In the device-modbus, the Property rawType
(or valueType
if rawType
is not defined) decides how many registers will be read. For example, with Holding registers, each register holds 16 bits. If the Modbus device's user manual specifies that a value occupies two registers, define it as Float32
or Int32
or Uint32
in the deviceProfile.
Once we execute a command, device-modbus knows its value type and register type, startingAddress, and register length. So it can read or write value using the modbus protocol.
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-device-service-configuration","title":"Set Up Device Service Configuration","text":"Run the following command to create your device configuration:
cd custom-config\nnano device.config.yaml\n
Fill in the device.config.yaml file, as shown below: deviceList:\nname: \"Modbus-TCP-Temperature-Sensor\"\nprofileName: \"Ethernet-Temperature-Sensor\"\ndescription: \"This device is a product for monitoring the temperature via the ethernet\"\nlabels: - \"temperature\"\n- \"modbus\"\n- \"TCP\"\nprotocols:\nmodbus-tcp:\nAddress: \"172.17.0.1\"\nPort: \"502\"\nUnitID: \"1\"\nTimeout: \"5\"\nIdleTimeout: \"5\"\nautoEvents:\ninterval: \"30s\"\nonChange: false\nsourceName: \"Temperature\"\n
The address 172.17.0.1
is point to the docker bridge network which means it can forward the request from docker network to the host.
Use this configuration file to define devices and AutoEvent. Then the device-modbus will generate the relative instance on startup.
The device-modbus offers two types of protocol, Modbus TCP and Modbus RTU, which can be defined as shown below:
Modbus TCP - Name: Gateway address, Protocol: TCP, Address: 10.211.55.6, Port: 502, UnitID: 1, Timeout: 5, IdleTimeout: 5
Modbus RTU - Name: Gateway address, Protocol: RTU, Address: /tmp/slave, Port: 502, UnitID: 2, BaudRate: 19200, DataBits: 8, StopBits: 1, Parity: N, Timeout: 5, IdleTimeout: 5
$ git clone git@github.com:edgexfoundry/edgex-compose.git\n
$ cd edgex-compose/compose-builder\n$ make gen ds-modbus\n
Add prepared configuration files to docker-compose file, you can mount them using volumes and change the environment for device-modbus internal use.
Open the docker-compose.yml
file and then add volumes path and environment as shown below:
device-modbus:\n...\nenvironment:\n...\nDEVICE_DEVICESDIR: /custom-config\nDEVICE_PROFILESDIR: /custom-config\nvolumes:\n...\n- /path/to/custom-config:/custom-config\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#start-edgex-foundry-on-docker","title":"Start EdgeX Foundry on Docker","text":"Since we generate the docker-compose.yml
file at the previous step, we can deploy EdgeX as shown below:
$ cd edgex-compose/compose-builder\n$ docker compose up -d\nCreating network \"compose-builder_edgex-network\" with driver \"bridge\"\nCreating volume \"compose-builder_consul-acl-token\" with default driver\n...\nCreating edgex-core-metadata ... done\nCreating edgex-core-command ... done\nCreating edgex-core-data ... done\nCreating edgex-device-modbus ... done\nCreating edgex-app-rules-engine ... done\nCreating edgex-sys-mgmt-agent ... done\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-after-starting-services","title":"Set Up After Starting Services","text":"If the services are already running and you want to add a device, you can use the Core Metadata API as outlined in this section. If you set up the device profile and Service as described in Set Up Before Starting Services, you can skip this section.
To add a device after starting the services, complete the following steps:
Upload the device profile above to metadata with a POST to http://localhost:59881/api/v3/deviceprofile/uploadfile and add the file as key \"file\" to the body in form-data format, and the created ID will be returned. The following example command uses curl to send the request:
$ curl http://localhost:59881/api/v3/deviceprofile/uploadfile \\\n -F \"file=@temperature.profile.yml\"\n
Ensure the Modbus device service is running, adjust the service name below to match if necessary or if using other device services.
Add the device with a POST to http://localhost:59881/api/v3/device, the body will look something like:
$ curl http://localhost:59881/api/v3/device -H \"Content-Type:application/json\" -X POST \\\n -d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\" :\"Modbus-TCP-Temperature-Sensor\",\n \"description\":\"This device is a product for monitoring the temperature via the ethernet\",\n \"labels\":[ \n \"Temperature\",\n \"Modbus TCP\"\n ],\n \"serviceName\": \"device-modbus\",\n \"profileName\": \"Ethernet-Temperature-Sensor\",\n \"protocols\":{\n \"modbus-tcp\":{\n \"Address\" : \"172.17.0.1\",\n \"Port\" : \"502\",\n \"UnitID\" : \"1\",\n \"Timeout\" : \"5\",\n \"IdleTimeout\" : \"5\"\n }\n },\n \"autoEvents\":[ \n { \n \"Interval\":\"30s\",\n \"onChange\":false,\n \"SourceName\":\"Temperature\"\n }\n ],\n \"adminState\":\"UNLOCKED\",\n \"operatingState\":\"UP\"\n }\n }\n ]'\n
The service name must match/refer to the target device service, and the profile name must match the device profile name from the previous steps.
Now we're ready to run some commands.
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#find-executable-commands","title":"Find Executable Commands","text":"Use the following query to find executable commands:
$ curl http://localhost:59882/api/v3/device/all | json_pp\n\n{\n\"apiVersion\" : \"v2\",\n\"deviceCoreCommands\" : [\n{\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\",\n\"coreCommands\" : [\n{\n\"url\" : \"http://edgex-core-command:59882\",\n\"name\" : \"AlarmThreshold\",\n\"get\" : true,\n\"set\" : true,\n\"parameters\" : [\n{\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"ThermostatL\"\n},\n{\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"ThermostatH\"\n}\n],\n\"path\" : \"/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold\"\n},\n{\n\"get\" : true,\n\"url\" : \"http://edgex-core-command:59882\",\n\"name\" : \"AlarmMode\",\n\"set\" : true,\n\"path\" : \"/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmMode\",\n\"parameters\" : [\n{\n\"resourceName\" : \"AlarmMode\",\n\"valueType\" : \"Int16\"\n}\n]\n},\n{\n\"get\" : true,\n\"url\" : \"http://edgex-core-command:59882\",\n\"name\" : \"Temperature\",\n\"path\" : \"/api/v3/device/name/Modbus-TCP-Temperature-Sensor/Temperature\",\n\"parameters\" : [\n{\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"Temperature\"\n}\n]\n}\n]\n}\n],\n\"statusCode\" : 200\n}\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-set-command","title":"Execute SET command","text":"Execute SET command according to url
and parameterNames
, replacing [host] with the server IP when running the SET command.
$ curl http://localhost:59882/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold \\\n -H \"Content-Type:application/json\" -X PUT \\\n -d '{\"ThermostatL\":\"15\",\"ThermostatH\":\"100\"}'\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#execute-get-command","title":"Execute GET command","text":"Replace \\<host> with the server IP when running the GET command.
$ curl http://localhost:59882/api/v3/device/name/Modbus-TCP-Temperature-Sensor/AlarmThreshold | json_pp\n\n{\n\"statusCode\" : 200,\n\"apiVersion\" : \"v2\",\n\"event\" : {\n\"origin\" : 1624324686964377495,\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"f3d44a0f-d2c3-4ef6-9441-ad6b1bfb8a9e\",\n\"sourceName\" : \"AlarmThreshold\",\n\"readings\" : [\n{\n\"resourceName\" : \"ThermostatL\",\n\"value\" : \"1.500000e+01\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"9aa879a0-c184-476b-8124-34d35a2a51f3\",\n\"valueType\" : \"Float32\",\n\"mediaType\" : \"\",\n\"binaryValue\" : null,\n\"origin\" : 1624324686963970614,\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n},\n{\n\"value\" : \"1.000000e+02\",\n\"resourceName\" : \"ThermostatH\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"bf7df23b-4338-4b93-a8bd-7abd5e848379\",\n\"valueType\" : \"Float32\",\n\"mediaType\" : \"\",\n\"binaryValue\" : null,\n\"origin\" : 1624324686964343768,\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n}\n],\n\"apiVersion\" : \"v2\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n}\n}\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#autoevent","title":"AutoEvent","text":"The AutoEvent is defined in the autoEvents
section of the device definition file:
deviceList:\nautoEvents:\ninterval: \"30s\"\nonChange: false\nsourceName: \"Temperature\"\n
After service startup, query core-data's API. The results show that the service auto-executes the command every 30 seconds. $ curl http://localhost:59880/api/v3/event/device/name/Modbus-TCP-Temperature-Sensor | json_pp\n\n{\n\"events\" : [\n{\n\"readings\" : [\n{\n\"value\" : \"5.300000e+01\",\n\"binaryValue\" : null,\n\"origin\" : 1624325219186870396,\n\"id\" : \"68a66a35-d3cf-48a2-9bf0-09578267a3f7\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"mediaType\" : \"\",\n\"valueType\" : \"Float32\",\n\"resourceName\" : \"Temperature\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n}\n],\n\"apiVersion\" : \"v2\",\n\"origin\" : 1624325219186977564,\n\"id\" : \"4b235616-7304-419e-97ae-17a244911b1c\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"sourceName\" : \"Temperature\",\n\"profileName\" : \"Ethernet-Temperature-Sensor\"\n},\n{\n\"readings\" : [\n{\n\"profileName\" : \"Ethernet-Temperature-Sensor\",\n\"resourceName\" : \"Temperature\",\n\"valueType\" : \"Float32\",\n\"id\" : \"56b7e8be-7ce8-4fa9-89e2-3a1a7ef09050\",\n\"origin\" : 1624325189184675483,\n\"value\" : \"5.300000e+01\",\n\"binaryValue\" : null,\n\"mediaType\" : \"\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\"\n}\n],\n\"profileName\" : \"Ethernet-Temperature-Sensor\",\n\"sourceName\" : \"Temperature\",\n\"deviceName\" : \"Modbus-TCP-Temperature-Sensor\",\n\"id\" : \"fbab44f5-9775-4c09-84bd-cbfb00001115\",\n\"origin\" : 1624325189184721223,\n\"apiVersion\" : \"v2\"\n},\n...\n],\n\"apiVersion\" : \"v2\",\n\"statusCode\" : 200\n}\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#set-up-the-modbus-rtu-device","title":"Set up the Modbus RTU Device","text":"This section describes how to connect the Modbus RTU device. We use Ubuntu OS and a Modbus RTU device for this example.
Connect the device to your machine(laptop or gateway,etc.) via RS485/USB adaptor and power on.
Execute a command on the machine, and you can find a message like the following:
$ dmesg | grep tty\n...\n...\n[18006.167625] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0\n
It shows the USB attach to ttyUSB0, then you can check whether the device path exists:
$ ls /dev/ttyUSB0\n/dev/ttyUSB0\n
For security reason, the EdgeX set up the user permission as below:
device-modbus:\n...\nuser: 2002:2001 # UID:GID\n
So we need to change the owner for the specified group by the following command: sudo chown :2001 /dev/ttyUSB0\n\n# Or change the permissions for multiple files\nsudo chown :2001 /dev/tty*\n
Note
Since the owner will reset after the system reboot, we can add this script to the startup script. For Raspberry Pi as example, add script to /etc/rc.local
, then the Pi will run this script at bootup.
Modify the docker-compose.yml file to mount the device path to the device-modbus, and here are two ways to mount the device path:
Using devices
:
device-modbus:\n...\ndevices:\n- /dev/ttyUSB0\n
Or using volumes
and device_cgroup_rules
:
device-modbus:\n...\nvolumes:\n...\n- /dev:/dev\ndevice_cgroup_rules:\n- 'c 188:* rw'
$ docker compose up -d\n
"},{"location":"examples/Ch-ExamplesAddingModbusDevice/#add-device-to-edgex","title":"Add device to EdgeX","text":"$ nano modbus.rtu.demo.profile.yml\n
name: \"Modbus-RTU-IO-Module\"\nmanufacturer: \"icpdas\"\nmodel: \"M-7055\"\nlabels:\n- \"Modbus RTU\"\n- \"IO Module\"\ndescription: \"This IO module offers 8 isolated channels for digital input and 8 isolated channels for digital output.\"\n\ndeviceResources:\n-\nname: \"DO0\"\nisHidden: true\ndescription: \"On/Off , 0-OFF 1-ON\"\nattributes:\n{ primaryTable: \"COILS\", startingAddress: 0 }\nproperties:\nvalueType: \"Bool\"\nreadWrite: \"RW\"\n-\nname: \"DO1\"\nisHidden: true\ndescription: \"On/Off , 0-OFF 1-ON\"\nattributes:\n{ primaryTable: \"COILS\", startingAddress: 1 }\nproperties:\nvalueType: \"Bool\"\nreadWrite: \"RW\"\n-\nname: \"DO2\"\nisHidden: true\ndescription: \"On/Off , 0-OFF 1-ON\"\nattributes:\n{ primaryTable: \"COILS\", startingAddress: 2 }\nproperties:\nvalueType: \"Bool\"\nreadWrite: \"RW\"\n\ndeviceCommands:\n-\nname: \"DO\"\nreadWrite: \"RW\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"DO0\" }\n- { deviceResource: \"DO1\" }\n- { deviceResource: \"DO2\" }\n
Upload the device profile
$ curl http://localhost:59881/api/v3/deviceprofile/uploadfile \\\n -F \"file=@modbus.rtu.demo.profile.yml\"\n
Create the device entity to the EdgeX. You can find the Modbus RTU setting on the device or the user manual.
$ curl http://localhost:59881/api/v3/device -H \"Content-Type:application/json\" -X POST \\\n-d '[\n{\n\"apiVersion\" : \"v3\",\n\"device\": {\n\"name\" :\"Modbus-RTU-IO-Module\",\n\"description\":\"The device can be used to monitor the status of the digital input and digital output channels.\",\n\"labels\":[ \"IO Module\",\n\"Modbus RTU\"\n],\n\"serviceName\": \"device-modbus\",\n\"profileName\": \"Ethernet-Temperature-Sensor\",\n\"protocols\":{\n\"modbus-tcp\":{\n\"Address\" : \"/dev/ttyUSB0\",\n\"BaudRate\" : \"19200\",\n\"DataBits\" : \"8\",\n\"StopBits\" : \"1\",\n\"Parity\" : \"N\",\n\"UnitID\" : \"1\",\n\"Timeout\" : \"5\",\n\"IdleTimeout\" : \"5\"\n}\n},\n\"adminState\":\"UNLOCKED\",\n\"operatingState\":\"UP\"\n}\n}\n]'\n
EdgeX - Ireland Release
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#overview","title":"Overview","text":"In this example, you add a new Patlite Signal Tower which communicates via SNMP. This example demonstrates how to connect a device through the SNMP Device Service.
Patlite Signal Tower, model NHL-FB2
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#setup","title":"Setup","text":""},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#hardware-needed","title":"Hardware needed","text":"In order to exercise this example, you will need the following hardware
In addition to the hardware, you will need the following software
If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry.
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-service-to-your-docker-composeyml","title":"Add the SNMP Device Service to your docker-compose.yml","text":"The EdgeX docker-compose.yml file used to run EdgeX must include the SNMP device service for this example. You can either:
See Getting Started using Docker if you need assistance running EdgeX once you have your Docker Compose file.
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#add-the-snmp-device-profile-and-device","title":"Add the SNMP Device Profile and Device","text":"SNMP devices, like the Patlite Signal Tower, provide a set of managed objects to get and set property information on the associated device. Each managed object has an address call an object identifier (or OID) that you use to interact with the SNMP device's managed object. You use the OID to query the state of the device or to set properties on the device. In the case of the Patlite, there are managed object for the colored lights and the buzzer of the device. You can read the current state of a colored light (get) or turn the light on (set) by making a call to the proper OIDs for the associated managed object.
For example, on the NH series signal towers used in this example, a \"get\" call to the 1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1
OID returns the current state of the Red
signal light. A return value of 1 would signal the light is off. A return value of 2 says the light is on. A return value of 3 says the light is flashing. Read this SNMP tutorial to learn more about the basics of the SNMP protocol. See the Patlite NH Series User's Manual for more information on the SNMP OIDs and function calls and parameters needed for some requests.
A device profile has been created for you to get and set the signal tower's three colored lights and to get and set the buzzer. The patlite-snmp
device profile defines three device resources for each of the lights and the buzzer.
Note that the attributes of each device resource specify the SNMP OID that the device service will use to make a request of the signal tower. For example, the device resource YAML below (taken from the profile) provides the means to get the current Red
light state. Note that a specific OID is provided that is unique to the RED
light, current state property.
-\nname: \"RedLightCurrentState\"\nisHidden: false\ndescription: \"red light current state\"\nattributes:\n{ oid: \"1.3.6.1.4.1.20440.4.1.5.1.2.1.4.1\", community: \"private\" } properties:\nvalueType: \"Int32\"\nreadWrite: \"R\"\ndefaultValue: \"1\"\n
Below is the device resource definitions for the Red
light control state and timer. Again, unique OIDs are provided as attributes for each property.
-\nname: \"RedLightControlState\"\nisHidden: true\ndescription: \"red light state\"\nattributes:\n{ oid: \"1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1\", community: \"private\" } properties:\nvalueType: \"Int32\"\nreadWrite: \"W\"\ndefaultValue: \"1\"\n-\nname: \"RedLightTimer\"\nisHidden: true\ndescription: \"red light timer\"\nattributes:\n{ oid: \"1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1\", community: \"private\" } properties:\nvalueType: \"Int32\"\nreadWrite: \"W\"\ndefaultValue: \"1\"\n
In order to set the Red
light on, one would need to send an SNMP request to set OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.2.1
to a value of 2 (on state) along with a number of seconds delay to the time at OID 1.3.6.1.4.1.20440.4.1.5.1.2.1.3.1
. Sending a zero value (0) to the timer would say you want to turn the light on immediately.
Because setting a light or buzzer requires both of the control state and timer OIDs to be set together (simultaneously), the device profile contains deviceCommands
to set the light and timer device resources (and therefore their SNMP property OIDs) in a single operation. Here is the device command to set the Red
light.
-\nname: \"RedLight\"\nreadWrite: \"W\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"RedLightControlState\" }\n- { deviceResource: \"RedLightTimer\" }\n
You will need to upload this profile into core metadata. Download the Patlite device profile to a convenient directory. Then, using the following curl
command, request the profile be uploaded into core metadata.
curl -X 'POST' 'http://localhost:59881/api/v3/deviceprofile/uploadfile' --form 'file=@\"/home/yourfilelocationhere/patlite-snmp.yml\"'\n
Alert
Note that the curl command above assumes that core metadata is available at localhost
. Change localhost
to the host address of your core metadata service. Also note that you will need to replace the /home/yourfilelocationhere
path with the path where the profile resides.
With the Patlite device profile now in metadata, you can add the Patlite device in metadata. When adding the device, you typically need to provide the name, description, labels and admin/op states of the device when creating it. You will also need to associate the device to a device service (in this case the device-snmp
device service). You will need to associate the new device to a profile - the patlite profile just added in the step above. And you will need to provide the protocol information (such as the address and port of the device) to tell the device service where it can find the physical device. If you wish the device service to automatically get readings from the device, you will also need to provide AutoEvent properties when creating the device.
The curl command to POST the new Patlite device (named patlite1
) into metadata is provide below. You will need to change the protocol Address
(currently 10.0.0.14
) and Port
(currently 161
) to point to your Patlite on your network. In this request to add a new device, AutoEvents are setup to collect the current state of the 3 lights and buzzer every 10 seconds. Notice the reference to the current state device resources in setting up the AutoEvents.
curl -X 'POST' 'http://localhost:59881/api/v3/device' -d '[{\"apiVersion\" : \"v3\", \"device\": {\"name\": \"patlite1\",\"description\": \"patlite #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"patlite\"],\"serviceName\": \"device-snmp\",\"profileName\": \"patlite-snmp-profile\",\"protocols\": {\"TCP\": {\"Address\": \"10.0.0.14\",\"Port\": \"161\"}}, \"AutoEvents\":[{\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"RedLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"GreenLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"AmberLightCurrentState\"}, {\"Interval\":\"10s\",\"OnChange\":true,\"SourceName\":\"BuzzerCurrentState\"}]}}]'\n
Info
Rather than making a REST API call into metadata to add the device, you could alternately provide device configuration files that define the device. These device configuration files would then have to be provided to the service when it starts up. Since you did not create a new Docker image containing the device configuration and just used the existing SNMP device service Docker image, it was easier to make simple API calls to add the profile and device. However, this would mean the profile and device would need to be added each time metadata's database is cleaned out and reset.
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#test","title":"Test","text":"If the device service is up and running and the profile and device have been added correctly, you should now be able to interact with the Patlite via the core command service (and SNMP under the covers via the SNMP device service).
"},{"location":"examples/Ch-ExamplesAddingSNMPDevice/#get-the-current-state","title":"Get the Current State","text":"To get the current state of a light (in the example below the Green
light), make a curl request like the following of the command service.
curl 'http://localhost:59882/api/v3/device/name/patlite1/GreenLightCurrentState' | json_pp\n
Alert
Note that the curl command above assumes that the core command service is available at localhost
. Change the host address of your core command service if it is not available at localhost
.
The results should look something like that below.
{\n\"statusCode\" : 200,\n\"apiVersion\" : \"v2\",\n\"event\" : {\n\"origin\" : 1632188382048586660,\n\"deviceName\" : \"patlite1\",\n\"sourceName\" : \"GreenLightCurrentState\",\n\"id\" : \"1e2a7ba1-c273-46d1-b919-207aafbc60ba\",\n\"profileName\" : \"patlite-snmp-profile\",\n\"apiVersion\" : \"v2\",\n\"readings\" : [\n{\n\"origin\" : 1632188382048586660,\n\"resourceName\" : \"GreenLightCurrentState\",\n\"deviceName\" : \"patlite1\",\n\"id\" : \"a41ac1cf-703b-4572-bdef-8487e9a7100e\",\n\"valueType\" : \"Int32\",\n\"value\" : \"1\",\n\"profileName\" : \"patlite-snmp-profile\"\n}\n]\n}\n}\n
Info
Note the value
will be one of 4 numbers indicating the current state of the light
To turn a signal tower light or the buzzer on, you can issue a PUT device command via the core command service. The example below turns on the Green
light.
curl --location --request PUT 'http://localhost:59882/api/v3/device/name/patlite1/GreenLight' --header 'cont: application/json' --data-raw '{\"GreenLightControlState\":\"2\",\"GreenLightTimer\":\"0\"}'\n
This command sets the light on (solid versus flashing) immediate (as denoted by the GreenLightTimer parameter is set to 0). The timer value is the number of seconds delay in making the request to the light or buzzer. Again, the control state can be set to one of four values as listed in the table above.
Alert
Again note that the curl command above assumes that the core command service is available at localhost
. Change the host address of your core command service if it is not available at localhost
.
Did you notice that EdgeX obfuscates almost all information about SNMP, and managed objects and OIDs? The power of EdgeX is to abstract away protocol differences so that to a user, getting data from a device or setting properties on a device such as this Patlite signal tower is as easy as making simple REST calls into the command service. The only place that protocol information is really seen is in the device profile (where the attributes specify the SNMP OIDs). Of course, the device service must be coded to deal with the protocol specifics and it must know how to translate the simple command REST calls into protocol specific requests of the device. But even device service creation is made easier with the use of the SDKs which provide much of the boilerplate code found in almost every device service regardless of the underlying device protocol.
"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/","title":"Modbus - Data Type Conversion","text":"In use cases where the device resource uses an integer data type with a float scale, precision can be lost following transformation.
For example, a Modbus device stores the temperature and humidity in an Int16 data type with a float scale of 0.01. If the temperature is 26.53, the read value is 2653. However, following transformation, the value is 26.
To avoid this scenario, the device resource data type must differ from the value descriptor data type. This is achieved using the optional rawType
attribute in the device profile to define the binary data read from the Modbus device, and a valueType
to indicate what data type the user wants to receive.
If the rawType
attribute exists, the device service parses the binary data according to the defined rawType
, then casts the value according to the valueType
defined in the properties
of the device resources.
The following extract from a device profile defines the rawType
as Int16 and the valueType
as Float32:
Example - Device Profile
deviceResources:\n- name: \"humidity\"\ndescription: \"The response value is the result of the original value multiplied by 100.\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: \"1\", rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: \"0.01\"\nunits: \"%RH\"\n\n- name: \"temperature\"\ndescription: \"The response value is the result of the original value multiplied by 100.\"\nattributes:\n{ primaryTable: \"HOLDING_REGISTERS\", startingAddress: \"2\", rawType: \"Int16\" }\nproperties:\nvalueType: \"Float32\"\nreadWrite: \"R\"\nscale: \"0.01\"\nunits: \"degrees Celsius\"\n
"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#read-command","title":"Read Command","text":"A Read command is executed as follows:
A Write command is executed as follows:
You generally need to transform data when scaling readings between a 16-bit integer and a float value.
The following limitations apply:
rawType
supports only Int16, Uint16 and Int32 data typesvalueType
must be Float32 or Float64If an unsupported data type is defined for the rawType
attribute, the device service throws an exception similar to the following:
Read command failed. Cmd:temperature err:the raw type Int64 is not supported\n
"},{"location":"examples/Ch-ExamplesModbusdatatypeconversion/#supported-transformations","title":"Supported Transformations","text":"The supported transformations are as follows:
FromrawType
To valueType
Int16 Float32 Int16 Float64 Int32 Float64 Uint16 Float32 Uint16 Float64"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/","title":"Sending and Consuming Binary Data From EdgeX Device Services","text":"EdgeX - Ireland Release
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#overview","title":"Overview","text":"In this example, we will demonstrate how to send EdgeX Events and Readings that contain arbitrary binary data.
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#deviceservice-implementation","title":"DeviceService Implementation","text":""},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-profile","title":"Device Profile","text":"To indicate that a deviceResource represents a Binary type, the following format is used:
deviceResources:\n-\nname: \"camera_snapshot\"\nisHidden: false\ndescription: \"snapshot from camera\"\nproperties:\nvalueType: \"Binary\"\nreadWrite: \"R\"\nmediaType: \"image/jpeg\"\ndeviceCommands:\n-\nname: \"OnvifSnapshot\"\nisHidden: false\nreadWrite: \"R\"\nresourceOperations:\n- { deviceResource: \"camera_snapshot\" }\n
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#device-service","title":"Device Service","text":"Here is a snippet from a hypothetical Device Service's HandleReadCommands()
method that produces an event that represents a JPEG image captured from a camera:
if req.DeviceResourceName == \"camera_snapshot\" {\ndata, err := cameraClient.GetSnapshot() // returns ([]byte, error)\ncheck(err)\n\ncv, err := sdkModels.NewCommandValue(reqs[i].DeviceResourceName, common.ValueTypeBinary, data)\ncheck(err)\n\nresponses[i] = cv\n}\n
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#calling-device-service-command","title":"Calling Device Service Command","text":"Querying core-metadata for the Device's Commands and DeviceName provides the following as the URL to request a reading from the snapshot command: http://localhost:59990/api/v3/device/name/camera-device/OnvifSnapshot
Unlike with non-binary Events, making a request to this URL will return an event in CBOR representation. CBOR is a representation of binary data loosely based off of the JSON data model. This Event will not be human-readable.
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#parsing-cbor-encoded-events","title":"Parsing CBOR Encoded Events","text":"To access the data enclosed in these Events and Readings, they will first need to be decoded from CBOR. The following is a simple Go program that reads in the CBOR response from a file containing the response from the previous HTTP request. The Go library recommended for parsing these events can be found at https://github.com/fxamacker/cbor/
package main\n\nimport (\n\"io/ioutil\"\n\n\"github.com/edgexfoundry/go-mod-core-contracts/v2/dtos/requests\"\n\"github.com/fxamacker/cbor/v2\"\n)\n\nfunc check(e error) {\nif e != nil {\npanic(e)\n}\n}\n\nfunc main() {\n// Read in our cbor data\nfileBytes, err := ioutil.ReadFile(\"/Users/johndoe/Desktop/image.cbor\")\ncheck(err)\n\n// Decode into an EdgeX Event\neventRequest := &requests.AddEventRequest{}\nerr = cbor.Unmarshal(fileBytes, eventRequest)\ncheck(err)\n\n// Grab binary data and write to a file\nimgBytes := eventRequest.Event.Readings[0].BinaryValue\nioutil.WriteFile(\"/Users/johndoe/Desktop/image.jpeg\", imgBytes, 0644)\n}\n
In the code above, the CBOR data is read into a byte slice, an EdgeX Event struct is created, and cbor.Unmarshal
parses the CBOR-encoded data and stores the result in the Event struct. Finally, the binary payload is written to a file from the BinaryValue
field of the Reading.
This method would work as well for decoding Events off the EdgeX message bus.
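For example, the same decoding logic can be wrapped in a small helper and applied to a payload received from the message bus. This is only a sketch derived from the program above; the helper name is hypothetical and it assumes the same requests and cbor imports shown earlier.

// decodeBinaryReading is a hypothetical helper that applies the CBOR decoding
// shown above to a raw payload, e.g. one received off the EdgeX message bus.
func decodeBinaryReading(payload []byte) ([]byte, error) {
	eventRequest := &requests.AddEventRequest{}
	if err := cbor.Unmarshal(payload, eventRequest); err != nil {
		return nil, err
	}
	// Return the binary payload of the first Reading, as in the file-based example.
	return eventRequest.Event.Readings[0].BinaryValue, nil
}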
"},{"location":"examples/Ch-ExamplesSendingAndConsumingBinary/#encoding-arbitrary-structures-in-events","title":"Encoding Arbitrary Structures in Events","text":"The Device SDK's NewCommandValue()
function above only accepts a byte slice as binary data. Any arbitrary Go structure can be encoded in a binary reading by first encoding the structure into a byte slice using CBOR. The following illustrates this method:
// DeviceService HandleReadCommands() code:\nfoo := struct {\nX int\nY int\nZ int\nBar string\n} {\nX: 7,\nY: 3,\nZ: 100,\nBar: \"Hello world!\",\n}\n\ndata, err := cbor.Marshal(&foo)\ncheck(err)\n\ncv, err := sdkModels.NewCommandValue(reqs[i].DeviceResourceName, common.ValueTypeBinary, data)\nresponses[i] = cv\n
This code takes the anonymous struct with fields X, Y, Z, and Bar (of different types), serializes it into a byte slice using the same cbor
library, and passes the output to NewCommandValue()
.
When consuming these events, another level of decoding will need to take place to get the structure out of the binary payload.
func main() {\n// Read in our cbor data\nfileBytes, err := ioutil.ReadFile(\"/Users/johndoe/Desktop/foo.cbor\")\ncheck(err)\n\n// Decode into an EdgeX Event\neventRequest := &requests.AddEventRequest{}\nerr = cbor.Unmarshal(fileBytes, eventRequest)\ncheck(err)\n\n// Decode into arbitrary type\nfoo := struct {\nX int\nY int\nZ int\nBar string\n}{}\n\nerr = cbor.Unmarshal(eventRequest.Event.Readings[0].BinaryValue, &foo)\ncheck(err)\nfmt.Println(foo)\n}\n
This code takes a command response in the same format as the previous example, but uses the cbor
library to decode the CBOR data inside the EdgeX Reading's BinaryValue
field.
Using this approach, an Event can be sent whose data contains an arbitrary, flexible structure. Use cases include a Reading containing multiple images, a variable-length list of integer read-outs, and so on.
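As a sketch of the multiple-images use case mentioned above, the same cbor.Marshal and NewCommandValue pattern applies. The struct shape is made up for illustration, and frontImage and rearImage are assumed to already hold JPEG bytes; this would sit in the same HandleReadCommands() context as the earlier snippets, with the standard time package imported.

// snapshots is a hypothetical structure bundling several JPEG images and a
// capture timestamp into a single binary Reading.
snapshots := struct {
	CapturedAt int64
	Images     [][]byte // each element is one JPEG image
}{
	CapturedAt: time.Now().UnixNano(),
	Images:     [][]byte{frontImage, rearImage},
}

data, err := cbor.Marshal(&snapshots)
check(err)

cv, err := sdkModels.NewCommandValue(reqs[i].DeviceResourceName, common.ValueTypeBinary, data)
check(err)
responses[i] = cv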
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/","title":"Using the Virtual Device Service","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#overview","title":"Overview","text":"The Virtual Device Service GO can simulate different kinds of devices to generate Events and Readings to the Core Data Micro Service. Furthermore, users can send commands and get responses through the Command and Control Micro Service. The Virtual Device Service allows you to execute functional or performance tests without any real devices. This version of the Virtual Device Service is implemented based on Device SDK GO, and uses ql (an embedded SQL database engine) to simulate virtual resources.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#introduction","title":"Introduction","text":"For information on the virtual device service see virtual device under the Microservices tab.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#working-with-the-virtual-device-service","title":"Working with the Virtual Device Service","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-container","title":"Running the Virtual Device Service Container","text":"The virtual device service depends on the EdgeX core services. By default, the virtual device service is part of the EdgeX community provided Docker Compose files. If you use one of the community provide Compose files, you can pull and run EdgeX inclusive of the virtual device service without having to make any changes.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#running-the-virtual-device-service-natively-in-development-mode","title":"Running the Virtual Device Service Natively (in development mode)","text":"If you're going to download the source code and run the virtual device service in development mode, make sure that the EdgeX core service containers are up before starting the virtual device service. See how to work with EdgeX in a hybrid environment in order to run the virtual device service outside of containers. This same file will instruct you on how to get and run the virtual device service code.
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#get-command-example","title":"GET command example","text":"The virtual device service is configured to send simulated data to core data every few seconds (from 10-30 seconds depending on device - see the device configuration file for AutoEvent details). You can exercise the GET
request on the command service to see the generated value produced by any of the virtual device's simulated devices. Use the curl command below to exercise the virtual device service API (via core command service).
curl -X GET localhost:59882/api/v3/device/name/Random-Integer-Device/Int8\n
Warning
The example above assumes your core command service is available on localhost
at the default service port of 59882. Also, you must replace the device name and command name in the example above with identifiers from your virtual device service. If you are not sure of the identifiers to use, query the command service for the full list of commands and devices at http://localhost:59882/api/v3/device/all
.
The virtual device should respond (via the core command service) with event/reading JSON similar to that below.
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"id\": \"3beb5b83-d923-4c8a-b949-c1708b6611c1\",\n\"deviceName\": \"Random-Integer-Device\",\n\"profileName\": \"Random-Integer-Device\",\n\"sourceName\": \"Int8\",\n\"origin\": 1626227770833093400,\n\"readings\": [\n{\n\"id\": \"baf42bc7-307a-4647-8876-4e84759fd2ba\",\n\"origin\": 1626227770833093400,\n\"deviceName\": \"Random-Integer-Device\",\n\"resourceName\": \"Int8\",\n\"profileName\": \"Random-Integer-Device\",\n\"valueType\": \"Int8\",\n\"binaryValue\": null,\n\"mediaType\": \"\",\n\"value\": \"-5\"\n}\n]\n}\n}\n
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#put-command-example-assign-a-value-to-a-resource","title":"PUT command example - Assign a value to a resource","text":"The virtual devices managed by the virtual device can also be actuated. The virtual device can be told to enable or disable random number generation. When disabled, the virtual device services can be told what value to respond with for all GET
operations. When setting the fixed value, the value must be valid for the data type of the virtual device. For example, the minimum value of Int8 cannot be less than -128 and the maximum value cannot be greater than 127.
Below is example actuation of one of the virtual devices. In this example, it sets the fixed GET
return value to 123 and turns off random generation.
curl -X PUT -d '{\"Int8\": \"123\", \"EnableRandomization_Int8\": \"false\"}' localhost:59882/api/v3/device/name/Random-Integer-Device/Int8\n
Note
The resource's EnableRandomization property is automatically set to false when a PUT command assigns a specific value to the resource. Therefore, explicitly setting EnableRandomization_Int8 to false is not actually required in the call above.
Return the virtual device to randomly generating numbers with another PUT
call.
curl -X PUT -d '{\"EnableRandomization_Int8\": \"true\"}' localhost:59882/api/v3/device/name/Random-Integer-Device/Int8\n
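To confirm the change, you can repeat the earlier GET request; with randomization re-enabled, successive calls should return varying Int8 values rather than the fixed 123 set above.

curl -X GET localhost:59882/api/v3/device/name/Random-Integer-Device/Int8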
"},{"location":"examples/Ch-ExamplesVirtualDeviceService/#reference","title":"Reference","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#architectural-diagram","title":"Architectural Diagram","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#sequence-diagram","title":"Sequence Diagram","text":""},{"location":"examples/Ch-ExamplesVirtualDeviceService/#virtual-resource-table-schema","title":"Virtual Resource Table Schema","text":"Column Type DEVICE_NAME STRING COMMAND_NAME STRING DEVICE_RESOURCE_NAME STRING ENABLE_RANDOMIZATION BOOL DATA_TYPE STRING VALUE STRING"},{"location":"examples/Ch-OSImageWithEdgeX/","title":"Creating an EdgeX Ubuntu Core Image","text":""},{"location":"examples/Ch-OSImageWithEdgeX/#introduction","title":"Introduction","text":"This guide walks you through creating an Ubuntu Core OS image that is preloaded with an EdgeX stack. We use Ubuntu Core as the Linux distribution because it is optimized for IoT and is secure by design. We configure the image and bundle the current snapped versions of EdgeX components. After the deployment the snaps will continue to receive updates for the latest security and bug fixes (depending on the selected channel).
This guide is divided into three chapters to create:
Each chapter results in a working Ubuntu Core OS image that can be flashed on a disk and booted with the expected EdgeX stack.
In this example, we will create an amd64
image for Intel and AMD processors. The instructions can be adapted to other architectures and even for a Raspberry Pi. We will use the Device Virtual service to simulate devices and produce synthetic events.
Note
This guide has been tested on an amd64
Ubuntu 22.04 as the desktop OS. It may work on other Linux distributions and Ubuntu versions.
Some commands are executed on the desktop computer, but some others on the target Ubuntu Core system. For clarity, we use \ud83d\udda5 Desktop and \ud83d\ude80 Ubuntu Core titles for code blocks to distinguish where those commands are being executed.
An Intel NUC11TNH with 8GB RAM and 250GB NAND flash storage has been used as the target amd64
hardware.
We use the following tools on the desktop machine:
Install them using the following commands: \ud83d\udda5 Desktop
sudo snap install snapcraft --classic\nsudo snap install yq\nsudo snap install ubuntu-image --classic --channel=2/stable\n
Before we start, it is a good idea to read through the following documents:
In this chapter, we will create an OS image that includes the expected EdgeX components.
"},{"location":"examples/Ch-OSImageWithEdgeX/#create-an-ubuntu-core-model-assertion","title":"Create an Ubuntu Core model assertion","text":"The model assertion is a digitally signed document that describes the content of the OS image.
Refer to this article for details on how to sign the model assertion. Here are the needed steps:
1) Create a developer account
Follow the instructions here to create a developer account, if you don't already have one.
2) Create and register a key
\ud83d\udda5 Desktop
snap login\nsnap keys\n# continue if you have no existing keys\n# you'll be asked to set a passphrase which is needed before signing\nsnap create-key edgex-demo\nsnapcraft register-key edgex-demo\n
We now have a registered key named edgex-demo
which we'll use later. 3) Create the model assertion
First, make yourself familiar with the Ubuntu Core model assertion.
Find your developer ID using the Snapcraft CLI: \ud83d\udda5 Desktop
$ snapcraft whoami\n...\ndeveloper-id: <developer-id>\n
or from the Snapcraft Dashboard. YAML Model Assertion Unlike the official documentation which uses JSON, we use YAML serialization for the model. This is for consistency with all the other serialization formats in this tutorial. Moreover, it allows us to comment out some parts for testing or add comments to describe the details inline.
Create model.yaml
with the following content, replacing authority-id
, brand-id
, and timestamp
:
type: model\nseries: '16'\n\n# set authority-id and brand-id to your developer-id\nauthority-id: <developer-id>\nbrand-id: <developer-id>\n\nmodel: ubuntu-core-22-amd64\narchitecture: amd64\n\n# timestamp should be within your signature's validity period\ntimestamp: '2022-06-21T10:45:00+00:00'\nbase: core22\n\ngrade: dangerous\n\nsnaps:\n- name: pc\ntype: gadget\ndefault-channel: 22/stable\nid: UqFziVZDHLSyO3TqSWgNBoAdHbLI4dAH\n\n- name: pc-kernel\ntype: kernel\ndefault-channel: 22/stable\nid: pYVQrBcKmBa0mZ4CCN7ExT6jH8rY1hza\n\n- name: snapd\ntype: snapd\ndefault-channel: latest/candidate # temporary for latest pc-gadget compatibility; see https://github.com/canonical/edgex-ubuntu-core-testing/issues/1\nid: PMrrV4ml8uWuEUDBT8dSGnKUYbevVhc4\n\n# Snap base for EdgeX snaps\n- name: core22\ntype: base\ndefault-channel: latest/stable\nid: amcUKQILKXHHTlmSa7NMdnXSx02dNeeT\n\n- name: edgexfoundry\ntype: app\ndefault-channel: latest/edge # replace with latest/stable after EdgeX v3 release\nid: AZGf0KNnh8aqdkbGATNuRuxnt1GNRKkV\n\n- name: edgex-device-virtual\ntype: app\ndefault-channel: latest/edge # replace with latest/stable after EdgeX v3 release\nid: AmKuVTOfsN0uEKsyJG34M8CaMfnIqxc0\n
Note
We use the gadget and kernel snaps for 64-bit personal computers using Intel or AMD processors. For a Raspberry Pi, you need to change the model and architecture, as well as the gadget and kernel snaps.
Finding Snap IDs
Query the unique store ID of a snap, for example the edgexfoundry
snap:
$ snap info edgexfoundry | grep snap-id\nsnap-id: AZGf0KNnh8aqdkbGATNuRuxnt1GNRKkV\n
4) Sign the model assertion
We sign the model using the edgex-demo
key created and registered earlier.
The snap sign
command takes JSON as input and produces YAML as output! We use the YQ app to convert our model assertion to JSON before passing it in for signing.
# sign\nyq eval model.yaml -o=json | snap sign -k edgex-demo > model.signed.yaml\n\n# check the signed model\ncat model.signed.yaml\n
Note
You need to repeat the signing every time you change the input model, because the signature is calculated based on the model.
"},{"location":"examples/Ch-OSImageWithEdgeX/#build-the-ubuntu-core-image","title":"Build the Ubuntu Core image","text":"We use ubuntu-image and set the path to signed model assertion YAML file.
This will download all the snaps specified in the model assertion and build an image file called pc.img
.
If you plan to use an emulator to install and run Ubuntu Core from the resulting image, it is a good idea to allocate additional writable storage. This is necessary only if you want to install additional snaps interactively or upgrade existing ones on the emulator.
The default size of the ubuntu-data
partition is 1G
as defined in the gadget snap. When installing on actual hardware, this partition extends automatically to take the whole remaining space on the disk volume. However, when using QEMU, the partition will have the exact same size because the image size is calculated based on the defined partition structure. The 1GB ubuntu-data
partition will be mostly full after first boot. You can configure the image to be larger so that the installer expands the partition automatically as with a large disk volume.
To extend the image size, use the --image-size
flag in the following command. For example, to add 500MB extra (the original image is around 3.5GB), set --image-size=4G
.
$ ubuntu-image snap model.signed.yaml --validation=enforce\nFetching snapd\nFetching pc-kernel\nFetching core22\nFetching pc\nFetching edgexfoundry\nFetching edgex-device-virtual\n\n# check the created image file\n$ file pc.img\npc.img: DOS/MBR boot sector, extended partition table (last)\n
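For example, if you want the extra writable space discussed above for use with the emulator, the same build can be run with the size flag (4G is simply the example figure mentioned earlier):

ubuntu-image snap model.signed.yaml --validation=enforce --image-size=4G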
Done
The image file is now ready to be flashed on a medium to create a bootable drive with the needed applications!
"},{"location":"examples/Ch-OSImageWithEdgeX/#boot-into-the-os","title":"Boot into the OS","text":"You can now flash the image on your disk and boot to start the installation. However, during development it is best to boot in an emulator to quickly detect and diagnose possible issues.
Instead of flashing and installing the OS on actual hardware, we will continue this guide using an emulator. Every other step will be similar to when image is flashed and installed on actual hardware.
Refer to the following to:
In this step, we connect to the machine that has the image installed over SSH, validate the installation, and do some manual configurations.
We SSH to the emulator from the previous step: \ud83d\udda5 Desktop
ssh <user>@localhost -p 8022\n
If you used the default approach (using console-conf
) and entered your Ubuntu account email address at the end of the installation, then <user>
is your Ubuntu account ID. If you don't know your ID, look it up using a browser from here or programmatically from https://login.ubuntu.com/api/v2/keys/<email>
. List the installed snaps and their services: \ud83d\ude80 Ubuntu Core
$ snap list\nName Version Rev Tracking Publisher Notes\ncore22 20230503 634 latest/stable canonical\u2713 base\nedgex-device-virtual 3.0.0-dev.50 669 latest/edge canonical\u2713 -\nedgexfoundry 3.0.0-dev.163 4452 latest/edge canonical\u2713 -\npc 22-0.3 127 22/stable canonical\u2713 gadget\npc-kernel 5.15.0-71.78.1 1281 22/stable canonical\u2713 kernel\nsnapd 2.59.4 19361 latest/candidate canonical\u2713 snapd\n\n$ snap services\nService Startup Current Notes\nedgex-device-virtual.device-virtual disabled inactive -\nedgexfoundry.consul disabled inactive -\nedgexfoundry.core-command disabled inactive -\nedgexfoundry.core-common-config-bootstrapper disabled inactive -\nedgexfoundry.core-data disabled inactive -\nedgexfoundry.core-metadata disabled inactive -\nedgexfoundry.nginx disabled inactive -\nedgexfoundry.redis disabled inactive -\nedgexfoundry.security-bootstrapper-consul disabled inactive -\nedgexfoundry.security-bootstrapper-nginx disabled inactive -\nedgexfoundry.security-bootstrapper-redis disabled inactive -\nedgexfoundry.security-proxy-auth disabled inactive -\nedgexfoundry.security-secretstore-setup disabled inactive -\nedgexfoundry.support-notifications disabled inactive -\nedgexfoundry.support-scheduler disabled inactive -\nedgexfoundry.vault disabled inactive -\n
Everything is inactive by default. Let's start the platform: \ud83d\ude80 Ubuntu Core
$ snap start --enable edgexfoundry\nStarted.\n
We need to also start Device Virtual, but before doing so, increase the logging verbosity using snap options to add logging for the produced data: \ud83d\ude80 Ubuntu Core
$ snap set edgex-device-virtual config.writable-loglevel=DEBUG\n$ snap start --enable edgex-device-virtual\nStarted.\n
Inspect the logs: \ud83d\ude80 Ubuntu Core
$ snap logs edgexfoundry\n...\n2023-05-24T15:43:54Z edgexfoundry.consul[2785]: 2023-05-24T15:43:54.667Z [INFO] agent: Synced check: check=support-notifications\n2023-05-24T15:43:54Z edgexfoundry.consul[2785]: 2023-05-24T15:43:54.801Z [INFO] agent: Synced check: check=core-data\n2023-05-24T15:43:55Z edgexfoundry.consul[2785]: 2023-05-24T15:43:55.220Z [INFO] agent: Synced check: check=core-command\n2023-05-24T15:43:55Z edgexfoundry.consul[2785]: 2023-05-24T15:43:55.368Z [INFO] agent: Synced check: check=core-metadata\n2023-05-24T15:43:56Z edgexfoundry.consul[2785]: 2023-05-24T15:43:56.208Z [INFO] agent: Synced check: check=support-scheduler\n2023-05-24T15:44:03Z edgexfoundry.consul[2785]: 2023-05-24T15:44:03.596Z [INFO] agent: Synced check: check=device-virtual\n\n\n$ snap logs -f edgex-device-virtual\n...\n2023-05-24T15:44:14Z edgex-device-virtual.device-virtual[3369]: level=DEBUG ts=2023-05-24T15:44:14.269393977Z app=device-virtual source=utils.go:80 msg=\"Event(profileName: Random-UnsignedInteger-Device, deviceName: Random-UnsignedInteger-Device, sourceName: Uint64, id: 77701381-5bbc-404d-a9b5-f30d58182ac6) published to MessageBus on topic: edgex/events/device/device-virtual/Random-UnsignedInteger-Device/Random-UnsignedInteger-Device/Uint64\"\n2023-05-24T15:44:19Z edgex-device-virtual.device-virtual[3369]: level=DEBUG ts=2023-05-24T15:44:19.066059149Z app=device-virtual source=reporter.go:195 msg=\"Publish 0 metrics to the 'edgex/telemetry/device-virtual' base topic\"\n2023-05-24T15:44:19Z edgex-device-virtual.device-virtual[3369]: level=DEBUG ts=2023-05-24T15:44:19.06612871Z app=device-virtual source=manager.go:123 msg=\"Reported metrics...\"\n^C\n
All services appear healthy. The Device Virtual logs show that the service is producing the expected synthetic data.
Let's exit the SSH session: \ud83d\ude80 Ubuntu Core
$ exit\nlogout\nConnection to localhost closed.\n
... and query data from outside via the API Gateway: \ud83d\udda5 Desktop
curl --insecure https://localhost:8443/core-data/api/v3/reading/all?limit=2\n
Since security is enabled, the request is not authorized. You can follow the instructions in the getting started guide to add a user to the API Gateway and generate a JWT token to access the API securely.
In this chapter, we demonstrated how to build an image that is pre-loaded with some EdgeX snaps. We then connected into a (virtual) machine instantiated with the image, verified the setup and performed additional steps to interactively start and configure the services.
In the next chapter, we walk you through creating an image that comes pre-loaded with this configuration, so it boots into a working EdgeX environment.
"},{"location":"examples/Ch-OSImageWithEdgeX/#b-override-configurations","title":"B. Override configurations","text":"In this chapter, we will improve our OS image so that:
Overriding the snap configurations upon installation is possible with gadget snaps.
The pc
gadget is available as a prebuilt snap in the store, however, in this chapter, we need to build our own to include custom configurations, passed in as default values to snaps. We will use the source code for Core22 AMD64 gadget from here as basis.
Tip
For a Raspberry Pi, you need to use the pi-gadget instead.
Clone the repo branch: \ud83d\udda5 Desktop
git clone https://github.com/snapcore/pc-amd64-gadget.git --branch=22\n
Add the following root level object to pc-amd64-gadget/gadget.yaml
:
defaults:\n# edgexfoundry\nAZGf0KNnh8aqdkbGATNuRuxnt1GNRKkV: # snap id\n# automatically start all the services\nautostart: true\n# disable security\nsecurity: false\n# override a single service's startup message\napps.core-data.config.service-startupmsg: \"Core Data Startup message from gadget!\"\n# set bind address of services to all interfaces via the common config\napps.core-common-config-bootstrapper.config.all-services-service-serverbindaddr: 0.0.0.0\n\n# edgex-device-virtual\nAmKuVTOfsN0uEKsyJG34M8CaMfnIqxc0: # snap id\n# automatically start the service\nautostart: true\nconfig:\n# configure the service so it does not use the secret store\nedgex-security-secret-store: false\n# override the startup message\nservice-startupmsg: \"Startup message from gadget!\"\n
For service startup and other configuration overrides, refer to Managing services and Config Overrides.
Build: \ud83d\udda5 Desktop
$ cd pc-amd64-gadget\n$ snapcraft -v\n...\nCreated snap package pc_22-0.3_amd64.snap\n\n$ cd ..\n
Note
You need to rebuild the snap every time you change the gadget.yaml
file.
Use the ubuntu-image tool again to build a new image. Use the same instructions as before, but with an additional flag to set the path to the gadget snap that we built locally above.
\ud83d\udda5 Desktop$ ubuntu-image snap model.signed.yaml --validation=enforce \\\n--snap pc-amd64-gadget/pc_22-0.3_amd64.snap # sideload the gadget\nFetching snapd\nFetching pc-kernel\nFetching core22\nFetching edgexfoundry\nFetching edgex-device-virtual\nWARNING: \"pc\" installed from local snaps disconnected from a store cannot be refreshed subsequently!\nCopying \"pc-amd64-gadget/pc_22-0.3_amd64.snap\" (pc)\n
The warning is because we sideloaded the gadget instead of pulling it from a store.
Tip
In production settings, a custom gadget would need to be uploaded to the IoT App Store to also receive OTA updates.
Note
You need to repeat the build every time you change and sign the model or rebuild the gadget.
Done
The image file is now ready to be flashed on a medium to create a bootable drive with the needed applications and basic configurations.
"},{"location":"examples/Ch-OSImageWithEdgeX/#try-it-out_1","title":"TRY IT OUT","text":"Refer to the following to:
This time, as set in the gadget defaults, services are started by default and security is disabled.
SSH to the Ubuntu Core machine as before and verify some of the seeded configurations:
\ud83d\ude80 Ubuntu Core$ snap services\nService Startup Current Notes\nedgex-device-virtual.device-virtual enabled active -\nedgexfoundry.consul enabled active -\nedgexfoundry.core-command enabled active -\nedgexfoundry.core-common-config-bootstrapper enabled inactive -\nedgexfoundry.core-data enabled active -\nedgexfoundry.core-metadata enabled active -\nedgexfoundry.nginx disabled inactive -\nedgexfoundry.redis enabled active -\nedgexfoundry.security-bootstrapper-consul disabled inactive -\nedgexfoundry.security-bootstrapper-nginx disabled inactive -\nedgexfoundry.security-bootstrapper-redis disabled inactive -\nedgexfoundry.security-proxy-auth disabled inactive -\nedgexfoundry.security-secretstore-setup disabled inactive -\nedgexfoundry.support-notifications enabled active -\nedgexfoundry.support-scheduler enabled active -\nedgexfoundry.vault disabled inactive -\n\n$ snap get edgex-device-virtual -d\n{\n \"autostart\": true,\n \"config\": {\n \"edgex-security-secret-store\": false,\n \"service-startupmsg\": \"Startup message from gadget!\"\n }\n}\n
Verify that Device Virtual has the startup message set from the gadget: \ud83d\ude80 Ubuntu Core
$ snap logs -n=all edgex-device-virtual | grep \"Startup message\"\n2023-05-24T16:52:05Z edgex-device-virtual.device-virtual[2807]: level=INFO ts=2023-05-24T16:52:05.791386915Z app=device-virtual source=variables.go:457 msg=\"Variables override of 'Service/StartupMsg' by environment variable: SERVICE_STARTUPMSG=Startup message from gadget!\"\n2023-05-24T16:52:22Z edgex-device-virtual.device-virtual[3010]: level=INFO ts=2023-05-24T16:52:22.342760716Z app=device-virtual source=message.go:55 msg=\"Startup message from gadget!\"\n
Since security is disabled and Core Data has been configured to listen on all interfaces (instead of just the loopback), we can now query data (insecurely) from outside: \ud83d\udda5 Desktop
$ curl --no-progress-meter http://localhost:59880/api/v3/reading/all?limit=2 | jq\n{\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"totalCount\": 86,\n \"readings\": [\n{\n\"id\": \"66c0e3ae-70a5-41b1-931f-bf680b2814ed\",\n \"origin\": 1684948755626088200,\n \"deviceName\": \"Random-Boolean-Device\",\n \"resourceName\": \"Bool\",\n \"profileName\": \"Random-Boolean-Device\",\n \"valueType\": \"Bool\",\n \"value\": \"true\"\n},\n {\n\"id\": \"94ec2182-7a0b-4515-8bcd-5445b8d59d2d\",\n \"origin\": 1684948755624763400,\n \"deviceName\": \"Random-UnsignedInteger-Device\",\n \"resourceName\": \"Uint32\",\n \"profileName\": \"Random-UnsignedInteger-Device\",\n \"valueType\": \"Uint32\",\n \"value\": \"2463192424\"\n}\n]\n}\n
We can do that only for servers that have their ports forwarded to the emulator's host as configured in Run in an emulator. Query all registered devices from Core Metadata: \ud83d\udda5 Desktop
$ curl --no-progress-meter http://localhost:59881/api/v3/device/all | jq '.devices[].name'\n\"Random-Boolean-Device\"\n\"Random-Float-Device\"\n\"Random-UnsignedInteger-Device\"\n\"Random-Binary-Device\"\n\"Random-Integer-Device\"\n
The response shows 5 virtual devices, registered by Device Virtual. In this chapter, we created an OS image which comes with EdgeX components that have overridden server configurations. We can extend the server configurations by setting other defaults in the gadget. This mechanism is made possible via a combination of snap options and environment variable overrides implemented for EdgeX services.
Overriding configuration fields is sufficient in most scenarios. However, there are situations in which we need to override entire configuration files instead of just some fields:
There are different ways to tackle the above situations such as pre-populating the EdgeX Config Provider and Core Metadata with the needed data, or deploying a local agent which takes care of the provisioning on runtime. In the next chapter, we will address the above requirements by deploying a snap which supplies custom configuration files to applications.
"},{"location":"examples/Ch-OSImageWithEdgeX/#c-replace-configuration-files","title":"C. Replace configuration files","text":"This chapter builds on top of what we did previously and shows how to override entire configuration files supplied via a snap package, called the config provider snap.
"},{"location":"examples/Ch-OSImageWithEdgeX/#create-a-config-provider-for-device-virtual","title":"Create a config provider for Device Virtual","text":"The EdgeX Device Virtual service cannot be fully configured using environment variables / snap options. Because of that, we need to package the modified config files and replace the defaults. Moreover, it is tedious to override many configurations one by one, compared to having a file which contains all the needed modifications.
Since we want to create an OS image pre-loaded with the configured system, we need to make sure the configurations are there without any manual user interaction. We do that by creating a snap which provides the configuration files to the Device Virtual snap.
For this exercise, we will replace the default Device Virtual configurations with a new set of files, containing just one virtual device and profile.
We use the config provider snap example as basis which already includes the mentioned configuration files:
\ud83d\udda5 Desktop$ git clone https://github.com/canonical/edgex-config-provider.git\n\n$ tree edgex-config-provider/examples/device-virtual/res/\nedgex-config-provider/examples/device-virtual/res/\n\u251c\u2500\u2500 configuration.yaml\n\u251c\u2500\u2500 devices\n\u2502 \u2514\u2500\u2500 devices.yaml\n\u251c\u2500\u2500 profiles\n\u2502 \u2514\u2500\u2500 device.virtual.float.yaml\n\u2514\u2500\u2500 README.md\n
This example includes only Device Virtual configurations. However, it is structured to allow the supply of configuration files to multiple EdgeX app and device services.
We'll continue with this example snap which is named edgex-config-provider-example
.
Tip
In production settings, you would create your own snap under a unique name and release it to the public snap store or a private IoT App Store along with your gadget. This will allow OTA updates as well as secure control of the provided configuration.
Build: \ud83d\udda5 Desktop
$ cd edgex-config-provider\n$ snapcraft -v\n...\nCreated snap package edgex-config-provider-example_<...>.snap\n\n$ cd ..\n
This will build for our host architecture which is amd64
. You can perform remote builds to build for other architectures.
Let's upload the snap and release it to the latest/edge
channel: \ud83d\udda5 Desktop
snapcraft upload --release=latest/edge edgex-config-provider/edgex-config-provider-example_<...>.snap\n
Uploading to the store is necessary because we need to define a connection contract on the OS between the config provider and Device Virtual snaps. Query the snap ID from the store: \ud83d\udda5 Desktop
$ snap info edgex-config-provider-example | grep snap-id\nsnap-id: WWPGZGi1bImphPwrRfw46aP7YMyZYl6w\n
"},{"location":"examples/Ch-OSImageWithEdgeX/#add-the-config-provider-to-the-image","title":"Add the config provider to the image","text":"Perform the following:
1) Add the config provider snap to model.yaml
:
- name: edgex-config-provider-example\ntype: app\ndefault-channel: latest/edge\nid: WWPGZGi1bImphPwrRfw46aP7YMyZYl6w\n
2) Sign the model as before: \ud83d\udda5 Desktop
yq eval model.yaml -o=json | snap sign -k edgex-demo > model.signed.yaml\n
3) Add the following root level object to pc-amd64-gadget/gadget.yaml
:
connections:\n- # Connect edgex-device-virtual's plug (consumer)\nplug: AmKuVTOfsN0uEKsyJG34M8CaMfnIqxc0:device-virtual-config\n# to edgex-config-provider-example's slot (provider) to override the default configuration files.\nslot: WWPGZGi1bImphPwrRfw46aP7YMyZYl6w:device-virtual-config\n
This tells the system to connect the device-virtual-config
plug of the Device Virtual snap to the slot of the same name on the config provider snap. 4) Rebuild the gadget: \ud83d\udda5 Desktop
$ cd pc-amd64-gadget\n$ snapcraft -v\n...\nCreated snap package pc_22-0.3_amd64.snap\n\n$ cd ..\n
"},{"location":"examples/Ch-OSImageWithEdgeX/#build-the-image_1","title":"Build the image","text":"Use ubuntu-image tool again to build a new image. Use the same instructions as before to build:
\ud83d\udda5 Desktop$ ubuntu-image snap model.signed.yaml --validation=enforce \\\n--snap pc-amd64-gadget/pc_22-0.3_amd64.snap\nFetching snapd\nFetching pc-kernel\nFetching core22\nFetching edgexfoundry\nFetching edgex-device-virtual\nFetching edgex-config-provider-example\nWARNING: \"pc\" installed from local snaps disconnected from a store cannot be refreshed subsequently!\nCopying \"pc-amd64-gadget/pc_22-0.3_amd64.snap\" (pc)\n
Note the addition of our config provider snap in the output.
Done
The image file is now ready to be flashed on a medium to create a bootable drive with the needed applications and custom configuration files.
"},{"location":"examples/Ch-OSImageWithEdgeX/#try-it-out_2","title":"TRY IT OUT","text":"Refer to the following to:
SSH to the Ubuntu Core machine and verify the installations:
List of snaps: \ud83d\ude80 Ubuntu Core
$ snap list\nName Version Rev Tracking Publisher Notes\ncore22 20230503 634 latest/stable canonical\u2713 base\nedgex-config-provider-example v3.0.0-beta+git6.1778bd4 29 latest/edge farshidtz -\nedgex-device-virtual 3.0.0-dev.51 673 latest/edge canonical\u2713 -\nedgexfoundry 3.0.0-dev.164 4455 latest/edge canonical\u2713 -\npc 22-0.3 x1 - - gadget\npc-kernel 5.15.0-71.78.1 1281 22/stable canonical\u2713 kernel\nsnapd 2.59.4 19361 latest/candidate canonical\u2713 snapd\n
Note that we now also have edgex-config-provider-example
in the list. Verify that Device Virtual has the startup message overridden via the gadget defaults: \ud83d\ude80 Ubuntu Core
$ snap logs -n=all edgex-device-virtual | grep \"Startup message\"\n2023-05-25T10:24:50Z edgex-device-virtual.device-virtual[2924]: level=INFO ts=2023-05-25T10:24:50.447466922Z app=device-virtual source=variables.go:457 msg=\"Variables override of 'Service/StartupMsg' by environment variable: SERVICE_STARTUPMSG=Startup message from gadget!\"\n2023-05-25T10:25:03Z edgex-device-virtual.device-virtual[3136]: level=INFO ts=2023-05-25T10:25:03.761993667Z app=device-virtual source=message.go:55 msg=\"Startup message from gadget!\"\n
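You can also confirm that the connection declared in the gadget between Device Virtual and the config provider was made. This uses the standard snap connections command on the Ubuntu Core machine; the exact output columns may vary with the snapd version.

snap connections edgex-device-virtual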
From the host machine, query the device metadata to ensure that Device Virtual has registered only a single virtual device: \ud83d\udda5 Desktop
$ curl --no-progress-meter http://localhost:59881/api/v3/device/all | jq '.devices[].name'\n\"Random-Float-Device\"\n
Congratulations! You deployed a system that is pre-configured to have:
Running the image in an emulator makes it easier to quickly try the image and find out possible issues.
We use a amd64
QEMU emulator. Refer to Testing Ubuntu Core with QEMU to setup the dependencies and learn about the various emulation options. Here, we provide the command to run without TPM emulation.
Warning
The pc.img
file passed to the emulator is used as the secondary storage. It persists any changes made to the partitions during the installation and any user modifications after the boot. You can stop and re-start the emulator at a later time without losing your changes.
To do a fresh start or to flash this image on disk, you need to rebuild the image. Alternatively, you can make a copy before using it in QEMU.
Run the following command and wait for the boot to complete: \ud83d\udda5 Desktop
sudo qemu-system-x86_64 \\\n-smp 4 \\\n-m 4096 \\\n-drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \\\n-drive file=pc.img,cache=none,format=raw,id=disk1,if=none \\\n-device virtio-blk-pci,drive=disk1,bootindex=1 \\\n-machine accel=kvm \\\n-serial mon:stdio \\\n-net nic,model=virtio \\\n-net user,hostfwd=tcp::8022-:22,hostfwd=tcp::8443-:8443,hostfwd=tcp::59880-:59880,hostfwd=tcp::59881-:59881\n
The above command forwards:
port 22 of the emulator to port 8022 on the host
port 8443 for external access in chapter A
port 59880 (Core Data)
port 59881 (Core Metadata)
Could not set up host forwarding rule 'tcp::8443-:8443'
This means that the port 8443 is not available on the host. Try stopping the service that uses this port or change the host port (left hand side) to another port number, e.g. tcp::18443-:8443
.
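To find out which process is already using the port on the host, standard tools such as ss or lsof can help (adjust the port number as needed; these are general Linux tools, not part of EdgeX):

sudo ss -tlnp | grep 8443
# or
sudo lsof -i :8443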
Success
Once the installation is complete, you'll see the initialization interface; Refer here for details.
"},{"location":"examples/Ch-OSImageWithEdgeX/#flash-the-image-on-disk","title":"Flash the image on disk","text":"Warning
If you have used pc.img
to install in QEMU, the image has changed. You need to rebuild a new copy before continuing.
The installation instructions are device-specific. You may refer to the Ubuntu Core section on this page. For example:
A precondition to continue with some of the instructions is to compress pc.img
. This speeds up the transfer and makes the input file similar to official images, improving compatibility with the available instructions.
To compress with the lowest compression rate of zero: \ud83d\udda5 Desktop
$ xz -vk -0 pc.img\npc.img (1/1)\n100 % 817.2 MiB / 3,309.0 MiB = 0.247 10 MiB/s 5:30 \n\n$ ls -lh pc.*\n-rw-rw-r-- 1 ubuntu ubuntu 3.3G Sep 16 17:03 pc.img\n-rw-rw-r-- 1 ubuntu ubuntu 818M Sep 16 17:03 pc.img.xz\n
A higher compression rate significantly increases the processing time and needed resources, with very little gain. Follow the device specific instructions.
Success
You may refer here for the initialization steps appearing by default.
"},{"location":"examples/Ch-OSImageWithEdgeX/#initialization","title":"Initialization","text":"Once the installation is complete, you will see the interface of the console-conf
program. It will walk you through the networking and user account setup. You'll need to enter the email address of your Ubuntu account to create an OS user account with your registered username and have your SSH public keys deployed as authorized SSH keys for that user. If you haven't done so, follow the instructions here to add your SSH keys before doing this setup.
Read here to see what the manual account setup looks like and how it can be automated.
"},{"location":"examples/Ch-OSImageWithEdgeX/#references","title":"References","text":"Use the Camera Management Example application service to auto discover and connect to nearby ONVIF and USB based cameras. This application will also control cameras via commands, create inference pipelines for the camera video streams and publish inference results to MQTT broker.
This app uses EdgeX compose, Edgex Onvif Camera device service, Edgex USB Camera device service, Edgex MQTT device service and Edge Video Analytics Microservice.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-dependencies","title":"Install Dependencies","text":""},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#environment","title":"Environment","text":"This example has been tested with a relatively modern Linux environment - Ubuntu 20.04 and later
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-docker","title":"Install Docker","text":"Install Docker from the official repository as documented on the Docker site.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#configure-docker","title":"Configure Docker","text":"To enable running Docker commands without the preface of sudo, add the user to the Docker group.
Warning
The docker group grants root-level privileges to the user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.
Create Docker group:
sudo groupadd docker\n
Note
If the group already exists, groupadd
outputs a message: groupadd: group docker
already exists. This is OK.
Add User to group:
sudo usermod -aG docker $USER\n
Restart your computer for the changes to take effect.
To verify the Docker installation, run hello-world
:
docker run hello-world\n
A Hello from Docker! greeting indicates successful installation. Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n2db29710123e: Pull complete \nDigest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n...\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-docker-compose","title":"Install Docker Compose","text":"Install Docker Compose from the official repository as documented on the Docker Compose site.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-golang","title":"Install Golang","text":"Install Golang from the official Golang website.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#install-tools","title":"Install Tools","text":"Install build tools:
sudo apt install build-essential\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#steps-for-running-this-example","title":"Steps for running this example:","text":""},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#1-start-the-edgex-core-services-and-device-services","title":"1. Start the EdgeX Core Services and Device Services.","text":"Clone edgex-compose
from github.com.
git clone https://github.com/edgexfoundry/edgex-compose.git\n
Navigate to the edgex-compose
directory:
cd edgex-compose\n
Checkout the latest release (main):
git checkout main\n
Navigate to the compose-builder
subdirectory:
cd compose-builder/\n
(Optional) Update the add-device-usb-camera.yml
file:
Note
This step is only required if you plan on using USB cameras.
a. Add enable rtsp server and the rtsp server hostname environment variables to the device-usb-camera
service, where your-local-ip-address
is the ip address of the machine running the device-usb-camera
service.
Snippet from add-device-usb-camera.yml
services:\n device-usb-camera:\n environment:\n DRIVER_ENABLERTSPSERVER: \"true\"\n DRIVER_RTSPSERVERHOSTNAME: \"your-local-ip-address\"\n
b. Under the ports
section, find the entry for port 8554 and change the host_ip from 127.0.0.1
to either 0.0.0.0
or the ip address you put in the previous step.
Clone the EdgeX Examples repository :
git clone https://github.com/edgexfoundry/edgex-examples.git\n
Navigate to the edgex-examples
directory:
cd edgex-examples\n
Checkout the latest release (main):
git checkout main\n
Navigate to the application-services/custom/camera-management
directory
cd application-services/custom/camera-management\n
Configure device-mqtt service to send Edge Video Analytics Microservice inference results into Edgex via MQTT
a. Copy the entire evam-mqtt-edgex folder into edgex-compose/compose-builder
directory.
b. Add this information into the add-device-mqtt.yml file in the edgex-compose/compose-builder
directory.
Snippet from add-device-mqtt.yml
services:\ndevice-mqtt:\n...\nenvironment:\nDEVICE_DEVICESDIR: /evam-mqtt-edgex/devices\nDEVICE_PROFILESDIR: /evam-mqtt-edgex/profiles\nMQTTBROKERINFO_INCOMINGTOPIC: \"incoming/data/#\"\nMQTTBROKERINFO_USETOPICLEVELS: \"true\"\n...\n... volumes:\n# example: - /home/github.com/edgexfoundry/edgex-compose/compose-builder/evam-mqtt-edgex:/evam-mqtt-edgex\n- <add-absolute-path-of-your-edgex-compose-builder-here-example-above>/evam-mqtt-edgex:/evam-mqtt-edgex\n
c. Add this information into the add-mqtt-broker-mosquitto.yml file in the edgex-compose/compose-builder
directory.
Snippet from add-mqtt-broker-mosquitto.yml
services:\nmqtt-broker:\n...\nports:\n...\n- \"59001:9001\"\n...\nvolumes:\n# example: - /home/github.com/edgexfoundry/edgex-compose/compose-builder/evam-mqtt-edgex:/evam-mqtt-edgex\n- <add-absolute-path-of-your-edgex-compose-builder-here>/evam-mqtt-edgex/mosquitto.conf:/mosquitto-no-auth.conf:ro
Note
Please note that both the services in this file need the absolute path to be inserted for their volumes.
Run the following command to start all the Edgex services.
Note
The ds-onvif-camera
parameter can be omitted if no Onvif cameras are present, or the ds-usb-camera
parameter can be omitted if no usb cameras are present.
make run no-secty ds-mqtt mqtt-broker ds-onvif-camera ds-usb-camera
Open cloned edgex-examples
repo and navigate to the edgex-examples/application-services/custom/camera-management
directory:
cd edgex-examples/application-services/custom/camera-management\n
Run this once to download edge-video-analytics into the edge-video-analytics sub-folder, download models, and patch pipelines
make install-edge-video-analytics\n
Note
This step is only required if you have Onvif cameras. Currently, this example app is limited to supporting only 1 username/password combination for all Onvif cameras.
Note
Please follow the instructions for the Edgex Onvif Camera device service in order to connect your Onvif cameras to EdgeX.
configuration.yamlenv varsModify the res/configuration.yaml file
InsecureSecrets:\nonvifauth:\nSecretName: onvifauth\nSecretData:\nusername: \"<username>\"\npassword: \"<password>\"\n
Export environment variable overrides
export WRITABLE_INSECURESECRETS_ONVIFAUTH_SECRETDATA_USERNAME=\"<username>\"\nexport WRITABLE_INSECURESECRETS_ONVIFAUTH_SECRETDATA_PASSWORD=\"<password>\"\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#32-optional-configure-usb-camera-rtsp-credentials","title":"3.2 (Optional) Configure USB Camera RTSP Credentials.","text":"Note
This step is only required if you have USB cameras.
Note
Please follow the instructions for the Edgex USB Camera device service in order to connect your USB cameras to EdgeX.
configuration.yamlenv varsModify the res/configuration.yaml file
InsecureSecrets:\nrtspauth:\nSecretName: rtspauth\nSecretData:\nusername: \"<username>\"\npassword: \"<password>\"\n
Export environment variable overrides
export WRITABLE_INSECURESECRETS_RTSPAUTH_SECRETDATA_USERNAME=\"<username>\"\nexport WRITABLE_INSECURESECRETS_RTSPAUTH_SECRETDATA_PASSWORD=\"<password>\"\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#33-configure-default-pipeline","title":"3.3 Configure Default Pipeline","text":"Initially, all new cameras added to the system will start the default analytics pipeline as defined in the configuration file below. The desired pipeline can be changed or the feature can be disabled by setting the DefaultPipelineName
and DefaultPipelineVersion
to empty strings.
Modify the res/configuration.yaml file with the name and version of the default pipeline to use when a new device is added to the system.
Note
These values can be left empty to disable the feature.
AppCustom:\nDefaultPipelineName: object_detection # Name of the default pipeline used when a new device is added to the system; can be left blank to disable feature\nDefaultPipelineVersion: person # Version of the default pipeline used when a new device is added to the system; can be left blank to disable feature\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#34-build-and-run","title":"3.4 Build and run","text":"Make sure you are at the root of this example app
cd edgex-examples/application-services/custom/camera-management\n
Build the docker image
make docker\n
Start the docker compose services in the background for both EVAM and Camera Management App
docker compose up -d\n
Note
If you would like to view the logs for these services, you can use docker compose logs -f
. To stop the services, use docker compose down
.
Note
The port for EVAM result streams has been changed from 8554 to 8555 to avoid conflicts with the device-usb-camera service.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#using-the-app","title":"Using the App","text":"Visit http://localhost:59750 to access the app.
Figure 1: Homepage for the Camera Management app
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#camera-position","title":"Camera Position","text":"You can control the position of supported cameras using ptz commands.
This section outlines how to start an analytics pipeline for inferencing on a specific camera stream.
Select a camera out of the drop down list of connected cameras.
Select a video stream out of the drop down list of connected cameras.
Select a analytics pipeline out of the drop down list of connected cameras.
Click the Start Pipeline
button.
Once the pipeline is running, you can view the pipeline and its status.
Expand a pipeline to see its status. This includes important information such as elapsed time, latency, frames per second, and elapsed time.
In the terminal where you started the app, once the pipeline is started, this log message will pop up.
level=INFO ts=2022-07-11T22:26:11.581149638Z app=app-camera-management source=evam.go:115 msg=\"View inference results at 'rtsp://<SYSTEM_IP_ADDRESS>:8555/<device name>'\"\n
Use the URI from the log to view the camera footage with analytics overlayed.
ffplay 'rtsp://<SYSTEM_IP_ADDRESS>:8555/<device name>'\n
Example Output:
Figure 2: analytics stream with overlay
If you want to stop the stream, press the red square:
Figure 3: the red square to shut down the pipeline"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#api-log","title":"API Log","text":"
The API log shows the status of the 5 most recent calls and commands that the management has made. This includes important information from the responses, including camera information or error messages.
Expand a log item to see the response
Good response: Bad response:
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#inference-events","title":"Inference Events","text":"To view the inference events in a json format, click the Stream Events
button.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#inference-results-in-edgex","title":"Inference results in Edgex","text":"
To view inference results in Edgex, open Edgex UI http://localhost:4000, click on the DataCenter
tab and view data streaming under Event Data Stream
by clicking on the Start
button.
A custom app service can be used to analyze this inference data and take action based on the analysis.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#video-example","title":"Video Example","text":"A brief video demonstration of building and using the device service:
Warning
This video was created with a previous release. Some new features may not be depicted in this video, and there might be some extra steps needed to configure the service.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#additional-development","title":"Additional Development","text":"
Warning
The following steps are only useful for developers who wish to make modifications to the code and the Web-UI.
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#development-and-testing-of-ui","title":"Development and Testing of UI","text":""},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#1-build-the-production-web-ui","title":"1. Build the production web-ui","text":"This builds the web ui into the web-ui/dist
folder, which is what is served by the app service on port 59750.
make web-ui\n
"},{"location":"examples/app-service-examples/camera-management/Ch-CameraManagement/#2-serve-the-web-ui-in-hot-reload-mode","title":"2. Serve the Web-UI in hot-reload mode","text":"This will serve the web ui in hot reload mode on port 4200 which will recompile and update anytime you make changes to a file. It is useful for rapidly testing changes to the UI.
make serve-ui\n
Open your browser to http://localhost:4200
"},{"location":"general/ContainerNames/","title":"EdgeX Container Names","text":"The following table provides the list of the default EdgeX Docker image names to the Docker container name and Docker Compose names.
CoreSupportingApplication & AnalyticsDeviceSecurityMiscellaneous Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/core-data edgex-core-data edgex-core-data core-data edgexfoundry/core-metadata edgex-core-metadata edgex-core-metadata core-metadata edgexfoundry/core-command edgex-core-command edgex-core-command core-command edgexfoundry/core-common-config-bootstrapper edgex-core-common-config-bootstrapper edgex-core-common-config-bootstrapper core-common-config-bootstrapper Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/support-notifications edgex-support-notifications edgex-support-notifications support-notifications edgexfoundry/support-scheduler edgex-support-scheduler edgex-support-scheduler support-scheduler Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/app-rfid-llrp-inventory edgex-app-rfid-llrp-inventory edgex-app-rfid-llrp-inventory app-rfid-llrp-inventory edgexfoundry/app-service-configurable edgex-app-rules-engine edgex-app-rules-engine app-rules-engine edgexfoundry/app-service-configurable edgex-app-http-export edgex-app-http-export app-http-export edgexfoundry/app-service-configurable edgex-app-mqtt-export edgex-app-mqtt-export app-mqtt-export edgexfoundry/app-service-configurable edgex-app-metrics-influxdb edgex-app-metrics-influxdb app-metrics-influxdb edgexfoundry/app-service-configurable edgex-app-sample edgex-app-sample app-sample edgexfoundry/app-service-configurable edgex-app-external-mqtt-trigger edgex-app-external-mqtt-trigger app-external-mqtt-trigger emqx/kuiper edgex-kuiper edgex-kuiper rulesengine Docker image name Docker container name Docker network hostname Docker Compose service name edgexfoundry/device-virtual edgex-device-virtual edgex-device-virtual device-virtual edgexfoundry/device-mqtt edgex-device-mqtt edgex-device-mqtt device-mqtt edgexfoundry/device-rest edgex-device-rest edgex-device-rest device-rest edgexfoundry/device-modbus edgex-device-modbus edgex-device-modbus device-modbus edgexfoundry/device-snmp edgex-device-snmp edgex-device-snmp device-snmp edgexfoundry/device-bacnet edgex-device-bacnet edgex-device-bacnet device-bacnet edgexfoundry/device-onvif-camera edgex-device-onvif-camera edgex-device-onvif-camera device-onvif-camera edgexfoundry/device-usb-camera edgex-device-usb-camera edgex-device-usb-camera device-usb-camera edgexfoundry/device-coap edgex-device-coap edgex-device-coap device-coap Docker image name Docker container name Docker network hostname Docker Compose service name vault edgex-vault edgex-vault vault nginx edgex-nginx edgex-nginx nginx edgexfoundry/security-proxy-auth edgex-proxy-auth edgex-proxy-auth security-proxy-auth edgexfoundry/security-proxy-setup edgex-security-proxy-setup edgex-security-proxy-setup security-proxy-setup edgexfoundry/security-secretstore-setup edgex-security-secretstore-setup edgex-security-secretstore-setup security-secretstore-setup edgexfoundry/security-bootstrapper edgex-security-bootstrapper edgex-security-bootstrapper security-bootstrapper Docker image name Docker container name Docker network hostname Docker Compose service name consul edgex-core-consul edgex-core-consul consul redis edgex-redis edgex-redis database"},{"location":"general/Definitions/","title":"Definitions","text":"The following glossary provides terms used in EdgeX Foundry. 
The definition are based on how EdgeX and its community use the term versus any strict technical or industry definition.
"},{"location":"general/Definitions/#actuate","title":"Actuate","text":"To cause a machine or device to operate. In EdgeX terms, to command a device or sensor under management of EdgeX to do something (example: stop a motor) or to reconfigure itself (example: set a thermostat's cooling point).
"},{"location":"general/Definitions/#brownfield-and-greenfield","title":"Brownfield and Greenfield","text":"Brownfield refers to older legacy equipment (nodes, devices, sensors) in an edge/IoT deployment, which typically uses older protocols. Greenfield refers to, typically, new equipment with modern protocols.
"},{"location":"general/Definitions/#cbor","title":"CBOR","text":"An acronym for \"concise binary object representation.\" A binary data serialization format used by EdgeX to transport binary sensed data (like an image). The user can also choose to send all data via CBOR for efficiency purposes, but at the expense of having EdgeX convert the CBOR into another format whenever the data needs to be understood and inspected or to persist the data.
"},{"location":"general/Definitions/#containerized","title":"Containerized","text":"EdgeX micro services and infrastructure (i.e. databases, registry, etc.) are built as executable programs, put into Docker images, and made available via Docker Hub (and Nexus repository for nightly builds). A service (or infrastructure element) that is available in Docker Hub (or Nexus) is said to be containerized. Docker images can be quickly downloaded and new Docker containers created from the images.
"},{"location":"general/Definitions/#contributordeveloper","title":"Contributor/Developer","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort.
"},{"location":"general/Definitions/#created-time-stamp","title":"Created time stamp","text":"The Created time stamp is the time the data was created in the database and is unchangeable. The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data was sent to EdgeX Foundry and the database.
Usually, the Origin and Created time stamps are the same, or very close to being the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different.
If persistence is disabled in core-data, the time stamp will default to 0.
"},{"location":"general/Definitions/#device","title":"Device","text":"In EdgeX parlance, \"device\" is used to refer to a sensor, actuator, or IoT \"thing\". A sensor generally collects information from the physical world - like a temperature or vibration sensor. Actuators are machines that can be told to do something. Actuators move or otherwise control a mechanism or system - like a value on a pump. While there may be some technical differences, for the purposes of EdgeX documentation, device will refer to a sensor, actuator or \"thing\".
"},{"location":"general/Definitions/#edge-analytics","title":"Edge Analytics","text":"The terms edge or local analytics (the terms are used interchangeably and have the same meaning in this context) for the purposes of edge computing (and EdgeX), refers to an \u201canalytics\u201d service is that: - Receives and interprets the EdgeX sensor data to some degree; some analytics services are more sophisticated and able to provide more insights than others - Make determinations on what actions and actuations need to occur based on the insights it has achieved, thereby driving actuation requests to EdgeX associated devices or other services (like notifications)
The analytics service could be some simple logic built into an app service, a rules engine package, or an agent of some artificial intelligence/machine learning system. From an EdgeX perspective, actionable intelligence generation is all the same: edge analytics means seeing the edge data and being able to make requests to act on what is seen. While EdgeX provides a rules engine service as its reference implementation of local analytics, app services and their data preparation capabilities allow sensor data to be streamed to any analytics package.
Because of EdgeX\u2019s micro service architecture and distributed nature, the analytics service would not necessarily have to run local to the devices / sensors. In other words, it would not have to run at the edge. App services could deliver the edge data to analytics living in the cloud. However, in these scenarios, the insight intelligence would not be considered local or edge in context. Because of latency concerns, data security and privacy needs, intermittent connectivity of edge systems, and other reasons, it is often vital for edge platforms to retain an analytic capability at the edge or locally.
"},{"location":"general/Definitions/#gateway","title":"Gateway","text":"An IoT gateway is a compute platform at the farthest ends of an edge or IoT network. It is the host or \u201cbox\u201d to which physical sensors and devices connect and that is, in turn, connected to the networks (wired or wirelessly) of the information technology realm.
IoT or edge gateways are compute platforms that connect \u201cthings\u201d (sensors and devices) to IT networks and systems.
"},{"location":"general/Definitions/#micro-service","title":"Micro service","text":"In a micro service architecture, each component has its own process. This is in contrast to a monolithic architecture in which all components of the application run in the same process.
Benefits of micro service architectures include: - Allow any one service to be replaced and upgraded more easily - Allow services to be programmed using different programming languages and underlying technical solutions (use the best technology for each specific service) - Ex: services written in C can communicate and work with services written in Go - This allows organizations building solutions to maximize available developer resources and some legacy code - Allow services to be distributed across host compute platforms - allowing better utilization of available compute resources - Allow for more scalable solutions by adding copies of services when needed
"},{"location":"general/Definitions/#origin-time-stamp","title":"Origin time stamp","text":"The Origin time stamp is the time the data is created on the device, device services, sensor, or object that collected the data before the data is sent to EdgeX Foundry and the database. The Created time stamp is the time the data was created in the database.
Usually, the Origin and Created time stamps are the same or very close to the same. On occasion the sensor may be a long way from the gateway or even in a different time zone, and the Origin and Created time stamps may be quite different.
"},{"location":"general/Definitions/#reference-implementation","title":"Reference Implementation","text":"Default and example implementation(s) offered by the EdgeX community. Other implementations may be offered by 3rd parties or for specialization.
"},{"location":"general/Definitions/#resource","title":"Resource","text":"A piece of information or data available from a sensor or \"thing\". For example, a thermostat would have temperature and humidity resources. A resource has a name (ResourceName) to identify it (\"temperature\" or \"humidity\" in this example) and a value (the sensed data - like 72 degrees). A resource may also have additional properties or attributes associated with it. The data type of the value (e.g., integer, float, string, etc.) would be an example of a resource property.
"},{"location":"general/Definitions/#rules-engine","title":"Rules Engine","text":"Rules engines are important to the IoT edge system.
A rules engine is a software system that is connected to a collection of data (either a database or a data stream). The rules engine examines and monitors various elements of the data, and then triggers some action based on the results of that monitoring.
A rules engine is a collection of \"If-Then\" conditional statements. The \"If\" informs the rules engine what data to look at and what ranges or values of data must match in order to trigger the \"Then\" part of the statement, which then informs the rules engine what action to take or what external resource to call on, when the data is a match to the \"If\" statement.
Most rules engines can be dynamically programmed meaning that new \"If-Then\" statements or rules, can be provided while the engine is running. The rules are often defined by some type of rule language with simple syntax to enable non-Developers to provide the new rules.
Rules engines are one of the simplest forms of \"edge analytics\" provided in IoT systems. Rules engines enable data picked up by IoT sensors to be monitored and acted upon (actuated). Typically, the actuation is accomplished on another IoT device or sensor. For example, a temperature sensor in an equipment enclosure may be monitored by a rules engine to detect when the temperature is getting too warm (or too cold) for safe or optimum operation of the equipment. The rules engine, upon detecting temperatures outside of the acceptable range, shuts off the equipment in the enclosure.
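To make the \"If-Then\" idea concrete, here is a minimal, purely illustrative Go sketch of the enclosure-temperature rule described above. It is not EdgeX or eKuiper code; the reading type, threshold, and shutOffEquipment helper are all hypothetical stand-ins.

```go
package main

import "fmt"

// reading is a simplified stand-in for a sensor reading; it is not an EdgeX type.
type reading struct {
	DeviceName string
	Resource   string
	Value      float64
}

// shutOffEquipment is a placeholder for the actuation the "Then" part would trigger,
// e.g. a command request sent to the device service managing the enclosure.
func shutOffEquipment(device string) {
	fmt.Printf("actuation: shutting off equipment monitored by %s\n", device)
}

func main() {
	r := reading{DeviceName: "enclosure-01", Resource: "temperature", Value: 87.5}

	const maxSafeTemp = 80.0 // assumed acceptable upper bound for the enclosure

	// The "If" part: does the monitored data match the rule's condition?
	if r.Resource == "temperature" && r.Value > maxSafeTemp {
		// The "Then" part: take the action (actuate another device or service).
		shutOffEquipment(r.DeviceName)
	}
}
```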
"},{"location":"general/Definitions/#software-development-kit","title":"Software Development Kit","text":"In EdgeX, a software development kit (or SDK) is a library or module to be incorporated into a new micro service. It provides a lot of the boilerplate code and scaffolding associated with the type of service being created. The SDK allows the developer to focus on the details of the service functionality and not have to worry about the mundane tasks associated with EdgeX services.
"},{"location":"general/Definitions/#south-and-north-side","title":"South and North Side","text":"South Side: All IoT objects, within the physical realm, and the edge of the network that communicates directly with those devices, sensors, actuators, and other IoT objects, and collects the data from them, is known collectively as the \"south side.\"
North Side: The cloud (or enterprise system) where data is collected, stored, aggregated, analyzed, and turned into information, and the part of the network that communicates with the cloud, is referred to as the \"north side\" of the network.
EdgeX enables data to be sent \"north,\" \"south,\" or laterally as needed and as directed.
"},{"location":"general/Definitions/#snappy-ubuntu-core-snaps","title":"\"Snappy\" / Ubuntu Core & Snaps","text":"A Linux-based Operating System provided by Ubuntu - formally called Ubuntu Core but often referred to as \"Snappy\". The packages are called 'snaps' and the tool for using them 'snapd', and works for phone, cloud, internet of things, and desktop computers. The \"Snap\" packages are self-contained and have no dependency on external stores. \"Snaps\" can be used to create command line tools, background services, and desktop applications.
"},{"location":"general/Definitions/#user","title":"User","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\".
"},{"location":"general/EdgeX_CN/","title":"Is EdgeX Foundry Cloud Native?","text":"This is a question we get in the EdgeX Community quite often; along with other related or extended questions like:
As a simple (perhaps oversimplified) answer to these questions, EdgeX was designed to run in/on minimal platforms (\"edge platforms\") with little compute, memory and network connectivity. Cloud native applications are, for the most part, designed to run in resource-rich enterprise / cloud environments. Limited resources and other considerations greatly impact the design and operation of edge applications.
Before answering these questions in more detail, it's important to understand the definition of cloud native systems. Where did \"cloud native\" come from and what is its purpose? How do all these other questions relate and what are people really asking?
"},{"location":"general/EdgeX_CN/#defining-cloud-native","title":"Defining Cloud Native","text":""},{"location":"general/EdgeX_CN/#origins","title":"Origins","text":"The origins of cloud native computing are right there in the name. Cloud native originated in the realm of cloud computing. Cloud native communities like to say their approach was \"born in the cloud.\" Cloud native computing and architectures emerged from organizations learning how to build and run applications in the cloud. Specifically, how to build and run applications that could scale (up and down) easily, remain functioning in the face of inevitable failures (resiliency), and could operate in the dynamic (or elastic) and distributed resource environments that exist in public, private or even hybrid clouds.
The origins of cloud native computing obviously come from the emergence of cloud technology, but many point specifically to 2015 and the creation of the Cloud Native Computing Foundation (launched by Google, IBM, Intel, VMWare and others with ties in the Cloud industry) as the event that started to galvanize cloud native concepts and steer the direction of Kubernetes (an important and typical ingredient in cloud native systems - see more below) used in cloud native applications.
"},{"location":"general/EdgeX_CN/#defining","title":"Defining","text":"So the origins are in cloud computing, but what exactly is cloud native computing? While debatable, most cloud native computing experts would agree that cloud native computing is about building and running applications in the cloud using methodologies, techniques and technologies that help applications be resilient, easy to manage, and easy to observe. \"Resilient, manageable, and observable\" are the mantra of cloud native experts. Why? Because applications that are resilient, manageable and observable make it easier for developers to make \"high impact\" code changes, at frequent rates, and with predicable impacts and minimal work. Simply put, the cloud native approach allows people to rapidly grow (iterate?) on an application and deploy it easily and with few or no outages.
"},{"location":"general/EdgeX_CN/#ingredients","title":"Ingredients","text":"How is this accomplished? The list of technologies and techniques of cloud native applications include:
Again, the list above is not official (and debatable on some of its points), but the product of the cloud native approach using these technologies creates, say cloud native proponents, applications that exist in the cloud that are:
You might be thinking - \"Wow! With all that goodness, why shouldn't all software applications be manufactured using the cloud native approach?\" Indeed, many of the principles of cloud native computing are now applied to all sorts of software development. Cloud native computing has expanded beyond the cloud. Additional methodologies (e.g. 12 factor apps), tools (e.g. Prometheus) and techniques (e.g., service discovery and service mesh mechanisms) have emerged to refine (some might say improve) the cloud native approach. Most, if not all, of what is labelled as cloud native computing technology can and has been used in general software development and deployment environments that don't operate in the cloud.
That includes use in edge or IoT computing.
There are, however, important differences between the edge and cloud. They are on opposite ends of the computing spectrum. These natural differences require, in many cases, that edge / IoT applications be constructed and run a little differently.
Note
The continuum of edge computing is vast. One often needs to define \"edge\" before making too many generalizations. Running MCUs and PLCs in a factory is at one end of the edge spectrum, versus a rather large and powerful ruggedized server in a retail store, versus a rack of servers at the base of a cell phone tower at the other end of the spectrum - yet all these qualify as \"edge computing\". In this light, as EdgeX Foundry was generally built for the more resource constrained, farther reaches of the edge (although it can be used in larger edge environments), this reference explores how cloud native computing applies under some of the lowest common denominator environments of the edge/IoT space.
So while it would be great if cloud native computing could be directly and wholly applied to the edge and IoT space - and by association then EdgeX Foundry - the constraints of the edge / IoT environment often allow only some of the cloud native computing approach (tools, technology, etc.) to be applied. This reference attempts to explain where cloud native computing principles have been applied to EdgeX, and where (and why) some challenges exist. It also identifies where future work and improvements in EdgeX (and the edge) and products from CNCF may help bring EdgeX more in line with cloud native computing.
"},{"location":"general/EdgeX_CN/#edge-native","title":"Edge Native","text":"The EdgeX community likes to think of EdgeX as \"Edge Native\". Born at the edge and adhering to some well established needs of the edge and IoT environments. Edge Native shares many of the principals of Cloud Native, but there are differences and one cannot (should not) blanketly try to apply cloud native to edge native realms just as the reverse (applying edge native to cloud native realms) would also be wrong.
"},{"location":"general/EdgeX_CN/#edgex-and-cloud-native-computing","title":"EdgeX and Cloud Native Computing","text":"While EdgeX is not cloud native, it has adopted quite a bit of cloud native principals and technologies. The lists below discuss where EdgeX does, does partially, and does not apply cloud native.
"},{"location":"general/EdgeX_CN/#incorporated-cloud-native-ingredients-in-edgex","title":"Incorporated Cloud Native Ingredients In EdgeX","text":"Micro Services
EdgeX has fully embraced micro services. From the beginning of the project, micro services offered a means to provide an edge/IoT application platform based on loosely coupled capabilities with well defined APIs. A micro service architecture allows the adopter to pick and choose which services are important to their use case and drop the others (critical in a resource constrained environment). It allows EdgeX services to be more easily improved upon and replaced (often by 3rd parties and commercially driven implementers) as better solutions emerge over time. It allows services to be written in alternate programming languages or using technologies best suited to the job. Micro services are especially beneficial where flexibility is a driving force, as it is in cloud and edge computing.
APIs
Each EdgeX micro service has a well defined API set. This API set is what allows replacement services to be created and inserted with ease. It allows for applications on top of EdgeX to be more easily created. Over the course of its existence, this API set has seen only one major revision (and most of that revision was based on the inclusion of standard communication elements such as correlation ids, pagination, and standard error messaging versus a change to the functional APIs). This speaks to how well the APIs are performing in the face of EdgeX requirements. Furthermore, the REST API definitions are even serving as the foundation for EdgeX service communication in other protocols (such as message oriented middleware). This is not unique as cloud native computing systems are also starting to embrace the use of service communications in alternate protocols as well as REST.
CI/CD
Through the efforts of some very talented, experienced and dedicated devops community members, EdgeX has enjoyed world class continuous integration/continuous delivery (CI/CD) since day one of the project. The EdgeX devops team has provided the project with automated builds, tests, and creation of project artifacts (like containers and snaps) that run with each pull request, nightly (for a check of the day's work), or on a regular schedule (such as performance checks monthly to ensure the platform remains within expected parameters as it is developed). As shown in cloud native environments, well developed CI/CD pipelines make sure EdgeX is able to \"make 'high impact' code changes, at frequent rates, and with predictable impacts and minimal work.\"
"},{"location":"general/EdgeX_CN/#sometimes-incorporated-cloud-native-ingredients-in-edgex","title":"Sometimes Incorporated Cloud Native Ingredients in EdgeX","text":"The following elements of cloud native are often, but not always applied in EdgeX.
Containers
EdgeX supports (even embraces) containers, but does not require their use. The EdgeX community produces both Docker containers and snaps (Ubuntu Linux software packages) with each release - along with Docker Compose and Helm Charts for orchestration and deployment assistance. Containers provide a convenient mechanism to package up a micro service with all of its dependencies, configuration, etc. They are a convenient software unit that makes deploying, orchestrating and monitoring the services of an application easier. However, there are environments where EdgeX runs that do not support container (or snap or other containerization) runtimes. Resource constraints (memory, storage, CPU, etc.), environmental situations (such as hardware architecture or OS), legacy infrastructure (old hardware or OS) and security constraints are just some of the reasons why EdgeX supports but does not dictate the use of containers. Further, and perhaps most importantly, EdgeX often provides the middleware between operational technology (OT) - like physical equipment and sensors - and information technology (IT). In the world of OT, there are physical connections and hardware-specific touch points that need to be accommodated that make using a container in that instance very difficult. It's not uncommon to see EdgeX adopters apply a hybrid approach whereby some of its services are containerized while other services are running \"bare metal\" or outside of any containerization runtime.
Agile
EdgeX has not adopted the Agile Manifesto, but the project does operate on Agile principles. The community formally releases twice a year, but development of the product is ongoing constantly and any change (new feature, bug fix, refactor, etc.) is tested and integrated into the product continuously and immediately (through the CI/CD process mentioned above). Formal releases are more stakes in the ground with regard to higher-level stability and agreed upon timelines for significant features. The community has adopted a philosophy of \"crawl, walk, run\" to grow new features that support a requirements base - but with an understanding (even an expectation) that requirements will change and/or be more fully understood as the feature evolves and gets used. While face-to-face meetings between community members are difficult given the global nature of an open source project, regular and frequent communications between the community developers/architects in and about the code are favored above lots of formal and comprehensive document exchange. Developers are free to use the tools and processes that suit them best so long as the resulting code fulfills requirements and satisfies the CI/CD process.
Distributed
EdgeX is a micro service architecture. Services communicate with each other via REST or message bus and that communication can occur across nodes (aka machines, hosts, etc.). Services have even been built to wait and continue to attempt to communicate with a dependent service - allowing for some resiliency. As such, EdgeX is, at its core, distributable. It was designed such that the services could operate largely independently and on top of whatever limited resources are available at the edge. As an example deployment, an EdgeX device service could run on a Raspberry Pi or smaller compute platform that is directly connected by GPIO to a physical sensor, while the core services are run on an edge gateway, and the application and analytic services (rules engine) run on an edge server. This would allow each service to maximize the resources available to the solution. Having said that, there are some complexities around real world distributed solutions that adopters would still need to solve depending on their use case and environment. For example, while services can communicate across a distributed set of nodes, the communications between EdgeX services are not secure by default (as would be provided via something like a cloud native service mesh). Adopters would need to provide for their own means to secure all traffic between services in most production environments. Service discovery is not fully implemented. EdgeX services do register with a service registry (Consul) but the services do not use that registry to locate other services. If a service changed location, other services would need to have their configuration changed in order to know and use the service at its new location. Finally, latency is a real concern in edge systems. In addition to service to service communications, most services use stores of information (Redis for data, Vault for secrets, Consul for configuration) which could also be distributed. These are referred to as backing services in cloud native terminology. Even if the communications were secure, if these stores or other services are all distributed, then the additional latency to constantly communicate with services and stores may not be conducive to the edge use case it supports. Each \"hop\" on a network of distributed services has a cost, and that cost adds up when building solutions that operate and manage physical edge capability.
"},{"location":"general/EdgeX_CN/#cloud-native-ingredients-not-in-edgex-and-why","title":"Cloud Native Ingredients Not In EdgeX (and why)","text":"Kubernetes
EdgeX provides example Helm Charts to assist adopters that want to run EdgeX in a Kubernetes environment. However, EdgeX was not designed to fully operate in a multi-cluster environment and take advantage of a full K8s environment. Our example Helm Charts, for example, allow a single instance of each EdgeX service to be deployed/orchestrated and monitored, but they would not allow K8s to fully manage and scale EdgeX services. Why? First and foremost, Kubernetes is large compared to the resource constraints of some edge platforms. While smaller Kubernetes environments are being developed for the edge (see Futures below), a whole host of challenges such as resource constraints, environment, infrastructure, etc. (as mentioned under Containers above) may not allow K8s to operate at the edge. Kubernetes is, for the most part, about the ability to load balance, distribute traffic, and scale (up or down) workloads so that an application remains stable. But on an edge platform, where would Kubernetes find the resources to balance and distribute and scale? Because edge nodes are static and often physically connected to the sensors they collect data from, there are no means to grow and/or shift the workloads. Portions of EdgeX might be able to scale up or down (those not physically tied to an edge sensor), but the platform as a whole is often rooted to the physical world it is connected to.
There are benefits (and challenges) to the use of Kubernetes that must be considered - whether used at the edge or in the enterprise.
Some of the Benefits of Kubernetes - It provides a \"central pane of glass\" for placing workloads at the edge, monitoring them, and upgrading them more easily than a native, snap-based, or Docker-based deployment. - It allows people to more easily deploy workloads that span from the cloud to the edge by using familiar tools that allow users to place their workloads in a more appropriate place. - Kubernetes is often chosen over Docker alone for container orchestration, with lots of commercially supported Kubernetes distributions for doing so. - Despite the fact that edge resources are not elastic, Kubernetes can make better scheduling decisions in a complex edge environment, where computational accelerators may be available on some nodes and not others, and Kubernetes can help place those workloads where they will run most efficiently.
Some of the Challenges of Using Kubernetes at the Edge - Edge resources are not elastic - Some devices are physically connected to nodes using non-routable or non-Internet protocols, which reduces the value of the Kubernetes scheduler - Storage is a sticking point - unless there is enough infrastructure at the edge to make storage highly available, separation of the storage from the workload mathematically reduces availability (i.e., 0.9 x 0.9 = 0.81!) - Available network bandwidth and latency can be a concern: a Kubernetes cluster generates a lot of background network and CPU activity.
Serverless Functions
EdgeX is not built on a serverless execution model. Unlike the compute and infrastructure resources of the cloud (which can almost be thought of as infinitely available and scaled up or down as needed), edge compute resources and infrastructure are not scaled up or down based on demand. An edge gateway, running on a light pole of a smart city for example, is not dynamic. The gateway must be provisioned based on the expected highest demand of that platform. The workload on the edge gateway must operate within those resource constraints. EdgeX is designed to operate in some of the smaller static, resource constrained environments.
Cloud
Interestingly, we have been asked if EdgeX can run in the cloud. Indeed, some services (such as application services or analytics packages like the rules engine) could run in the cloud (most of the services are platform agnostic), but EdgeX was designed to serve as the middleware between the edge and the cloud. At the lowest level - EdgeX services are meant to connect the physical edge (IoT sensors and devices of the OT world) to IT worlds. EdgeX connects things that don't always speak TCP/IP based IT protocols. EdgeX is meant to explore data at the edge in order to reduce latency of communication (making decisions closer to where the decision is turned into action) with the physical edge and reduce the amount of data that needs to be back-hauled to the world of IT (reducing the transportation and storage of unimportant edge data). Even if physical sensors or devices are able to connect and talk to the cloud directly (perhaps because they have WiFi or 5G capability allowing them to connect via TCP/IP), the latency needs and cost to transport all the data directly to the cloud are typically prohibitive.
Note
There are some edge use cases where a sensor-to-cloud architecture is warranted. Where the sensor speaks well known IT protocols (TCP/IP REST, MQTT, etc.), the edge data collection rates are small, and there is no need to make quick decisions at the edge, a simple sensor-to-cloud architecture makes sense and would likely negate the need for EdgeX in that situation.
"},{"location":"general/EdgeX_CN/#other-cloud-native-aspects","title":"Other Cloud Native Aspects","text":"Here are some other aspects or thoughts associated to the cloud native approach (directly or by loose association) and how they apply to EdgeX.
OS is separate
As highly abstracted, containerized applications, cloud native apps do not have a dependency on any specific operating system or individual machine. EdgeX is, for the most part, platform agnostic and able to run on any hardware, OS or connect to any type of sensor or cloud system (whether using EdgeX containers or running on bare metal). However, there are some sensors/devices that require OS or hardware specific drivers or protocol support. These specific services (typically device services) are OS dependent.
High Availability
While not strictly a cloud native principle, cloud native container apps are typically said to provide high availability (HA) - avoiding downtime (scheduled or unscheduled), often by taking advantage of cloud native infrastructure like Kubernetes to keep multiple instances of a service running when HA is paramount. EdgeX does not offer HA out of the box. Services are built to be resilient (for example, recovering from anticipated errors or waiting for dependent services to come up or return when they are not detected), but they are not guaranteed to be HA. When EdgeX services are run in some environments (snaps for example), the environment may detect service issues and launch a new instance of the service to prevent downtime, but these are features of the underlying runtime environments and not of EdgeX services directly. HA often requires a certain amount of redundancy; that is, keeping multiple instances of a service running (or at the ready) and using something like Kubernetes to route traffic appropriately given the condition of a service. EdgeX does not have this infrastructure built in, and even if it did, it would have difficulty since some services are again tied to physical sensors/devices. If a device service connected to a Modbus device, for example, were to go down, then a backup/redundant service would be of little use without re-provisioning the sensor or device to the backup device service. In order to provide true HA uptime with an edge solution that includes EdgeX, one would need to scale out, not up. That is, one would need to set up redundant hardware (sensors, gateway, etc.) with the edge application (EdgeX in this instance) connected to its copy of the sensors and devices and each transmitting back to the IT enterprise such that the enterprise could compare and detect when one of the copies was likely having issues.
Would EdgeX ever explore building more HA capability into its services (or even some of its services)? This is unlikely in the near term for the following reasons:
Benefitting from Elastic Infrastructure
Cloud native applications take advantage of shared infrastructure (hardware, software, etc.) provided by the cloud platform in an \"elastic manner\" - that is, expanding or shrinking their use of infrastructure based on need (and not really availability, which can be considered near infinite). As previously mentioned, edge platforms rarely, if ever, provide this type of infrastructure. Therefore, EdgeX is not built to benefit from it. If an EdgeX service were to begin to receive more and more hits on its APIs, the service would eventually fail. There is no EdgeX-provided capability to scale out additional copies of the service.
12 factor app
EdgeX and its services are not 12 factor apps. EdgeX does try to abide by many of the twelve factors (one codebase, declared and isolated dependencies, external config, isolated and configurable backing services, separate build, release and run stages, etc.). But some of the 12 factors, such as concurrency (scale out via the process model), are not possible with each EdgeX service as already mentioned above.
Observable
Perhaps one of the greatest contributions of the CNCF community to cloud native computing is providing all sorts of tools and technologies to observe and analyze cloud native applications in the cloud. Tools like Prometheus make monitoring cloud native containers and their resource utilization a breeze. EdgeX does not come with native observability capabilities. When using EdgeX containers, tools like Prometheus can be used to observe and analyze EdgeX services. Likewise, on some platforms and OS, there are ingredients (like Linux process status or system monitor for snaps) that can be used to help facilitate some level of monitoring. But these are not provided by EdgeX, usually require additional work by an adopter, and may not provide the level of inspection detail required. EdgeX is, with the Kamakura release, starting to provide more system level data (versus sensor data), metrics and events via message bus that an adopter can subscribe to in order to do more observing/analyzing of the EdgeX services. This, however, is raw data; some additional tooling is required on top of it to make sense of the data for human or machine monitoring.
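As an illustration of consuming that raw metrics data, the sketch below subscribes to the message bus with a plain MQTT client. It assumes an MQTT-based message bus reachable on localhost:1883 (the default MQTT broker port listed in this documentation) and uses edgex/telemetry/# as the topic filter; both the broker address and the topic are assumptions - check your MessageBus and Telemetry configuration for the values in your deployment.

```go
package main

import (
	"fmt"
	"os"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Connect to the (assumed) MQTT message bus broker.
	opts := mqtt.NewClientOptions().AddBroker("tcp://localhost:1883").SetClientID("metrics-watcher")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		fmt.Fprintln(os.Stderr, token.Error())
		os.Exit(1)
	}

	// Print every telemetry message as it arrives; real tooling would parse and aggregate it.
	handler := func(_ mqtt.Client, msg mqtt.Message) {
		fmt.Printf("metric on %s: %s\n", msg.Topic(), msg.Payload())
	}

	// The topic filter is an assumption for illustration purposes.
	if token := client.Subscribe("edgex/telemetry/#", 0, handler); token.Wait() && token.Error() != nil {
		fmt.Fprintln(os.Stderr, token.Error())
		os.Exit(1)
	}

	select {} // block forever; a real service would handle shutdown signals
}
```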
"},{"location":"general/EdgeX_CN/#the-future-of-cloud-native-and-edgex","title":"The Future of Cloud Native and EdgeX","text":"As cloud native computing technology and principals expands to more levels of our software realms and as the edge begins to become more indistinguishable from any other part of our computing network, it is inevitable that EdgeX will become more cloud native like. Or perhaps put more precisely, cloud native and edge native are tending toward each other. Edge computing environments are becoming less resource constrained in many places. The CNCF is looking to bring cloud native technology and tools (like Kubernetes) to the edge. Additionally, there are places where EdgeX improvements can help to bridge the cloud native | edge native divide.
Kubernetes Support
As lighter weight Kubernetes infrastructure becomes available (e.g. K3s, KubeEdge, Minikube, etc. - see a comparison for context) and are improved upon, and/or as more edge computing environments get more resources, one of the chief cloud native technologies - that is Kubernetes - or its close cousin will emerge to better facilitate deployment, orchestration, and monitoring (observability) of container based workloads at the edge. EdgeX must be prepared to support and embrace it as it has containers and snaps - yet still recognize that the lowest common denominator of edge platforms may only support \"bare metal\" (only OS and not hypervisor or container infrastructure) type deployments for the foreseeable future.
Better Use of the Service Registry
EdgeX services can and should use the service registry to locate dependent services. This will allow services to be more easily distributed and even allow for use of load balancing and redundant services in some cases.
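For example, because EdgeX services already register with Consul, a service could resolve a dependency through the registry instead of through static configuration. The sketch below queries Consul directly with the HashiCorp Go client purely as an illustration of the idea (it is not how EdgeX services are wired today); the service key core-data and the local agent address are assumptions based on a default deployment.

```go
package main

import (
	"fmt"
	"log"

	consulapi "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (EdgeX runs Consul as edgex-core-consul on port 8500 by default).
	client, err := consulapi.NewClient(consulapi.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the registry for healthy instances of a dependent service;
	// "core-data" is an assumed registration key - adjust for your deployment.
	entries, _, err := client.Health().Service("core-data", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no healthy core-data instances registered")
	}

	// Use the registered address/port rather than a hard-coded host from configuration.
	svc := entries[0].Service
	fmt.Printf("core-data found at http://%s:%d\n", svc.Address, svc.Port)
}
```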
Secure Service-to-Service Communications
Where warranted, the inclusion of secure communication between services and potentially the inclusion of an optional service mesh will allow for more easily distributed services.
"},{"location":"general/PlatformRequirements/","title":"Platform Requirements","text":"EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended:
MemoryStorageOperating SystemsMemory: minimum of 1 GB When considering memory for your EdgeX platform, consider your use of the database - Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see list below). Those strategies have continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual).
Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32GB of storage is minimally recommended to start.
EdgeX Foundry has been run successfully on many systems, including, but not limited to, the following systems:
Info
EdgeX Foundry runs on various distributions and / or versions of Linux, Unix, MacOS, Windows, etc. However, the community only supports the platform on amd64
(x86-64) and arm64
architectures.
EdgeX Foundry releases pre-built artifacts as Docker images and Snaps. Please refer to Getting Started for details.
EdgeX can run on armhf
architecture but that requires users to build their own executables from source. EdgeX does not officially support armhf
.
Each EdgeX micro service requires configuration (i.e. - a repository of initialization and operating values). The configuration is initially provided by a YAML file but a service can utilize the centralized configuration management provided by EdgeX for its configuration.
See the Configuration and Registry documentation for more details about initialization of services and the use of the configuration service.
Please refer to the EdgeX Foundry architectural decision record for details (and design decisions) behind the configuration in EdgeX.
Please refer to the general Common Configuration documentation for configuration properties common to all services. Find service specific configuration references in the tabs below.
CoreSupportingApplication & AnalyticsDeviceSecurity Service Name Configuration Reference core-data Core Data Configuration core-metadata Core Metadata Configuration core-command Core Command Configuration Service Name Configuration Reference support-notifications Support Notifications Configuration support-scheduler Support Scheduler Configuration Services Name Configuration Reference app-service General Application Service Configuration app-service-configurable Configurable Application Service Configuration eKuiper rules engine/eKuiper Basic eKuiper Configuration Services Name Configuration Reference device-service General Device Service Configuration device-virtual Virtual Device Service Configuration Services Name Configuration Reference API Gateway API Gateway Configuration Add-on Services Configuring Add-on Service"},{"location":"general/ServicePorts/","title":"Default Service Ports","text":"The following tables (organized by type of service) capture the default service ports. These default ports are also used in the EdgeX provided service routes defined in the Kong API Gateway for access control.
CoreSupportingApplicationDeviceSecurityMiscellaneous Services Name Port Definition core-data 59880 core-metadata 59881 core-command 59882 redis 6379 consul 8500 Services Name Port Definition support-notifications 59860 support-scheduler 59861 rules engine / eKuiper 59720 system management agent (deprecated) 58890 Services Name Port Definition app-sample 59700 app-service-rules 59701 app-push-to-core 59702 app-mqtt-export 59703 app-http-export 59704 app-functional-tests 59705 app-external-mqtt-trigger 59706 app-metrics-influxdb 59707 app-rfid-llrp-inventory 59711 Services Name Port Definition device-virtual 59900 device-modbus 59901 device-bacnet 59980 device-mqtt 59982 device-usb-camera 59983 device-onvif-camera 59984 device-camera 59985 device-rest 59986 device-coap 59988 device-rfid-llrp 59989 device-grove 59992 device-snmp 59993 device-gpio 59910 Services Name Port Definition vault 8200 nginx 8000, 8443 security-spire-server 59840 security-spiffe-token-provider 59841 security-proxy-auth 59842 Services Name Port Definition ui 4000 Modbus simulator 1502 MQTT broker 1883"},{"location":"getting-started/","title":"Getting Started","text":"EdgeX Foundry is operating system and architecture agnostic. The community releases artifacts for common architectures. However, it is possible to build the components for other platforms. See the platform requirements reference page for details.
To get started you need to get EdgeX Foundry either as a User or as a Developer/Contributor.
"},{"location":"getting-started/#user","title":"User","text":"If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". You will want to follow the Getting Started as a User guide which takes you through the process of deploying the latest EdgeX releases.
For demo purposes and to run EdgeX on your machine in just a few minutes, please refer to the Quick Start guide.
"},{"location":"getting-started/#developer-and-contributor","title":"Developer and Contributor","text":"If you want to change, add to or at least build the existing EdgeX code base, then you are a \"Developer\". \"Contributors\" are developers that further wish to contribute their code back into the EdgeX open source effort. You will want to follow the Getting Started for Developers guide.
"},{"location":"getting-started/#hybrid","title":"Hybrid","text":"See Getting Started Hybrid if you are developing or working on a particular micro service, but want to run the other micro services via Docker Containers. When working on something like an analytics service (as a developer or contributor) you may not wish to download, build and run all the EdgeX code - you only want to work with the code of your service. Your new service may still need to communicate with other services while you test your new service. Unless you want to get and build all the services, developers will often get and run the containers for the other EdgeX micro services and run only their service natively in a development environment. The EdgeX community refers to this as \"Hybrid\" development.
"},{"location":"getting-started/#device-service-developer","title":"Device Service Developer","text":"As a developer, if you intend to connect IoT objects (device, sensor or other \"thing\") that are not currently connected to EdgeX Foundry, you may also want to obtain the Device Service Software Development Kit (DS SDK) and create new device services. The DS SDK creates all the scaffolding code for a new EdgeX Foundry device service; allowing you to focus on the details of interfacing with the device in its native protocol. See Getting Started with Device SDK for help on using the DS SDK to create a new device service. Learn more about Device Services and the Device Service SDK at Device Services.
"},{"location":"getting-started/#application-service-developer","title":"Application Service Developer","text":"As a developer, if you intend to get EdgeX sensor data to external systems (be that an enterprise application, on-prem server or Cloud platform like Azure IoT Hub, AWS IoT, Google Cloud IOT, etc.), you will likely want to obtain the Application Functions SDK (App Func SDK) and create new application services. The App Func SDK creates all the scaffolding code for a new EdgeX Foundry application service; allowing you to focus on the details of data transformation, filtering, and otherwise prepare the sensor data for the external endpoint. Learn more about Application Services and the Application Functions SDK at Application Services.
"},{"location":"getting-started/#versioning","title":"Versioning","text":"Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.
"},{"location":"getting-started/#long-term-support","title":"Long Term Support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.
"},{"location":"getting-started/ApplicationFunctionsSDK/","title":"Getting Started","text":""},{"location":"getting-started/ApplicationFunctionsSDK/#the-application-functions-sdk","title":"The Application Functions SDK","text":"The SDK is built around the idea of a \"Functions Pipeline\". A functions pipeline is a collection of various functions that process the data in the order that you've specified. The functions pipeline is executed by the specified trigger in the configuration.yaml
. The first function in the pipeline is called with the event that triggered the pipeline (ex. dtos.Event
). Each successive call in the pipeline is called with the return result of the previous function. Let's take a look at a simple example that creates a pipeline to filter particular device ids and subsequently transform the data to XML:
package main\n\nimport (\n\"errors\"\n\"fmt\"\n\"os\"\n\n\"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg\"\n\"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/interfaces\"\n\"github.com/edgexfoundry/app-functions-sdk-go/v2/pkg/transforms\"\n)\n\nconst (\nserviceKey = \"app-simple-filter-xml\"\n)\n\nfunc main() {\n// turn off secure mode for examples. Not recommended for production\n_ = os.Setenv(\"EDGEX_SECURITY_SECRET_STORE\", \"false\")\n\n// 1) First thing to do is to create an new instance of an EdgeX Application Service.\nservice, ok := pkg.NewAppService(serviceKey)\nif !ok {\nos.Exit(-1)\n}\n\n// Leverage the built in logging service in EdgeX\nlc := service.LoggingClient()\n\n// 2) shows how to access the application's specific configuration settings.\ndeviceNames, err := service.GetAppSettingStrings(\"DeviceNames\")\nif err != nil {\nlc.Error(err.Error())\nos.Exit(-1)\n}\n\nlc.Info(fmt.Sprintf(\"Filtering for devices %v\", deviceNames))\n\n// 3) This is our pipeline configuration, the collection of functions to\n// execute every time an event is triggered.\nif err := service.SetDefaultFunctionsPipeline(\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\ntransforms.NewConversion().TransformToXML\n); err != nil {\nlc.Errorf(\"SetDefaultFunctionsPipeline returned error: %s\", err.Error())\nos.Exit(-1)\n}\n\n// 4) Lastly, we'll go ahead and tell the SDK to \"start\" and begin listening for events\n// to trigger the pipeline.\nerr = service.Run()\nif err != nil {\nlc.Errorf(\"Run returned error: %s\", err.Error())\nos.Exit(-1)\n}\n\n// Do any required cleanup here\n\nos.Exit(0)\n}\n
The above example is meant to merely demonstrate the structure of your application. Notice that the output of the last function is not available anywhere inside this application. You must provide a function in order to work with the data from the previous function. Let's go ahead and add the following function that prints the output to the console.
func printXMLToConsole(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\n// Leverage the built in logging service in EdgeX\nlc := ctx.LoggingClient()\n\nif data == nil {\nreturn false, errors.New(\"printXMLToConsole: No data received\")\n}\n\nxml, ok := data.(string)\nif !ok {\nreturn false, errors.New(\"printXMLToConsole: Data received is not the expected 'string' type\")\n}\n\nprintln(xml)\nreturn true, nil\n}\n
After placing the above function in your code, the next step is to modify the pipeline to call this function: if err := service.SetDefaultFunctionsPipeline(\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\ntransforms.NewConversion().TransformToXML,\nprintXMLToConsole //notice this is not a function call, but simply a function pointer. \n); err != nil {\n...\n}\n
Set the Trigger type to http
in configuration file found here: res/configuration.yaml [Trigger]\nType=\"http\"\n
Using PostMan or curl send the following JSON to localhost:<port>/api/v3/trigger
{\n\"requestId\": \"82eb2e26-0f24-48ba-ae4c-de9dac3fb9bc\",\n\"apiVersion\" : \"v3\",\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"deviceName\": \"Random-Float-Device\",\n\"profileName\": \"Random-Float-Device\",\n\"sourceName\" : \"Float32\",\n\"origin\": 1540855006456,\n\"id\": \"94eb2e26-0f24-5555-2222-de9dac3fb228\",\n\"readings\": [\n{\n\"apiVersion\" : \"v3\",\n\"resourceName\": \"Float32\",\n\"profileName\": \"Random-Float-Device\",\n\"deviceName\": \"Random-Float-Device\",\n\"value\": \"76677\",\n\"origin\": 1540855006469,\n\"ValueType\": \"Float32\",\n\"id\": \"82eb2e36-0f24-48aa-ae4c-de9dac3fb920\"\n}\n]\n}\n}\n
After making the above modifications, you should now see data printing out to the console in XML when an event is triggered.
Note
You can find this complete example \"Simple Filter XML\" and more examples located in the examples section.
Up until this point, the pipeline has been triggered by an event over HTTP and the data at the end of that pipeline lands in the last function specified. In the example, data ends up printed to the console. Perhaps we'd like to send the data back to where it came from. In the case of an HTTP trigger, this would be the HTTP response. In the case of EdgeX MessageBus, this could be a new topic to send the data back to the MessageBus for other applications that wish to receive it. To do this, simply call ctx.SetResponseData(data []byte)
passing in the data you wish to \"respond\" with. In the above printXMLToConsole(...)
function, replace println(xml)
with ctx.SetResponseData([]byte(xml))
. You should now see the response in your postman window when testing the pipeline.
These instructions are for C Developers and Contributors to get, run and otherwise work with C-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements.
If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User)
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#what-you-need-for-c-development","title":"What You Need For C Development","text":"Many of EdgeX device services are built in C. In the future, other services could be built in C. In additional to the hardware and software listed in the Developers guide, to build EdgeX C services, you will need the following:
You can install these on Debian 11 (Bullseye) by running:
sudo apt-get install libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev libcbor-dev libpaho-mqtt-dev uuid-dev libhiredis-dev\n
Some of these supporting packages have dependencies of their own, which will be automatically installed when using package managers such as APT, DNF etc. libpaho-mqtt-dev
is not included in Ubuntu prior to Groovy (20.10). IOTech provides a package for Focal (20.04 LTS) which may be installed as follows:
sudo curl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public -o /etc/apt/trusted.gpg.d/iotech-public.asc\nsudo echo \"deb https://iotech.jfrog.io/iotech/debian-release $(lsb_release -cs) main\" | tee -a /etc/apt/sources.list.d/iotech.list\nsudo apt-get update\nsudo apt-get install libpaho-mqtt\n
CMake is required to build the SDKs. Version 3 or better is required. You can install CMake on Debian by running:
sudo apt-get install cmake\n
Check that your C development environment includes the following:
From EdgeX version 3.0, the C utilities used by the SDK must be installed as a pre-requisite package, rather than being downloaded and built with the SDK itself as in previous versions. Note that if re-using an old build tree, the src/c/iot
and include/iot
directories must be removed as these will be outdated.
All commands shown are to be run as the root user.
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#debian-and-ubuntu","title":"Debian and Ubuntu","text":"Management of package signing keys is changed in newer versions. For Debian 11 and Ubuntu 22.04:
apt-get install lsb-release apt-transport-https curl gnupg\ncurl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public | gpg --dearmor -o /usr/share/keyrings/iotech.gpg\necho \"deb [signed-by=/usr/share/keyrings/iotech.gpg] https://iotech.jfrog.io/iotech/debian-release $(lsb_release -cs) main\" | tee -a /etc/apt/sources.list.d/iotech.list\napt-get update\napt-get install iotech-iot-1.5-dev\n
For earlier versions:
apt-get install lsb-release apt-transport-https curl gnupg\ncurl -fsSL https://iotech.jfrog.io/artifactory/api/gpg/key/public | apt-key add -\necho \"deb https://iotech.jfrog.io/iotech/debian-release $(lsb_release -cs) main\" | tee -a /etc/apt/sources.list.d/iotech.list\napt-get update\napt-get install iotech-iot-1.5-dev\n
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#alpine","title":"Alpine","text":"wget https://iotech.jfrog.io/artifactory/api/security/keypair/public/repositories/alpine-release -O /etc/apk/keys/alpine.dev.rsa.pub\necho \"https://iotech.jfrog.io/artifactory/alpine-release/v3.16/main\" >> /etc/apk/repositories\napk update\napk add iotech-iot-1.5-dev\n
Note: If not using Alpine 3.16, replace v3.16 in the above commands with the correct version.
"},{"location":"getting-started/Ch-GettingStartedCDevelopers/#next-steps","title":"Next Steps","text":"To explore how to create and build EdgeX device services in C, head to the Device Services, C SDK guide.
"},{"location":"getting-started/Ch-GettingStartedDTOValidation/","title":"DTO Validation","text":"The go-mod-core-contracts leverage the go-playground/validator for DTO validation as it provides common validation function and customization mechanism.
"},{"location":"getting-started/Ch-GettingStartedDTOValidation/#tag-usage","title":"Tag usage","text":"EdgeX verifies the struct fields by using go-playground/validator validation tags or custom validation tags, for example:
type Device struct {\n DBTimestamp `json:\",inline\"`\n Id string `json:\"id,omitempty\" validate:\"omitempty,uuid\"`\n Name string `json:\"name\" validate:\"required,edgex-dto-none-empty-string,edgex-dto-rfc3986-unreserved-chars\"`\n Description string `json:\"description,omitempty\"`\n AdminState string `json:\"adminState\" validate:\"oneof='LOCKED' 'UNLOCKED'\"`\n OperatingState string `json:\"operatingState\" validate:\"oneof='UP' 'DOWN' 'UNKNOWN'\"`\n ...\n}\n
The device name field carries the following validations: required, edgex-dto-none-empty-string and edgex-dto-rfc3986-unreserved-chars. You can find more validations in the go-playground/validator and EdgeX custom validations in the go-mod-core-contracts.
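To see how such a tag behaves, here is a minimal, self-contained sketch that uses go-playground/validator directly; it is not the actual go-mod-core-contracts code, and the re-implementation of the edgex-dto-none-empty-string rule below is only an assumption for demonstration:

package main

import (
	"fmt"
	"strings"

	"github.com/go-playground/validator/v10"
)

type deviceDTO struct {
	// Tag style mirrors the Device DTO shown above.
	Name string `validate:"required,edgex-dto-none-empty-string"`
}

func main() {
	validate := validator.New()

	// Register a custom tag; this sketch simply rejects strings that are
	// empty after trimming whitespace.
	_ = validate.RegisterValidation("edgex-dto-none-empty-string", func(fl validator.FieldLevel) bool {
		return strings.TrimSpace(fl.Field().String()) != ""
	})

	// Fails validation because the name is whitespace only.
	fmt.Println(validate.Struct(deviceDTO{Name: "   "}))
}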
"},{"location":"getting-started/Ch-GettingStartedDTOValidation/#character-restriction","title":"Character restriction","text":"The EdgeX uses the custom validation edgex-dto-rfc3986-unreserved-chars to prevent the user inputting the reserved characters.
This validation allows for only the following characters:
EdgeX 3.0
In EdgeX 3.0, the character restriction was reduced for the command name and resource name because some protocols may use /
or .
in the name. Because the API supports URL escaping, device command names and resource names can contain a wider range of characters. For example, the user can define the command name line-a/test:value
and use it with URL escaping as /api/v3/device/name/Modbus-TCP-Device/line-a%2Ftest%3Avalue
.
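For reference, one way to produce such an escaped path segment programmatically is Go's url.PathEscape (a sketch; note that PathEscape leaves ':' unescaped because it is legal inside a path segment, so the result is equivalent to the pre-escaped %3A form shown above):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	name := "line-a/test:value"
	// Escape the command name so it can be embedded as a single URL path segment.
	fmt.Println("/api/v3/device/name/Modbus-TCP-Device/" + url.PathEscape(name))
	// Prints: /api/v3/device/name/Modbus-TCP-Device/line-a%2Ftest:value
}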
These instructions are for Developers and Contributors to get and run EdgeX Foundry. If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User.
EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability. EdgeX consists of a collection of reference implementation services and SDK tools. The micro services and SDKs are written in Go or C. These documentation pages provide a developer with the information and instructions to get and run EdgeX Foundry in development mode - that is running natively outside of containers and with the intent of adding to or changing the existing code base.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#what-you-need","title":"What You Need","text":""},{"location":"getting-started/Ch-GettingStartedDevelopers/#hardware","title":"Hardware","text":"EdgeX Foundry is an operating system (OS) and hardware (HW)-agnostic edge software platform. See the reference page for platform requirements. These provide guidance on a minimal platform to run the EdgeX platform. However, as a developer, you may find that additional memory, disk space, and improved CPU are essential to building and debugging.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#software","title":"Software","text":"Developers need to install the following software to get, run and develop EdgeX Foundry micro services:
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#git","title":"Git","text":"Use this free and open source version control (SVC) system to download (and upload) the EdgeX Foundry source code from the project's GitHub repositories. See https://git-scm.com/downloads for download and install instructions. Alternative tools (Easy Git for example) could be used, but this document assumes use of git and leaves how to use alternative SVC tools to the reader.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#redis","title":"Redis","text":"By default, EdgeX Foundry uses Redis (version 5 starting with the Geneva release) as the persistence mechanism for sensor data as well as metadata about the devices/sensors that are connected. See Redis Documentation for download and installation instructions.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#docker-optional","title":"Docker (Optional)","text":"If you intend to create Docker images for your updated or newly created EdgeX services, you need to install Docker. See https://docs.docker.com/install/ to learn how to install Docker. If you are new to Docker, the same web site provides you educational information.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#additional-programming-tools-and-next-steps","title":"Additional Programming Tools and Next Steps","text":"Depending on which part of EdgeX you work on, you need to install one or more programming languages (Go, C, etc.) and associated tooling. These tools are covered under the documentation specific to each type of development.
Please refer to the EdgeX Foundry versioning policy for information on how EdgeX services are released and how EdgeX services are compatible with one another. Specifically, device services (and the associated SDK), application services (and the associated app functions SDK), and client tools (like the EdgeX CLI and UI) can have independent minor releases, but these services must be compatible with the latest major release of EdgeX.
"},{"location":"getting-started/Ch-GettingStartedDevelopers/#long-term-support","title":"Long Term Support","text":"Please refer to the EdgeX Foundry LTS policy for information on support of EdgeX releases. The EdgeX community does not offer support on any non-LTS release outside of the latest release.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/","title":"Getting Started using Docker","text":""},{"location":"getting-started/Ch-GettingStartedDockerUsers/#introduction","title":"Introduction","text":"These instructions are for users to get and run EdgeX Foundry using the latest stable Docker images.
If you wish to get the latest builds of EdgeX Docker images (prior to releases), then see the EdgeX Nexus Repository guide.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#get-run-edgex-foundry","title":"Get & Run EdgeX Foundry","text":""},{"location":"getting-started/Ch-GettingStartedDockerUsers/#install-docker-docker-compose","title":"Install Docker & Docker Compose","text":"To run Dockerized EdgeX, you need to install Docker first. See https://docs.docker.com/engine/install/ to learn how to install Docker. If you are new to Docker, the same web site provides you educational information. The following short video is also very informative https://www.youtube.com/watch?time_continue=3&v=VhabrYF1nms
Use Docker Compose to orchestrate the fetch (or pull), install, and start the EdgeX micro service containers. Also use Docker Compose to stop the micro service containers. See: https://docs.docker.com/compose/ to learn more about Docker Compose and https://docs.docker.com/compose/install/linux/ to install it.
You do not need to be an expert with Docker (or Docker Compose) to get and run EdgeX. This guide provides the steps to get EdgeX running in your environment. Some knowledge of Docker and Docker Compose is nice to have, but not required. Basic Docker and Docker Compose commands provided here enable you to run, update, and diagnose issues within EdgeX.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#select-a-edgex-foundry-compose-file","title":"Select a EdgeX Foundry Compose File","text":"After installing Docker and Docker Compose, you need a EdgeX Docker Compose file. EdgeX Foundry has over a dozen micro services, each deployed in its own Docker container. This file is a manifest of all the EdgeX Foundry micro services to run. The Docker Compose file provides details about how to run each of the services. Specifically, a Docker Compose file is a manifest file, which lists:
The EdgeX development team provides Docker Compose files for each release. Visit the project's GitHub and find the edgex-compose repository. This repository holds all of the EdgeX Docker Compose files for each of the EdgeX releases/versions. The Compose files for each release are found in separate branches. Click on the main
button to see all the branches.
The edgex-compose repository contains branches for each release. Select the release branch to locate the Docker Compose files for each release.
Locate the branch containing the EdgeX Docker Compose file for the version of EdgeX you want to run.
Note
The main
branch contains the Docker Compose files that use artifacts created from the latest code submitted by contributors (from the night builds). Most end users should avoid using these Docker Compose files. They are work-in-progress. Users should use the Docker Compose files for the latest version of EdgeX.
In each edgex-compose branch, you will find several Docker Compose files (all with a .yml extension). The name of the file will suggest the type of EdgeX instance the Compose file will help set up. The table below provides a list of the Docker Compose filenames for the main
version. Find the Docker Compose file that matches:
Once you have selected the release branch of edgex-compose you want to use, download it using your favorite tool. The examples below use wget to fetch the no-security Docker Compose file from the main branch.
x86: wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty.yml -O docker-compose.yml\n
ARM64: wget https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty-arm64.yml -O docker-compose.yml\n
Note
The commands above fetch the Docker Compose to a file named 'docker-compose.yml' in the current directory. Docker Compose commands look for a file named 'docker-compose.yml' by default. You can use an alternate file name but then must specify that file name when issuing Docker Compose commands. See Compose reference documentation for help.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#generate-a-custom-docker-compose-file","title":"Generate a custom Docker Compose file","text":"The Docker Compose files in the ireland
branch contain the standard set of EdgeX services configured to use Redis
message bus and include only the Virtual and REST device services. If you need to have different device services running or use MQTT
for the message bus, you need a modified version of one of the standard Docker Compose files. You could manually add the device services to one of the existing EdgeX Compose files or, use the EdgeX Compose Builder tool to generate a new custom Compose file that contains the services you would like included. When you use Compose Builder, you don't have to worry about adding all the necessary ports, variables, etc. as the tool will generate the service elements in the file for you. The Compose Builder tool was added with the Hanoi release. You will find the Compose Builder tool in each of the release branches since Hanoi
under the compose-builder folder of those branches. You will also find a compose-builder folder on the main
branch for creating custom Compose files for the nightly builds.
Do the following to use this tool to generate a custom Compose file:
1. Get a copy of the edgex-compose repository: git clone https://github.com/edgexfoundry/edgex-compose.git\n
2. Change directories to the clone and checkout the appropriate release branch. Checkout of the Kamakura release branch is shown here. cd edgex-compose/\ngit checkout kamakura\n
3. Change directories to the compose-builder folder and then use the make gen <options>
command to generate your custom compose file. The generated Docker Compose file is named docker-compose.yaml
. Here are some examples: cd compose-builder/\nmake gen ds-mqtt mqtt-broker\n - Generates secure Compose file configured to use MQTT for the message bus, then adds the MQTT broker and the Device MQTT services. \n\nmake gen no-secty ds-modbus \n - Generates non-secure compose file with just the Device Modbus device service.\n\nmake gen no-secty arm64 ds-grove \n - Generates non-secure compose file for ARM64 with just the Device Grove device service.\n
See the README document in the compose-builder directory for details on all the available options. The Compose Builder is different per release, so make sure to consult the README in the appropriate release branch. See Ireland's Compose Builder README for details on the latest release Compose Builder options for make gen
.
Note
The generated Docker Compose file may require additional customization for your specific needs, such as environment overrides to set the appropriate Host IP address, etc.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#run-edgex-foundry","title":"Run EdgeX Foundry","text":"Now that you have the EdgeX Docker Compose file, you are ready to run EdgeX. Follow these steps to get the container images and start EdgeX!
In a command terminal, change directories to the location of your docker-compose.yml. Run the following command in the terminal to pull (fetch) and then start the EdgeX containers.
docker-compose up -d\n
Warning
If you are using Docker Compose Version 2, please replace docker-compose
with docker compose
before proceeding. This change should be applied to all the docker-compose
commands in this tutorial. See: https://www.docker.com/blog/announcing-compose-v2-general-availability/ for more information.
Info
If you wish, you can fetch the images first and then run them. This allows you to make sure the EdgeX images you need are all available before trying to run.
docker-compose pull\ndocker-compose up -d\n
Note
The -d option indicates you want Docker Compose to run the EdgeX containers in detached mode - that is to run the containers in the background. Without -d, the containers will all start in the terminal and in order to use the terminal further you have to stop the containers.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#verify-edgex-foundry-running","title":"Verify EdgeX Foundry Running","text":"In the same terminal, run the process status command shown below to confirm that all the containers downloaded and started.
docker-compose ps\n
If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above. If you are using a custom Compose file, your containers list may vary. Also note that some \"setup\" containers are designed to start and then exit after configuring your EdgeX instance.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#checking-the-status-of-edgex-foundry","title":"Checking the Status of EdgeX Foundry","text":"In addition to the process status of the EdgeX containers, there are a number of other tools to check on the health and status of your EdgeX instance.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#edgex-foundry-container-logs","title":"EdgeX Foundry Container Logs","text":"Use the command below to see the log of any service.
# see the logs of a service\ndocker-compose logs -f [compose-service-name]\n# example - core data\ndocker-compose logs -f data\n
See EdgeX Container Names for a list of the EdgeX Docker Compose service names.
A check of an EdgeX service log usually indicates if the service is running normally or has errors.
When you are done reviewing the content of the log, select Control-c to stop the output to your terminal.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#ping-check","title":"Ping Check","text":"Each EdgeX micro service has a built-in response to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. After the EdgeX micro service containers are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that is available.
http://localhost:[service port]/api/v3/ping\n
See EdgeX Default Service Ports for a list of the EdgeX default service ports.
\"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues.
"},{"location":"getting-started/Ch-GettingStartedDockerUsers/#consul-registry-check","title":"Consul Registry Check","text":"EdgeX uses the open source Consul project as its registry service. All EdgeX micro services are expected to register with Consul as they start. Going to Consul's dashboard UI enables you to see which services are up. Find the Consul UI at http://localhost:8500/ui.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/","title":"Getting Started - Go Developers","text":""},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#introduction","title":"Introduction","text":"These instructions are for Go Lang Developers and Contributors to get, run and otherwise work with Go-based EdgeX Foundry micro services. Before reading this guide, review the general developer requirements.
If you want to get the EdgeX platform and run it (but do not intend to change or add to the existing code base now) then you are considered a \"User\". Users should read: Getting Started as a User.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#what-you-need-for-go-development","title":"What You Need For Go Development","text":"In additional to the hardware and software listed in the Developers guide, you will need the following to work with the EdgeX Go-based micro services.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#go","title":"Go","text":"The open sourced micro services of EdgeX Foundry are written in Go 1.16. See https://golang.org/dl/ for download and installation instructions. Newer versions of Go are available and may work, but the project has not built and tested to these newer versions of the language. Older versions of Go, especially 1.10 or older, are likely to cause issues (EdgeX now uses Go Modules which were introduced with Go Lang 1.11).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-essentials","title":"Build Essentials","text":"In order to compile and build some elements of EdgeX, Gnu C compiler, utilities (like make), and associated librarires need to be installed. Some IDEs may already come with these tools. Some OS environments may already come with these tools. Others environments may require you install them. For Ubuntu environments, you can install a convenience package called Build Essentials.
Note
If you are installing Build Essentials, note that there is a build-essential package for each Ubuntu release. Search for 'build-essential' associated to your Ubuntu version via Ubuntu Packages Search.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#ide-optional","title":"IDE (Optional)","text":"There are many tool options for writing and editing Go Lang code. You could use a simple text editor. For more convenience, you may choose to use an integrated development environment (IDE). The list below highlights IDEs used by some of the EdgeX community (without any project endorsement).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#goland","title":"GoLand","text":"GoLand is a popular, although subscription-fee based, Go specific IDE. Learn how to purchase and download Go Land here: https://www.jetbrains.com/go/.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#visual-studio-code","title":"Visual Studio Code","text":"Visual Studio Code is a free, open source IDE developed by Microsoft. Find and download Visual Studio Code here: https://code.visualstudio.com/.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#atom","title":"Atom","text":"Atom is also a free, open source IDE used with many languages. Find and download Atom here: https://ide.atom.io/.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#get-the-code","title":"Get the code","text":"This part of the documentation assumes you wish to get and work with the key EdgeX services. This includes but is not limited to Core, Supporting, some security, and system management services. To work with other Go-based security services, device services, application services, SDKs, user interface, or other service you may need to pull in other EdgeX repository code. See other getting started guides for working with other Go-based services. As you will see below, you do not need to explicitly pull in dependency modules (whether EdgeX or 3rd party provided). Dependencies will automatically be pulled through the building process.
To work with the key services, you will need to download the source code from the EdgeX Go repository. The EdgeX Go-based micro services are all available in a single GitHub repository download. Once the code is pulled, the Go micro services are built and packaged as platform dependent executables. If Docker is installed, the executable can also be containerized for end user deployment/use.
To download the EdgeX Go code, first change directories to the location where you want to download the code (to edgex in the image below). Then use your git tool and request to clone this repository with the following command:
git clone https://github.com/edgexfoundry/edgex-go.git\n
Note
If you plan to contribute code back to the EdgeX project (as a Contributor), you are going to want to fork the repositories you plan to work with and then pull your fork versus the EdgeX repositories directly. This documentation does not address the process and procedures for working with an EdgeX fork, committing changes and submitting contribution pull requests (PRs). See some of the links below in the EdgeX Wiki for help on how to fork and contribute EdgeX code.
Furthermore, this pulls and works with the latest code from the main
branch. The main
branch contains code that is \"work in progress\" for the upcoming release. If you want to work with a specific release, checkout code from the specific release branch or tag(e.g. v2.0.0
, hanoi
, v1.3.11
, etc.)
To build the Go Lang services found in edgex-go, first change directories to the root of the edgex-go code
cd edgex-go\n
Second, use the community provided Makefile to build all the services in a single call make build\n
Info
The first time EdgeX builds, it will take longer than other builds as it has to download all dependencies. Depending on the size of your host machine, an initial build can take several minutes. Make sure the build completes and has no errors. If it does build, you should find new service executables in each of the service folders under the service directories found in the /edgex-go/cmd folder.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-foundry","title":"Run EdgeX Foundry","text":""},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-the-database","title":"Run the Database","text":"Several of the EdgeX Foundry micro services use a database. This includes core-data, core-metadata, support-scheduler, among others. Therefore, when working with EdgeX Foundry its a good idea to have the database up and running as a general rule. See the Redis Quick Start Guide for how to run Redis in a Linux environment (or find similar documentation for other environments).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex-services","title":"Run EdgeX Services","text":"With the services built, and the database up and running, you can now run each of the services. In this example, the services will run without security services turned on. If you wish to run with security, you will need to clone, build and run the security services.
In order to turn security off, first set the EDGEX_SECURITY_SECRET_STORE
environment variable to false with an export call.
Simply call
export EDGEX_SECURITY_SECRET_STORE=false\n
Next, move to the cmd
folder and then change folders to the service folder for the service you want to run. Start the executable (with default configuration) that is in that folder. For example, to start Core Metadata, enter the cmd/core-metadata folder and start core-metadata.
cd cmd/core-metadata/\n./core-metadata &\n
Note
When running the services from the command line, you will usually want to start the service with the &
character after the command. This makes the command run in the background. If you do not run the service in the background, then you will need to leave the service running in the terminal and open another terminal to start the other services.
This will start the EdgeX go service and leave it running in the background until you kill it. The log entries from the service will still display in the terminal. Watch the log entries for any ERROR indicators.
Info
To kill a service there are several options, but an easy means is to use pkill with the service name.
pkill core-metadata\n
Start as many services as you need in order to carry out your development, testing, etc. As an absolute minimal set, you will typically need to run core-metadata, core-data, core-command and a device service. Selection of the device service will depend on which physical sensor or device you want to use (or use the virtual device to simulate a sensor). Here are the set of commands to launch core-data and core-command (in addition to core-metadata above)
cd ../core-data/\n./core-data &\ncd ../core-command/\n./core-command &\n
Tip
You can run some services via Docker containers while working on specific services in Go. See Working in a Hybrid Environment for more details.
While the EdgeX services are running you can make EdgeX API calls to localhost
.
Info
No sensor data will flow yet as this just gets the key services up and running. To get sensor data flowing into EdgeX, you will need to get, build and run an EdgeX device service in a similar fashion. The community provides a virtual device service to test and experiment with (https://github.com/edgexfoundry/device-virtual-go).
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#verify-edgex-is-working","title":"Verify EdgeX is Working","text":"Each EdgeX micro service has a built-in respond to a \"ping\" HTTP request. In networking environments, use a ping request to check the reach-ability of a network resource. EdgeX uses the same concept to check the availability or reach-ability of a micro service. After the EdgeX micro services are running, you can \"ping\" any one of the micro services to check that it is running. Open a browser or HTTP REST client tool and use the service's ping address (outlined below) to check that is available.
http://localhost:[port]/api/v3/ping\n
See EdgeX Default Service Ports for a list of the EdgeX default service ports.
\"Pinging\" an EdgeX micro service allows you to check on its availability. If the service does not respond to ping, the service is down or having issues. The example above shows the ping of core-data.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#next-steps","title":"Next Steps","text":"Application services and some device services are also built in Go. To explore how to create and build EdgeX application and devices services in Go, head to SDK documentation covering these EdgeX elements.
IDEs offer many code editing conveniences. Go Land was specifically built to edit and work with Go code. So if you are doing any significant code work with the EdgeX Go micro services, you will likely find it convenient to edit, build, run, test, etc. from GoLand or other IDE.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#import-edgex","title":"Import EdgeX","text":"To bring in the EdgeX repository code into Go Land, use the File \u2192 Open... menu option in Go Land to open the Open File or Project Window.
In the \"Open File or Project\" popup, select the location of the folder containing your cloned edgex-go repo.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#open-the-terminal","title":"Open the Terminal","text":"From the View menu in Go Land, select the Terminal menu option. This will open a command terminal from which you can issue commands to install the dependencies, build the micro services, run the micro services, etc.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#build-the-edgex-micro-services","title":"Build the EdgeX Micro Services","text":"Run \"make build\" in the Terminal view (as shown below) to build the services. This can take a few minutes to build all the services.
Just as when running make build from the command line in a terminal, the micro service executables that get built in GoLand's terminal will be created in each of the service folders under the service directories found in the /edgex-go/cmd folder.
"},{"location":"getting-started/Ch-GettingStartedGoDevelopers/#run-edgex","title":"Run EdgeX","text":"With all the micro services built, you can now run EdgeX services. You may first want to make sure the database is running. Then, set any environment variables, change directories to the /cmd and service subfolder, and run the service right from the the terminal (same as in Run EdgeX Services).
You can now call on the service APIs to make sure they are running correctly. Namely, call on http://localhost:\\[service port\\]/api/v3/ping
to see each service respond to the simplest of requests.
In some cases, as a developer or contributor, you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run all the micro services. There is an alternative approach! You can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a \"hybrid\" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from Docker containers. This page outlines how to work in a hybrid development environment.
As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-and-run-the-edgex-docker-containers","title":"Get and Run the EdgeX Docker Containers","text":"Since we plan to work with the virtual device service in this example, you don't need or want to run the virtual device service. You will run all the other services via Docker Compose.
Based on the instructions found in the Getting Started using Docker, locate and download the appropriate Docker Compose file for your development environment. Next, issue the following commands to start the EdgeX containers and then stop the virtual device service (which is the service you are working on in this example).
docker-compose up -d \ndocker-compose stop device-virtual\n
Run the EdgeX containers and then stop the service container that you are going to work on - in this case the virtual device service container.
Note
These notes assume you are working with the EdgeX Minnesota or later release. It also assumes you have downloaded the appropriate Docker Compose file and have named it docker-compose.yml
so you don't have to specify the file name each time you run a Docker Compose command. Some versions of EdgeX may require other or additional containers to run.
Tip
You can also use the EdgeX Compose Builder tool to create a custom Docker Compose file with just the services you want. See the Compose Builder documentation on and checkout the Compose Builder tool in GitHub.
Run the command below to confirm that all the containers have started and that the virtual device container is no longer running.
docker-compose ps\n
With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. However, the practice could be applied to any service.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#get-the-service-code","title":"Get the service code","text":"Per Getting Started Go Developers, pull the micro service code you want to work on from GitHub. In this example, we use the latest released tag for device-virtual-go as the micro service that is going to be worked on. The main branch is the development branch for the next release. The latest release tag should always be used so you are worked with the most recent stable code. The release tags can be found here. Release tags are those tags to do not have -dev
in the name.
git clone --branch <latest-release-tag> https://github.com/edgexfoundry/device-virtual-go.git\n
"},{"location":"getting-started/Ch-GettingStartedHybrid/#build-the-service-code","title":"Build the service code","text":"At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service.
cd device-virtual-go/\nmake build\n
Clone the service from Github, make your code changes and then build the service locally.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#run-the-service-code-natively","title":"Run the service code natively.","text":"The executable created by the make build
command is found in the cmd folder of the service. Change folders to the location of the executable. Set any environment variables needed depending on your EdgeX setup. In this example, we did not start the security elements so we need to set EDGEX_SECURITY_SECRET_STORE
to false
in order to turn off security. Finally, run the service right from a terminal.
cd cmd\nexport EDGEX_SECURITY_SECRET_STORE=false\n./device-virtual -cp -d -o\n
Note
The -cp
flag tells the service to use the Configuration Provider. This is required so that the service can pull the common configuration. The -d
flag tells the service to run in developer mode (aka hybrid mode) so that any Host
names in configuration for dependent services are automatically changed from their Docker network names to localhost
allowing the service to find the dependent services. The -o
flag tells the service to overwrite the configuration in the Configuration Provider with the configuration from the local file (only needed when the service was previously run in Docker).
EdgeX 3.0
Common configuration is new in EdgeX 3.0. EdgeX services now have a reduced local configuration file that only contains the services' private configuration. All other configuration settings are now in the common configuration. See the Service Configuration section for more details.
Change folders to the service's cmd/ folder, set env vars, and then execute the service executable in the cmd folder.
"},{"location":"getting-started/Ch-GettingStartedHybrid/#check-the-results","title":"Check the results","text":"At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Because Core Metadata callbacks do not work in the hybrid environment, the virtual device service will not receive the Add Device callbacks on the initial run after creating them in Core Metadata. The simple work around for this issue is to stop (Ctrl-c
from the terminal) and restart the virtual device service (again with ./device-virtual -cp -d
execution).
The virtual device service log after stopping and restarting.
Give the virtual device a few seconds or so to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling on the Core Data API that checks the count of events in Core Data.
http://localhost:59880/api/v3/event/count\n
For this example, you can check that the virtual device service is sending data into Core Data by checking the event count.
Note
If you choose, you can also import the service into GoLand and then code and run the service from GoLand. Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/","title":"C SDK","text":"In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some of the SDK framework and work necessary to complete a device service without actually having a device to talk to.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#install-dependencies","title":"Install dependencies","text":"See the Getting Started - C Developers guide to install the necessary tools and infrastructure needed to develop a C service.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#get-the-edgex-device-sdk-for-c","title":"Get the EdgeX Device SDK for C","text":"The next step is to download and build the EdgeX device service SDK for C.
First, clone the device-sdk-c from Github:
git clone -b v3.0.1 https://github.com/edgexfoundry/device-sdk-c.git\ncd ./device-sdk-c\n
Note
The clone command above has you pull v3.0.1 of the C SDK which is the version compatible with the Minnesota release.
Then, build the device-sdk-c:
make\n
For this guide, you use the example template provided by the C SDK as a starting point for a new device service. You modify the device service to generate random integer values.
Begin by copying the template example source into a new directory named example-device-c
:
mkdir -p ../example-device-c/res/profiles\nmkdir -p ../example-device-c/res/devices\ncp ./src/c/examples/template.c ../example-device-c\ncd ../example-device-c\n
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#build-your-device-service","title":"Build your Device Service","text":"Now you are ready to build your new device service using the C SDK you compiled in an earlier step.
Tell the compiler where to find the C SDK files:
export CSDK_DIR=../device-sdk-c/build/release/_CPack_Packages/Linux/TGZ/csdk-3.0.1\n
Note
The exact path to your compiled CSDK_DIR may differ depending on the tagged version number on the SDK. The version of the SDK can be found in the ./device-sdk-c/VERSION file. In the example above, the Minnesota release of 3.0.1 is used.
Now build your device service executable:
gcc -I$CSDK_DIR/include -I/opt/iotech/iot/1.5/include -L$CSDK_DIR/lib -L/opt/iotech/iot/1.5/lib -o device-example-c template.c -lcsdk -liot\n
If everything is working properly, a device-example-c
executable will be created in the directory.
Up to now you've been building the example device service provided by the C SDK. In order to change it to a device service that generates random numbers, you need to modify your template.c
method template_get_handler. Replace the following code:
for (uint32_t i = 0; i < nreadings; i++)\n{\n/* Log the attributes for each requested resource */\niot_log_debug (driver->lc, \" Requested reading %u:\", i);\ndump_attributes (driver->lc, requests[i].resource->attrs);\n/* Fill in a result regardless */\nreadings[i].value = iot_data_alloc_string (\"Template result\", IOT_DATA_REF);\n}\nreturn true;\n
with this code:
for (uint32_t i = 0; i < nreadings; i++)\n{\nconst char *rdtype = iot_data_string_map_get_string (requests[i].resource->attrs, \"type\");\nif (rdtype)\n{\nif (strcmp (rdtype, \"random\") == 0)\n{\n/* Set the reading as a random value between 0 and 100 */\nreadings[i].value = iot_data_alloc_i32 (rand() % 100);\n}\nelse\n{\n*exception = iot_data_alloc_string (\"Unknown sensor type requested\", IOT_DATA_REF);\nreturn false;\n}\n}\nelse\n{\n*exception = iot_data_alloc_string (\"Unable to read value, no \\\"type\\\" attribute given\", IOT_DATA_REF);\nreturn false;\n}\n}\nreturn true;\n
Here the reading value is set to a random signed integer. Various iot_data_alloc_
functions are defined in the iot/data.h
header allowing readings of different types to be generated.
A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it.
Follow these steps to create a device profile for the simple random number generating device service.
Explore the files in the device-sdk-c/src/c/examples/res/profiles folder. Note the example TemplateProfile.json device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources
in the file represent properties of a device (properties like SensorOne, SensorTwo and Switch).
A pre-created device profile for the random number device is provided in this documentation. This is supplied in the alternative file format .yaml. Download random-generator.yaml and save the file to the ./res/profiles
folder.
Open the random-generator.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber
. Note how the association of a type to the deviceResource. In this case, the device profile informs EdgeX that RandomNumber
will be a Int32. In real world IoT situations, this deviceResource list could be extensive and filled with many deviceResources all different types of data.
Device Service accepts pre-defined devices to be added to EdgeX during device service startup.
Follow these steps to create a pre-defined device for the simple random number generating device service.
A pre-created device for the random number device is provided in this documentation. Download random-generator-devices.json and save the file to the ./res/devices
folder.
Open the random-generator-devices.json file in a text editor. Note how the file contents represent an actual device with its properties (properties like Name, ProfileName, AutoEvents). In this example, the device described has a profileName: RandNum-Device
. In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile
Now update the configuration for the new device service. This documentation provides a new configuration.yaml file. This configuration file changes the port the service operates on so as not to conflict with other device services.
Download configuration.yaml and save the file to the ./res folder.
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#custom-structured-configuration","title":"Custom Structured Configuration","text":"C Device Services support structured custom configuration as part of the [Driver]
section in the configuration.yaml file.
View the main
function of template.c
. The confparams
variable is initialized with default values for three test parameters. These values may be overridden by entries in the configuration file or by environment variables in the usual way. The resulting configuration is passed to the init
function when the service starts.
Configuration parameters X
, Y/Z
and Writable/Q
correspond to configuration file entries as follows:
[Writable]\n [Writable.Driver]\n Q = \"foo\"\n\n[Driver]\n X = \"bar\"\n [Driver.Y]\n Z = \"baz\"\n
Entries in the writable section can be changed dynamically if using the registry; the reconfigure
callback will be invoked with the new configuration when changes are made.
In addition to strings, configuration entries may be integer, float or boolean typed. Use the different iot_data_alloc_
functions when setting up the defaults as appropriate.
Now you have your new device service, modified to return a random number, a device profile that will tell EdgeX how to read that random number, as well as a configuration file that will let your device service register itself and its device profile with EdgeX, and begin taking readings every 10 seconds.
Rebuild your Device Service to reflect the changes that you have made:
gcc -I$CSDK_DIR/include -I/opt/iotech/iot/1.5/include -L$CSDK_DIR/lib -L/opt/iotech/iot/1.5/lib -o device-example-c template.c -lcsdk -liot\n
"},{"location":"getting-started/Ch-GettingStartedSDK-C/#run-your-device-service","title":"Run your Device Service","text":"Allow your newly created Device Service, which was formed out of the Device Service C SDK, to create sensor mimicking data which it then sends to EdgeX.
Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call:
docker compose -f docker-compose-no-secty.yml up -d\n
Back in your custom device service directory, tell your device service where to find the libcsdk.so
and libiot.so
:
export LD_LIBRARY_PATH=$CSDK_DIR/lib:/opt/iotech/iot/1.5/lib\n
Run your device service:
./device-example-c\n
You should now see your device service having its /Random command called every 10 seconds. You can verify that it is sending data into EdgeX by watching the logs of the edgex-core-data
service:
docker logs -f edgex-core-data\n
Which would print an event record every time your device service is called.
You can manually generate an event using curl to query the device service directly:
curl 0:59999/api/v3/device/name/RandNum-Device01/RandomNumber\n
Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX:
http://localhost:59880/api/v3/event/device/name/RandNum-Device01?limit=100
This request asks core data to provide the last 100 events/readings associated with the RandNum-Device01 device.
In this guide, you create a simple device service that generates a random number as a means to simulate getting data from an actual device. In this way, you explore some SDK framework and work necessary to complete a device service without actually having a device to talk to.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#install-dependencies","title":"Install dependencies","text":"See the Getting Started - Go Developers guide to install the necessary tools and infrastructure needed to develop a GoLang service.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#get-the-edgex-device-sdk-for-go","title":"Get the EdgeX Device SDK for Go","text":"Follow these steps to create a folder on your file system, download the Device SDK, and get the GoLang device service SDK on your system.
Create a collection of nested folders, ~/edgexfoundry
on your file system. This folder will hold your new Device Service. In Linux, create a directory with a single mkdir command
mkdir -p ~/edgexfoundry\n
In a terminal window, change directories to the folder just created and pull down the SDK in Go with the commands as shown.
cd ~/edgexfoundry\ngit clone --depth 1 --branch v2.0.0 https://github.com/edgexfoundry/device-sdk-go.git\n
Note
The clone command above has you pull v2.0.0 of the Go SDK which is the version associated to Ireland. There are later releases of EdgeX, and it is always a good idea to pull and use the latest version associated with the major version of EdgeX you are using. You may want to check for the latest released version by going to https://github.com/edgexfoundry/device-sdk-go and look for the latest release.
Create a folder that will hold the new device service. The name of the folder is also the name you want to give your new device service. Standard practice in EdgeX is to prefix the name of a device service with device-
. In this example, the name 'device-simple' is used.
mkdir -p ~/edgexfoundry/device-simple\n
Copy the example code from device-sdk-go to device-simple:
cd ~/edgexfoundry\ncp -rf ./device-sdk-go/example/* ./device-simple/\n
Copy Makefile to device-simple:
cp ./device-sdk-go/Makefile ./device-simple\n
cp ./device-sdk-go/version.go ./device-simple/\n
After completing these steps, your device-simple folder should look like the listing below.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#start-a-new-device-service","title":"Start a new Device Service","text":"With the device service application structure in place, time now to program the service to act like a sensor data fetching service.
Change folders to the device-simple directory.
cd ~/edgexfoundry/device-simple\n
Open main.go file in the cmd/device-simple folder with your favorite text editor. Modify the import statements. Replace github.com/edgexfoundry/device-sdk-go/v2/example/driver
with github.com/edgexfoundry/device-simple/driver
in the import statements. Also replace github.com/edgexfoundry/device-sdk-go/v2
with github.com/edgexfoundry/device-simple
. Save the file when you have finished editing.
Open Makefile found in the base folder (~/edgexfoundry/device-simple) in your favorite text editor and make the following changes
Replace:
MICROSERVICES=example/cmd/device-simple/device-simple\n
with:
MICROSERVICES=cmd/device-simple/device-simple\n
Change:
GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-sdk-go/v2.Version=$(VERSION)\"\n
to refer to the new service with:
GOFLAGS=-ldflags \"-X github.com/edgexfoundry/device-simple.Version=$(VERSION)\"\n
Change:
example/cmd/device-simple/device-simple:\ngo mod tidy\n$(GOCGO) build $(GOFLAGS) -o $@ ./example/cmd/device-simple\n
to:
cmd/device-simple/device-simple:\ngo mod tidy\n$(GOCGO) build $(GOFLAGS) -o $@ ./cmd/device-simple\n
Save the file.
Enter the following command to create the initial module definition and write it to the go.mod file:
GO111MODULE=on go mod init github.com/edgexfoundry/device-simple\n
Use an editor to open and edit the go.mod file created in ~/edgexfoundry/device-simple. Add the code highlighted below to the bottom of the file. This code indicates which version of the device service SDK and the associated EdgeX contracts module to use.
require (\ngithub.com/edgexfoundry/device-sdk-go/v2 v2.0.0\ngithub.com/edgexfoundry/go-mod-core-contracts/v2 v2.0.0\n)\n
Note
You should always check the go.mod file in the latest released version SDK for the correct versions of the Go SDK and go-mod-contracts to use in your go.mod.
To ensure that the code you have moved and updated still works, build the device service. In a terminal window, make sure you are still in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command:
make build\n
If there are no errors, your service is ready for you to add custom code to generate data values as if there was a sensor attached.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#customize-your-device-service","title":"Customize your Device Service","text":"The device service you are creating isn't going to talk to a real device. Instead, it is going to generate a random number where the service would ordinarily make a call to get sensor data from the actual device.
Locate the simpledriver.go file in the /driver folder and open it with your favorite editor.
In the import() area at the top of the file, add \"math/rand\" under \"time\".
Locate the HandleReadCommands() function in this same file (simpledriver.go). Find the following lines of code in this file (around line 139):
if reqs[0].DeviceResourceName == \"SwitchButton\" {\ncv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeBool, s.switchButton) res[0] = cv\n}\n
Add the conditional (if-else) code in front of the above conditional:
if reqs[0].DeviceResourceName == \"randomnumber\" {\ncv, _ := sdkModels.NewCommandValue(reqs[0].DeviceResourceName, common.ValueTypeInt32, int32(rand.Intn(100)))\nres[0] = cv\n} else\n
The first line of code checks that the current request is for a resource called \"RandomNumber\". The second line of code generates an integer (between 0 and 100) and uses that as the value the device service sends to EdgeX -- mimicking the collection of data from a real device. It is here that the device service would normally capture some sensor reading from a device and send the data to EdgeX. The HandleReadCommands is where you'd need to do some customization work to talk to the device, get the latest sensor values and send them into EdgeX.
Save the simpledriver.go file
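As an aside, if your device service needs to handle more than one requested resource per call, the same pattern generalizes by looping over all of reqs rather than only reqs[0]. The sketch below is illustrative only (it is not one of the tutorial steps and omits the SwitchButton handling from the example driver); it assumes the models, sdkModels, common and fmt imports already present in simpledriver.go, plus the math/rand import added above.

func (s *SimpleDriver) HandleReadCommands(deviceName string, protocols map[string]models.ProtocolProperties,
	reqs []sdkModels.CommandRequest) ([]*sdkModels.CommandValue, error) {
	res := make([]*sdkModels.CommandValue, len(reqs))
	for i, req := range reqs {
		switch req.DeviceResourceName {
		case "randomnumber":
			// Mimic reading a sensor by generating a random value between 0 and 100.
			cv, err := sdkModels.NewCommandValue(req.DeviceResourceName, common.ValueTypeInt32, int32(rand.Intn(100)))
			if err != nil {
				return nil, err
			}
			res[i] = cv
		default:
			return nil, fmt.Errorf("resource %s is not handled by this example", req.DeviceResourceName)
		}
	}
	return res, nil
}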
A device profile is a YAML file that describes a class of device to EdgeX. General characteristics about the type of device, the data these devices provide, and how to command the device are all in a device profile. The device profile tells the device service what data gets collected from the device and how to get it.
Follow these steps to create a device profile for the simple random number generating device service.
Explore the files in the cmd/device-simple/res/profiles folder. Note the example Simple-Driver.yaml device profile that is already in this folder. Open the file with your favorite editor and explore its contents. Note how deviceResources
in the file represent properties of a device (properties like SwitchButton, X, Y and Z rotation).
A pre-created device profile for the random number device is provided in this documentation. Download random-generator.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/profiles
folder.
Open the random-generator.yaml file in a text editor. In this device profile, the device described has a deviceResource: RandomNumber
. Note the association of a type with the deviceResource. In this case, the device profile informs EdgeX that RandomNumber will be an INT32. In real world IoT situations, this deviceResource list could be extensive. Rather than a single deviceResource, you might find this section filled with many deviceResources, each associated with a different type.
Device Service accepts pre-defined devices to be added to EdgeX during device service startup.
Follow these steps to create a pre-defined device for the simple random number generating device service.
Explore the files in the cmd/device-simple/res/devices folder. Note the example simple-device.yaml that is already in this folder. Open the file with your favorite editor and explore its contents. Note how DeviceList
in the file represent an actual device with its properties (properties like Name, ProfileName, AutoEvents).
A pre-created device for the random number device is provided in this documentation. Download random-generator-devices.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res/devices
folder.
Open the random-generator-devices.yaml file in a text editor. In this example, the device described has a ProfileName: RandNum-Device
. In this case, the device informs EdgeX that it will be using the device profile we created in Creating your Device Profile
Go Device Services provide /api/v3/validate/device
API to validate device's ProtocolProperties. This feature allows Device Services whose protocol has strict rule to validate their devices before adding them into EdgeX.
Go SDK provides DeviceValidator
interface:
// DeviceValidator is a low-level device-specific interface implemented\n// by device services that validate device's protocol properties.\ntype DeviceValidator interface {\n// ValidateDevice triggers device's protocol properties validation, returns error\n// if validation failed and the incoming device will not be added into EdgeX.\nValidateDevice(device models.Device) error\n}\n
By implementing DeviceValidator
interface whenever a device is added or updated, ValidateDevice
function will be called to validate incoming device's ProtocolProperties and reject the request if validation failed.
Now update the configuration for the new device service. This documentation provides a new configuration.yaml file. This configuration file:
Download configuration.yaml and save the file to the ~/edgexfoundry/device-simple/cmd/device-simple/res
folder (overwrite the existing configuration file). Change the host address of the device service to your system's IP address.
Warning
In the configuration.yaml, change the host address (around line 14) to the IP address of the system host. This allows core metadata to callback to your new device service when a new device is created. Because the rest of EdgeX, to include core metadata, will be running in Docker, the IP address of the host system on the Docker network must be provided to allow metadata in Docker to call out from Docker to the new device service running on your host system.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#custom-structured-configuration","title":"Custom Structured Configuration","text":"Go Device Services can now define their own custom structured configuration section in the configuration.yaml
file. Any additional sections in the configuration file are ignored by the SDK when it parses the file for the SDK defined sections.
This feature allows a Device Service to define and watch it's own structured section in the service's configuration file.
The SDK
provides the following APIs to enable structured custom configuration:
LoadCustomConfig(config UpdatableConfig, sectionName string) error
Loads the service's custom configuration from the local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration the first time the service is started, if the service is using the Configuration Provider. The UpdateFromRaw
interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.
ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
Starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider, the UpdateWritableFromRaw interface will be called on the custom configuration to apply the updates and then signal that the changes occurred via changedCallback.
See the Device MQTT Service for an example of using the new Structured Custom Configuration capability.
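For illustration, the following minimal sketch shows how a service might wire up both APIs. It assumes the v3 Go Device SDK's pkg/interfaces package; the section name MyCustom, its fields, and the loadCustomConfig helper are hypothetical:

package driver

import (
	"fmt"

	"github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"
)

// ServiceConfig wraps the illustrative "MyCustom" section of configuration.yaml.
type ServiceConfig struct {
	MyCustom MyCustomConfig
}

// MyCustomConfig holds the hypothetical custom settings for this example.
type MyCustomConfig struct {
	ResourceNames string
	SomeValue     int
}

// UpdateFromRaw satisfies the SDK's UpdatableConfig interface and is called when
// the custom section is loaded from the Configuration Provider.
func (c *ServiceConfig) UpdateFromRaw(rawConfig interface{}) bool {
	configuration, ok := rawConfig.(*ServiceConfig)
	if !ok {
		return false
	}
	*c = *configuration
	return true
}

// loadCustomConfig loads the custom section and watches it for changes.
func loadCustomConfig(sdk interfaces.DeviceServiceSDK) (*ServiceConfig, error) {
	config := &ServiceConfig{}

	if err := sdk.LoadCustomConfig(config, "MyCustom"); err != nil {
		return nil, fmt.Errorf("unable to load 'MyCustom' configuration: %w", err)
	}

	err := sdk.ListenForCustomConfigChanges(&config.MyCustom, "MyCustom", func(raw interface{}) {
		updated, ok := raw.(*MyCustomConfig)
		if !ok {
			return
		}
		// Apply the updated writable values; react to the change as needed.
		config.MyCustom = *updated
	})
	if err != nil {
		return nil, fmt.Errorf("unable to watch 'MyCustom' configuration: %w", err)
	}

	return config, nil
}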
The following built-in device service metrics are collected by the Device SDK
See Device Service Configuration Properties for details on configuring device service metrics.
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#custom","title":"Custom","text":"The Custom Device Service Metrics capability allows for device service developers to define, collect and report their own service metrics beyond the common built-in service metrics supplied by the Device SDK.
The following are the steps to collect and report service metrics:
Determine the metric type that needs to be collected
counter - Track the integer count of something
gauge - Track the integer value of something
gaugeFloat64 - Track the float64 value of something
timer - Track the time it takes to accomplish a task
histogram - Track the integer value variance of something
Create an instance of the metric type from github.com/rcrowley/go-metrics
myCounter = gometrics.NewCounter()
myGauge = gometrics.NewGauge()
myGaugeFloat64 = gometrics.NewGaugeFloat64()
myTimer = gometrics.NewTimer()
myHistogram = gometrics.NewHistogram(gometrics.NewUniformSample(<reservoir size>))
Determine if there are any tags to report along with your metric. This is not common, so nil
is typically passed for the tags map[string]string
parameter in the next step.
Register your metric(s) with the MetricsManager from the sdk
reference. See Device SDK API for more details:
service.MetricsManager().Register(\"MyCounterName\", myCounter, nil)
Collect the metric
myCounter.Inc(someIntvalue)
myCounter.Dec(someIntvalue)
myGauge.Update(someIntvalue)
myGaugeFloat64.Update(someFloatvalue)
myTimer.Update(someDuration)
myTimer.Time(func() { /* do something */ })
myTimer.UpdateSince(someTimeValue)
myHistogram.Update(someIntvalue)
Configure reporting of the service's metrics. See Writable.Telemetry
configuration details in the Common Configuration section for more detail.
Example - Service Telemetry Configuration
Writable:
  Telemetry:
    Interval: "30s"
    Metrics: # All service's metric names must be present in this list.
      MyCounterName: true
      MyGaugeName: true
      MyGaugeFloat64Name: true
      MyTimerName: true
      MyHistogram: true
    Tags: # Contains the service level tags to be attached to all the service's metrics
      Gateway: "my-iot-gateway" # Tag must be added here or via Consul; an Env Override can only change an existing value, not add new ones.
Note
The metric names used in the above configuration (to enable or disable reporting of a metric) must match the metric name used when the metric is registered. A partial match is acceptable, i.e. the registered metric name need only start with the configured name.
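Putting the steps above together, a minimal sketch is shown below. The sdk parameter is assumed to be the service's DeviceServiceSDK reference from the v3 Go Device SDK, and the registerMetrics/recordEvent helpers are hypothetical; the metric name matches the MyCounterName entry in the telemetry configuration example above:

package driver

import (
	gometrics "github.com/rcrowley/go-metrics"

	"github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"
)

var eventsSent gometrics.Counter

// registerMetrics creates the custom metric and registers it with the SDK's MetricsManager.
func registerMetrics(sdk interfaces.DeviceServiceSDK) error {
	eventsSent = gometrics.NewCounter()
	// nil is passed for the tags parameter since no metric-specific tags are needed here.
	return sdk.MetricsManager().Register("MyCounterName", eventsSent, nil)
}

// recordEvent is a hypothetical helper called each time the service sends an event.
func recordEvent() {
	eventsSent.Inc(1)
}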
"},{"location":"getting-started/Ch-GettingStartedSDK-Go/#retrieving-secrets","title":"Retrieving Secrets","text":"The Go Device SDK provides the SecretProvider.GetSecret()
API to retrieve the Device Service's secrets. See the Device MQTT Service for an example of using the SecretProvider.GetSecret()
API. Note that this code implements a retry loop allowing time for the secret(s) to be pushed into the service's SecretStore
via the /secret endpoint. See Storing Secrets section for more details.
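A minimal sketch of such a retry loop is shown below, assuming the v3 Go Device SDK; the secret name credentials, the keys, the getCredentials helper, and the timing values are illustrative assumptions only:

package driver

import (
	"fmt"
	"time"

	"github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"
)

// getCredentials retries until the "credentials" secret has been pushed into the
// service's SecretStore (e.g. via the /secret endpoint) or the timeout expires.
func getCredentials(sdk interfaces.DeviceServiceSDK) (map[string]string, error) {
	timeout := time.NewTimer(30 * time.Second) // overall wait time (assumption)
	defer timeout.Stop()

	for {
		secrets, err := sdk.SecretProvider().GetSecret("credentials", "username", "password")
		if err == nil {
			return secrets, nil
		}

		sdk.LoggingClient().Warnf("secrets not available yet: %v; retrying", err)

		select {
		case <-timeout.C:
			return nil, fmt.Errorf("timed out waiting for secrets: %w", err)
		case <-time.After(5 * time.Second): // retry interval (assumption)
		}
	}
}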
Just as you did in the Build your Device Service step above, build the device-simple service, which creates the executable program that is your device service. In a terminal window, make sure you are in the device-simple folder (the folder containing the Makefile). Build the service by issuing the following command:
cd ~/edgexfoundry/device-simple\nmake build\n
If there are no errors, your service is created and put in the ~/edgexfoundry/device-simple/cmd/device-simple
folder. Look for the device-simple
executable in the folder.
Allow the newly created device service, which was formed out of the Device Service Go SDK, to create sensor-mimicking data that it then sends to EdgeX:
Follow the Getting Started using Docker guide to start all of EdgeX. From the folder containing the docker-compose file, start EdgeX with the following call (we're using non-security EdgeX in this example):
docker compose -f docker-compose-no-secty.yml up -d
In a terminal window, change directories to the device-simple's cmd/device-simple folder and run the new device-simple service.
cd ~/edgexfoundry/device-simple/cmd/device-simple\n./device-simple -cp -d\n
This starts the service and immediately displays log entries in the terminal.
EdgeX 3.0
In EdgeX 3.0, services must be provided with a flag indicating where the new common configuration can be found. In most cases this will be -cp/--configProvider
specifying that the Configuration Provider should be used for configuration. Alternatively, the -cc/--commonConfig
flag can be used to specify a file that contains the common configuration. In addition, when running in hybrid mode, the -d/--dev
flag tells the service that it is running in hybrid mode and to override the Host
names for dependencies with localhost
. See Command Line Options for more details.
Using a browser, enter the following URL to see the event/reading data that the service is generating and sending to EdgeX:
http://localhost:59880/api/v3/event/device/name/RandNum-Device01
This request asks core data to provide the events associated with the RandNum-Device01 device.
The EdgeX device service software development kits (SDKs) help developers create new device connectors for EdgeX. An SDK provides the common scaffolding that each device service needs. This allows developers to create new device/sensor connectors more quickly.
The EdgeX community already provides many device services. However, there is no way the community can provide for every protocol and every sensor. Even if the EdgeX community provided a device service for every protocol, your use case, sensor, or security infrastructure might require customization. Thus, the device service SDKs provide the means to extend or customize EdgeX's device connectivity.
EdgeX provides two SDKs to help developers create new device services. Most of EdgeX is written in Go and C. Thus, there's a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, the community may offer alternate language SDKs.
The SDKs are libraries that get incorporated into a new microservice. They make writing a new device service much easier. By importing the SDK library into your new device service project, developers are free to focus on the code that is specific to communicating with the device via the protocol of the device.
The code in the SDK handles the other details, such as:
- initialization of the device service
- getting the service configured
- sending sensor data to core data
- managing communications with core metadata
- and much more
The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX. For example, it makes sure the service registers with the EdgeX registry service when it starts.
Use the GoLang SDK
Use the C SDK
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/","title":"Getting Started using Snaps","text":""},{"location":"getting-started/Ch-GettingStartedSnapUsers/#introduction","title":"Introduction","text":"Snaps are application packages that are easy to install and update while being secure, cross\u2010platform and self-contained. Snaps can be installed on any Linux distribution with snap support.
Quick Start
Spinning up EdgeX with snaps is extremely easy. For demonstration purposes, let's install the platform, along with the virtual device service and EdgeX UI.
1) Install the platform snap, Device Virtual and EdgeX UI:
snap install edgexfoundry edgex-device-virtual edgex-ui\n
This installs the latest stable version of the snaps. The installation section provides more explanations.

2) Disable security in each of the installed snaps:
snap set edgexfoundry security=false
snap set edgex-device-virtual config.edgex-security-secret-store=false
snap set edgex-ui config.edgex-security-secret-store=false
Beware that this leaves the services at risk! We do it here only to simplify the quick start. Refer to disabling security for details.
3) Start the services:
# start Core and Support services in the platform snap
sudo snap start edgexfoundry.consul edgexfoundry.redis \
edgexfoundry.core-common-config-bootstrapper \
edgexfoundry.core-data edgexfoundry.core-metadata edgexfoundry.core-command \
edgexfoundry.support-scheduler edgexfoundry.support-notifications

# start Device Virtual
snap start edgex-device-virtual

# start EdgeX UI
snap start edgex-ui
You should now be able to access the UI using a browser at http://localhost:4000
To run the services with security, skip step 2 and refer to platform snap for starting all platform services and adding an API Gateway user to generate a JWT. The JWT is needed to access the secured EdgeX UI.
The following sub-sections provide generic instructions for installation, configuration, and managing services using snaps.
For the list of EdgeX snaps and specific instructions, please refer to the EdgeX Snaps section.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#installation","title":"Installation","text":"When using the snap CLI, the installation is possible by simply executing:
snap install <snap>\n
This is similar to setting --channel=latest/stable
or shorthand --stable
and will install the latest stable release of a snap. In this case, latest/stable
is the channel, composed of latest
track and stable
risk level.
To install a specific version with long term support (e.g. 2.1), or to install a beta or development release, refer to the store page for the snap, choose install, and then pick the desired channel. The store page also provides instructions for installation on different Linux distributions as well as the list of supported CPU architectures.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#configuration","title":"Configuration","text":"EdgeX snaps are packaged with default service configuration files. In certain cases, few configuration fields are overridden within the snap for snap-specific deployment requirements.
There are a few ways to configure snapped services. In simple cases, it should be sufficient to modify the default config files before starting the services for the first time and use config overrides to change supported settings afterwards. Please refer below to learn about the different configuration methods.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-files","title":"Config files","text":"The default configuration files are typically placed at /var/snap/<snap>/current/config
. Upon a successful startup of an EdgeX service, the server configuration file (typically named configuration.yaml
) is uploaded to the Registry by default. After that, the local server configuration file will no longer be read and any modifications will not be applied. At this point, the configurations can only be changed via the Registry or by setting environment variables. Refer to config registry or config overrides for details.
For device services, the Device and Device Profile files are submitted to Core Metadata upon initial startup. Refer to the documentation of Device Services for details.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-registry","title":"Config registry","text":"The configurations that are uploaded to the Registry (i.e. Consul by default) can be modified using Consul's UI or kv REST API. The Registry is a Core services, part of the Platform Snap.
Changes to configurations in the Registry are loaded by the service at startup. If the service has already started, a restart is required to load new configurations. Configurations that are in the writable section get loaded not only at startup, but also during runtime. In other words, changes to the writable configurations are loaded automatically without a restart.
Please refer to Common Configuration and Configuration and Registry Providers for more information.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-provider-snap","title":"Config provider snap","text":"Most EdgeX snaps have a content interface which allows another snap to seed it with configuration files. This is useful for replacing all the configuration files in a service snap via a config provider snap without manual user interaction. This should not to be confused with the EdgeX Config Provider.
A config provider snap could be a standalone package with all the necessary configurations for multiple snaps. It will expose one or more interface slots to allow connections from consumer plugs. The config provider snap can be released to the store just like any other snap. Upon a connection between provider and consumer snaps, the packaged config files get mounted inside the consumer snap, to be used by services.
Please refer to edgex-config-provider, for an example.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#config-overrides","title":"Config overrides","text":"EdgeX snap options schemeSince EdgeX v2.2, the snaps use the following scheme for the snap configuration options:
apps.<app>.<type>.<key>\n
where:

- <app> is the name of the app (service, executable)
- <type> is the type of option with respect to the app
- <key> is the key for the option. It could contain a path to set a value inside an object, e.g. x.y=z sets {"x": {"y": "z"}}.

We call these app options because of the apps.<app>
prefix which is used to apply configurations to specific services. This prefix can be dropped to apply the configuration globally to all apps within a snap!
This scheme is used for config overrides (described in this section) as well as autostart described in managing services, among others.
To know more about snap configuration in general, refer here.
The EdgeX services allow overriding server configurations using environment variables. Moreover, the services read EdgeX Common Environment Variables that override configurations which are hardcoded in source code or set as command-line options.
The EdgeX snaps provide a mechanism that reads stored key-value options and internally exports environment variables to specific services and apps.
The snap options for setting environment variables use the following format:
- apps.<app>.config.<env-var>: setting an app-specific value (e.g. apps.core-data.config.service-port=1000).
- config.<env-var>: setting a global value (e.g. config.service-host=localhost or config.writable-loglevel=DEBUG).

where:

- <app> is the name of the app (service, executable)
- <env-var> is a lowercase, dash-separated mapping from the uppercase, underscore-separated environment variable name (e.g. X_Y -> x-y). The reason for such mapping is that uppercase and underscore characters are not supported as config keys for snaps.

Mapping examples:

| Snap config key | Environment Variable | Service configuration YAML |
|---|---|---|
| service-port | SERVICE_PORT | Service: Port: |
| clients-core-data-host | CLIENTS_CORE_DATA_HOST | Clients: core-data: Host: |
| edgex-startup-duration | EDGEX_STARTUP_DURATION | - |
| edgex-add-secretstore-tokens | EDGEX_ADD_SECRETSTORE_TOKENS | - |
Example
To change the service port of the core-data
service on edgexfoundry
snap to 8080:
snap set edgexfoundry apps.core-data.config.service-port=8080\n
This would internally export SERVICE_PORT=8080
to core-data
service.
Note
The services load the set configuration on startup. If a service has already started, a restart will be necessary to load the configurations.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#examples","title":"Examples","text":""},{"location":"getting-started/Ch-GettingStartedSnapUsers/#disabling-security","title":"Disabling security","text":"Warning
Disabling security is NOT recommended, unless for demonstration purposes, or when there are other means to secure the services.
The platform snap does NOT allow security to be re-enabled. The only way to re-enable it is to re-install the snap.
Disabling security involves a few steps:
The platform snap, which includes all the reference security components, provides a convenience option to help disable security:
sudo snap set edgexfoundry security=false\n
The above command results in stopping everything (if active), disabling the security components (by setting their autostart options to false), as well as setting EDGEX_SECURITY_SECRET_STORE=false
internally so that the included core/support services stop using the Secret Store. Now, to start the platform without security components, either start the non-security services selectively:
sudo snap start edgexfoundry.consul edgexfoundry.redis \
edgexfoundry.core-common-config-bootstrapper \
edgexfoundry.core-data edgexfoundry.core-metadata edgexfoundry.core-command \
edgexfoundry.support-scheduler edgexfoundry.support-notifications
or set the autostart option globally:
sudo snap set edgexfoundry autostart=true\n
After disabling the security on the platform, the external services should be similarly configured by setting EDGEX_SECURITY_SECRET_STORE=false
so that they don't attempt to initialize the security.
Example
To disable security for the edgex-ui snap:
snap set edgex-ui config.edgex-security-secret-store=false\nsnap restart edgex-ui\n
Note
All snapped services except for the API Gateway are restricted by default to listening on localhost (127.0.0.1). On the platform snap, the API Gateway proxies external requests to internal services. Since disabling security on the platform snap disables the API Gateway, the service endpoints will no longer be accessible from other systems. They will still be accessible on the local machine and reachable by other local services.
If you need to make an insecure service accessible remotely, set the bind address of the service to the IP address of the desired network interface on the local machine. If you trust all your interfaces and want the services to accept connections from all of them, set it to 0.0.0.0
.
By default, core-data
listens on 127.0.0.1:59880
:
$ sudo lsof -nPi :59880\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\ncore-data 30944 root 12u IPv4 198726 0t0 TCP 127.0.0.1:59880 (LISTEN)\n
To set the bind address of core-data
in the platform snap to 0.0.0.0
:
snap set edgexfoundry apps.core-data.config.service-serverbindaddr=\"0.0.0.0\"\n
Now, core data is listening on all interfaces (*:59880
):
$ sudo lsof -nPi :59880\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\ncore-data 30548 root 12u IPv6 185059 0t0 TCP *:59880 (LISTEN)\n
To set it for all services inside the platform snap:
snap set edgexfoundry config.service-serverbindaddr=\"0.0.0.0\"\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#using-mqtt-message-bus","title":"Using MQTT message bus","text":"The default message bus for EdgeX services is Redis Pub/Sub. If you prefer to use MQTT instead of Redis, change the message bus configurations using snap options.
Example
To switch to an insecure MQTT message bus for all core services (inside the platform snap) and the Device Virtual using snap options, set the following:
snap set edgexfoundry config.messagequeue-protocol="mqtt" \
config.messagequeue-port=1883 \
config.messagequeue-type="mqtt" \
config.messagequeue-authmode="none"

snap set edgex-device-virtual config.messagequeue-protocol="mqtt" \
config.messagequeue-port=1883 \
config.messagequeue-type="mqtt" \
config.messagequeue-authmode="none"
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#disabling-registry-and-config-provider","title":"Disabling registry and config provider","text":"Consul is the default Registry and Config Provider in EdgeX. To disable both, it would be sufficient to disable Consul and configure the services not to use Registry and Config Provider.
Example
To disable Consul and configure all services (inside the platform snap) not to use Registry and Config provider using snap options, set the following:
snap set edgexfoundry apps.consul.autostart=false
snap set edgexfoundry config.edgex-use-registry=false
snap set edgexfoundry config.edgex-configuration-provider=none
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#managing-services","title":"Managing services","text":"The services of a snap can be started/stopped/restarted using the snap CLI. When starting/stopping, you can additionally set them to enable/disable which configures whether or not the service should also start on boot.
To list the services and check their status:
snap services <snap>\n
To start and optionally enable services:
# all services\nsnap start --enable <snap>\n\n# one service\nsnap start --enable <snap>.<app>\n
Similarly, a service can be stopped and optionally disabled using snap stop --disable
.
Note
The service autostart overrides the status and startup setting of the services. In other words, if autostart is set to true/false, it will apply that setting every time the snap is re-configured, e.g. when executing snap set|unset
.
To restart services, e.g. to load the configurations:
# all services\nsnap restart <snap>\n\n# one service\nsnap restart <snap>.<app>\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#service-autostart","title":"Service autostart","text":"The EdgeX snaps provide a mechanism to change the default startup of services (e.g. enabled instead of disabled).
The EdgeX snaps allow this change using snap options, following the scheme below:
- apps.<app>.autostart=true|false: changing the default startup of one app
- autostart=true|false: changing the default startup of all apps

where <app> is the name of the app which can run as a service.
Disable the autostart of support-scheduler on the platform snap:
snap set edgexfoundry apps.support-scheduler.autostart=false\n
Enable the autostart of all Device USB Camera services:
snap set edgex-device-usb-camera autostart=true
The autostart options are also useful for changing the startup behavior when seeding the snap from a Gadget on Ubuntu Core.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#debugging","title":"Debugging","text":"The service logs can be queried using the snap log
command.
For example, to query 100 lines and follow:
# all services\nsnap logs -n=100 -f <snap>\n\n# one service\nsnap logs -n=100 -f <snap>.<app>\n
Check snap logs --help
for details. To query not only the service logs, but also the snap logs (incl. hook apps such as install and configure), use journalctl
:
sudo journalctl -n 100 -f | grep <snap>\n
Info
The verbosity of service logs is INFO by default. This can be changed by overriding the log level using the WRITABLE_LOGLEVEL
environment variable using snap config overrides apps.<app>.config.writable-loglevel
or globally as config.writable-loglevel
.
The following snaps are maintained by the EdgeX working groups:
To find all EdgeX snaps on the public Snap Store, search by keyword.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#platform-snap","title":"Platform Snap","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The main platform snap, simply called edgexfoundry
contains all reference core and security services along with support-scheduler and support-notifications.
Upon installation, the services are stopped and disabled. They can be started altogether or selectively; see managing services. For example, to start all the services, run:
sudo snap start edgexfoundry\n
For the configuration of services, refer to configuration. Read below for other deployment-related instructions about this snap.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#adding-api-gateway-users","title":"Adding API Gateway users","text":"The API gateway will pass any request that authenticates using a signed identity token from the EdgeX secret store.
The baseline implementation in EdgeX 3.0 uses Vault identity and the 'userpass' authentication engine to create users, though EdgeX adopters are free to add their own Vault identities using authentication methods of their choice. To add a new user locally, use the snapped secrets-config
utility.
To get the usage help:
edgexfoundry.secrets-config proxy adduser -h\n
You may also refer to the secrets-config proxy documentation.

Creating an example user
Use secrets-config
to add an example
user (note: always specify --useRootToken
for the snap deployment of EdgeX):
sudo edgexfoundry.secrets-config proxy adduser --user example --useRootToken \
| jq --raw-output '.password' \
> password.txt
On success, the above command writes the system-generated password for example
user to password.txt
. If the "adduser" command is run multiple times, each run will overwrite the password from the previous run with a new random password.

Generating a JWT token (ID Token) for the example user
Some additional work is required to generate a JWT that is usable for API gateway authentication.
username=example
password=$(cat password.txt)

vault_token=$(curl --silent --show-error "http://localhost:8200/v1/auth/userpass/login/${username}" --data "{\"password\":\"${password}\"}" \
| jq --raw-output '.auth.client_token')

curl --silent --show-error -H "Authorization: Bearer ${vault_token}" "http://localhost:8200/v1/identity/oidc/token/${username}" \
| jq --raw-output '.data.token' \
> id-token.txt
The ID Token gets written to id-token.txt
. Once you have the token, you can access the services via the API Gateway (the vault token can be discarded). To obtain a new JWT token once the current one is expired, repeat the above snippet of code.
Calling an API on behalf of example user
curl --insecure https://localhost:8443/core-data/api/v3/ping -H \"Authorization: Bearer $(cat id-token.txt)\"\n
Output: {\"apiVersion\" : \"v3\",\"timestamp\":\"Mon May 15 16:45:55 CEST 2023\",\"serviceName\":\"core-data\"}
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#accessing-consul","title":"Accessing Consul","text":"Consul API and UI can be accessed using the consul token (Secret ID). For the snap, token is the value of SecretID
typically placed in a JSON file at /var/snap/edgexfoundry/current/secrets/consul-acl-token/mgmt_token.json
.
Example
To get the token:
sudo cat /var/snap/edgexfoundry/current/secrets/consul-acl-token/mgmt_token.json \
| jq -r '.SecretID' \
> consul-token.txt
The output gets written to consul-token.txt
. Try it out locally:
curl --silent --show-err http://localhost:8500/v1/kv/edgex/v3/core-data/Service/Port -H \"X-Consul-Token:$(cat consul-token.txt)\"\n
Through the API Gateway, we need to pass both the Consul token and the Secret Store token obtained in the Adding API Gateway users examples.
curl --insecure --silent --show-err https://localhost:8443/consul/v1/kv/edgex/v3/core-data/Service/Port -H \"X-Consul-Token:$(cat consul-token.txt)\" -H \"Authorization: Bearer $(cat id-token.txt)\"\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#changing-tls-certificates","title":"Changing TLS certificates","text":"The API Gateway setup generates a self-signed certificate with a short expiration by default.
The JWT authentication token that is consumed by the proxy is sensitive, and it is important that measures are taken to ensure that clients do not disclose the JWT to unauthorized parties. For this reason, the default certificate and key should be replaced with a certificate and key that are trusted by connecting clients.
The certificate and key can be replaced locally. They are located at:
/var/snap/edgexfoundry/current/nginx/nginx.crt
/var/snap/edgexfoundry/current/nginx/nginx.key
Changes to the files should be followed by reloading Nginx: sudo snap restart --reload edgexfoundry.nginx
Alternatively, the certificate and key can be replaced using the snapped secrets-config
application. To get the usage help:
edgexfoundry.secrets-config proxy tls -h\n
Refer to the secrets-config proxy documentation.

Example
Given the following files created outside the scope of this document:
- server.crt: user-provided certificate (replacing the default)
- server.key: user-provided private key (replacing the default)
- ca.crt: Certificate Authority certificate (that signed server.crt, directly or indirectly)

For example, to generate a CA and issue a certificate valid for 30 days:
# Generate the Certificate Authority (CA) Private Key
openssl ecparam -name prime256v1 -genkey -noout -out ca.key
# Generate the Certificate Authority Certificate
openssl req -new -x509 -sha256 -key ca.key -out ca.crt -subj "/CN=getting-started-ca"
# Generate the Server Certificate Private Key
openssl ecparam -name prime256v1 -genkey -noout -out server.key
# Generate the Server Certificate Signing Request
openssl req -new -sha256 -key server.key -out server.csr -subj "/CN=localhost"
# Generate the Server Certificate
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 30 -sha256
Perform the following steps:
Copy server.crt
and server.key
to the snap
sudo cp server.crt server.key /var/snap/edgexfoundry/common/\n
We do this to allow temporary access to the files by the confined application. Instead of temporarily adding the files to the snap, the files can be read directly from the root user's home (/root
) or removable media, after granting the home or removable-media permissions.

Add the new certificate files:
sudo edgexfoundry.secrets-config proxy tls \\\n--targetFolder /var/snap/edgexfoundry/current/nginx \\\n--inCert /var/snap/edgexfoundry/common/server.crt \\\n--inKey /var/snap/edgexfoundry/common/server.key
Reload Nginx:
sudo snap restart --reload edgexfoundry.nginx\n
Try it out:
curl --verbose --cacert ca.crt https://localhost:8443/core-data/api/v3/ping\n
The output should include a message indicating that the request is unauthorized. This means that TLS is set up correctly, but the request lacks the required authentication. See Adding API Gateway users. In the TLS information, look for the server certificate's issuer and make sure it matches your CA. For example, issuer: CN=getting-started-ca
.
The --cacert
can be omitted if the CA is available in root certificates (e.g. CA-signed or pre-installed CA certificate).
The services inside standalone snaps (e.g. device, app snaps) automatically receive a Secret Store token when:
The edgex-secretstore-token
content interface provides the mechanism to automatically supply tokens to connected snaps.
Execute the following command to check the status of connections:
sudo snap connections edgexfoundry\n
To manually connect the edgexfoundry's plug to a standalone snap's slot:
snap connect edgexfoundry:edgex-secretstore-token <snap>:edgex-secretstore-token\n
Note that the token has a limited expiry time of 1h by default. The connection and service startup should happen within the validity period.
To better understand the snap connections, read the interface management documentation.
Extend the default Secret Store token TTL
The TOKENFILEPROVIDER_DEFAULTTOKENTTL environment variable can be set to override the default time to live (TTL) of the Secret Store tokens. This is useful when the microservice consumers of the tokens are expected to start after a delay that is longer than the default TTL.
This can be achieved in the snap by setting the equivalent tokenfileprovider-defaulttokenttl
config option:
sudo snap set edgexfoundry app-options=true
sudo snap set edgexfoundry apps.security-secretstore-setup.config.tokenfileprovider-defaulttokenttl=72h

# Re-start the oneshot setup service to re-generate tokens:
sudo snap start edgexfoundry.security-secretstore-setup
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#edgex-ui","title":"EdgeX UI","text":"| Installation | Managing Services | Debugging | Source |
For usage instructions, please refer to the Graphical User Interface (GUI) guide.
The service is not started by default. Please refer to configuration and managing services.
Once started, the UI will be reachable locally and by default at: http://localhost:4000
A valid JWT token is required to access the UI; follow Adding API Gateway users steps to generate a token. In development environments, the UI access control can be disabled as described in disabling security.
To enable all the functionalities of the UI, the following services should be running:
For example, to start/install the support services:
sudo snap start edgexfoundry.support-scheduler
sudo snap start edgexfoundry.support-notifications
sudo snap install edgex-ekuiper
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#edgex-ekuiper","title":"EdgeX eKuiper","text":"| Installation | Managing Services | Debugging | Source |
For the documentation of the standalone EdgeX eKuiper snap, visit the README.
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#app-service-configurable","title":"App Service Configurable","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-app-service-configurable/current/config/\n\u2514\u2500\u2500 res\n \u251c\u2500\u2500 external-mqtt-trigger\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 functional-tests\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 http-export\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 metrics-influxdb\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 mqtt-export\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 push-to-core\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 configuration.yaml\n \u2514\u2500\u2500 rules-engine\n \u2514\u2500\u2500 configuration.yaml\n
Filtering devices using snap options

App Service Configurable provides various event filtering options. For example, to filter by device names Random-Integer-Device
and Random-Binary-Device
using snap options:
snap set edgex-app-service-configurable config.writable-pipeline-executionorder="FilterByDeviceName, SetResponseData"
snap set edgex-app-service-configurable config.writable-pipeline-functions-filterbydevicename-parameters-devicenames="Random-Integer-Device, Random-Binary-Device"
snap set edgex-app-service-configurable config.writable-pipeline-functions-filterbydevicename-parameters-filterout=true
Please refer to App Service Configurable guide for detailed usage instructions.
Profile
Before you can start the service, you must select one of the available profiles using snap options.
For example, to set mqtt-export
profile using the snap CLI:
sudo snap set edgex-app-service-configurable profile=mqtt-export\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#app-rfid-llrp-inventory","title":"App RFID LLRP Inventory","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-app-rfid-llrp-inventory/current/config/\n\u2514\u2500\u2500 app-rfid-llrp-inventory\n \u2514\u2500\u2500 res\n \u2514\u2500\u2500 configuration.yaml\n
Aliases
The aliases need to be provided for the service to work. See Setting the Aliases.
For the snap, this can be done by (among other options) editing the configuration.yaml
file with the correct aliases, before startup| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-gpio/current/config\n\u2514\u2500\u2500 device-gpio\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 device.custom.gpio.yaml\n \u2514\u2500\u2500 profiles\n \u2514\u2500\u2500 device.custom.gpio.yaml\n
GPIO Access
This snap is strictly confined, which means that access to interfaces is subject to various security measures.
On a Linux distribution without snap confinement for GPIO (e.g. Raspberry Pi OS 11), the snap may be able to access the GPIO directly, without any snap interface and manual connections.
On Linux distributions with snap confinement for GPIO such as Ubuntu Core, the GPIO access is possible via the gpio interface, provided by a gadget snap. The official Raspberry Pi Ubuntu Core image includes that gadget. It is NOT possible to use this snap on Linux distributions that have the GPIO confinement but not the interface (e.g. Ubuntu Server 20.04), unless for development purposes.
In development environments, it is possible to install the snap in dev mode (using --devmode
flag which disables security confinement and automatic upgrades) to allow direct GPIO access.
The gpio
interface provides slots for each GPIO channel. The slots can be listed using:
$ sudo snap interface gpio\nname: gpio\nsummary: allows access to specific GPIO pin\nplugs:\n - edgex-device-gpio\nslots:\n - pi:bcm-gpio-0\n - pi:bcm-gpio-1\n - pi:bcm-gpio-10\n ...\n
The slots are not connected automatically. For example, to connect GPIO-17:
$ sudo snap connect edgex-device-gpio:gpio pi:bcm-gpio-17\n
Check the list of connections:
$ sudo snap connections\nInterface Plug Slot Notes\ngpio edgex-device-gpio:gpio pi:bcm-gpio-17 manual\n\u2026\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-modbus","title":"Device Modbus","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-modbus/current/config/\n\u2514\u2500\u2500 device-modbus\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 modbus.test.devices.yaml\n \u2514\u2500\u2500 profiles\n \u2514\u2500\u2500 modbus.test.device.profile.yml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-mqtt","title":"Device MQTT","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-mqtt/current/config/\n\u2514\u2500\u2500 device-mqtt\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 mqtt.test.device.yaml\n \u2514\u2500\u2500 profiles\n \u2514\u2500\u2500 mqtt.test.device.profile.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-rest","title":"Device REST","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-rest/current/config/\n\u2514\u2500\u2500 device-rest\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 sample-devices.yaml\n \u2514\u2500\u2500 profiles\n \u251c\u2500\u2500 sample-image-device.yaml\n \u251c\u2500\u2500 sample-json-device.yaml\n \u2514\u2500\u2500 sample-numeric-device.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-rfid-llrp","title":"Device RFID LLRP","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-rfid-llrp/current/config/\n\u2514\u2500\u2500 device-rfid-llrp\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u251c\u2500\u2500 profiles\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 llrp.device.profile.yaml\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 llrp.impinj.profile.yaml\n \u2514\u2500\u2500 provision_watchers\n \u251c\u2500\u2500 impinj.provision.watcher.yaml\n \u2514\u2500\u2500 llrp.provision.watcher.yaml\n
Subnet setup
The DiscoverySubnets
setting needs to be provided before a device discovery can occur. This can be done in a number of ways:
Using snap set
to set your local subnet information. Example:
sudo snap set edgex-device-rfid-llrp apps.device-rfid-llrp.config.app-custom.discovery-subnets=\"192.168.10.0/24\"\n\ncurl -X POST http://localhost:59989/api/v3/discovery\n
Using a config-provider-snap to set device configuration
Using the auto-configure
command.
This command finds all local network interfaces which are online and non-virtual and sets the value of DiscoverySubnets
in Consul. When running with security enabled, it requires a Consul token, so it needs to be run as follows:
# get Consul ACL token
CONSUL_TOKEN=$(sudo cat /var/snap/edgexfoundry/current/secrets/consul-acl-token/bootstrap_token.json | jq ".SecretID" | tr -d '"')
echo $CONSUL_TOKEN

# start the device service and connect the interfaces required for network interface discovery
sudo snap start edgex-device-rfid-llrp.device-rfid-llrp
sudo snap connect edgex-device-rfid-llrp:network-control
sudo snap connect edgex-device-rfid-llrp:network-observe

# run the network interface discovery, providing the Consul token
edgex-device-rfid-llrp.auto-configure $CONSUL_TOKEN
| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-snmp/current/config/\n\u2514\u2500\u2500 device-snmp\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 device.snmp.trendnet.TPE082WS.yaml\n \u2514\u2500\u2500 profiles\n \u251c\u2500\u2500 device.snmp.patlite.yaml\n \u251c\u2500\u2500 device.snmp.switch.dell.N1108P-ON.yaml\n \u2514\u2500\u2500 device.snmp.trendnet.TPE082WS.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-usb-camera","title":"Device USB Camera","text":"| Installation | Configuration | Managing Services | Debugging | Source |
This snap includes two services:
The services are not started by default. Please refer to configuration and managing services.
The snap uses the camera interface to access local USB camera devices. The interface management document describes how Snap interfaces are used to control the access to resources.
The default configuration files are installed at:
/var/snap/edgex-device-usb-camera/current/config\n\u251c\u2500\u2500 device-usb-camera\n\u2502 \u2514\u2500\u2500 res\n\u2502 \u251c\u2500\u2500 configuration.yaml\n\u2502 \u251c\u2500\u2500 devices\n\u2502 \u2502 \u251c\u2500\u2500 general.usb.camera.yaml.example\n\u2502 \u2502 \u2514\u2500\u2500 hp.w200.yaml.example\n\u2502 \u251c\u2500\u2500 profiles\n\u2502 \u2502 \u251c\u2500\u2500 general.usb.camera.yaml\n\u2502 \u2502 \u251c\u2500\u2500 hp.w200.yaml.example\n\u2502 \u2502 \u2514\u2500\u2500 jinpei.general.yaml.example\n\u2502 \u2514\u2500\u2500 provision_watchers\n\u2502 \u2514\u2500\u2500 generic.provision.watcher.yaml\n\u2514\u2500\u2500 rtsp-simple-server\n \u2514\u2500\u2500 config.yml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-virtual","title":"Device Virtual","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-virtual/current/config\n\u2514\u2500\u2500 device-virtual\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 devices.yaml\n \u2514\u2500\u2500 profiles\n \u251c\u2500\u2500 device.virtual.binary.yaml\n \u251c\u2500\u2500 device.virtual.bool.yaml\n \u251c\u2500\u2500 device.virtual.float.yaml\n \u251c\u2500\u2500 device.virtual.int.yaml\n \u2514\u2500\u2500 device.virtual.uint.yaml\n
"},{"location":"getting-started/Ch-GettingStartedSnapUsers/#device-onvif-camera","title":"Device ONVIF Camera","text":"| Installation | Configuration | Managing Services | Debugging | Source |
The service is not started by default. Please refer to configuration and managing services.
The default configuration files are installed at:
/var/snap/edgex-device-onvif-camera/current/config\n\u2514\u2500\u2500 device-onvif-camera\n \u2514\u2500\u2500 res\n \u251c\u2500\u2500 configuration.yaml\n \u251c\u2500\u2500 devices\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 camera.yaml.example\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 control-plane-device.yaml\n \u251c\u2500\u2500 profiles\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 camera.yaml\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 control-plane.profile.yaml\n \u2514\u2500\u2500 provision_watchers\n \u2514\u2500\u2500 generic.provision.watcher.yaml\n
"},{"location":"getting-started/Ch-GettingStartedUsers/","title":"Getting Started as a User","text":"This section provides instructions for Users to get EdgeX up and running. If you are a Developer, you should read Getting Started as a Developer.
EdgeX is a collection of more than a dozen micro services that are deployed to provide a minimal edge platform capability.
You can download EdgeX micro service source code and build your own micro services. However, if you do not have a need to change or add to EdgeX, then you do not need to download source code. Instead, you can download and run the pre-built EdgeX micro service artifacts.
The EdgeX community builds and creates Docker images as well as Snap packages with each release. The community also provides the latest unstable builds (prior to releases).
Please continue by referring to:
Released EdgeX Docker container images are available from Docker Hub. Please refer to the Getting Started using Docker for instructions related to stable releases.
In some cases, it may be necessary to get your EdgeX container images from the Nexus repository. The Linux Foundation manages the Nexus repository for the project.
Warning
Containers used from Nexus are considered \"work in progress\". There is no guarantee that these containers will function properly or function properly with other containers from the current release.
Nexus contains the EdgeX project staging and development container images. In other words, Nexus contains work-in-progress or pre-release images. These pre-release/work-in-progress Docker images are built nightly and made available at the following Nexus location:
nexus3.edgexfoundry.org:10004\n
"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#rationale-to-use-nexus-images","title":"Rationale To Use Nexus Images","text":"Reasons you might want to use container images from Nexus include:
A set of Docker Compose files has been created to allow you to get and use the latest EdgeX service images from Nexus. Find these Nexus \"Nightly Build\" Compose files in the main
branch of the edgex-compose
repository in GitHub. The EdgeX development team provides these Docker Compose files. As with the EdgeX release Compose files, you will find several different Docker Compose files that allow you to get the type of EdgeX instance setup based on:
Warning
The \"Nightly Build\" images are provided as-is and may not always function properly or with other EdgeX services. Use with caution and typically only if you are a developer/contributor to EdgeX. These images represent the latest development work and may not have been thoroughly tested or integrated.
"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-nexus-images","title":"Using Nexus Images","text":"The operations to pull the images and run the Nexus Repository containers are the same as when using EdgeX images from Docker Hub (see Getting Started using Docker).
To get container images from the Nexus Repository, in a command terminal, change directories to the location of your downloaded Nexus Docker Compose yaml. Rename the file to docker-compose.yml. Then run the following command in the terminal to pull (fetch) and then start the EdgeX Nexus-image containers.
docker compose up -d\n
"},{"location":"getting-started/Ch-GettingStartedUsersNexus/#using-a-single-nexus-image","title":"Using a Single Nexus Image","text":"In some cases, you may only need to use a single image from Nexus while other EdgeX services are created from the Docker Hub images. In this case, you can simply replace the image location for the selected image in your original Docker Compose file. The address of Nexus is nexus3.edgexfoundry.org at port 10004. So, if you wished to use the EdgeX core data image from Nexus, you would replace the name and location of the core data image edgexfoundry/core-data:2.0.0
with nexus3.edgexfoundry.org:10004/core-data:latest
in the Compose file.
Note
The example above replaces the Ireland core data service from Docker Hub with the latest core data image in Nexus.
"},{"location":"getting-started/native/Ch-BuildRunNative/","title":"Native Build and Run","text":"There are instances, in both development as well as production, where you need to run EdgeX \"natively.\" That is, you want to run EdgeX on the native operating system / hardware outside of any emulation, container platform, Docker, Docker Compose, Snaps, etc.. Per PC Magazine, running natively
\"is to execute software written for the computer's natural, basic mode of operation; for example, a program written for Windows running under Windows. Contrast with running a program under some type of emulation or simulation\".
The following guides will assist you in building and running EdgeX natively.
Alert
Please note that the rest of the EdgeX documentation, outside of these native build and run guides, focuses on running EdgeX in Docker containers or EdgeX snaps. Using containers or snaps are usually the easiest and preferred way to run EdgeX - especially when you are not a developer and not familiar with operating system commands, compiling code, building program artifacts, and running programs in an operating system.
Therefore, these native build and run guides do not contain every aspect or option for running EdgeX in native environments. They are meant as a quick start for more seasoned developers or administrators comfortable with running a system by setting up build tools/environments, pulling source code, building from source and running the program outputs (executable artifacts) of the build without the benefits and ease that container platforms and similar technology bring.
Warning
These build and run guides offer some assistance to seasoned developers or administrators to help build and run EdgeX in environments not always supported by the project. EdgeX was built to be platform independent. As such, we believe most of EdgeX can run on almost any environment (on any hardware architecture and almost any operating system). However, there are elements of the EdgeX platform that will not run on all operating systems. For example, Redis will not run on Windows OS natively and some device services are only capable of running on Linux distributions or ARM64 platforms.
Existence of these guides does not imply current or future support. Use of these guides should be used with care and with an understanding that they are the community's best effort to provide advanced developers with the means to begin their own custom EdgeX development.
"},{"location":"getting-started/native/Ch-BuildRunNative/#guides","title":"Guides","text":"Warning
This build and run guide offers some assistance to seasoned developers or administrators to help build and run EdgeX on Linux OS with ARM 32 hardware natively (not using Docker and not running with snaps). Running on ARM 32 is not supported by the project. EdgeX was built to be platform independent. As such, we believe most of EdgeX can run on almost any environment (on any hardware architecture and almost any operating system).
Existence of this guide does not imply current or future support. Use of this guide should be used with care and with an understanding that it is the community's best effort to provide advanced developers with the means to begin their own custom EdgeX development and execution on Linux distributions running on ARM 32 hardware.
This build and run guide shows you how to get, compile/build, execute and test EdgeX (including the core and supporting services, the configurable application service, eKuiper rules engine and a virtual device service) in Linux on ARM 32 hardware. Specifically, this guide was done using a Raspberry Pi 3 running Raspberry Pi OS - version 5.15. For the most part, the guide should assist in building and running EdgeX in almost any Linux distribution on almost any ARM 32 hardware, but some instructions will vary based on the nuances of the underlying distribution.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#environment","title":"Environment","text":"Building and running EdgeX on Linux natively will require you have:
sudo
or root accessThe following software is assumed to already be installed and available on the host platform. Follow the referenced guides if you need to install or setup this software. Please note, the commands to check for the required software documented below are correct, but the actual results of the check may vary per OS distribution and version.
Go Lang, version 1.17 or later as of the Kamakura release
How to check for existence and version on your machine
GCC Build Essentials (for C++)
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Consul, version 1.10 or later as of the Kamakura release
How to check for existence and version on your machine
Redis,version 6.2 or later as of the Kamakura release
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Git
How to check for existence and version on your machine
In this guide, you will be building and running EdgeX in \"non-secure\" mode. That is, you will be building and running the EdgeX platform without the security services and security configuration. An environmental variable, EDGEX_SECURITY_SECRET_STORE
, is set to indicate whether the EdgeX services are expected to initialize and use the secure secret store. By default, this variable is set to true
. Prior to building and running EdgeX, set this environment variable to false.
export EDGEX_SECURITY_SECRET_STORE=false
This can be done in the terminal from which you build and run EdgeX or you can set it in your user's profile to make an environment persist across terminal sessions. See How to Set Environment Variables in Linux for assistance.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#download-edgex-source","title":"Download EdgeX Source","text":"In order to build and run EdgeX micro services, you will first need to get the source code for the platform. Using git, clone the EdgeX repositories with the following commands:
Tip
You may wish to create a new folder and then issue these git commands from that folder so that all EdgeX code is neatly stored in a single folder.
git clone https://github.com/edgexfoundry/edgex-go.git\ngit clone https://github.com/edgexfoundry/device-virtual-go.git\ngit clone https://github.com/edgexfoundry/app-service-configurable.git\ngit clone https://github.com/lf-edge/ekuiper.git\ngit clone https://github.com/edgexfoundry/edgex-ui-go.git\n
Note that a new folder, named for the repository, gets created containing source code with each of the git clones above.
Warning
These git clone operations pull from the main branch of the EdgeX repositories. This is the current working branch in EdgeX development. See the git clone documentation for how to clone a specific named release branch or version tag.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-edgex-services","title":"Build EdgeX Services","text":"With the source code, you can now build the EdgeX services, GUI, as well as eKuiper rules engine.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-core-and-supporting-services","title":"Build Core and Supporting Services","text":"Most of the services are in the edgex-go
folder. This folder contains the code for the core and supporting services. A single command in this repository will build several of the services.
Enter the edgex-go
folder and issue the make build
command as shown below.
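A minimal sketch of these build steps, assuming the repositories were cloned into the current working directory:
cd edgex-go\nmake build\n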
Warning
Depending on the amount of memory your system has, building the services in edgex-go
can take several minutes (in the case of a Raspberry Pi 3, the edgex-go services can take as much as 30-45 minutes to build and a device service takes about 10-15 minutes).
Note
Building the services in the edgex-go folder will also build some services (such as the security services) that are not used in this guide, but issuing a single command is the easiest way to build the services needed without having to build them one by one.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-the-virtual-device-service","title":"Build the Virtual Device Service","text":"The virtual device service simulates devices/sensors sending data to EdgeX as if it was a \"thing\". This guide uses the virtual device service to exemplify how other devices services can be built and run.
Enter the device-virtual-go
folder and issue the make build
command as shown below.
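As a sketch, assuming you start from the folder containing all of the cloned repositories:
cd device-virtual-go\nmake build\n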
The configurable application service helps prepare device/sensor data for enterprise or cloud systems. It also prepares data for use by the rules engine - eKuiper
Enter the app-service-configurable
folder and issue the make build
command as shown below.
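Again as a sketch from the folder containing the cloned repositories:
cd app-service-configurable\nmake build\n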
eKuiper, a sister Linux Foundation project under LF Edge, is the reference implementation rules engine for EdgeX.
Enter the ekuiper
folder and issue the make build_with_edgex
command as shown below.
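A sketch of the eKuiper build step from the folder containing the cloned repositories:
cd ekuiper\nmake build_with_edgex\n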
Note
eKuiper also provides pre-built binaries that can be downloaded and used without the need to build from source.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#build-the-gui","title":"Build the GUI","text":"EdgeX provides a graphical user interface for exploring a single instance of the EdgeX platform. The GUI makes it easier to work with EdgeX and see sample data coming from sensors. It provides a means to check that EdgeX is working correctly, monitor EdgeX and even make some configuration changes.
Enter the edgex-ui-go
folder and issue the make build
command as shown below.
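As with the other repositories, a sketch of the GUI build step:
cd edgex-ui-go\nmake build\n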
Provided everything built correctly and without issue, you can now start your EdgeX services one at a time. First make sure Redis Server is running. If Redis is not running, start it before the other services. If it is running, you can start each of the EdgeX services in order as listed below.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-consul","title":"Start Consul","text":"Start Consul Agent with the following command.
nohup consul agent -ui -bootstrap -server -client 0.0.0.0 -data-dir=tmp/consul &\n
The nohup
is used to execute the command and ignore all SIGHUP (hangup) signals. The &
says to execute the process in the background. Both nohup
and &
will be used to run each of the services so that the same terminal can be used and the output will be directed to local nohup.out log files.
If Consul is running correctly, you should be able to reach the Consul UI through a browser at http://(host address):8500
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-core-metadata","title":"Start Core Metadata","text":"Each of core and supporting EdgeX services are located in edgex-go/cmd
under a subfolder by the service name. In the first case, core-metadata is located in edgex-go/cmd/core-metadata
. Change directories to the core-metadata service subfolder and then run the executable found in the subfolder with -cp
and -registry
command line options as shown below.
cd edgex-go/cmd/core-metadata/\nnohup ./core-metadata -cp=consul.http://localhost:8500 -registry &\n
The -cp=consul.http://localhost:8500
command line parameter tells core-metadata to use Consul and where to find Consul running. The -registry
command line parameter tells core-metadata to use (and register with) the registry service. Both of these command line parameters will be used when launching all EdgeX services.
In a similar fashion, enter each of the other core and supporting service folders in edgex-go/cmd
and launch the services.
cd ../core-data\nnohup ./core-data -cp=consul.http://localhost:8500 -registry &\ncd ../core-command\nnohup ./core-command -cp=consul.http://localhost:8500 -registry &\ncd ../support-notifications/\nnohup ./support-notifications -cp=consul.http://localhost:8500 -registry &\ncd ../support-scheduler/\nnohup ./support-scheduler -cp=consul.http://localhost:8500 -registry &\n
Tip
If you still have the Consul UI up, you should see each of the EdgeX core and supporting services listed in Consul's Services
page with green check marks next to them suggesting they are running.
The configurable application service is located in the root of app-service-configurable
folder.
The configurable application service is started in a similar way as the other EdgeX services. The configurable application service is going to be used to route data to the rules engine. Therefore, an additional command line parameter (p
) is added to its launch command to tell the app service to use the rules engine configuration and profile.
nohup ./app-service-configurable -cp=consul.http://localhost:8500 -registry -p=rules-engine &\n
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-the-virtual-device-service","title":"Start the Virtual Device Service","text":"The virtual device service is also started in similar way as the other EdgeX services. The virtual device service manufactures data as if it was to come from a sensor and sends that data into the rest of EdgeX. By default, the virtual device service will generate random numbers (integers, unsigned integers, floats), booleans and even binary data as simulated sensor data. The virtual device service is located in the device-virtual-go/cmd
folder.
Change directories to the virtual device service's cmd
folder and then launch the service with the command shown below.
nohup ./device-virtual -cp=consul.http://localhost:8500 -registry &\n
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-the-gui","title":"Start the GUI","text":"The EdgeX graphical user interface (GUI) provides an easy to use visual tool to monitor data passing through EdgeX services. It also provides some capability to change an EdgeX instance's configuration or metadata. The EdgeX GUI is located in the edgex-ui-go/cmd/edgex-ui-server
folder.
Change directories to the GUI's cmd/edgex-ui-server
folder and then launch the GUI with the command shown below.
nohup ./edgex-ui-server &\n
If the GUI is running correctly, you should be able to reach the GUI through a browser at http://(host address):4000. It may take a few seconds for the GUI to initialize once you hit the URL.
Note
Some elements of the GUI will not work as you do not have all available EdgeX services running. Notably, the System Management service and its executor are not running so the System view of the GUI will display an error. By default, the System Management service and its executor operate by checking on the other services memory, CPU, etc. via Docker Stats. In this case, since you are not running Docker containers, the System Management service would not function.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#start-ekuiper","title":"Start eKuiper","text":"eKuiper is the reference implementation rules engine that is typically run with EdgeX by default. It is a lightweight, easy to use rules engine. Rules can be established using SQL. It is a sister project under the LF Edge umbrella project.
eKuiper's executable (called kuiperd
) is located in the ekuiper/_build/kuiper-*version*-linux-arm/bin
folder. Note that the location is in a _build
folder subfolder created when you built eKuiper. The subfolder is named for the eKuiper version, OS, architecture.
Change directories to the ekuiper/_build/kuiper-*version*-linux-arm/bin
folder.
As a third-party component, eKuiper can be set up to work with many streams of data from various systems or engines. It must be told where it will receive data and how to handle the incoming data. Therefore, before launching eKuiper, export the following environment variables to tell eKuiper where to receive data coming from the EdgeX configurable application service (via the EdgeX message bus).
export CONNECTION__EDGEX__REDISMSGBUS__PORT=6379\nexport CONNECTION__EDGEX__REDISMSGBUS__PROTOCOL=redis\nexport CONNECTION__EDGEX__REDISMSGBUS__SERVER=localhost\nexport CONNECTION__EDGEX__REDISMSGBUS__TYPE=redis\nexport EDGEX__DEFAULT__PORT=6379\nexport EDGEX__DEFAULT__PROTOCOL=redis\nexport EDGEX__DEFAULT__SERVER=localhost\nexport EDGEX__DEFAULT__TOPIC=rules-events\nexport EDGEX__DEFAULT__TYPE=redis\nexport KUIPER__BASIC__CONSOLELOG=\"true\"\nexport KUIPER__BASIC__RESTPORT=59720\n
Setting these environment variables must be done in the same terminal from which you plan to execute the eKuiper server.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#run-ekuiper","title":"Run eKuiper","text":"From the ekuiper/_build/kuiper-*version*-linux-arm
folder, and with the environmental variables set, launch eKuiper's server with the command shown below.
nohup ./bin/kuiperd &\n
Warning
There is both a kuiper
and a kuiperd
executable in the bin
folder. Make sure you are running kuiperd
.
If eKuiper is running correctly, the RuleEngine tab in the EdgeX GUI should offer the ability to define eKuiper Streams and Rules as shown below.
If eKuiper is not running correctly or if the environment variables were incorrectly set, then you will see an error screen like that shown below.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#test-and-explore-edgex","title":"Test and Explore EdgeX","text":"With EdgeX up and running (inclusive of Consul, Redis, and eKuiper), you can try these quick tests to see that EdgeX is running correctly.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#see-sensor-data-flowing-through-edgex","title":"See sensor data flowing through EdgeX","text":"You have already been using Consul and the EdgeX GUI to check on some items of EdgeX in this tutorial. You can use the EdgeX GUI to further check that sensor data is flowing through the system.
In a browser, go to http://(host address):4000. Remember, it may take a few seconds for the GUI to initialize once you hit the URL. Once the GUI displays, find and click on the DataCenter
link on the left hand navigation bar (highlighted below).
The DataCenter
display allows you to see the EdgeX event/readings as they are persisted by the core data service to Redis. Simply press the >Start
button to see the \"stream\" of simulated sensor data that was generated by the virtual device service and sent to EdgeX. The simulated data may take a second or two to start to display in the EventDataStream
area of the GUI.
Press the Pause
button to stop this display of data. Notice that you can see the EdgeX Events (and associated Readings) or just the Readings with the two tabs on this DataCenter
display.
Each EdgeX micro service has a REST API associated with it. You can use curl or a browser to test that the service is up using its ping
API. Below are curl commands to \"ping\" both core data and core metadata.
curl http://localhost:59880/api/v3/ping\n curl http://localhost:59881/api/v3/ping\n
Each service should respond with JSON data to indicate it is able to respond to requests. Below is an example response from the core metadata \"Ping\" request.
{\"apiVersion\":\"v2\",\"timestamp\":\"Thu May 12 23:25:04 UTC 2022\",\"serviceName\":\"core-metadata\"}\n
See the service port reference page for a list of service ports to check the ping
API of other services.
As an added test, use curl to get the count of the number of events persisted by core data with the command below (you can also use a browser with the URL to get the same).
curl http://localhost:59880/api/v3/event/count\n
The response will indicate a \"count\" of events stored (in this case 6270).
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":6270}\n
Info
The full set of APIs for each service can be found in SwaggerHub. You can use the documentation to test other APIs as well.
"},{"location":"getting-started/native/Ch-BuildRunOnArm32/#set-up-an-ekuiper-stream-and-rule","title":"Set up an eKuiper Stream and Rule","text":"While eKuiper is running, it is currently sitting idle since it has no rules on which to watch for data and execute commands. Set up a simple eKuiper rule to log any sensor data it sees. Use the GUI tool to establish the eKuiper stream
and rule
. Learn about Streams and Rules in the eKuiper documentation.
In the GUI, click on the Rules Engine
link in the navigation bar on the left. Then, click on the Add
button on the Stream tab. Allow the default EdgeX stream to be created by hitting the Submit
button.
Next, click on the Rules
tab on the Rules Engine
page. Then click on the Add
button on the Rules
tab in order to create a new eKuiper rule. In the form that appears, enter any name for the rule (TestRule
is used below) in the Name field. Enter SELECT * FROM EdgeXStream
in the RuleSQL field and add a log
action - all as shown below in the form. Hit the Submit
button when you have your rule established.
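As an aside, eKuiper also exposes a REST API on the KUIPER__BASIC__RESTPORT value set earlier (59720). As a rough sketch, assuming the EdgeXStream stream already exists, the same TestRule could be created with a call like the following (endpoint and payload follow eKuiper's documented /rules API and may vary by eKuiper version):
curl -X POST http://localhost:59720/rules -d '{\"id\":\"TestRule\",\"sql\":\"SELECT * FROM EdgeXStream\",\"actions\":[{\"log\":{}}]}'\n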
With the stream and rule defined, you have asked eKuiper to fire a log entry each time it sees a new EdgeX event/reading come through it. In the future, you could have eKuiper look for particular events/readings (e.g., thermostat readings above a specified temperature) produced by a particular sensor in order to issue commands to some device. But for now, you can check the eKuiper log to see that the rule engine is working and publishing a message to the log with each event/reading.
In the ekuiper/_build/kuiper-*version*-linux-arm/log
folder, you will find a stream.log
file.
If you use Linux tail
, you can see that the eKuiper rules engine is firing a log entry for each virtual device service record that flows through EdgeX. Issue the following command to see the log entries occur in real time:
tail -f stream.log\n
Info
Seeing the eKuiper rules engine fire a log entry into a file for each EdgeX event/reading that comes through confirms that the entire EdgeX system is working properly.
With the nohup
command on each service, the log file contents are redirected to a file (nohup.out
) in the directory where you started each service. If you find that a service does not appear to be running, or if it is running but not working correctly, check the nohup.out
file for any errors or issues. In the example below, the core data's nohup.out
log file is explored.
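As a minimal sketch (the exact path depends on the directory from which you launched the service), the core data log could be examined with:
tail -n 50 edgex-go/cmd/core-data/nohup.out\n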
This build and run guide shows you how to get, compile/build, execute and test EdgeX (including the core and supporting services, the configurable application service, eKuiper rules engine and a virtual device service) in Linux on x86 or x86_64 hardware. Specifically, this guide was done using Ubuntu 20.04. For the most part, the guide should assist in building and running EdgeX in almost any Linux distribution, but some instructions will vary based on the nuances of the underlying distribution.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#environment","title":"Environment","text":"Building and running EdgeX on Linux natively will require you have:
sudo
accessThe following software is assumed to already be installed and available on the host platform. Follow the referenced guides if you need to install or setup this software.
Go Lang, version 1.17 or later as of the Kamakura release
How to check for existence and version on your machine
GCC Build Essentials (for C++)
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Consul, version 1.10 or later as of the Kamakura release
How to check for existence and version on your machine
Redis, version 6.2 or later as of the Kamakura release
How to check for existence and version on your machine
Your installation process may vary based on Linux version/distribution
Git
How to check for existence and version on your machine
In this guide, you will be building and running EdgeX in \"non-secure\" mode. That is, you will be building and running the EdgeX platform without the security services and security configuration. An environmental variable, EDGEX_SECURITY_SECRET_STORE
, is set to indicate whether the EdgeX services are expected to initialize and use the secure secret store. By default, this variable is set to true
. Prior to building and running EdgeX, set this environment variable to false.
export EDGEX_SECURITY_SECRET_STORE=false
This can be done in the terminal from which you build and run EdgeX or you can set it in your user's profile to make an environment persist across terminal sessions. See How to Set Environment Variables in Linux for assistance.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#download-edgex-source","title":"Download EdgeX Source","text":"In order to build and run EdgeX micro services, you will first need to get the source code for the platform. Using git, clone the EdgeX repositories with the following commands:
Tip
You may wish to create a new folder and then issue these git commands from that folder so that all EdgeX code is neatly stored in a single folder.
git clone https://github.com/edgexfoundry/edgex-go.git\ngit clone https://github.com/edgexfoundry/device-virtual-go.git\ngit clone https://github.com/edgexfoundry/app-service-configurable.git\ngit clone https://github.com/lf-edge/ekuiper.git\ngit clone https://github.com/edgexfoundry/edgex-ui-go.git\n
Note that a new folder, named for the repository, gets created containing source code with each of the git clones above.
Warning
These git clone operations pull from the main branch of the EdgeX repositories. This is the current working branch in EdgeX development. See the git clone documentation for how to clone a specific named release branch or version tag.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-edgex-services","title":"Build EdgeX Services","text":"With the source code, you can now build the EdgeX services, GUI, as well as eKuiper rules engine.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-core-and-supporting-services","title":"Build Core and Supporting Services","text":"Most of the services are in the edgex-go
folder. This folder contains the code for the core and supporting services. A single command in this repository will build several of the services.
Enter the edgex-go
folder and issue the make build
command as shown below.
Warning
Depending on the amount of memory your system has, building the services in edgex-go
can take several minutes.
Note
Building the services in the edgex-go folder will also build some services (such as the security services) that are not used in this guide, but issuing a single command is the easiest way to build the services needed without having to build them one by one.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-the-virtual-device-service","title":"Build the Virtual Device Service","text":"The virtual device service simulates devices/sensors sending data to EdgeX as if it was a \"thing\". This guide uses the virtual device service to exemplify how other devices services can be built and run.
Enter the device-virtual-go
folder and issue the make build
command as shown below.
The configurable application service helps prepare device/sensor data for enterprise or cloud systems. It also prepares data for use by the rules engine - eKuiper
Enter the app-service-configurable
folder and issue the make build
command as shown below.
eKuiper, a sister Linux Foundation project under LF Edge, is the reference implementation rules engine for EdgeX.
Enter the ekuiper
folder and issue the make build_with_edgex
command as shown below.
Note
eKuiper also provides pre-built binaries that can be downloaded and used without the need to build from source.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#build-the-gui","title":"Build the GUI","text":"EdgeX provides a graphical user interface for exploring a single instance of the EdgeX platform. The GUI makes it easier to work with EdgeX and see sample data coming from sensors. It provides a means to check that EdgeX is working correctly, monitor EdgeX and even make some configuration changes.
Enter the edgex-ui-go
folder and issue the make build
command as shown below.
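For reference, a rough sketch of the complete build sequence, assuming all of the repositories were cloned into the same parent folder:
cd edgex-go && make build && cd ..\ncd device-virtual-go && make build && cd ..\ncd app-service-configurable && make build && cd ..\ncd ekuiper && make build_with_edgex && cd ..\ncd edgex-ui-go && make build && cd ..\n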
Provided everything built correctly and without issue, you can now start your EdgeX services one at a time. First make sure Redis Server is running. If Redis is not running, start it before the other services. If it is running, you can start each of the EdgeX services in order as listed below.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-consul","title":"Start Consul","text":"Start Consul Agent with the following command.
nohup consul agent -ui -bootstrap -server -client 0.0.0.0 -data-dir=tmp/consul &\n
The nohup
is used to execute the command and ignore all SIGHUP (hangup) signals. The &
says to execute the process in the background. Both nohup
and &
will be used to run each of the services so that the same terminal can be used and the output will be directed to local nohup.out log files.
If Consul is running correctly, you should be able to reach the Consul UI through a browser at http://(host address):8500
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-core-metadata","title":"Start Core Metadata","text":"Each of core and supporting EdgeX services are located in edgex-go/cmd
under a subfolder by the service name. In the first case, core-metadata is located in edgex-go/cmd/core-metadata
. Change directories to the core-metadata service subfolder and then run the executable found in the subfolder with -cp
and -registry
command line options as shown below.
cd edgex-go/cmd/core-metadata/\nnohup ./core-metadata -cp=consul.http://localhost:8500 -registry &\n
The -cp=consul.http://localhost:8500
command line parameter tells core-metadata to use Consul and where to find Consul running. The -registry
command line parameter tells core-metadata to use (and register with) the registry service. Both of these command line parameters will be used when launching all EdgeX services.
In a similar fashion, enter each of the other core and supporting service folders in edgex-go/cmd
and launch the services.
cd ../core-data\nnohup ./core-data -cp=consul.http://localhost:8500 -registry &\ncd ../core-command\nnohup ./core-command -cp=consul.http://localhost:8500 -registry &\ncd ../support-notifications/\nnohup ./support-notifications -cp=consul.http://localhost:8500 -registry &\ncd ../support-scheduler/\nnohup ./support-scheduler -cp=consul.http://localhost:8500 -registry &\n
Tip
If you still have the Consul UI up, you should see each of the EdgeX core and supporting services listed in Consul's Services
page with green check marks next to them suggesting they are running.
The configurable application service is located in the root of app-service-configurable
folder.
The configurable application service is started in a similar way as the other EdgeX services. The configurable application service is going to be used to route data to the rules engine. Therefore, an additional command line parameter (p
) is added to its launch command to tell the app service to use the rules engine configuration and profile.
nohup ./app-service-configurable -cp=consul.http://localhost:8500 -registry -p=rules-engine &\n
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-the-virtual-device-service","title":"Start the Virtual Device Service","text":"The virtual device service is also started in similar way as the other EdgeX services. The virtual device service manufactures data as if it was to come from a sensor and sends that data into the rest of EdgeX. By default, the virtual device service will generate random numbers (integers, unsigned integers, floats), booleans and even binary data as simulated sensor data. The virtual device service is located in the device-virtual-go/cmd
folder.
Change directories to the virtual device service's cmd
folder and then launch the service with the command shown below.
nohup ./device-virtual -cp=consul.http://localhost:8500 -registry &\n
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-the-gui","title":"Start the GUI","text":"The EdgeX graphical user interface (GUI) provides an easy to use visual tool to monitor data passing through EdgeX services. It also provides some capability to change an EdgeX instance's configuration or metadata. The EdgeX GUI is located in the edgex-ui-go/cmd/edgex-ui-server
folder.
Change directories to the GUI's cmd/edgex-ui-server
folder and then launch the GUI with the command shown below.
nohup ./edgex-ui-server &\n
If the GUI is running correctly, you should be able to reach the GUI through a browser at http://(host address):4000. It may take a few seconds for the GUI to initialize once you hit the URL.
Note
Some elements of the GUI will not work as you do not have all available EdgeX services running. Notably, the System Management service and its executor are not running so the System view of the GUI will display an error. By default, the System Management service and its executor operate by checking on the other services memory, CPU, etc. via Docker Stats. In this case, since you are not running Docker containers, the System Management service would not function.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#start-ekuiper","title":"Start eKuiper","text":"eKuiper is the reference implementation rules engine that is typically run with EdgeX by default. It is a lightweight, easy to use rules engine. Rules can be established using SQL. It is a sister project under the LF Edge umbrella project.
eKuiper's executable (called kuiperd
) is located in the ekuiper/_build/kuiper-*version*-linux-amd64/bin
folder. Note that the location is in a _build
folder subfolder created when you built eKuiper. The subfolder is named for the eKuiper version, OS, architecture.
Change directories to the ekuiper/_build/kuiper-*version*-linux-amd64/bin
folder.
As a third-party component, eKuiper can be set up to work with many streams of data from various systems or engines. It must be told where it will receive data and how to handle the incoming data. Therefore, before launching eKuiper, export the following environment variables to tell eKuiper where to receive data coming from the EdgeX configurable application service (via the EdgeX message bus).
export CONNECTION__EDGEX__REDISMSGBUS__PORT=6379\nexport CONNECTION__EDGEX__REDISMSGBUS__PROTOCOL=redis\nexport CONNECTION__EDGEX__REDISMSGBUS__SERVER=localhost\nexport CONNECTION__EDGEX__REDISMSGBUS__TYPE=redis\nexport EDGEX__DEFAULT__PORT=6379\nexport EDGEX__DEFAULT__PROTOCOL=redis\nexport EDGEX__DEFAULT__SERVER=localhost\nexport EDGEX__DEFAULT__TOPIC=rules-events\nexport EDGEX__DEFAULT__TYPE=redis\nexport KUIPER__BASIC__CONSOLELOG=\"true\"\nexport KUIPER__BASIC__RESTPORT=59720\n
Setting these environment variables must be done in the same terminal from which you plan to execute the eKuiper server.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#run-ekuiper","title":"Run eKuiper","text":"From the ekuiper/_build/kuiper-*version*-linux-amd64
folder, and with the environmental variables set, launch eKuiper's server with the command shown below.
nohup ./bin/kuiperd &\n
Warning
There is both a kuiper
and a kuiperd
executable in the bin
folder. Make sure you are running kuiperd
.
If eKuiper is running correctly, the RuleEngine tab in the EdgeX GUI should offer the ability to define eKuiper Streams and Rules as shown below.
If eKuiper is not running correctly or if the environment variables were incorrectly set, then you will see an error screen like that shown below.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#test-and-explore-edgex","title":"Test and Explore EdgeX","text":"With EdgeX up and running (inclusive of Consul, Redis, and eKuiper), you can try these quick tests to see that EdgeX is running correctly.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#see-sensor-data-flowing-through-edgex","title":"See sensor data flowing through EdgeX","text":"You have already been using Consul and the EdgeX GUI to check on some items of EdgeX in this tutorial. You can use the EdgeX GUI to further check that sensor data is flowing through the system.
In a browser, go to http://(host address):4000. Remember, it may take a few seconds for the GUI to initialize once you hit the URL. Once the GUI displays, find and click on the DataCenter
link on the left hand navigation bar (highlighted below).
The DataCenter
display allows you to see the EdgeX event/readings as they are persisted by the core data service to Redis. Simply press the >Start
button to see the \"stream\" of simulated sensor data that was generated by the virtual device service and sent to EdgeX. The simulated data may take a second or two to start to display in the EventDataStream
area of the GUI.
Press the Pause
button to stop this display of data. Notice that you can see the EdgeX Events (and associated Readings) or just the Readings with the two tabs on this DataCenter
display.
Each EdgeX micro service has a REST API associated with it. You can use curl or a browser to test that the service is up using its ping
API. Below are curl commands to \"ping\" both core data and core metadata.
curl http://localhost:59880/api/v3/ping\n curl http://localhost:59881/api/v3/ping\n
Each service should respond with JSON data to indicate it is able to respond to requests. Below is an example response from the core metadata \"Ping\" request.
{\"apiVersion\":\"v2\",\"timestamp\":\"Thu May 12 23:25:04 UTC 2022\",\"serviceName\":\"core-metadata\"}\n
See the service port reference page for a list of service ports to check the ping
API of other services.
As an added test, use curl to get the count of the number of events persisted by core data with the command below (you can also use a browser with the URL to get the same).
curl http://localhost:59880/api/v3/event/count\n
The response will indicate a \"count\" of events stored (in this case 6270).
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":6270}\n
Info
The full set of APIs for each service can be found in SwaggerHub. You can use the documentation to test other APIs as well.
"},{"location":"getting-started/native/Ch-BuildRunOnLinuxDistro/#set-up-an-ekuiper-stream-and-rule","title":"Set up an eKuiper Stream and Rule","text":"While eKuiper is running, it is currently sitting idle since it has no rules on which to watch for data and execute commands. Set up a simple eKuiper rule to log any sensor data it sees. Use the GUI tool to establish the eKuiper stream
and rule
. Learn about Streams and Rules in the eKuiper documentation.
In the GUI, click on the Rules Engine
link in the navigation bar on the left. Then, click on the Add
button on the Stream tab. Allow the default EdgeX stream to be created by hitting the Submit
button.
Next, click on the Rules
tab on the Rules Engine
page. Then click on the Add
button on the Rules
tab in order to create a new eKuiper rule. In the form that appears, enter any name for the rule (TestRule
is used below) in the Name field. Enter SELECT * FROM EdgeXStream
in the RuleSQL field and add a log
action - all as shown below in the form. Hit the Submit
button when you have your rule established.
With the stream and rule defined, you have asked eKuiper to fire a log entry each time it sees a new EdgeX event/reading come through it. In the future, you could have eKuiper look for particular events/readings (e.g., thermostat readings above a specified temperature) produced by a particular sensor in order to issue commands to some device. But for now, you can check the eKuiper log to see that the rule engine is working and publishing a message to the log with each event/reading.
In the ekuiper/_build/kuiper-*version*-linux-amd64/log
folder, you will find a stream.log
file.
If you use Linux tail
, you can see that the eKuiper rules engine is firing a log entry for each virtual device service record that flows through EdgeX. Issue the following command to see the log entries occur in real time:
tail -f stream.log\n
Info
Seeing the eKuiper rules engine fire a log entry into a file for each EdgeX event/reading that comes through confirms that the entire EdgeX system is working properly.
With the nohup
command on each service, the log file contents are redirected to a file (nohup.out
) in the directory where you started each service. If you find that a service does not appear to be running, or if it is running but not working correctly, check the nohup.out
file for any errors or issues. In the example below, the core data's nohup.out
log file is explored.
Warning
This build and run guide offers some assistance to seasoned developers or administrators who want to build and run EdgeX on Windows natively (not using Docker and not running on Windows Subsystem for Linux), but running natively on Windows is not supported by the project. EdgeX was built to be platform independent. As such, we believe most of EdgeX can run on almost any environment (on any hardware architecture and almost any operating system). However, there are elements of the EdgeX platform that will not run natively on Windows. Specifically, Redis, Kong and eKuiper will not run on Windows natively. Additionally, there are a number of device services that will not work on native Windows. In these instances, developers will need to find workarounds for those services or run them outside of Windows and access them across the network.
The existence of this guide does not imply current or future support. This guide should be used with care and with an understanding that it is the community's best effort to provide advanced developers with the means to begin their own custom EdgeX development and execution on Windows.
This build and run guide shows you how to get, compile/build, execute and test EdgeX (including the core and supporting services, the configurable application service, and a virtual device service) on Windows x86_64 hardware. Specifically, this guide was done using Windows 11. It is believed that this same guide works for Windows 10.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#environment","title":"Environment","text":"Building and running EdgeX on Windows natively will require you have:
The following software is assumed to already be installed and available on the host platform. Follow the referenced guides if you need to install or setup this software.
Go Lang, version 1.17 or later as of the Kamakura release
How to check for existence and version on your machine
Consul, version 1.10 or later as of the Kamakura release
How to check for existence and version on your machine
Git for Windows version 2.10 (that provides a BASH emulation to run Git from the command line)
How to check for existence and version on your machine
You may also need GCC (for C++, depending on whether services you are creating have or require C/C++ elements) and Make. These can be provided via a variety of tools/packages in Windows. Some options include use of:
Redis will not run on Windows, but it is required in order to run EdgeX. Your Windows platform must be able to connect to a Redis instance on another platform via TCP/IP on port 6379 (by default). Redis version 6.2 or later is required as of the Kamakura release. As an example, see How to install and configure Redis on Ubuntu 20.04.
Because EdgeX on your Windows platform will access Redis on another host, Redis must be configured to accept connections from other addresses (see Open Redis port for remote connections). Additionally, you will need to configure EdgeX to use a username/password to access Redis, or set Redis into unprotected mode (see Turn off 'protected-mode' in Redis)
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#prepare-your-environment","title":"Prepare your environment","text":"Info
As you have installed Git for Windows, you will notice that all commands are executed from the Git BASH emulator. This is the easiest way to build and run EdgeX on Windows. You will also find that the instructions closely parallel build and run operations in Linux or other OS. When referring to the \"terminal\" window throughout these instructions, this means use the Git BASH emulator window.
In this guide, you will be building and running EdgeX in \"non-secure\" mode. That is, you will be building and running the EdgeX platform without the security services and security configuration. An environmental variable, EDGEX_SECURITY_SECRET_STORE
, is set to indicate whether the EdgeX services are expected to initialize and use the secure secret store. By default, this variable is set to true
. Prior to building and running EdgeX, set this environment variable to false. You can do this in each terminal window you open by executing the following command:
export EDGEX_SECURITY_SECRET_STORE=false
This can be done in the Git BASH (aka terminal) window from which you will eventually build and run EdgeX.
If you prefer, you can also set a Windows Environment Variable. Open the System Properties
Window, then click on the Environmental Variables
button to add a new variable.
In the Environment Variables Window that comes up, click on the New...
button under the System variables section. Enter EDGEX_SECURITY_SECRET_STORE
in the Variable Name
field and false
in the Variable value
field of the New System Variable
popup. Click OK
to close the System Properties
and Environment Variables
windows.
Now, each time you open a terminal window, the EDGEX_SECURITY_SECRET_STORE
will already be set to false
for you without having to execute the export command above.
In order to build and run EdgeX micro services, you will first need to get the source code for the platform. Using git, clone the EdgeX repositories with the following commands:
Tip
You may wish to create a new folder and then issue these git commands from that folder so that all EdgeX code is neatly stored in a single folder.
git clone https://github.com/edgexfoundry/edgex-go.git\ngit clone https://github.com/edgexfoundry/device-virtual-go.git\ngit clone https://github.com/edgexfoundry/app-service-configurable.git\ngit clone https://github.com/edgexfoundry/edgex-ui-go.git\n
Note that a new folder, named for the repository, gets created containing source code with each of the git clones above.
Note
eKuiper will not run on Windows natively. As with Redis, if you want to use eKuiper, you will need to run eKuiper outside of Windows and communicate via TCP/IP on a connected network.
Warning
These git clone operations pull from the main branch of the EdgeX repositories. This is the current working branch in EdgeX development. See the git clone documentation for how to clone a specific named release branch or version tag.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#build-edgex-services","title":"Build EdgeX Services","text":"With the source code, you can now build the EdgeX services and the GUI.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#build-core-and-supporting-services","title":"Build Core and Supporting Services","text":"Most of the services are in the edgex-go
folder. This folder contains the code for the core and supporting services. A single command in this repository will build several of the services.
Enter the edgex-go
folder and issue the make build
command as shown below.
Note
Building the services in the edgex-go folder will also build some services (such as the security services) that are not used in this guide, but issuing a single command is the easiest way to build the services needed without having to build them one by one.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#build-the-virtual-device-service","title":"Build the Virtual Device Service","text":"The virtual device service simulates devices/sensors sending data to EdgeX as if it was a \"thing\". This guide uses the virtual device service to exemplify how other devices services can be built and run.
Enter the device-virtual-go
folder and issue the make build
command as shown below.
The configurable application service helps prepare device/sensor data for enterprise or cloud systems. It also prepares data for use by the rules engine - eKuiper
Enter the app-service-configurable
folder and issue the make build
command as shown below.
EdgeX provides a graphical user interface for exploring a single instance of the EdgeX platform. The GUI makes it easier to work with EdgeX and see sample data coming from sensors. It provides a means to check that EdgeX is working correctly, monitor EdgeX and even make some configuration changes.
Enter the edgex-ui-go
folder and issue the make build
command as shown below.
Provided everything built correctly and without issue, you can now start your EdgeX services one at a time. First make sure Redis Server is running on its host machine and is accessible via TCP/IP (assuming default port of 6379). If Redis is not running, start it before the other services. If it is running, you can start each of the EdgeX services in order as listed below.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#point-services-to-redis","title":"Point Services to Redis","text":"Because Redis is not running on your Windows machine, the configuration of all the services need to be changed to point the services to Redis on the different host when they start.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#modify-the-configuration-of-edgex-core-and-supporting-services","title":"Modify the Configuration of EdgeX Core and Supporting Services","text":"Each of core and supporting EdgeX services are located in edgex-go\\cmd
under a subfolder by the service name. In the first case, core-metadata is located in edgex-go\\cmd\\core-metadata
. Core-metadata's configuration is located in a configuration.yaml
file in edgex-go\\cmd\\core-metadata\\res
. Use your favorite editor to open the configuration file and locate the Database
section in that file (about 1/2 the way down the configuration listings). Change the host address from localhost
to the IP address of your Redis hosting machine (changed to 10.0.0.75 in the example below).
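A rough sketch of the relevant portion of configuration.yaml after the change (surrounding settings and exact field names may differ slightly by EdgeX release, and 10.0.0.75 is only an example address):
Database:\n  Host: 10.0.0.75\n  Port: 6379\n  Type: redisdb\n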
Modify the host location for Redis in the Database
section of configuration.yaml
files for notifications (edgex-go\\cmd\\support-notifications\\res
) and scheduler (edgex-go\\cmd\\support-scheduler\\res
) services in the same way.
In core-data, you need to modify two host settings. You need to change the location for Redis in the Database
section as well as the host location for Redis in the MessageQueue
section of configuration.yaml
. The latter setting is for accessing the Redis Pub/Sub message bus.
The Configurable App Service uses both the Redis database and message bus like core-data does. Locate the configuration.yaml
file in app-service-configurable\\res\\rules-engine
folder. Open the file with an editor and change the Host in the Database
, Trigger.EdgexMessageBus.SubscribeHost
, and Trigger.EdgexMessageBus.PublishHost
sections from localhost
to the IP address of your Redis hosting machine.
The Virtual Device Service uses the Redis message bus like core-data does. Locate the configuration.yaml
file in device-virtual-go\\cmd\\res
folder. Open the file with an editor and change the Redis MessageQueue
host address from localhost
to the IP address of your Redis hosting machine.
Wherever you installed Consul, start Consul Agent with the following command.
consul agent -ui -bootstrap -server -data-dir=tmp/consul &\n
If Consul is running correctly, you should be able to reach the Consul UI through a browser at http://localhost:8500 on your Windows machine.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#start-core-metadata","title":"Start Core Metadata","text":"Each of core and supporting EdgeX services are located in edgex-go\\cmd
under a subfolder by the service name. In the first case, core-metadata is located in edgex-go\\cmd\\core-metadata
. In a Git BASH terminal, change directories to the core-metadata service subfolder and then run the executable found in the subfolder with -cp
and -registry
command line options as shown below.
cd edgex-go/cmd/core-metadata/\nnohup ./core-metadata -cp=consul.http://localhost:8500 -registry &\n
The nohup
is used to execute the command and ignore all SIGHUP (hangup) signals. The &
says to execute the process in the background. Both nohup
and &
will be used to run each of the services so that the same terminal can be used and the output will be directed to local nohup.out log files.
The -cp=consul.http://localhost:8500
command line parameter tells core-metadata to use Consul and where to find Consul running. The -registry
command line parameter tells core-metadata to use (and register with) the registry service. Both of these command line parameters will be used when launching all EdgeX services.
In a similar fashion, enter each of the other core and supporting service folders in edgex-go\\cmd
and launch the services.
cd ../core-data\nnohup ./core-data -cp=consul.http://localhost:8500 -registry &\ncd ../core-command\nnohup ./core-command -cp=consul.http://localhost:8500 -registry &\ncd ../support-notifications/\nnohup ./support-notifications -cp=consul.http://localhost:8500 -registry &\ncd ../support-scheduler/\nnohup ./support-scheduler -cp=consul.http://localhost:8500 -registry &\n
Tip
If you still have the Consul UI up, you should see each of the EdgeX core and supporting services listed in Consul's Services
page with green check marks next to them suggesting they are running.
The configurable application service is located in the root of app-service-configurable
folder.
The configurable application service is started in a similar way as the other EdgeX services. The configurable application service is going to be used to route data to the rules engine. Therefore, an additional command line parameter (p
) is added to its launch command to tell the app service to use the rules engine configuration and profile.
nohup ./app-service-configurable -cp=consul.http://localhost:8500 -registry -p=rules-engine &\n
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#start-the-virtual-device-service","title":"Start the Virtual Device Service","text":"The virtual device service is also started in similar way as the other EdgeX services. The virtual device service manufactures data as if it was to come from a sensor and sends that data into the rest of EdgeX. By default, the virtual device service will generate random numbers (integers, unsigned integers, floats), booleans and even binary data as simulated sensor data. The virtual device service is located in the device-virtual-go\\cmd
folder.
Change directories to the virtual device service's cmd
folder and then launch the service with the command shown below.
nohup ./device-virtual -cp=consul.http://localhost:8500 -registry &\n
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#start-the-gui","title":"Start the GUI","text":"The EdgeX graphical user interface (GUI) provides an easy to use visual tool to monitor data passing through EdgeX services. It also provides some capability to change an EdgeX instance's configuration or metadata. The EdgeX GUI is located in the edgex-ui-go\\cmd\\edgex-ui-server
folder.
Change directories to the GUI's cmd\\edgex-ui-server
folder and then launch the GUI with the command shown below.
nohup ./edgex-ui-server &\n
If the GUI is running correctly, you should be able to reach the GUI through a browser on your Windows machine at http://localhost:4000. It may take a few seconds for the GUI to initialize once you hit the URL.
Note
Some elements of the GUI will not work as you do not have all available EdgeX services running. Notably, the System Management service and its executor are not running so the System view of the GUI will display an error. By default, the System Management service and its executor operate by checking on the other services memory, CPU, etc. via Docker Stats. In this case, since you are not running Docker containers, the System Management service would not function. Also, as eKuiper does not run on Windows, any Rules Engine functionality will not work either.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#test-and-explore-edgex","title":"Test and Explore EdgeX","text":"With EdgeX up and running (inclusive of Consul, and with Redis running on a separate host), you can try these quick tests to see that EdgeX is running correctly.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#see-sensor-data-flowing-through-edgex","title":"See sensor data flowing through EdgeX","text":"You have already been using Consul and the EdgeX GUI to check on some items of EdgeX in this tutorial. You can use the EdgeX GUI to further check that sensor data is flowing through the system.
In your Windows browser, go to http://localhost:4000. Remember, it may take a few seconds for the GUI to initialize once you hit the URL. Once the GUI displays, find and click on the DataCenter
link on the left hand navigation bar (highlighted below).
The DataCenter
display allows you to see the EdgeX event/readings as they are persisted by the core data service to Redis. Simply press the >Start
button to see the \"stream\" of simulated sensor data that was generated by the virtual device service and sent to EdgeX. The simulated data may take a second or two to start to display in the EventDataStream
area of the GUI.
Press the Pause
button to stop this display of data. Notice that you can see the EdgeX Events (and associated Readings) or just the Readings with the two tabs on this DataCenter
display.
Each EdgeX micro service has a REST API associated with it. You can use curl or a browser to test that the service is up using its ping
API. Below are curl commands to \"ping\" both core data and core metadata.
curl http://localhost:59880/api/v3/ping\n curl http://localhost:59881/api/v3/ping\n
Each service should respond with JSON data to indicate it is able to respond to requests. Below is an example response from the core metadata \"Ping\" request.
{\"apiVersion\":\"v2\",\"timestamp\":\"Thu May 12 23:25:04 UTC 2022\",\"serviceName\":\"core-metadata\"}\n
See the service port reference page for a list of service ports to check the ping
API of other services.
As an added test, use curl to get the count of the number of events persisted by core data with the command below (you can also use a browser with the URL to get the same).
curl http://localhost:59880/api/v3/event/count\n
The response will indicate a \"count\" of events stored (in this case 6270).
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":6270}\n
Info
The full set of APIs for each service can be found in SwaggerHub. You can use the documentation to test other APIs as well.
"},{"location":"getting-started/native/Ch-BuildRunOnWindows/#debugging-and-troubleshooting","title":"Debugging and Troubleshooting","text":"With the nohup
command on each service, the log file contents are redirected to a file (nohup.out
) in the directory where you started each service. If you find that a service does not appear to be running, or if it is running but not working correctly, check the nohup.out
file for any errors or issues. In the example below, the core data's nohup.out
log file is explored.
This guide will get EdgeX up and running on your machine in as little as 5 minutes using pre-built Docker containers. We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible.
For a quick start with Snaps, refer to Getting Started with Snaps.
When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started using Docker or Getting Started as a Developer guides.
"},{"location":"getting-started/quick-start/#setup-docker","title":"Setup Docker","text":"Install the following:
Info
The version of EdgeX used in the following examples is main
.
Once you have Docker and Docker Compose installed, you need to:
download / save the latest docker-compose file. This can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures).
x86ARMcurl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty.yml -o docker-compose.yml; docker compose up -d\n
curl https://raw.githubusercontent.com/edgexfoundry/edgex-compose/main/docker-compose-no-secty-arm64.yml -o docker-compose.yml; docker compose up -d\n
Verify that the EdgeX containers have started:
docker compose ps \n
If all EdgeX containers pulled and started correctly and without error, you should see a process status (ps) that looks similar to the image above."},{"location":"getting-started/quick-start/#connected-devices","title":"Connected Devices","text":"EdgeX Foundry provides a Virtual device service which is useful for testing and development. It simulates a number of devices, each randomly generating data of various types and within configurable parameters. For example, the Random-Integer-Device will generate random integers.
The Virtual Device (also known as Device Virtual) service is already a service pulled and running as part of the default EdgeX configuration.
You can verify that Virtual Device readings are already being sent by querying the EdgeX core data service for the event records sent for Random-Integer-Device:
curl http://localhost:59880/api/v3/event/device/name/Random-Integer-Device\n
Verify the virtual device service is operating correctly by requesting the last event records received by core data for the Random-Integer-Device. Note
By default, the maximum number of events returned will be 20 (the default limit). You can pass a limit
parameter to get more or less event records.
curl http://localhost:59880/api/v3/event/device/name/Random-Integer-Device?limit=50\n
"},{"location":"getting-started/quick-start/#controlling-the-device","title":"Controlling the Device","text":"Reading data from devices is only part of what EdgeX is capable of. You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it.
When our Virtual Device service registered the device Random-Integer-Device
, it used a profile to also define commands that allow you to tell the service not to generate random integers, but to always return a value you set.
You won't call commands on devices directly, instead you use the EdgeX Foundry Command Service to do that. The first step is to check what commands are available to call by asking the Command service about your device:
curl http://localhost:59882/api/v3/device/name/Random-Integer-Device\n
This will return a lot of JSON, because there are a number of commands you can call on this device, but the commands we're going to use in this guide are Int16
(the command to get the current integer 16 value) and WriteInt16Value
(the command to disable the generation of the random integer 16 number and specify the integer value to return). Look for the Int16
and WriteInt16Value
commands like those shown in the JSON below:
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"deviceCoreCommand\": {\n\"deviceName\": \"Random-Integer-Device\",\n\"profileName\": \"Random-Integer-Device\",\n\"coreCommands\": [\n{\n\"name\": \"WriteInt16Value\",\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Integer-Device/WriteInt16Value\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Int16\",\n\"valueType\": \"Int16\"\n},\n{\n\"resourceName\": \"EnableRandomization_Int16\",\n\"valueType\": \"Bool\"\n}\n]\n},\n{\n\"name\": \"Int16\",\n\"get\": true,\n\"set\": true,\n\"path\": \"/api/v3/device/name/Random-Integer-Device/Int16\",\n\"url\": \"http://edgex-core-command:59882\",\n\"parameters\": [\n{\n\"resourceName\": \"Int16\",\n\"valueType\": \"Int16\"\n}\n]\n}\n...\n\n]\n}\n}\n
You'll notice that the commands have get
or set
(or both) options. A get call will return a random number (integer 16), and is what is being called automatically to send data into the rest of EdgeX (specifically core data). You can also call get manually using the URL provided (with no additional parameters needed): curl http://localhost:59882/api/v3/device/name/Random-Integer-Device/Int16\n
Warning
Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker. Docker recognizes the internal hostname edgex-core-command, but when calling the service from outside of Docker, you have to use localhost to reach it.
This command will return a JSON result that looks like this:
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"id\": \"6d829637-730c-4b70-9208-dc179070003f\",\n\"deviceName\": \"Random-Integer-Device\",\n\"profileName\": \"Random-Integer-Device\",\n\"sourceName\": \"Int16\",\n\"origin\": 1625605672073875500,\n\"readings\": [\n{\n\"id\": \"545b7add-683b-4745-84f1-d859f3d839e0\",\n\"origin\": 1625605672073875500,\n\"deviceName\": \"Random-Integer-Device\",\n\"resourceName\": \"Int16\",\n\"profileName\": \"Random-Integer-Device\",\n\"valueType\": \"Int16\",\n\"binaryValue\": null,\n\"mediaType\": \"\",\n\"value\": \"-8146\"\n}\n]\n}\n}\n
A call to GET of the Random-Integer-Device device's Int16 operation through the command service results in the next random value produced by the device in JSON format.
The default range for this reading is -32,768 to 32,767. In the example above, a value of -8146
was returned as the reading value. With the service set up to randomly return values, the value returned will be different each time the Int16
command is sent. However, we can use the WriteInt16Value
command to disable random values from being returned and instead specify a value to return. Use the curl command below to call the set command to disable random values and return the value 42
each time.
curl -X PUT -d '{\"Int16\":\"42\", \"EnableRandomization_Int16\":\"false\"}' http://localhost:59882/api/v3/device/name/Random-Integer-Device/WriteInt16Value\n
Warning
Again, also notice that localhost replaces edgex-core-command.
If successful, the service will confirm your setting of the value to be returned with a 200
status code.
A call to the device's SET command through the command service will return the API version and a status code (200 for success).
Now every time we call get on the Int16
command, the returned value will be 42
.
A call to GET of the Random-Integer-Device device's Int16 operation after setting the Int16 value to 42 and disabling randomization will always return a value of 42.
"},{"location":"getting-started/quick-start/#exporting-data","title":"Exporting Data","text":"EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via HiveMQ provided MQTT browser client.
First add the following application service to your docker-compose.yml file right after the 'app-service-rules' service (the first service in the file). Spacing is important in YAML, so make sure to copy and paste it correctly.
app-service-mqtt:\ncontainer_name: edgex-app-mqtt\ndepends_on:\n- consul\n- data\nenvironment:\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nEDGEX_PROFILE: mqtt-export\nEDGEX_SECURITY_SECRET_STORE: \"false\"\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nSERVICE_HOST: edgex-app-mqtt\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\nWRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS: tcp://broker.mqttdashboard.com:1883\nWRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_TOPIC: EdgeXEvents\nhostname: edgex-app-mqtt\nimage: edgexfoundry/app-service-configurable:2.0.0\nnetworks:\nedgex-network: {}\nports:\n- 127.0.0.1:59702:59702/tcp\nread_only: true\nsecurity_opt:\n- no-new-privileges:true\nuser: 2002:2001\n
Note
This adds the application service configurable to your EdgeX system. The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at tcp://broker.mqttdashboard.com:1883
. You will be publishing to the EdgeXEvents topic.
For convenience, see documentation on the EdgeX Compose Builder to create custom Docker Compose files.
Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service.
docker compose up -d\n
You can connect to this broker with any MQTT client to watch the sent data. HiveMQ provides a web-based client that you can use: open the client's URL in a browser and hit the Connect button to connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to.
Then, use the Subscriptions area to subscribe to the \"EdgeXEvents\" topic.
You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service.
You will begin seeing your random number readings appear in the Messages area on the screen.
Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen.
"},{"location":"getting-started/quick-start/#next-steps","title":"Next Steps","text":"Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX.
It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. From there you can take the Walkthrough to learn how the micro services work together to control devices and read data from them as you just did.
"},{"location":"getting-started/tools/Ch-GUI/","title":"Graphical User Interface (GUI)","text":"EdgeX's graphical user interface (GUI) is provided for demonstration and development use to manage and monitor a single instance of EdgeX Foundry.
"},{"location":"getting-started/tools/Ch-GUI/#setup","title":"Setup","text":"You can quickly run the GUI in a Docker container or as a Snap. You can also download, build and run the GUI natively on your host.
"},{"location":"getting-started/tools/Ch-GUI/#docker-compose","title":"Docker Compose","text":"The EdgeX GUI is now incorporated into all the secure and non-sure Docker Compose files provided by the project. Locate and download the Docker Compose file that best suits your needs from https://github.com/edgexfoundry/edgex-compose. For example, in the Jakarta branch of edgex-compose
the *-with-app-sample*
compose files include the Sample App Service allowing the configurable pipeline to be manipulated from the UI. See the four Docker Compose files that include the Sample App Service circled below.
Note
The GUI can now be used in secure mode as well as non-secure mode.
See the Getting Started using Docker guide for help on how to find, download and use a Docker Compose file to run EdgeX - in this case with the Sample App Service.
"},{"location":"getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token","title":"Secure mode with API Gateway token","text":"When first running the UI in secure mode, you will be prompted to enter a token.
Follow the How to get access token? link to view the documentation on how to get an API Gateway access token. Once you enter the token, the UI will have access to the EdgeX services via the API Gateway.
Note
The UI is no longer restricted to access from localhost
. It can now be accessed from any IP address that can access the host system. This is allowed because the UI is secured via API Gateway token when running in secure mode.
The latest stable version of the snap can be installed using:
$ sudo snap install edgex-ui\n
A specific release of the snap can be installed from a dedicated channel. For example, to install the 2.1 (Jakarta) release:
$ sudo snap install edgex-ui --channel=2.1\n
The latest development version of the edgex-ui snap can be installed using:
$ sudo snap install edgex-ui --edge\n
"},{"location":"getting-started/tools/Ch-GUI/#generate-token-for-entering-ui-secure-mode","title":"Generate token for entering UI secure mode","text":"A JWT access token is required to access the UI securely through the API Gateway. To do so:
$ openssl ecparam -genkey -name prime256v1 -noout -out private.pem\n$ openssl ec -in private.pem -pubout -out public.pem\n
$ sudo snap set edgexfoundry env.security-proxy.user=user01,USER_ID,ES256\n$ sudo snap set edgexfoundry env.security-proxy.public-key=\"$(cat public.pem)\"\n
$ edgexfoundry.secrets-config proxy jwt --algorithm ES256 \\\n--private_key private.pem --id USER_ID --expiration=1h\n
This output is the JWT token for UI login in secure mode. Please keep the token in a safe place for future re-use as the same token cannot be regenerated or recovered from EdgeX's secret-config CLI. The token is required each time you reopen the web page.
"},{"location":"getting-started/tools/Ch-GUI/#using-the-edgex-ui-snap","title":"Using the edgex-ui snap","text":"Open your browser http://localhost:4000
Please log in to EdgeX with the JWT token we generated above.
For more details please refer to edgex-ui Snap
"},{"location":"getting-started/tools/Ch-GUI/#native","title":"Native","text":"If you are running EdgeX natively (outside of Docker Compose or a Snap), you will find instructions on how to build and run the GUI on your platform in the GUI repository README
"},{"location":"getting-started/tools/Ch-GUI/#general","title":"General","text":""},{"location":"getting-started/tools/Ch-GUI/#gui-address","title":"GUI Address","text":"Once the GUI is up and running, simply visit port 4000 on the GUI's host machine (ex: http://localhost:4000) to enter the GUI Dashboard (see below). The GUI does not require any login.
"},{"location":"getting-started/tools/Ch-GUI/#menu-bar","title":"Menu Bar","text":"The left side of the Dashboard holds a menu bar that allows you access to the GUI functionality. The \"hamburger\" icon on the menu bar allows you to shrink or expand the menu bar to icons vs icons and menu bar labels.
"},{"location":"getting-started/tools/Ch-GUI/#mobile-device-ready","title":"Mobile Device Ready","text":"
The EdgeX GUI can be used/displayed on a mobile device via the mobile device's browser if the GUI address is accessible to the device. The display may be skewed in order to fit the device screen. For example, the Dashboard menu will often change to icons over the expanded labeled menu bar when shown on a mobile device.
"},{"location":"getting-started/tools/Ch-GUI/#capability","title":"Capability","text":"The GUI allows you to
The Dashboard page (the main page of the GUI) presents you with a set of clickable \"tiles\" that provide a quick view of the status of your EdgeX instance. That is, it provides some quick data points about the EdgeX instance and what the GUI is tracking. Specifically, the tiles in the Dashboard show you:
If for some reason the GUI has an issue or difficulty getting the information it needs to display a tile in the Dashboard when it is displayed, a popup will be displayed over the screen indicating the issue. In the example below, the support scheduling service was down and the GUI Dashboard was unable to access the scheduler service.
In this way, the Dashboard provides a quick and easy way to see whether the EdgeX instance is nominal or has underlying issues.
You can click on each of the tiles in the Dashboard. Doing so provides more details about each. More precisely, clicking on a tile takes you to another part of the GUI where the details of that item can be found. For example, clicking on the Device Profiles tile takes you to the Metadata page and the Device Profile tab (covered below)
"},{"location":"getting-started/tools/Ch-GUI/#config","title":"Config","text":"The configuration of each service is made available for each service by clicking on the Config
icon for any service from the System Service List. The configuration is displayed in JSON form and is read only. If running Consul, use the Consul Web UI to make changes to the configuration.
From the System Service List, you can request to stop, start or restart any of the listed services with the operation buttons in the far right column.
Warning
There is no confirmation popup or warning on these requests. When you push a stop, start, restart button, the request is immediately made to the system management service for that operation.
The state of the service will change when these operations are invoked. When a service is stopped, the metric and config information for the service will be unavailable.
After starting (or restarting) a service, you may need to hit the Refresh
button on the page to get the state and metric/config icons to change.
The Metadata page (available from the Metadata menu option) provides three tabs to be able to see and manage the basic elements of metadata: device services, device profiles and devices.
"},{"location":"getting-started/tools/Ch-GUI/#device-service-tab","title":"Device Service Tab","text":"The Device Service tab displays the device services known to EdgeX (as device services registered in core metadata). Device services cannot be added or removed through the GUI, but information about the existing device services (i.e., port, admin state) and several actions on the existing device services can be accomplished on this tab.
First note that for each device service listed, the number of associated devices are depicted. If you click on the Associated Devices
button, it will take you to the Device tab to be able to get more information about or work with any of the associated devices.
The Settings
button on each device service allows you to change the description or the admin state of the device service.
Alert
Please note that you must hit the Save
button after making any changes to the Device Service Settings. If you don't and move away from the page, your changes will be lost.
The Device Tab on the Metadata page offers you details about all the sensors/devices known to your EdgeX instance. Buttons at the top of the tab allow you to add, remove or edit a device (or collection of devices when deleting and using the selector checkbox in the device list).
On the row of each device listed, links take you to the appropriate tabs to see the associated device profile or device service for the device.
Icons on the row of each device listed cause editable areas to expand at the bottom of the tab to execute a device command or see/modify the device's AutoEvents.
The command execution display allows you to select the specific device resource or device command (from the Command Name List
), and execute or try
either a GET or SET command (depending on what the associated device profile for the device says is allowed). The response will be displayed in the ResponseRaw
area after the try
button is pushed.
The Add
button on the Device List tab will take you to the Add Device Wizard
. This utility will assist you, entry screen by entry screen, in getting a new device set up in EdgeX, walking you through each of the required entries in order.
Once all the information in the Add Device Wizard
screens is entered, the Submit
button at the end of the wizard causes your new device to be created in core metadata with all appropriate associations.
The Device Profile Tab on the Metadata page displays the device profiles known to EdgeX and allows you to add new profiles or edit/remove existing profiles.
The AssociatedDevice
button on each row of the Device Profile List will take you to the Device tab and show you the list of devices currently associated to the device profile.
Warning
When deleting a profile, the system will pop up an error if devices are still associated with the profile.
"},{"location":"getting-started/tools/Ch-GUI/#data-center-seeing-eventreading-data","title":"Data Center (Seeing Event/Reading Data)","text":"From the Data Center option on the GUI's menu bar you can see the stream of Event/Readings coming from the device services into core data. The event/reading data will be displayed in JSON form.
There are two tabs on the Data Stream page, both with Start
and Pause
buttons:
Hit the Start
button on either tab to see the event or reading data displayed in the stream pane (events are shown in the example below). Push the Pause
button to stop the display of event or reading data.
Warning
In actuality, the event and reading data is pulled from core data via REST call every three (3) seconds - so it is not a live stream display but a poll of data. Furthermore, if EdgeX is setup to have device services send data directly to application services via message bus and core data is not running or if core data is configured to have persistence turned off, there will be no data in core data to pull and so there will be no events or readings to see.
"},{"location":"getting-started/tools/Ch-GUI/#scheduler-intervalinterval-list","title":"Scheduler (Interval/Interval List)","text":"Interval and Interval Actions, which help define task management schedules in EdgeX, are managed via the Scheduler page from selecting Scheduler off the menu bar.
Again, as with many of the EdgeX GUI pages, there are two tabs on the Scheduler page:
When updating or adding an Interval, you must provide a name and an interval duration string, which takes an unsigned integer plus a unit of measure that must be one of \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Optionally provide start/end dates and an indication that the interval runs only once (and thereby ignores the interval).
"},{"location":"getting-started/tools/Ch-GUI/#interval-action-list","title":"Interval Action List","text":"Interval Actions define what happens when the Interval kicks off. Interval Actions can define REST, MQTT or Email actions that take place when an Interval timer hits. The GUI provides the means to edit or create any of these actions. Note that an Interval Action must be associated to an already defined Interval.
"},{"location":"getting-started/tools/Ch-GUI/#notifications","title":"Notifications","text":"Notifications are messages from EdgeX to external systems about something that has happened in EdgeX - for example that a new device has been created. Currently, notifications can be sent by email or REST call.
The Notification Center page, available from the Notifications menu option, allows you to see new (not processed), processed or escalated (notifications that have failed to be sent within its resend limit) notifications. By default, the new notifications are displayed, but if you click on the Advanced >>
link on the page (see below), you can select which type of notifications to display.
The Subscriptions tab on the Notification Center page allows you to add, update or remove subscriptions to notifications. Subscribers are registered receivers of notifications - either via email or REST.
When adding (or editing) a subscription, you must provide a name, category, label, receiver, and either an email address or REST endpoint. A template is provided to specify either the email or REST endpoint configuration data needed for the subscription.
"},{"location":"getting-started/tools/Ch-GUI/#ruleengine","title":"RuleEngine","text":"The Rule Engine page, from the RuleEngine menu option, provides the means to define streams and rules for the integrated eKuiper rules engine.
Via the Stream tab, streams are defined by JSON. All that is really required is a stream name (EdgeXStream in the example below).
The Rules tab allows eKuiper rules to be added, removed or updated/edited as well as started, stopped or restarted. When adding or editing a rule, you must provide a name, the rule SQL and action. The action can be one of the following (some requiring extra parameters):
See the eKuiper documentation for more information on how to define rules.
Alert
Once a rule is created, it is started by default. Return to the Rules tab on the RulesEngine page to stop a new rule.
When creating or editing the rule, if the stream referenced in the rule is not already defined, the GUI will present an error when trying to submit the rule.
"},{"location":"getting-started/tools/Ch-GUI/#appservice","title":"AppService","text":"In the AppService page, you can configure existing configurable application services. The list of available configurable app services is determined by the UI automatically (based on a query for available app services from the registry service).
"},{"location":"getting-started/tools/Ch-GUI/#configurable","title":"Configurable","text":"When the application service is a configurable app service and is known to the GUI, the Configurable
button on the App Service List allows you to change the triggers, functions, secrets and other configuration associated to the configurable app service.
There are four tabs in the Configurable Setting editor:
Note
When the Trigger is changed, the service must be restarted for the change to take effect.
"},{"location":"getting-started/tools/Ch-GUI/#why-demo-and-developer-use-only","title":"Why Demo and Developer Use Only","text":"The GUI is meant as a developer tool or to be used in EdgeX demonstration situations. It is not yet designed for production settings. There are several reasons for this restriction.
The EdgeX community is exploring efforts to make the GUI available in secure mode in a future release.
"},{"location":"microservices/application/AdvancedTopics/","title":"Advanced Topics","text":"The following items discuss topics that are a bit beyond the basic use cases of the Application Functions SDK when interacting with EdgeX.
"},{"location":"microservices/application/AdvancedTopics/#configurable-functions-pipeline","title":"Configurable Functions Pipeline","text":"This SDK provides the capability to define the functions pipeline via configuration rather than code by using the app-service-configurable application service. See the App Service Configurable section for more details.
"},{"location":"microservices/application/AdvancedTopics/#custom-rest-endpoints","title":"Custom REST Endpoints","text":"It is not uncommon to require your own custom REST endpoints when building an Application Service. Rather than spin up your own webserver inside of your app (alongside the already existing running webserver), we've exposed a method that allows you add your own routes to the existing webserver. A few routes are reserved and cannot be used:
To add your own route, use the AddCustomRoute()
API provided on the ApplicationService
interface.
Example - Add Custom REST route
myhandler := func(c echo.Context) error {\nservice.LoggingClient().Info(\"TEST\") c.Response().WriteHeader(http.StatusOK)\nc.Response().Write([]byte(\"hello\")) } service := pkg.NewAppService(serviceKey) service.AddCustomRoute(\"/myroute\", service.Authenticated, myHandler, \"GET\")
Under the hood, this simply adds the provided route, handler, and method to the gorilla mux.Router
used in the SDK. For more information on gorilla mux
you can check out the github repo here. You can access the interfaces.ApplicationService
API for resources such as the logging client by pulling it from the context as shown above -- this is useful for when your routes might not be defined in your main.go
where you have access to the interfaces.ApplicationService
instance.
The target type is the object type of the incoming data that is sent to the first function in the function pipeline. By default this is an EdgeX dtos.Event
since typical usage is receiving Events
from the EdgeX MessageBus.
There are scenarios where the incoming data is not an EdgeX Event
. One example scenario is two application services are chained via the EdgeX MessageBus. The output of the first service is inference data from analyzing the original Event
data, and published back to the EdgeX MessageBus. The second service needs to be able to let the SDK know the target type of the input data it is expecting.
For usages where the incoming data is not events
, the TargetType
of the expected incoming data can be set when the ApplicationService
instance is created using the NewAppServiceWithTargetType()
factory function.
Example - Set and use custom Target Type
type Person struct {\nFirstName string `json:\"first_name\"`\nLastName string `json:\"last_name\"`\n}\n\nservice := pkg.NewAppServiceWithTargetType(serviceKey, &Person{})
TargetType
must be set to a pointer to an instance of your target type such as &Person{}
. The first function in your function pipeline will be passed an instance of your target type, not a pointer to it. In the example above, the first function in the pipeline would start something like:
func MyPersonFunction(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) { ctx.LoggingClient().Debug(\"MyPersonFunction executing\")\n\nif data == nil {\nreturn false, errors.New(\"no data received to MyPersonFunction\")\n}\n\nperson, ok := data.(Person)\nif !ok {\nreturn false, errors.New(\"MyPersonFunction type received is not a Person\")\n}\n\n// ....\n
The SDK supports un-marshaling JSON or CBOR encoded data into an instance of the target type. If your incoming data is not JSON or CBOR encoded, you then need to set the TargetType
to &[]byte
.
If the target type is set to &[]byte
the incoming data will not be un-marshaled. The content type, if set, will be set on the interfaces.AppFunctionContext
and can be accessed via the InputContentType()
API. Your first function will be responsible for decoding the data or not.
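To make this concrete, here is a minimal, hypothetical sketch (imports omitted, as in the other snippets) of a first pipeline function for a service created with NewAppServiceWithTargetType(serviceKey, &[]byte{}); the function name and the JSON-only handling are illustrative, not part of the SDK:
func DecodeRawData(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {
    if data == nil {
        return false, errors.New("no data received to DecodeRawData")
    }

    raw, ok := data.([]byte)
    if !ok {
        return false, errors.New("DecodeRawData expected raw []byte data")
    }

    // Only JSON payloads are decoded in this sketch; anything else is passed through untouched.
    if ctx.InputContentType() == common.ContentTypeJSON {
        person := Person{} // Person is the hypothetical struct from the example above
        if err := json.Unmarshal(raw, &person); err != nil {
            return false, fmt.Errorf("failed to decode JSON payload: %w", err)
        }
        return true, person
    }

    return true, raw
}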
See the Common Command Line Options for the set of command line options common to all EdgeX services. The following command line options are specific to Application Services.
"},{"location":"microservices/application/AdvancedTopics/#skip-version-check","title":"Skip Version Check","text":"-s/--skipVersionCheck
Indicates the service should skip the Core Service's version compatibility check.
"},{"location":"microservices/application/AdvancedTopics/#service-key","title":"Service Key","text":"-sk/--serviceKey
Sets the service key that is used with Registry, Configuration Provider and security services. The default service key is set by the application service. If the name provided contains the placeholder text <profile>
, this text will be replaced with the name of the profile used. If profile is not set, the <profile>
text is simply removed
Can be overridden with EDGEX_SERVICE_KEY environment variable.
"},{"location":"microservices/application/AdvancedTopics/#environment-variables","title":"Environment Variables","text":"See the Common Environment Variables section for the list of environment variables common to all EdgeX Services. The remaining in this section are specific to Application Services.
"},{"location":"microservices/application/AdvancedTopics/#edgex_service_key","title":"EDGEX_SERVICE_KEY","text":"This environment variable overrides the -sk/--serviceKey
command-line option and the default set by the application service.
Note
If the name provided contains the text <profile>
, this text will be replaced with the name of the profile used.
Example - Service Key
EDGEX_SERVICE_KEY: app-<profile>-mycloud
profile: http-export
then service key will be app-http-export-mycloud
Applications can specify custom configuration in the service's configuration file in two ways.
"},{"location":"microservices/application/AdvancedTopics/#application-settings","title":"Application Settings","text":"The first simple way is to add items to the ApplicationSetting
section. This is a map of string key/value pairs, i.e. map[string]string
. Use for simple string values or comma separated list of string values. The ApplicationService
API provides the follow access APIs for this configuration section:
ApplicationSettings() map[string]string
GetAppSetting(setting string) (string, error)
setting
valueGetAppSettingStrings(setting string) ([]string, error)
setting
value. The Entry is assumed to be a comma separated list of strings.The second is the more complex Structured Custom Configuration
which allows the Application Service to define and watch it's own structured section in the service's configuration file.
The ApplicationService
API provides the follow APIs to enable structured custom configuration:
LoadCustomConfig(config UpdatableConfig, sectionName string) error
UpdateFromRaw
interface will be called on the custom configuration when the configuration is loaded from the Configuration Provider.ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
See the Application Service Template for an example of using the new Structured Custom Configuration capability.
The Store and Forward capability allows for export functions to persist data on failure and for the export of the data to be retried at a later time.
Note
The order the data exported via this retry mechanism is not guaranteed to be the same order in which the data was initial received from Core Data
"},{"location":"microservices/application/AdvancedTopics/#configuration","title":"Configuration","text":"Writable.StoreAndForward
allows enabling, setting the interval between retries and the max number of retries. If running with Configuration Provider, these setting can be changed on the fly via Consul without having to restart the service.
Example - Store and Forward configuration
Writable:\nStoreAndForward:\nEnabled: false\nRetryInterval: \"5m\"\nMaxRetryCount: 10\n
Note
RetryInterval should be at least 1 second (eg. '1s') or greater. If a value less than 1 second is specified, 1 second will be used. Endless retries will occur when MaxRetryCount is set to 0. If MaxRetryCount is set to less than 0, a default of 1 retry will be used.
Database configuration section describes which database type to use and the information required to connect to the database. This section is required if Store and Forward is enabled. It is optional if not using Redis
for the EdgeX MessageBus which is now the default.
Example - Database configuration
Database:\nType: \"redisdb\"\nHost: \"localhost\"\nPort: 6379\nTimeout: \"5s\"\n
"},{"location":"microservices/application/AdvancedTopics/#how-it-works","title":"How it works","text":"When an export function encounters an error sending data it can call SetRetryData(payload []byte)
on the AppFunctionContext
. This will store the data for later retry. If the Application Service is stopped and then restarted while stored data hasn't been successfully exported, the export retry will resume once the service is up and running again.
Note
It is important that export functions return an error and stop pipeline execution after the call to SetRetryData
. See HTTPPost function in SDK as an example
When the RetryInterval
expires, the function pipeline will be re-executed starting with the export function that saved the data. The saved data will be passed to the export function which can then attempt to resend the data.
Note
The export function will receive the data as it was stored, so it is important that any transformation of the data occur in functions prior to the export function. The export function should only export the data that it receives.
One of three out comes can occur after the export retried has completed.
Export retry was successful
In this case, the stored data is removed from the database and the execution of the pipeline functions after the export function, if any, continues.
Export retry fails and retry count has not been
exceeded
In this case, the stored data is updated in the database with the incremented retry count
Export retry fails and retry count has been
exceeded
In this case, the stored data is removed from the database and never retried again.
Note
Changing Writable.Pipeline.ExecutionOrder will invalidate all currently stored data and result in it all being removed from the database on the next retry. This is because the position of the export function can no longer be guaranteed and no way to ensure it is properly executed on the retry.
"},{"location":"microservices/application/AdvancedTopics/#custom-storage","title":"Custom Storage","text":"The default backing store is redis. Custom implementations of the StoreClient
interface can be provided if redis does not meet your requirements.
type StoreClient interface {\n// Store persists a stored object to the data store and returns the assigned UUID.\nStore(o StoredObject) (id string, err error)\n\n// RetrieveFromStore gets an object from the data store.\nRetrieveFromStore(appServiceKey string) (objects []StoredObject, err error)\n\n// Update replaces the data currently in the store with the provided data.\nUpdate(o StoredObject) error\n\n// RemoveFromStore removes an object from the data store.\nRemoveFromStore(o StoredObject) error\n\n// Disconnect ends the connection.\nDisconnect() error\n}\n
A factory function to create these clients can then be registered with your service by calling RegisterCustomStoreFactory service.RegisterCustomStoreFactory(\"jetstream\", func(cfg interfaces.DatabaseInfo, cred config.Credentials) (interfaces.StoreClient, error) {\nconn, err := nats.Connect(fmt.Sprintf(\"nats://%s:%d\", cfg.Host, cfg.Port))\n\nif err != nil {\nreturn nil, err\n}\n\njs, err := conn.JetStream()\n\nif err != nil {\nreturn nil, err\n}\n\nkv, err := js.KeyValue(serviceKey)\n\nif err != nil {\nkv, err = js.CreateKeyValue(&nats.KeyValueConfig{Bucket: serviceKey})\n}\n\nreturn &JetstreamStore{\nconn: conn,\nserviceKey: serviceKey,\nkv: kv,\n}, err\n})\n
and configured using the registered name in the Database
section:
Example - Database configuration
Database:\nType: \"jetstream\"\nHost: \"broker\"\nPort: 4222\nTimeout: \"5s\"\n
"},{"location":"microservices/application/AdvancedTopics/#secrets","title":"Secrets","text":""},{"location":"microservices/application/AdvancedTopics/#configuration_1","title":"Configuration","text":"All instances of App Services running in secure mode require a SecretStore to be configured. With the use of Redis Pub/Sub
as the default EdgeX MessageBus all App Services need the redisdb
known secret added to their SecretStore so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details.
Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It now has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
"},{"location":"microservices/application/AdvancedTopics/#storing-secrets","title":"Storing Secrets","text":""},{"location":"microservices/application/AdvancedTopics/#secure-mode","title":"Secure Mode","text":"When running an application service in secure mode, secrets can be stored in the service's secure SecretStore by making an HTTP POST
call to the /api/v3/secret
API route in the application service. The secret data POSTed is stored and retrieved from the service's secure SecretStore . Once a secret is stored, only the service that added the secret will be able to retrieve it. For secret retrieval see Getting Secrets section below.
Example - JSON message body
{\n\"secretName\" : \"MySecret\",\n\"secretData\" : [\n{\n\"key\" : \"MySecretKey\",\n\"value\" : \"MySecretValue\"\n}\n]\n}\n
Note
SecretName specifies the location of the secret within the service's SecretStore.
"},{"location":"microservices/application/AdvancedTopics/#insecure-mode","title":"Insecure Mode","text":"When running in insecure mode, the secrets are stored and retrieved from the Writable.InsecureSecrets section of the service's configuration file. Insecure secrets and their paths can be configured as below.
Example - InsecureSecrets Configuration
Writable:\nInsecureSecrets: AWS:\nSecretName: \"aws\"\nSecretsData:\nusername: \"aws-user\"\npassword: \"aws-pw\"\nDB:\nSecretName: \"redisdb\"\nSecretsData:\nusername: \"\"\npassword: \"\"\n
"},{"location":"microservices/application/AdvancedTopics/#getting-secrets","title":"Getting Secrets","text":"Application Services can retrieve their secrets from their SecretStore using the interfaces.ApplicationService.SecretProvider.GetSecret() API or from the interfaces.AppFunctionContext.SecretProvider.GetSecret() API
When in secure mode, the secrets are retrieved from the service secure SecretStore.
When running in insecure mode, the secrets are retrieved from the Writable.InsecureSecrets
configuration.
The background publisher API has been deprecated. Any applications using it should migrate replacements available on the ApplicationService
or AppFunctionContext
APIs:
Application Services using the MessageBus trigger can request a background publisher using the AddBackgroundPublisher API in the SDK. This method takes an int representing the background channel's capacity as the only parameter and returns a reference to a BackgroundPublisher. This reference can then be used by background processes to publish to the configured MessageBus output. A custom topic can be provided to use instead of the configured message bus output as well.
Example - Background Publisher
func runJob (service interfaces.ApplicationService, done chan struct{}){\nticker := time.NewTicker(1 * time.Minute)\n\n//initialize background publisher with a channel capacity of 10 and a custom topic\npublisher, err := service.AddBackgroundPublisherWithTopic(10, \"custom-topic\")\n\nif err != nil {\n// do something\n}\n\ngo func(pub interfaces.BackgroundPublisher) {\nfor {\nselect {\ncase <-ticker.C:\nmsg := myDataService.GetMessage()\npayload, err := json.Marshal(message)\n\nif err != nil {\n//do something\n}\n\nctx := svc.BuildContext(uuid.NewString(), common.ContentTypeJSON)\n\n// modify context as needed\n\nerr = pub.Publish(payload, ctx)\n\nif err != nil {\n//do something\n}\ncase <-j.done:\nticker.Stop()\nreturn\n}\n}\n}(publisher)\n}\n\nfunc main() {\nservice := pkg.NewAppService(serviceKey)\n\ndone := make(chan struct{})\ndefer close(done)\n\n//pass publisher to your background job\nrunJob(service, done)\n\nservice.SetDefaultFunctionsPipeline(\nAll,\nMy,\nFunctions,\n)\n\nservice.Run()\n\nos.Exit(0)\n}
"},{"location":"microservices/application/AdvancedTopics/#stopping-the-service","title":"Stopping the Service","text":"Application Services will listen for SIGTERM / SIGINT signals from the OS and stop the function pipeline in response. The pipeline can also be exited programmatically by calling sdk.Stop()
on the running ApplicationService
instance. This can be useful for cases where you want to stop a service in response to a runtime condition, e.g. receiving a \"poison pill\" message through its trigger.
When messages are received via the EdgeX MessageBus or External MQTT triggers, the topic that the data was received on is seeded into the new Context Storage on the AppFunctionContext
with the key receivedtopic
. This make the Received Topic
available to all functions in the pipeline. The SDK provides the interfaces.RECEIVEDTOPIC
constant for this key. See the Context Storage section for more details on extracting values.
The Pipeline Per Topics
feature allows for multiple function pipelines to be defined. Each will execute only when one of the specified pipeline topics matches the received topic. The pipeline topics can have wildcards (+
and #
) allowing the topic to match a variety of received topics. Each pipeline has its own set of functions (transforms) that are executed on the received message. If the #
wildcard is used by itself for a pipeline topic, it will match all received topics and the specified functions pipeline will execute on every message received.
Note
The Pipeline Per Topics
feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank
, so the pipeline's topics must contain a single topic set to the #
wildcard so that all messages received are processed by the pipeline.
Example pipeline topics with wildcards
\"#\" - Matches all messages received\n\"edegex/events/#\" - Matches all messages received with the based topic `edegex/events/`\n\"edegex/events/core/#\" - Matches all messages received just from Core Data\n\"edegex/events/device/#\" - Matches all messages received just from Device services\n\"edegex/events/+/my-profile/#\" - Matches all messages received from Core Data or Device services for `my-profile`\n\"edegex/events/+/+/my-device/#\" - Matches all messages received from Core Data or Device services for `my-device`\n\"edegex/events/+/+/+/my-source\" - Matches all messages received from Core Data or Device services for `my-source`\n
Refer to the Filter By Topics section for details on the structure of the received topic.
All pipeline function capabilities such as Store and Forward, Batching, etc. can be used with one or more of the multiple function pipelines. Store and Forward uses the Pipeline's ID to find and restart the pipeline on retries.
Example - Adding multiple function pipelines
This example adds two pipelines. One to process data from the Random-Float-Device
device and one to process data from the Int32
and Int64
sources.
sample := functions.NewSample()\nerr = service.AddFunctionsPipelineForTopics(\n\"Floats-Pipeline\", []string{\"edgex/events/+/+/Random-Float-Device/#\"}, transforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\n...\nreturn -1\n}\n\nerr = app.service.AddFunctionsPipelineForTopics(\n\"Int32-Pipleine\", []string{\"edgex/events/+/+/+/Int32\", \"edgex/events/+/+/+/Int64\"},\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\n...\nreturn -1\n}\n
"},{"location":"microservices/application/AdvancedTopics/#built-in-application-service-metrics","title":"Built-in Application Service Metrics","text":"All application services have the following built-in metrics:
MessagesReceived
- This is a counter metric that counts the number of messages received by the application service. Includes invalid messages.
InvalidMessagesReceived
- (NEW) This is a counter metric that counts the number of invalid messages received by the application service.
HttpExportSize
- (NEW) This is a histogram metric that collects the size of data exported via the built-in HTTP Export pipeline function. The metric data is not currently tagged due to breaking changes required to tag the data with the destination endpoint. This will be addressed in a future EdgeX 3.0 release.
MqttExportSize
- (NEW) This is a histogram metric that collects the size of data exported via the built-in MQTT Export pipeline function. The metric data is tagged with the specific broker address and topic.
PipelineMessagesProcessed
- This is a counter metric that counts the number of messages processed by the individual function pipelines defined by the application service. The metric data is tagged with the specific function pipeline ID the count is for.
PipelineProcessingErrors
- (NEW) This is a counter metric that counts the number of errors returned by the individual function pipelines defined by the application service. The metric data is tagged with the specific function pipeline ID the count is for.
PipelineMessageProcessingTime
- This is a timer metric that tracks the amount of time taken to process messages by the individual function pipelines defined by the application service. The metric data is tagged with the specific function pipeline ID the timer is for.
Note
The time tracked for this metric is only for the function pipeline processing time. The overhead of receiving the messages and handing them to the appropriate function pipelines is not included. Accounting for this overhead may be added as another timer metric in a future release.
Reporting of these built-in metrics is disabled by default in the Writable.Telemetry
configuration section. See Writable.Telemetry
configuration details in the Application Service Configuration section for complete detail on this section. If the configuration for these built-in metrics are missing, then the reporting of the metrics will be disabled.
Example - Service Telemetry Configuration with all built-in metrics enabled for reporting
Writable:\nTelemetry:\nInterval: \"30s\"\nMetrics:\nMessagesReceived: true\nInvalidMessagesReceived: true\nPipelineMessagesProcessed: true PipelineMessageProcessingTime: true\nPipelineProcessingErrors: true HttpExportSize: true MqttExportSize: true Tags: # Contains the service level tags to be attached to all the service's metrics\nGateway: \"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only change existing value, not added new ones.\n
"},{"location":"microservices/application/AdvancedTopics/#custom-application-service-metrics","title":"Custom Application Service Metrics","text":"The Custom Application Service Metrics capability allows for custom application services to define, collect and report their own custom service metrics.
The following are the steps to collect and report custom service metrics:
Determine the metric type that needs to be collected
counter
- Track the integer count of somethinggauge
- Track the integer value of something gaugeFloat64
- Track the float64 value of something timer
- Track the time it takes to accomplish a taskhistogram
- Track the integer value variance of somethingCreate instance of the metric type from github.com/rcrowley/go-metrics
myCounter = gometrics.NewCounter()
myGauge = gometrics.NewGauge()
myGaugeFloat64 = gometrics.NewGaugeFloat64()
myTimer = gometrics.NewTime()
myHistogram = gometrics.NewHistogram(gometrics.NewUniformSample(<reservoir size))
Determine if there are any tags to report along with your metric. Not common so nil
is typically passed for the tags map[strings]string
parameter in the next step.
Register your metric(s) with the MetricsManager from the service
or pipeline function context
reference. See Application Service API and App Function Context API for more details:
service.MetricsManager().Register(\"MyCounterName\", myCounter, nil)
ctx.MetricsManager().Register(\"MyCounterName\", myCounter, nil)
Collect the metric
myCounter.Inc(someIntvalue)
myCounter.Dec(someIntvalue)
myGauge.Update(someIntvalue)
myGaugeFloat64.Update(someFloatvalue)
myTimer.Update(someDuration)
myTimer.Time(func { do sometime})
myTimer.UpdateSince(someTimeValue)
myHistogram.Update(someIntvalue)
Configure reporting of the service's metrics. See Writable.Telemetry
configuration details in the Application Service Configuration section for more detail.
Example - Service Telemetry Configuration
Writable:\nTelemetry:\nInterval: \"30s\"\nMetrics:\nMyCounterName: true\nMyGaugeName: true\nMyGaugeFloat64Name: true\nMyTimerName: true\nMyHistogram: true\nTags: # Contains the service level tags to be attached to all the service's metrics\nGateway: \"my-iot-gateway\" # Tag must be added here or via Consul Env Override can only change existing value, not added new ones.\n
Note
The metric names used in the above configuration (to enable or disable reporting of a metric) must match the metric name used when the metric is registered. A partial match of starts with is acceptable, i.e. the metric name registered starts with the above configured name.
The context parameter passed to each function/transform provides operations and data associated with each execution of the pipeline.
Let's take a look at its API:
type AppFunctionContext interface {\nCorrelationID() string\nInputContentType() string\nSetResponseData(data []byte)\nResponseData() []byte\nSetResponseContentType(string)\nResponseContentType() string\nSetRetryData(data []byte)\nSecretProvider() interfaces.SecretProvider\nLoggingClient() logger.LoggingClient\nEventClient() interfaces.EventClient\nCommandClient() interfaces.CommandClient\nNotificationClient() interfaces.NotificationClient\nSubscriptionClient() interfaces.SubscriptionClient\nDeviceServiceClient() interfaces.DeviceServiceClient\nDeviceProfileClient() interfaces.DeviceProfileClient\nDeviceClient() interfaces.DeviceClient\nMetricsManager() bootstrapInterfaces.MetricsManager\nGetDeviceResource(profileName string, resourceName string) (dtos.DeviceResource, error)\nAddValue(key string, value string)\nRemoveValue(key string)\nGetValue(key string) (string, bool)\nGetAllValues() map[string]string\nApplyValues(format string) (string, error)\nPipelineId() string\nPublish(data any) error\nPublishWithTopic(topic string, data any) error\nClone() AppFunctionContext\n}\n
"},{"location":"microservices/application/AppFunctionContextAPI/#response-data","title":"Response Data","text":""},{"location":"microservices/application/AppFunctionContextAPI/#setresponsedata","title":"SetResponseData","text":"SetResponseData(data []byte)
This API sets the response data that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#responsedata","title":"ResponseData","text":"ResponseData()
This API returns the data that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#setresponsecontenttype","title":"SetResponseContentType","text":"SetResponseContentType(string)
This API sets the content type that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#responsecontenttype","title":"ResponseContentType","text":"ResponseContentType()
This API returns the content type that will be returned to the trigger when pipeline execution is complete.
"},{"location":"microservices/application/AppFunctionContextAPI/#clients","title":"Clients","text":""},{"location":"microservices/application/AppFunctionContextAPI/#loggingclient","title":"LoggingClient","text":"LoggingClient() logger.LoggingClient
Returns a LoggingClient
to leverage logging libraries/service utilized throughout the EdgeX framework. The SDK has initialized everything so it can be used to log Trace
, Debug
, Warn
, Info
, and Error
messages as appropriate.
Example - LoggingClient
ctx.LoggingClient().Info(\"Hello World\")\nc.LoggingClient().Errorf(\"Some error occurred: %w\", err)\n
"},{"location":"microservices/application/AppFunctionContextAPI/#eventclient","title":"EventClient","text":"EventClient() interfaces.EventClient
Returns an EventClient
to leverage Core Data's Event
API. See interface definition for more details. This client is useful for querying events. Note if Core Data is not specified in the Clients configuration, this will return nil.
CommandClient() interfaces.CommandClient
Returns a CommandClient
to leverage Core Command's Command
API. See interface definition for more details. Useful for sending commands to devices. Note if Core Command is not specified in the Clients configuration, this will return nil.
NotificationClient() interfaces.NotificationClient
Returns a NotificationClient
to leverage Support Notifications' Notifications
API. See interface definition for more details. Useful for sending notifications. Note if Support Notifications is not specified in the Clients configuration, this will return nil.
SubscriptionClient() interfaces.SubscriptionClient
Returns a SubscriptionClient
to leverage Support Notifications' Subscription
API. See interface definition for more details. Useful for creating notification subscriptions. Note if Support Notifications is not specified in the Clients configuration, this will return nil.
DeviceServiceClient() interfaces.DeviceServiceClient
Returns a DeviceServiceClient
to leverage Core Metadata's DeviceService
API. See interface definition for more details. Useful for querying information about Device Services. Note if Core Metadata is not specified in the Clients configuration, this will return nil.
DeviceProfileClient() interfaces.DeviceProfileClient
Returns a DeviceProfileClient
to leverage Core Metadata's DeviceProfile
API. See interface definition for more details. Useful for querying information about Device Profiles and is used by the GetDeviceResource
helper function below. Note if Core Metadata is not specified in the Clients configuration, this will return nil.
DeviceClient() interfaces.DeviceClient
Returns a DeviceClient
to leverage Core Metadata's Device
API. See interface definition for more details. Useful for querying information about Devices. Note if Core Metadata is not specified in the Clients configuration, this will return nil.
Each of the clients above is only initialized if the Clients section of the configuration contains an entry for the service associated with the Client API. If it isn't in the configuration the client will be nil
. Your code must check for nil
to avoid panic in case it is missing from the configuration. Only add the clients to your configuration that your Application Service will actually be using. All application services need Core-Data
for version compatibility check done on start-up. The following is an example Clients
section of a configuration.yaml with all supported clients specified:
Example - Client Configuration Section
Clients:\ncore-data:\nProtocol: http\nHost: localhost\nPort: 59880\n\ncore-command:\nProtocol: http\nHost: localhost\nPort: 59882\n\nsupport-notifications:\nProtocol: http\nHost: localhost\nPort: 59860\n
Note
Core Metadata client is required and provided by the App Services Common Configuration, so it is not included in the above example.
"},{"location":"microservices/application/AppFunctionContextAPI/#context-storage","title":"Context Storage","text":"The context API exposes a map-like interface that can be used to store custom data specific to a given pipeline execution. This data is persisted for retry if needed. Currently only strings are supported, and keys are treated as case-insensitive.
There following values are seeded into the Context Storage when an Event is received:
interfaces.PROFILENAME
)interfaces.DEVICENAME
)interfaces.SOURCENAME
)interfaces.RECEIVEDTOPIC
)Note
Received Topic only available when the message was received from the Edgex MessageBus or External MQTT triggers.
Storage can be accessed using the following methods:
"},{"location":"microservices/application/AppFunctionContextAPI/#addvalue","title":"AddValue","text":"AddValue(key string, value string)
This API stores a value for access within a pipeline execution
"},{"location":"microservices/application/AppFunctionContextAPI/#removevalue","title":"RemoveValue","text":"RemoveValue(key string)
This API deletes a value stored in the context at the given key
"},{"location":"microservices/application/AppFunctionContextAPI/#getvalue","title":"GetValue","text":"GetValue(key string) (string, bool)
This API attempts to retrieve a value stored in the context at the given key
"},{"location":"microservices/application/AppFunctionContextAPI/#getallvalues","title":"GetAllValues","text":"GetAllValues() map[string]string
This API returns a read-only copy of all data stored in the context
"},{"location":"microservices/application/AppFunctionContextAPI/#applyvalues","title":"ApplyValues","text":"ApplyValues(format string) (string, error)
This API will replace placeholders of the form {context-key-name}
with the value found in the context at context-key-name
. Note that key matching is case insensitive. An error will be returned if any placeholders in the provided string do NOT have a corresponding entry in the context storage map.
SecretProvider() interfaces.SecretProvider
This API returns reference to the SecretProvider instance. See Secret Provider API section for more details.
"},{"location":"microservices/application/AppFunctionContextAPI/#miscellaneous","title":"Miscellaneous","text":""},{"location":"microservices/application/AppFunctionContextAPI/#clone","title":"Clone()","text":"Clone() AppFunctionContext
This method returns a copy of the context that can be mutated independently where appropriate. This can be useful when running operations that take AppFunctionContext in parallel.
"},{"location":"microservices/application/AppFunctionContextAPI/#correlationid","title":"CorrelationID()","text":"CorrelationID() string
This API returns the ID used to track the EdgeX event through the entire EdgeX framework.
"},{"location":"microservices/application/AppFunctionContextAPI/#pipelineid","title":"PipelineId","text":"PipelineId() string
This API returns the ID of the pipeline currently executing. Useful when logging messages from pipeline functions so the message contains the ID of the pipeline that executed the function.
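A short sketch combining these two IDs in a log message from within a pipeline function:
ctx.LoggingClient().Debugf(\"MyFunction executing in pipeline '%s' (correlation id: %s)\", ctx.PipelineId(), ctx.CorrelationID())\n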
"},{"location":"microservices/application/AppFunctionContextAPI/#inputcontenttype","title":"InputContentType()","text":"InputContentType() string
This API returns the content type of the data that initiated the pipeline execution. Only useful when the TargetType for the pipeline is []byte, otherwise the data will be the type specified by TargetType.
"},{"location":"microservices/application/AppFunctionContextAPI/#getdeviceresource","title":"GetDeviceResource()","text":"GetDeviceResource(profileName string, resourceName string) (dtos.DeviceResource, error)
This API retrieves the DeviceResource for the given profile / resource name. Results are cached to minimize HTTP traffic to core-metadata.
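A minimal sketch of looking up a device resource from a pipeline function; the profile and resource names are illustrative, and the value-type field assumes the dtos.DeviceResource structure:
deviceResource, err := ctx.GetDeviceResource(\"my-profile\", \"my-resource\")\nif err != nil {\nctx.LoggingClient().Errorf(\"device resource lookup failed: %s\", err.Error())\nreturn false, err\n}\nctx.LoggingClient().Debugf(\"value type is %s\", deviceResource.Properties.ValueType)\n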
"},{"location":"microservices/application/AppFunctionContextAPI/#setretrydata","title":"SetRetryData()","text":"SetRetryData(data []byte)
This method can be used to store data for later retry. This is useful when creating a custom export function that needs to retry on failure. The payload data will be stored for later retry based on Store and Forward
configuration. When the retry is triggered, the function pipeline will be re-executed starting with the function that called this API. That function will be passed the stored data, so it is important that all transformations occur in functions prior to the export function. The Context
will also be restored to the state when the function called this API. See Store and Forward for more details.
Note
Store and Forward
must be enabled when calling this API, otherwise the data is ignored.
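A sketch of a custom export function using SetRetryData; the send helper is hypothetical and stands in for whatever export call the function performs:
func (app *myApp) exportToMyService(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\npayload, ok := data.([]byte)\nif !ok {\nreturn false, errors.New(\"exportToMyService expects []byte data\")\n}\nif err := app.send(payload); err != nil {\nctx.SetRetryData(payload) // stored for retry by Store and Forward\nctx.LoggingClient().Errorf(\"export failed, data stored for retry: %s\", err.Error())\nreturn false, nil\n}\nreturn true, nil\n}\n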
MetricsManager() bootstrapInterfaces.MetricsManager
This API returns the Metrics Manager used to register counter, gauge, gaugeFloat64 or timer metric types from github.com/rcrowley/go-metrics
myCounterMetricName := \"MyCounter\"\nmyCounter := gometrics.NewCounter()\nmyTags := map[string]string{\"Tag1\":\"Value1\"}\nctx.MetricsManager().Register(myCounterMetricName, myCounter, myTags)
"},{"location":"microservices/application/AppFunctionContextAPI/#publish","title":"Publish","text":"Publish(data any) error
This API pushes data to the EdgeX MessageBus using configured topic and returns an error if the EdgeX MessageBus is disabled in configuration
"},{"location":"microservices/application/AppFunctionContextAPI/#publishwithtopic","title":"PublishWithTopic","text":"PublishWithTopic(topic string, data any) error
This API pushes data to the EdgeX MessageBus using a given topic and returns an error if the EdgeX MessageBus is disabled in configuration
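A short sketch of publishing from within a pipeline function; the custom topic name is illustrative:
if err := ctx.Publish(data); err != nil {\nctx.LoggingClient().Errorf(\"publish to the configured topic failed: %s\", err.Error())\n}\nif err := ctx.PublishWithTopic(\"custom/output/topic\", data); err != nil {\nctx.LoggingClient().Errorf(\"publish to custom topic failed: %s\", err.Error())\n}\n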
"},{"location":"microservices/application/ApplicationFunctionsSDK/","title":"App Functions SDK Overview","text":"Welcome the App Functions SDK for EdgeX. This SDK is meant to provide all the plumbing necessary for developers to get started in processing/transforming/exporting data out of EdgeX.
If you're new to the SDK - check out the Getting Started guide.
If you're already familiar - check out the various sections about the SDK:
Section Description Application Service API Provides a list of all available APIs on the interface used to build Application Services App Function Context API Provides a list of all available APIs on the context interface that is available inside of a pipeline function Pipeline Function Error Handling Describes how to properly handle pipeline execution failures Built-In Pipeline Functions Provides a list of the available pipeline functions/transforms in the SDK Advanced Topics Learn about other ways to leverage the SDK beyond basic use cases. The App Functions SDK implements a small REST API which can be seen Here.
"},{"location":"microservices/application/ApplicationServiceAPI/","title":"Application Service API","text":"The ApplicationService
API is the central API for creating an EdgeX Application Service.
The new ApplicationService
API is as follows:
type AppFunction = func(appCxt AppFunctionContext, data interface{}) (bool, interface{})\n\ntype FunctionPipeline struct {\nId string\nTransforms []AppFunction\nTopic string\nHash string\n}\n\ntype ApplicationService interface {\nApplicationSettings() map[string]string\nGetAppSetting(setting string) (string, error)\nGetAppSettingStrings(setting string) ([]string, error)\nLoadCustomConfig(config UpdatableConfig, sectionName string) error\nListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error\nSetDefaultFunctionsPipeline(transforms ...AppFunction) error\nAddFunctionsPipelineForTopics(id string, topics []string, transforms ...AppFunction) error\nLoadConfigurableFunctionPipelines() (map[string]FunctionPipeline, error)\nRemoveAllFunctionPipelines()\nRun() error\nStop()\nSecretProvider() interfaces.SecretProvider\nLoggingClient() logger.LoggingClient\nEventClient() interfaces.EventClient\nCommandClient() interfaces.CommandClient\nNotificationClient() interfaces.NotificationClient\nSubscriptionClient() interfaces.SubscriptionClient\nDeviceServiceClient() interfaces.DeviceServiceClient\nDeviceProfileClient() interfaces.DeviceProfileClient\nDeviceClient() interfaces.DeviceClient\nRegistryClient() registry.Client\nMetricsManager() bootstrapInterfaces.MetricsManager\nAddBackgroundPublisher(capacity int) (BackgroundPublisher, error)\nAddBackgroundPublisherWithTopic(capacity int, topic string) (BackgroundPublisher, error)\nBuildContext(correlationId string, contentType string) AppFunctionContext\nAddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error\nAddCustomRoute(route string, authentication Authentication, handler echo.HandlerFunc, methods ...string) error\nAppContext() context.Context\nRequestTimeout() time.Duration\nRegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error\nRegisterCustomStoreFactory(name string, factory func(cfg DatabaseInfo, cred config.Credentials) (StoreClient, error)) error\nPublish(data any) error\nPublishWithTopic(topic string, data any) error\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#factory-functions","title":"Factory Functions","text":"The App Functions SDK provides two factory functions for creating an ApplicationService
NewAppService(serviceKey string) (interfaces.ApplicationService, bool)
This factory function returns an interfaces.ApplicationService
using the default Target Type of dtos.Event
and initializes the service. The second bool
return parameter will be true
if successfully initialized, otherwise it will be false
when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1)
if false
is returned.
Example - NewAppService
const serviceKey = \"app-myservice\"\n...\n\nservice, ok := pkg.NewAppService(serviceKey)\nif !ok {\nos.Exit(-1)\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#newappservicewithtargettype","title":"NewAppServiceWithTargetType","text":"NewAppServiceWithTargetType(serviceKey string, targetType interface{}) (interfaces.ApplicationService, bool)
This factory function returns an interfaces.ApplicationService
using the passed in Target Type and initializes the service. The second bool
return parameter will be true
if successfully initialized, otherwise it will be false
when error(s) occurred during initialization. All error(s) are logged so the caller just needs to call os.Exit(-1)
if false
is returned.
See the Target Type advanced topic for more details.
Example - NewAppServiceWithTargetType
const serviceKey = \"app-myservice\"\n...\n\nservice, ok := pkg.NewAppServiceWithTargetType(serviceKey, &[]byte{})\nif !ok {\nos.Exit(-1)\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#custom-configuration-apis","title":"Custom Configuration APIs","text":"The following ApplicationService
APIs allow your service to access its custom configuration from the configuration file and/or Configuration Provider. See the Custom Configuration advanced topic for more details.
ApplicationSettings() map[string]string
This API returns the complete key/value map of custom settings
Example - ApplicationSettings
ApplicationSettings:\nGreeting: \"Hello World\"\n
appSettings := service.ApplicationSettings()\ngreeting := appSettings[\"Greeting\"]\nservice.LoggingClient().Info(greeting)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#getappsetting","title":"GetAppSetting","text":"GetAppSetting(setting string) (string, error)
This API is a convenience API that returns a single setting from the ApplicationSettings
section of the service configuration. An error is returned if the specified setting is not found.
Example - GetAppSetting
ApplicationSettings:\nGreeting: \"Hello World\"\n
greeting, err := service.GetAppSetting(\"Greeting\")\nif err != nil {\n...\n}\nservice.LoggingClient().Info(greeting)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#getappsettingstrings","title":"GetAppSettingStrings","text":"GetAppSettingStrings(setting string) ([]string, error)
This API is a convenience API that parses the string value for the specified custom application setting as a comma separated list. It returns the list of strings. An error is returned if the specified setting is not found.
Example - GetAppSettingStrings
ApplicationSettings:\nGreetings: \"Hello World, Welcome World, Hi World\"\n
greetings, err := service.GetAppSettingStrings(\"Greetings\")\nif err != nil {\n...\n}\nfor _, greeting := range greetings {\nservice.LoggingClient().Info(greeting)\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#loadcustomconfig","title":"LoadCustomConfig","text":"LoadCustomConfig(config UpdatableConfig, sectionName string) error
This API loads the service's Structured Custom Configuration from local file or the Configuration Provider (if enabled). The Configuration Provider will also be seeded with the custom configuration if service is using the Configuration Provider. The UpdateFromRaw
API (UpdatableConfig
interface) will be called on the custom configuration when the configuration is loaded from the Configuration Provider. The custom config must implement the UpdatableConfig
interface.
Example - LoadCustomConfig
AppCustom: # Can be any name you choose\nResourceNames: \"Boolean, Int32, Uint32, Float32, Binary\"\nSomeValue: 123\nSomeService:\nHost: \"localhost\"\nPort: 9080\nProtocol: \"http\"\n
type ServiceConfig struct {\nAppCustom AppCustomConfig\n}\n\ntype AppCustomConfig struct {\nResourceNames string\nSomeValue int\nSomeService HostInfo\n}\n\nfunc (c *ServiceConfig) UpdateFromRaw(rawConfig interface{}) bool {\nconfiguration, ok := rawConfig.(*ServiceConfig)\nif !ok {\nreturn false //errors.New(\"unable to cast raw config to type 'ServiceConfig'\")\n}\n\n*c = *configuration\n\nreturn true\n}\n\n...\n\nserviceConfig := &ServiceConfig{}\nerr := service.LoadCustomConfig(serviceConfig, \"AppCustom\")\nif err != nil {\n...\n}\n
See the App Service Template for a complete example of using Structured Custom Configuration.
"},{"location":"microservices/application/ApplicationServiceAPI/#listenforcustomconfigchanges","title":"ListenForCustomConfigChanges","text":"ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
This API starts a listener on the Configuration Provider for changes to the specified section of the custom configuration. When changes are received from the Configuration Provider the provided changedCallback
function is called with the updated section of configuration. The service must then implement the code to copy the updates into its copy of the configuration and respond to the updates if needed.
Example - ListenForCustomConfigChanges
AppCustom: # Can be any name you choose\nResourceNames: \"Boolean, Int32, Uint32, Float32, Binary\"\nSomeValue: 123\nSomeService:\nHost: \"localhost\"\nPort: 9080\nProtocol: \"http\"\n
...\n\nerr := service.ListenForCustomConfigChanges(&serviceConfig.AppCustom, \"AppCustom\", ProcessConfigUpdates)\nif err != nil {\nlogger.Errorf(\"unable to watch custom writable configuration: %s\", err.Error())\n}\n\n...\n\nfunc (app *myApp) ProcessConfigUpdates(rawWritableConfig interface{}) {\nupdated, ok := rawWritableConfig.(*config.AppCustomConfig)\nif !ok {\n...\nreturn\n}\n\nprevious := app.serviceConfig.AppCustom\napp.serviceConfig.AppCustom = *updated\n\nif reflect.DeepEqual(previous, updated) {\nlogger.Info(\"No changes detected\")\nreturn\n}\n\nif previous.SomeValue != updated.SomeValue {\nlogger.Infof(\"AppCustom.SomeValue changed to: %d\", updated.SomeValue)\n}\nif previous.ResourceNames != updated.ResourceNames {\nlogger.Infof(\"AppCustom.ResourceNames changed to: %s\", updated.ResourceNames)\n}\nif !reflect.DeepEqual(previous.SomeService, updated.SomeService) {\nlogger.Infof(\"AppCustom.SomeService changed to: %v\", updated.SomeService)\n}\n}\n
See the App Service Template for a complete example of using Structured Custom Configuration.
"},{"location":"microservices/application/ApplicationServiceAPI/#function-pipeline-apis","title":"Function Pipeline APIs","text":"The following ApplicationService
APIs allow your service to set the Functions Pipeline and to start and stop it.
type AppFunction = func(appCxt AppFunctionContext, data interface{}) (bool, interface{})
This type defines the signature that all pipeline functions must implement.
"},{"location":"microservices/application/ApplicationServiceAPI/#functionpipeline","title":"FunctionPipeline","text":"This type defines the struct that contains the metadata for a functions pipeline instance.
type FunctionPipeline struct {\nId string\nTransforms []AppFunction\nTopic string\nHash string\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#setdefaultfunctionspipeline","title":"SetDefaultFunctionsPipeline","text":"SetDefaultFunctionsPipeline(transforms ...AppFunction) error
This API sets the default functions pipeline with the specified list of Application Functions. This pipeline is executed for all messages received from the configured trigger. Note that the functions are executed in the order provided in the list. An error is returned if the list is empty.
Example - SetDefaultFunctionsPipeline
sample := functions.NewSample()\nerr = service.SetDefaultFunctionsPipeline(\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\napp.lc.Errorf(\"SetDefaultFunctionsPipeline returned error: %s\", err.Error())\nreturn -1\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#addfunctionspipelinefortopics","title":"AddFunctionsPipelineForTopics","text":"AddFunctionsPipelineForTopics(id string, topics []string, transforms ...AppFunction) error
This API adds a functions pipeline with the specified unique ID and list of functions (transforms) to be executed when the received topic matches one of the specified pipeline topics. See the Pipeline Per Topic section for more details.
Example - AddFunctionsPipelineForTopics
sample := functions.NewSample()\nerr = service.AddFunctionsPipelineForTopics(\"Floats-Pipeline\", []string{\"edgex/events/+/+/Random-Float-Device/#\"},\ntransforms.NewFilterFor(deviceNames).FilterByDeviceName,\nsample.LogEventDetails,\nsample.ConvertEventToXML,\nsample.OutputXML)\nif err != nil {\n...\nreturn -1\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#loadconfigurablefunctionpipelines","title":"LoadConfigurableFunctionPipelines","text":"LoadConfigurableFunctionPipelines() (map[string]FunctionPipeline, error)
This API loads the function pipelines (default and per topic) from configuration. An error is returned if the configuration is not valid, i.e. missing required function parameters, invalid function name, etc.
Note
This API is only useful if the pipeline is always defined in configuration, as is the case with App Service Configurable.
Example - LoadConfigurableFunctionPipelines
configuredPipelines, err := service.LoadConfigurableFunctionPipelines()\nif err != nil {\n...\nos.Exit(-1)\n}\n\n...\n\nfor _, pipeline := range configuredPipelines {\nswitch pipeline.Id {\ncase interfaces.DefaultPipelineId:\nif err = service.SetDefaultFunctionsPipeline(pipeline.Transforms...); err != nil {\n...\nos.Exit(-1)\n}\ndefault:\nif err = service.AddFunctionsPipelineForTopic(pipeline.Id, pipeline.Topic, pipeline.Transforms...); err != nil {\n...\nos.Exit(-1)\n}\n}\n}\n
"},{"location":"microservices/application/ApplicationServiceAPI/#removeallfunctionpipelines","title":"RemoveAllFunctionPipelines","text":"RemoveAllFunctionPipelines()
This API removes all existing functions pipelines previously added via SetDefaultFunctionsPipeline
, AddFunctionsPipelineForTopics
or LoadConfigurableFunctionPipelines
Run() error
This API starts the configured trigger to allow the Functions Pipeline to execute when the trigger receives data. The internal webserver is also started. This is a long running API which does not return until the service is stopped or Stop() is called. An error is returned if the trigger cannot be created or initialized or if the internal webserver encounters an error.
Example - Run
if err := service.Run(); err != nil {\nlogger.Errorf(\"Run returned error: %s\", err.Error())\nos.Exit(-1)\n}\n\n// Do any required cleanup here, if needed\n\nos.Exit(0)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#stop","title":"Stop","text":"Stop()
This API stops the configured trigger so that the functions pipeline no longer executes. The internal webserver continues to accept requests. See Stopping the Service advanced topic for more details
Example - Stop
service.Stop()\n...\n
"},{"location":"microservices/application/ApplicationServiceAPI/#secrets-apis","title":"Secrets APIs","text":"The following ApplicationService
APIs allow your service to retrieve and store secrets from/to the service's SecretStore. See the Secrets advanced topic for more details about using secrets.
SecretProvider() interfaces.SecretProvider
This API returns reference to the SecretProvider instance. See Secret Provider API section for more details.
"},{"location":"microservices/application/ApplicationServiceAPI/#client-apis","title":"Client APIs","text":"The following ApplicationService
APIs allow your service to access the various EdgeX clients and their APIs.
LoggingClient() logger.LoggingClient
This API returns the LoggingClient instance which the service uses to log messages. See the LoggingClient interface for more details.
Example - LoggingClient
service.LoggingClient().Info(\"Hello World\")\nservice.LoggingClient().Errorf(\"Some error occurred: %w\", err)\n
"},{"location":"microservices/application/ApplicationServiceAPI/#registryclient","title":"RegistryClient","text":"RegistryClient() registry.Client
This API returns the Registry Client. Note the registry must be enabled, otherwise this will return nil. See the Registry Client interface for more details. Useful if the service needs to add additional health checks or needs to get the endpoint of another registered service.
"},{"location":"microservices/application/ApplicationServiceAPI/#eventclient","title":"EventClient","text":"EventClient() interfaces.EventClient
This API returns the Event Client. Note if Core Data is not specified in the Clients configuration, this will return nil. See the Event Client interface for more details. Useful for adding, deleting or querying Events.
"},{"location":"microservices/application/ApplicationServiceAPI/#commandclient","title":"CommandClient","text":"CommandClient() interfaces.CommandClient
This API returns the Command Client. Note if Core Command is not specified in the Clients configuration, this will return nil. See the Command Client interface for more details. Useful for issuing commands to devices.
"},{"location":"microservices/application/ApplicationServiceAPI/#notificationclient","title":"NotificationClient","text":"NotificationClient() interfaces.NotificationClient
This API returns the Notification Client. Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Notification Client interface for more details. Useful for sending notifications.
"},{"location":"microservices/application/ApplicationServiceAPI/#subscriptionclient","title":"SubscriptionClient","text":"SubscriptionClient() interfaces.SubscriptionClient
This API returns the Subscription client. Note if Support Notifications is not specified in the Clients configuration, this will return nil. See the Subscription Client interface for more details. Useful for creating notification subscriptions.
"},{"location":"microservices/application/ApplicationServiceAPI/#deviceserviceclient","title":"DeviceServiceClient","text":"DeviceServiceClient() interfaces.DeviceServiceClient
This API returns the Device Service Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Service Client interface for more details. Useful for querying information about a Device Service.
"},{"location":"microservices/application/ApplicationServiceAPI/#deviceprofileclient","title":"DeviceProfileClient","text":"DeviceProfileClient() interfaces.DeviceProfileClient
This API returns the Device Profile Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Profile Client interface for more details. Useful for querying information about a Device Profile such as Device Resource details.
"},{"location":"microservices/application/ApplicationServiceAPI/#deviceclient","title":"DeviceClient","text":"DeviceClient() interfaces.DeviceClient
This API returns the Device Client. Note if Core Metadata is not specified in the Clients configuration, this will return nil. See the Device Client interface for more details. Useful for querying list of devices for a specific Device Service or Device Profile.
"},{"location":"microservices/application/ApplicationServiceAPI/#background-publisher-apis","title":"Background Publisher APIs","text":"The following ApplicationService
APIs allow Application Services to have background publishers. See the Background Publishing advanced topic for more details and example.
AddBackgroundPublisher(capacity int) (BackgroundPublisher, error)
This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the EdgeX MessageBus.
"},{"location":"microservices/application/ApplicationServiceAPI/#addbackgroundpublisherwithtopic-deprecated","title":"AddBackgroundPublisherWithTopic DEPRECATED","text":"AddBackgroundPublisherWithTopic(capacity int, topic string) (BackgroundPublisher, error)
This API adds and returns a BackgroundPublisher which is used to publish asynchronously to the EdgeX MessageBus on the specified topic.
"},{"location":"microservices/application/ApplicationServiceAPI/#buildcontext","title":"BuildContext","text":"BuildContext(correlationId string, contentType string) AppFunctionContext
This API allows external callers that may need a context (e.g. background publishers) to easily create one.
"},{"location":"microservices/application/ApplicationServiceAPI/#other-apis","title":"Other APIs","text":""},{"location":"microservices/application/ApplicationServiceAPI/#addroute-deprecated","title":"AddRoute (Deprecated)","text":"AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error
This API is deprecated in favor of AddCustomRoute()
which has an explicit parameter to indicate whether the route should require authentication.
AddCustomRoute(route string, authentication Authentication, handler echo.HandlerFunc, methods ...string) error
This API adds a custom REST route to the application service's internal webserver. If the route is marked authenticated, it will require an EdgeX JWT when security is enabled. A reference to the ApplicationService is added to the context that is passed to the handler, which can be retrieved using the AppService
key. See Custom REST Endpoints advanced topic for more details and example.
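A minimal sketch of adding an authenticated route; it assumes the SDK's Authenticated constant from the interfaces package and uses a simple echo handler (the route path and response are illustrative):
err := service.AddCustomRoute(\"/api/v3/hello\", interfaces.Authenticated, func(c echo.Context) error {\nreturn c.String(http.StatusOK, \"hello from my app service\")\n}, http.MethodGet)\nif err != nil {\nservice.LoggingClient().Errorf(\"failed to add custom route: %s\", err.Error())\n}\n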
AppContext() context.Context
This API returns the application service context used to detect a cancelled context when the service is terminating. Used by custom app services to appropriately exit any long-running functions.
"},{"location":"microservices/application/ApplicationServiceAPI/#requesttimeout","title":"RequestTimeout","text":"RequestTimeout() time.Duration
This API returns the parsed value for the Service.RequestTimeout
configuration setting. The setting is parsed on start-up so that any error is caught then.
Example - RequestTimeout
Service:\n...\nRequestTimeout: \"60s\"\n...\n
timeout := service.RequestTimeout()\n
"},{"location":"microservices/application/ApplicationServiceAPI/#registercustomtriggerfactory","title":"RegisterCustomTriggerFactory","text":"RegisterCustomTriggerFactory(name string, factory func(TriggerConfig) (Trigger, error)) error
This API registers a trigger factory for a custom trigger to be used. See the Custom Triggers section for more details and example.
"},{"location":"microservices/application/ApplicationServiceAPI/#registercustomstorefactory","title":"RegisterCustomStoreFactory","text":"RegisterCustomStoreFactory(name string, factory func(cfg DatabaseInfo, cred config.Credentials) (StoreClient, error)) error
This API registers a factory to construct a custom store client for the store & forward loop.
"},{"location":"microservices/application/ApplicationServiceAPI/#metricsmanager","title":"MetricsManager","text":"MetricsManager() bootstrapInterfaces.MetricsManager
This API returns the Metrics Manager used to register counter, gauge, gaugeFloat64 or timer metric types from github.com/rcrowley/go-metrics
myCounterMetricName := \"MyCounter\"\nmyCounter := gometrics.NewCounter()\nmyTags := map[string]string{\"Tag1\":\"Value1\"}\napp.service.MetricsManager().Register(myCounterMetricName, myCounter, myTags)
"},{"location":"microservices/application/ApplicationServiceAPI/#publish","title":"Publish","text":"Publish(data any) error
This API pushes data to the EdgeX MessageBus using configured topic and returns an error if the EdgeX MessageBus is disabled in configuration
"},{"location":"microservices/application/ApplicationServiceAPI/#publishwithtopic","title":"PublishWithTopic","text":"PublishWithTopic(topic string, data any) error
This API pushes data to the EdgeX MessageBus using a given topic and returns an error if the EdgeX MessageBus is disabled in configuration
"},{"location":"microservices/application/ApplicationServices/","title":"Application Services Overview","text":"Application Services are a means to get data from EdgeX Foundry to be processed at the edge and/or sent to external systems (be it analytics package, enterprise or on-prem application, cloud systems like Azure IoT, AWS IoT, or Google IoT Core, etc.). Application Services provide the means for data to be prepared (transformed, enriched, filtered, etc.) and groomed (formatted, compressed, encrypted, etc.) before being sent to an endpoint of choice or published back to other Application Service to consume. The export endpoints supported out of the box today include HTTP and MQTT endpoints, but custom endpoints can be implemented along side the existing functionality.
Application Services are based on the idea of a \"Functions Pipeline\". A functions pipeline is a collection of functions that process messages (in this case EdgeX event/reading messages) in the order that you've specified. Triggers seed the first function in the pipeline with the data received by the Application Service. A trigger is something like a message landing in a watched message queue. The most commonly used Trigger is the MessageBus Trigger. See the Triggers section for more details
An Applications Functions Software Development Kit (or App Functions SDK
) is available to help create Application Services. Currently the only SDK supported language is Golang, with the intention that community developed and supported SDKs may come in the future for other languages. The SDK is available as a Golang module to remain operating system (OS) agnostic and to comply with the latest EdgeX guidelines on dependency management.
Any application built on top of the Application Functions SDK is considered an App Service. This SDK is provided to help build Application Services by assembling triggers, pre-existing functions and custom functions of your making into a pipeline.
"},{"location":"microservices/application/ApplicationServices/#standard-functions","title":"Standard Functions","text":"As mentioned, an Application Service is a function pipeline. The SDK provides some standard functions that can be used in a functions pipeline. In the future, additional functions will be provided \"standard\" or in other words provided with the SDK. Additionally, developers can implement their own custom functions and add those to their Application Service functions pipeline.
One of the most common use cases for working with data that comes from the MessageBus is to filter data down to what is relevant for a given application and to format it. To help facilitate this, six primary functions are included in the SDK.
FilterByProfileName
function which will remove events that do or do not match the configured ProfileNames
and execution of the pipeline will cease if no event remains after filtering. FilterByDeviceName
function which will remove events that do or do not match the configured DeviceNames
and execution of the pipeline will cease if no event remains after filtering. FilterBySourceName
function which will remove events that do or do not match the configured SourceNames
and execution of the pipeline will cease if no event remains after filtering. A SourceName
is the name of the source (command or resource) that the Event was created from. FilterByResourceName
which exhibits the same behavior as DeviceNameFilter
except filtering the event's Readings
on ResourceName
instead of DeviceName
. Execution of the pipeline will cease if no readings remain after filtering. XMLTransform
or JSONTransform
. Typically, after filtering and transforming the data as needed, exporting is the last step in a pipeline to ship the data where it needs to go. There are three primary functions included in the SDK to help facilitate this. The first two are the HTTPPost/HTTPPut
functions that will POST/PUT the provided data to a specified endpoint, and the third is an MQTTSecretSend()
function that will publish the provided data to an MQTT Broker as specified in the configuration.
See Built-in Functions section for full list of SDK supplied functions
Note
The App SDK provides much more functionality than just filtering, formatting and exporting. The above simple example is provided to demonstrate how the functions pipeline works. With the ability to write your custom pipeline functions, your custom application services can do whatever your use case demands.
There are three primary triggers that have been included in the SDK that initiate the start of the function pipeline. First is the HTTP Trigger via a POST to the endpoint /api/v3/trigger
with the EdgeX Event data as the body. Second is the EdgeX MessageBus Trigger with connection details as specified in the configuration, and the third is the External MQTT Trigger with connection details as specified in the configuration. See the Triggers section for the full list of available Triggers.
Finally, data may be sent back to the Trigger response by calling .SetResponseData()
on the context. If the trigger is HTTP, then it will be an HTTP Response. If the trigger is EdgeX MessageBus, then it will be published to the configured host and publish topic. If the trigger is External MQTT, then it will be published to the configured publish topic.
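A sketch of a final pipeline function that sets the response data; the XML conversion is assumed to have been done by an earlier function in the pipeline:
func responder(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\nxml, ok := data.(string)\nif !ok {\nreturn false, errors.New(\"responder expects a string\")\n}\nctx.SetResponseData([]byte(xml))\nctx.SetResponseContentType(\"application/xml\")\nreturn true, data\n}\n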
All pipeline functions define a type and a factory function which is used to initialize an instance of the type with the required options. The instances returned by these factory functions give access to their appropriate pipeline function pointers when setting up the function pipeline.
Example
NewFilterFor([]string{\"Device1\", \"Device2\"}).FilterByDeviceName\n
"},{"location":"microservices/application/BuiltIn/#batching","title":"Batching","text":"Included in the SDK is an in-memory batch function that will hold on to your data before continuing the pipeline. There are three functions provided for batching each with their own strategy.
Factory Method Description NewBatchByTime(timeInterval string) This function returns aBatchConfig
instance with time being the strategy that is used for determining when to release the batched data and continue the pipeline. timeInterval
is the duration to wait (i.e. 10s
). The time begins after the first piece of data is received. If no data has been received no data will be sent forward. NewBatchByCount(batchThreshold int) This function returns a BatchConfig
instance with count being the strategy that is used for determining when to release the batched data and continue the pipeline. batchThreshold
is how many events to hold on to (i.e. 25
). The count begins after the first piece of data is received and once the threshold is met, the batched data will continue forward and the counter will be reset. NewBatchByTimeAndCount(timeInterval string, batchThreshold int) This function returns a BatchConfig
instance with a combination of both time and count being the strategy that is used for determining when to release the batched data and continue the pipeline. Whichever occurs first will trigger the data to continue and be reset. Examples
NewBatchByTime(\"10s\").Batch\nNewBatchByCount(10).Batch\nNewBatchByTimeAndCount(\"30s\", 10).Batch\n
Property Description IsEventData The IsEventData
flag, when true, lets this function know that the data being batched is Events
and to un-marshal the data a []Event
prior to returning the batched data. MergeOnSend The MergeOnSend
flag, when true, will merge the [][]byte
data to a single[]byte
prior to sending the data to the next function in the pipeline. Batch with IsEventData
flag set to true.
batch := NewBatchByTimeAndCount(\"30s\", 10)\nbatch.IsEventData = true\n...\nbatch.Batch\n
Batch with MergeOnSend
flag set to true.
batch := NewBatchByTimeAndCount(\"30s\", 10)\nbatch.MergeOnSend = true\n...\nbatch.Batch\n
"},{"location":"microservices/application/BuiltIn/#batch","title":"Batch","text":"Batch
- This pipeline function will apply the selected strategy in your pipeline. By default the batched data returned by this function is [][]byte
. This is because this function doesn't need to know the type of the individual items batched. It simply marshals the items to JSON if the data isn't already a []byte
.
Warning
Keep memory usage in mind as you determine the thresholds for both time and count. The larger they are, the more memory is required, which could lead to performance issues.
"},{"location":"microservices/application/BuiltIn/#compression","title":"Compression","text":"There are two compression types included in the SDK that can be added to your pipeline. These transforms return a []byte
.
Compression
instance that is used to access the compression functions."},{"location":"microservices/application/BuiltIn/#gzip","title":"GZIP","text":"CompressWithGZIP
- This pipeline function receives either a string
,[]byte
, or json.Marshaler
type, GZIP compresses the data, converts result to base64 encoded string, which is returned as a []byte
to the pipeline.
Example
NewCompression().CompressWithGZIP\n
"},{"location":"microservices/application/BuiltIn/#zlib","title":"ZLIB","text":"CompressWithZLIB
- This pipeline function receives either a string
,[]byte
, or json.Marshaler
type, ZLIB compresses the data, converts result to base64 encoded string, which is returned as a []byte
to the pipeline.
Example
NewCompression().CompressWithZLIB\n
"},{"location":"microservices/application/BuiltIn/#conversion","title":"Conversion","text":"There are two conversions included in the SDK that can be added to your pipeline. These transforms return a string
.
Conversion
instance that is used to access the conversion functions."},{"location":"microservices/application/BuiltIn/#json","title":"JSON","text":"TransformToJSON
- This pipeline function receives an dtos.Event
type and converts it to JSON format and returns the JSON string to the pipeline.
Example
NewConversion().TransformToJSON\n
"},{"location":"microservices/application/BuiltIn/#xml","title":"XML","text":"TransformToXML
- This pipeline function receives an dtos.Event
type, converts it to XML format and returns the XML string to the pipeline.
Example
NewConversion().TransformToXML\n
"},{"location":"microservices/application/BuiltIn/#event","title":"Event","text":"This enables the ability to wrap data into an Event/Reading
Factory Method Description NewEventWrapperSimpleReading(profileName string, deviceName string, resourceName string, valueType string) This factory function returns anEventWrapper
instance configured to push a Simple
reading. TheEventWrapper
instance returned is used to access core data functions. NewEventWrapperBinaryReading(profileName string, deviceName string, resourceName string, mediaType string) This factory function returns an EventWrapper
instance configured to push a Binary
reading. The EventWrapper
instance returned is used to access core data functions. NewEventWrapperObjectReading(profileName string, deviceName string, resourceName string) This factory function returns an EventWrapper
instance configured to push an Object
reading. The EventWrapper
instance returned is used to access core data functions."},{"location":"microservices/application/BuiltIn/#wrap-into-event","title":"Wrap Into Event","text":"WrapIntoEvent
- This pipeline function provides the ability to Wrap data in an Event/Reading. The data passed into this function from the pipeline is wrapped in an EdgeX Event with the Event and Reading metadata specified from the factory function options. The function returns the new EdgeX Event with ID populated.
Example
NewEventWrapperSimpleReading(\"my-profile\", \"my-device\", \"my-resource\", \"string\").Wrap\n
"},{"location":"microservices/application/BuiltIn/#data-protection","title":"Data Protection","text":"There are two transforms included in the SDK that can be added to your pipeline for data protection.
"},{"location":"microservices/application/BuiltIn/#aesprotection","title":"AESProtection","text":"Factory Method Description NewAESProtection(secretName string, secretValueKey string) This function returns aEncryption
instance initialized with the passed in secretName
and secretValueKey
It requires a 64-byte key from secrets which is split in half, the first half used for encryption, the second for generating the signature.
Encrypt
: This pipeline function receives either a string
, []byte
, or json.Marshaller
type and encrypts it using AES256 encryption, signs it with a SHA512 hash and returns a []byte
to the pipeline of the following form:
Example
transforms.NewAESProtection(secretName, secretValueKey).Encrypt(ctx, data)\n
Note
The Algorithm
used with app-service-configurable configuration to access this transform is AES256
Reading data protected with this function is a multi step process:
Signing Hash Validation
def hash(cipher_hex, key):\n # Extract the 32 bytes of the Hash signature from the end of the cipher_hex\n extract_hash = cipher_hex[-64:]\n\n # last 32 bytes of the 64 byte key used by the encrypt function (2 hex digits per byte)\n private_key = key[-64:]\n # IV & ciphertext\n content = cipher_hex[:-64]\n\n hash_text = hmac.new(key=bytes.fromhex(private_key), msg=(bytes.fromhex(content) + bytearray(8)), digestmod='SHA512')\n\n # Calculated tag is only the the first 32 bytes of the resulting SHA512\n calculated_hash = hash_text.hexdigest()[:64]\n\n if extract_hash == calculated_hash:\n return \"true\"\n else:\n return \"false\", extract_hash, calculated_hash\n
If the signing hash can be validated, the message is OK to decrypt
Payload Decryption
def decrypt(cipher_hex, key):\n # first 32 bytes of the 64 byte key used by the encrypt function (2 hex digits per byte)\n private_key = bytes.fromhex(key[:64])\n\n # Extract the cipher text (remaining bytes in the middle)\n cipher_text = cipher_hex[32:]\n cipher_text = bytes.fromhex(cipher_text[:-64])\n\n # Extract the 16 bytes of initial vector from the beginning of the data\n iv = bytes.fromhex(cipher_hex[:32])\n\n # Decrypt\n cipher = AES.new(private_key, AES.MODE_CBC, iv)\n\n plain_pad = cipher.decrypt(cipher_text)\n unpadded = Padding.unpad(plain_pad, AES.block_size)\n\n return unpadded.decode('utf-8')\n
"},{"location":"microservices/application/BuiltIn/#export","title":"Export","text":"There are two export functions included in the SDK that can be added to your pipeline.
"},{"location":"microservices/application/BuiltIn/#http-export","title":"HTTP Export","text":"Factory Method Description NewHTTPSender(url string, mimeType string, persistOnError bool) This factory function returns aHTTPSender
instance initialized with the passed in url, mime type and persistOnError values. NewHTTPSenderWithSecretHeader(url string, mimeType string, persistOnError bool, headerName string, secretName string, secretValueKey string) This factory function returns a HTTPSender
instance similar to the above function however will set up the HTTPSender
to add a header to the HTTP request using the headerName
for the field name and the secretName
and secretValueKey
to pull the header field value from the Secret Store. NewHTTPSenderWithOptions(options HTTPSenderOptions) This factory function returns a HTTPSender
using the passed in options
to configure it. // HTTPSenderOptions contains all options available to the sender\ntype HTTPSenderOptions struct {\n// URL of destination\nURL string\n// MimeType to send to destination\nMimeType string\n// PersistOnError enables use of store & forward loop if true\nPersistOnError bool\n// HTTPHeaderName to use for passing configured secret\nHTTPHeaderName string\n// SecretName to search for configured secret\nSecretName string\n// SecretValueKey is the key for configured secret data\nSecretValueKey string\n// URLFormatter specifies custom formatting behavior to be applied to configured URL.\n// If nothing specified, default behavior is to attempt to replace placeholders in the\n// form '{some-context-key}' with the values found in the context storage.\nURLFormatter StringValuesFormatter\n// ContinueOnSendError allows execution of subsequent chained senders after errors if true\nContinueOnSendError bool\n// ReturnInputData enables chaining multiple HTTP senders if true\nReturnInputData bool\n}\n
"},{"location":"microservices/application/BuiltIn/#http-post","title":"HTTP POST","text":"HTTPPost
- This pipeline function receives either a string
, []byte
, or json.Marshaler
type from the previous function in the pipeline and posts it to the configured endpoint and returns the HTTP response. If no previous function exists, then the event that triggered the pipeline, marshaled to json, will be used. If the post fails and persistOnError=true
and Store and Forward
is enabled, the data will be stored for later retry. See Store and Forward for more details. If ReturnInputData=true
the function will return the data that it received instead of the HTTP response. This allows the following function in the pipeline to be another HTTP Export which receives the same data but is configured to send to a different endpoint. When chaining for multiple HTTP Exports you need to decide how to handle errors. Do you want to stop execution of the pipeline or continue so that the next HTTP Export function can attempt to export to its endpoint. This is where ContinueOnSendError
comes in. If set to true
the error is logged and the function returns the received data for the next function to use. ContinueOnSendError=true
can only be used when ReturnInputData=true
and cannot be use when PersistOnError=true
.
Example
POST NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPost
PUT NewHTTPSender(\"https://myendpoint.com\",\"application/json\",false).HTTPPut
POST with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPost
PUT with secure header NewHTTPSenderWithSecretHeader(\"https://myendpoint.com\",\"application/json\",false,\"Authentication\",\"/jwt\",\"AuthToken\").HTTPPPut
"},{"location":"microservices/application/BuiltIn/#http-put","title":"HTTP PUT","text":"HTTPPut
- This pipeline function operates the same as HTTPPost
but uses the PUT
method rather than POST
.
The configured URL is dynamically formatted prior to the POST/PUT request. The default formatter (used if URLFormatter
is nil) simply replaces any placeholder text, {key-name}
, in the configured URL with matching values from the new Context Storage
. An error will occur if a specified placeholder does not exist in the Context Storage
. See the Context Storage documentation for more details on seeded values and storing your own values.
The URLFormatter
option allows you to override the default formatter with your own custom URL formatting scheme.
Example
Export the Events to different endpoints base on their device name Url=\"http://myhost.com/edgex-events/{devicename}\"
Example
httpRequestHeaders := map[string]string{ \"Connection\": \"keep-alive\", \"From\": \"user@example.com\" } SetHttpRequestHeaders(httpRequestHeaders)
MQTTSecretSender
instance initialized with the options specified in the MQTTSecretConfig
and persistOnError
. NewMQTTSecretSenderWithTopicFormatter(mqttConfig MQTTSecretConfig, persistOnError bool, topicFormatter StringValuesFormatter) This factory function returns a MQTTSecretSender
instance initialized with the options specified in the MQTTSecretConfig
, persistOnError
and topicFormatter
. See Topic Formatting below for more details. type MQTTSecretConfig struct {\n// BrokerAddress should be set to the complete broker address i.e. mqtts://mosquitto:8883/mybroker\nBrokerAddress string\n// ClientId to connect with the broker with.\nClientId string\n// The name of the secret in secret provider to retrieve your secrets\nSecretName string\n// AutoReconnect indicated whether or not to retry connection if disconnected\nAutoReconnect bool\n// KeepAlive is the interval duration between client sending keepalive ping to broker\nKeepAlive string\n// ConnectTimeout is the duration for timing out on connecting to the broker\nConnectTimeout string\n// Topic that you wish to publish to\nTopic string\n// QoS for MQTT Connection\nQoS byte\n// Retain setting for MQTT Connection\nRetain bool\n// SkipCertVerify\nSkipCertVerify bool\n// AuthMode indicates what to use when connecting to the broker. \n// Options are \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\".\n// If a CA Cert exists in the SecretName data then it will be used for \n// all modes except \"none\". \nAuthMode string\n}\n
Secrets in the Secret Store may be located at any SecretName, however they must have some or all of the following keys in the secret data:
username
- username to connect to the brokerpassword
- password used to connect to the brokerclientkey
- client private key in PEM formatclientcert
- client cert in PEM formatcacert
- ca cert in PEM formatThe AuthMode
setting you choose depends on what secret values above are used. For example, if \"none\" is specified as auth mode all keys will be ignored. Similarly, if AuthMode
is set to \"clientcert\" username and password will be ignored.
The configured Topic is dynamically formatted prior to publishing . The default formatter (used if topicFormatter
is nil) simply replaces any placeholder text, {key-name}
, in the configured Topic
with matching values from the new Context Storage
. An error will occur if a specified placeholder does not exist in the Context Storage
. See the Context Storage documentation for more details on seeded values and storing your own values.
The topicFormatter
option allows you to override the default formatter with your own custom topic formatting scheme.
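A sketch of configuring MQTT export in code; it assumes the sender's pipeline function is exposed as MQTTSend and that a secret named \"mqtt\" holds the broker credentials (all values shown are illustrative):
mqttConfig := transforms.MQTTSecretConfig{\nBrokerAddress: \"mqtts://mosquitto:8883\",\nClientId: \"app-myservice\",\nSecretName: \"mqtt\",\nTopic: \"edgex/export/{devicename}\",\nAuthMode: \"usernamepassword\",\nAutoReconnect: true,\n}\nerr := service.SetDefaultFunctionsPipeline(\ntransforms.NewMQTTSecretSender(mqttConfig, false).MQTTSend,\n)\n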
There are four basic types of filtering included in the SDK to add to your pipeline. There is also an option to Filter Out
specific items. These provided filter functions return a type of dtos.Event
. If filtering results in no remaining data, the pipeline execution for that pass is terminated. If no values are provided for filtering, then data flows through unfiltered.
Filter
instance initialized with the passed in filter values with FilterOut
set to false
. This Filter
instance is used to access the following filter functions that will operate using the specified filter values. NewFilterOut([]string filterValues) This factory function returns a Filter
instance initialized with the passed in filter values with FilterOut
set to true
. This Filter
instance is used to access the following filter functions that will operate using the specified filter values. type Filter struct {\n// Holds the values to be filtered\nFilterValues []string\n// Determines if items in FilterValues should be filtered out. If set to true all items found in the filter will be removed. If set to false all items found in the filter will be returned. If FilterValues is empty then all items will be returned.\nFilterOut bool\n}\n
Note
Either strings or regular expressions are accepted as filter values.
"},{"location":"microservices/application/BuiltIn/#by-profile-name","title":"By Profile Name","text":"FilterByProfileName
- This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified profile names.
Example
NewFilterFor([]string{\"Profile1\", \"Profile2\"}).FilterByProfileName\n\nNewFilterFor([]string{\"Profile[0-9]+\"}).FilterByProfileName\n
"},{"location":"microservices/application/BuiltIn/#by-device-name","title":"By Device Name","text":"FilterByDeviceName
- This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified device names.
Example
NewFilterFor([]string{\"Device1\", \"Device2\"}).FilterByDeviceName\n\nNewFilterFor([]string{\"(Device)[0-9]+\"}).FilterByDeviceName\n
"},{"location":"microservices/application/BuiltIn/#by-source-name","title":"By Source Name","text":"FilterBySourceName
- This pipeline function will filter the event data down to Events that either have (For) or don't have (Out) the specified source names. Source name is either the resource name
or command name
responsible for the Event creation.
Example
NewFilterFor([]string{\"Source1\", \"Source2\"}).FilterBySourceName\n\nNewFilterFor([]string{\"Source[0-9]+\"}).FilterBySourceName\n
"},{"location":"microservices/application/BuiltIn/#by-resource-name","title":"By Resource Name","text":"FilterByResourceName
- This pipeline function will filter the Event's reading data down to Readings that either have (For) or don't have (Out) the specified resource names. If the result of filtering is zero Readings remaining, the function terminates pipeline execution.
Example
NewFilterFor([]string{\"Resource1\", \"Resource2\"}).FilterByResourceName\n\nNewFilterFor([]string{\"Resource[0-9]+\"}).FilterByResourceName\n
"},{"location":"microservices/application/BuiltIn/#json-logic","title":"JSON Logic","text":"Factory Method Description NewJSONLogic(rule string) This factory function returns a JSONLogic
instance initialized with the passed in JSON rule. The rule passed in should be a JSON string conforming to the specification here: http://jsonlogic.com/operations.html."},{"location":"microservices/application/BuiltIn/#evaluate","title":"Evaluate","text":"Evaluate
- This is the pipeline function that will be used in the pipeline to apply the JSON rule to data coming in on the pipeline. If the condition of your rule is met, then the pipeline will continue and the data will continue to flow to the next function in the pipeline. If the condition of your rule is NOT met, then pipeline execution stops.
Example
NewJSONLogic(\"{ \\\"in\\\" : [{ \\\"var\\\" : \\\"device\\\" }, \n [\\\"Random-Integer-Device\\\",\\\"Random-Float-Device\\\"] ] }\").Evaluate\n
Note
Only operations that return true or false are supported. See http://jsonlogic.com/operations.html# for the complete list of operations paying attention to return values. Any operator that returns manipulated data is currently not supported. For more advanced scenarios checkout LF Edge eKuiper.
Tip
Leverage http://jsonlogic.com/play.html to get your rule right before implementing in code. JSON can be a bit tricky to get right in code with all the escaped double quotes.
"},{"location":"microservices/application/BuiltIn/#response-data","title":"Response Data","text":"There is one response data function included in the SDK that can be added to your pipeline.
Factory Method Description NewResponseData() This factory function returns aResponseData
instance that is used to access the following pipeline function below."},{"location":"microservices/application/BuiltIn/#content-type","title":"Content Type","text":"ResponseContentType
- This property is used to set the content-type of the response.
Example
responseData := NewResponseData()\nresponseData.ResponseContentType = \"application/json\"\n
"},{"location":"microservices/application/BuiltIn/#set-response-data","title":"Set Response Data","text":"SetResponseData
- This pipeline function receives either a string
,[]byte
, or json.Marshaler
type from the previous function in the pipeline and sets it as the response data that the pipeline returns to the configured trigger. If configured to use theEdgeXMessageBus
trigger, the data will be published back to the EdgeX MessageBus as determined by the configuration. Similar, if configured to use theExternalMQTT
trigger, the data will be published back to the external MQTT Broker as determined by the configuration. If configured to use HTTP
trigger the data is returned as the HTTP response.
Note
Calling SetResponseData()
and SetResponseContentType()
from the Context API in a custom function can be used in place of adding this function to your pipeline.
There is one Tags transform included in the SDK that can be added to your pipeline.
Factory Method Description NewTags(tagsmap[string]interface{}
) Tags This factory function returns a Tags
instance initialized with the passed in collection of generic tag key/value pairs. This Tags
instance is used to access the following Tags function that will use the specified collection of tag key/value pairs. This allows for generic complex types for the Tag values."},{"location":"microservices/application/BuiltIn/#add-tags","title":"Add Tags","text":"AddTags
- This pipeline function receives an Edgex Event
type and adds the collection of specified tags to the Event's Tags
collection.
Example
var myTags = map[string]interface{}{\n\"MyValue\" : 123,\n\"GatewayId\": \"HoustonStore000123\",\n\"Coordinates\": map[string]float32 {\n\"Latitude\": 29.630771,\n\"Longitude\": -95.377603,\n},\n}\n\nNewTags(myTags).AddTags\n
"},{"location":"microservices/application/BuiltIn/#metricsprocessor","title":"MetricsProcessor","text":"MetricsProcessor
contains configuration and functions for processing the new dtos.Metrics
type.
`MetricsProcessor
instance initialized with the passed in collection of additionalTags
(name/value pairs). This MetricsProcessor
instance is used to access the following functions that will process a dtos.Metric instance. The additionalTags
are added as metric tags to the processed data. An error will be returned if any of the additionalTags
have an invalid name. Currently must be non-blank."},{"location":"microservices/application/BuiltIn/#tolineprotocol","title":"ToLineProtocol","text":"ToLineProtocol
- This pipeline function will transform the received dtos.Metric
to a Line Protocol
formatted string. See https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/ for details on the Line Protocol
syntax.
Note
When ToLineProtocol
is the first function in the functions pipeline, the TargetType
for the service must be set to &dtos.Metric{}
. See Target Type section for details on setting the service's TargetType
. The Trigger configuration must also be set so SubscribeTopics=\"edgex/telemetry/#\"
in order to receive the dtos.Metric
data from other services. See the new App Service Configurable metrics-influxdb
profile for an example.
Example
mp, err := NewMetricsProcessor(map[string]string{\"MyTag\":\"MyTagValue\"})\nif err != nil {\n... handle error\n}\n...\nmp.ToLineProtocol\n
Warning
Any service using the MetricsProcessor
needs to disable its own Telemetry reporting to avoid circular data generation from processing. To do this, set the service's Writable.Telemetry
configuration to:
[Writable.Telemetry]\nInterval = \"0s\" # Don't report any metrics as that would be cyclic processing.\n
"},{"location":"microservices/application/ErrorHandling/","title":"Pipeline Function Error Handling","text":"Each transform returns a true
or false
as part of the return signature. This is called the continuePipeline
flag and indicates whether the SDK should continue calling successive transforms in the pipeline.
return false, nil
will stop the pipeline and stop processing the event. This is useful, for example, when filtering on values and nothing matches the criteria you've filtered on. return false, error
, will stop the pipeline as well and the SDK will log the error you have returned. return true, nil
tells the SDK to continue, and will call the next function in the pipeline with your result. The SDK will return control back to main when receiving a SIGTERM/SIGINT event to allow for custom clean up.
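To make these return combinations concrete, here is a rough sketch of a custom pipeline function, assuming the usual func(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) signature and the dtos.Event type used elsewhere in this documentation; the function name and filtering rule are illustrative only.
Example
func CheckEvent(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\nif data == nil {\n// Nothing to process: stop the pipeline without logging an error.\nreturn false, nil\n}\n\nevent, ok := data.(dtos.Event)\nif !ok {\n// Unexpected type: stop the pipeline and return an error for the SDK to log.\nreturn false, errors.New(\"CheckEvent: type received is not a dtos.Event\")\n}\n\nif len(event.Readings) == 0 {\n// Nothing matched the criteria: stop the pipeline with no error.\nreturn false, nil\n}\n\n// Continue the pipeline, passing the Event to the next function.\nreturn true, event\n}\n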
"},{"location":"microservices/application/GeneralAppServiceConfig/","title":"Application Service Configuration","text":"Similar to other EdgeX services, configuration is first determined by the configuration.yaml
file in the /res
folder. Once loaded, any environment overrides are applied. If -cp
is passed to the application on startup, the SDK will leverage the specific configuration provider (i.e. Consul) to push the configuration into the provider and monitor Writable
configuration from there. You will find the configuration under the edgex/appservices/2.0/
key in the provider (i.e. Consul). On restart the service will pull the configuration from the provider and apply any environment overrides.
This section describes the configuration elements that are unique to Application Services.
Please first refer to the general Configuration documentation for configuration properties common across all EdgeX services.
Note
*
indicates the configuration value can be changed on the fly if using a configuration provider (like Consul). **
indicates the configuration value can be changed but the service must be restarted.
The tabs below provide additional entries in the Writable section which are applicable to Application Services.
Writable.StoreAndForward | Writable.Pipeline | Writable.InsecureSecrets | Writable.Telemetry This section configures the Store and Forward capability. Please refer to Store and Forward documentation for more details.
Configuration Default Value Description Enabled false* Indicates whether the Store and Forward capability is enabled or disabled RetryInterval \"5m\"* Indicates the duration of time to wait before retries, aka Forward MaxRetryCount 10* Indicates the maximum number of retries of failed data. The failed data is removed after the maximum retries have been exceeded. A value of 0
indicates endless retries. The section configures the Configurable Function Pipeline which is used only by App Service Configurable. Please refer to App Service Configurable section for more details
This section defines Insecure Secrets that are used when running in non-secure mode, i.e. when Vault isn't available. This is a dynamic map of configuration, so it can be empty if no secrets are used or can have as many or as few user-defined secrets as needed. It simulates a Secret Store in non-secure mode. Below are a few examples that are needed if using the indicated capabilities.
Configuration Default Value Description `' --- This section defines a block of insecure secrets for some service specific need SecretName<name>
Indicates the location in the simulated Secret Store where the secret resides. SecretData --- This section is the collection of secret data. key
value
Secret data key value pairs Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics Service metrics that the application service collects. Boolean value indicates if reporting of the metric is enabled. Custom metrics are also included here for custom application services that define custom metrics Metrics.MessagesReceived false Enable/disable reporting of the built-in MessagesReceived metric Metrics.InvalidMessagesReceived false Enable/disable reporting of the built-in InvalidMessagesReceived metric Metrics.HttpExportSize false Enable/disable reporting of the built-in HttpExportSize metric Metrics.MqttExportSize false Enable/disable reporting of the built-in MqttExportSize metric Metrics.PipelineMessagesProcessed false Enable/disable reporting of the built-in PipelineMessagesProcessed metric Metrics.PipelineProcessingErrors false Enable/disable reporting of the built-in PipelineProcessingErrors metric Metrics.PipelineMessageProcessingTime false Enable/disable reporting of the built-in PipelineMessageProcessingTime metric Metrics.<CustomMetric>
false (Service Specific) Enable/disable reporting of custom application service's custom metric. See Custom Application Service Metrics for more detail Tags <empty>
List of arbitrary service level tags to be included with every metric that is reported, i.e. Gateway=\"my-iot-gateway\"
"},{"location":"microservices/application/GeneralAppServiceConfig/#not-writable","title":"Not Writable","text":"The tabs below provide additional configuration which are applicable to Application Services that require the service to be restarted after value(s) are changed.
HttpServer | Clients | Trigger | Trigger ExternalMqtt This section contains the configuration for the internal Webserver. Only needed if configuring the Webserver for HTTPS
certificate data
to use for HTTPS HTTPSKeyName blank** Indicates the key name in the HTTPS secret data that contains the key data
to use for HTTPS This service specific section defines the connection information for the EdgeX Clients and is the same as that used by all EdgeX services; the only difference is which clients are needed. Please refer to the Note about Clients section for more details.
This section defines the Trigger
for incoming data. See the Triggers documentation for more details on the inner working of triggers.
Trigger
binding type. valid values are edgex-messagebus
, external-mqtt
, http
, or <custom>
SubscribeTopics events/#** Topic(s) to subscribe to. This is a comma separated list of topics. Supports filtering by subscribe topics. Only set when using edgex-messagebus
or external-mqtt
. See EdgeXMessageBus Trigger for more details. PublishTopic blank** Indicates the topic in which to publish the function pipeline response data, if any. Supports dynamic topic places holders. Only set when using edgex-messagebus
or external-mqtt
. See EdgeXMessageBus Trigger for more details. This section defines the external MQTT Broker connect information. Only used for external-mqtt
trigger binding type
Note
external-mqtt
is not the default Trigger type, so there are no default values for ExternalMqtt
settings beyond those that the Go compiler gives to the empty struct. Some of those default values are not valid and must be specified, i.e. Authmode
tcp://localhost:1883
ClientId blank** ClientId to connect to the broker with ConnectTimeout blank** Time duration indicating how long to wait before timing out broker connection, i.e \"30s\" AutoReconnect false** Indicates whether or not to retry connection if disconnected KeepAlive 0** Seconds between client ping when no active data flowing to avoid client being disconnected. Must be greater then 2 QOS 0** Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain false** Retain setting for MQTT Connection SkipCertVerify false** Indicates if the certificate verification should be skipped SecretPath blank** Name of the path in secret provider to retrieve your secrets. Must be non-blank. AuthMode blank** Indicates what to use when connecting to the broker. Must be one of \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". If a CA Cert exists in the SecretPath then it will be used for all modes except \"none\". RetryDuration 600 Indicates how long (in seconds) to wait timing out on the MQTT client creation RetryInterval 5 Indicates the time (in seconds) that will be waited between attempts to create MQTT client Note
Authmode=cacert
is only needed when client authentication (e.g. usernamepassword
) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert.
[ApplicationSettings]
- Is used for custom application settings and is accessed via the ApplicationSettings() API. The ApplicationSettings API returns a map[string] string
containing the contents on the ApplicationSetting section of the configuration.yaml
file.
ApplicationSettings:\nApplicationName: \"My Application Service\"\n
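As a rough sketch, a custom application service could read the setting above through the ApplicationSettings() API as shown below; service and lc are assumed to be the interfaces.ApplicationService instance and its logging client created elsewhere in the service's main code.
Example
settings := service.ApplicationSettings()\nif settings == nil {\nlc.Error(\"No ApplicationSettings section found in configuration\")\nos.Exit(-1)\n}\n\nappName, ok := settings[\"ApplicationName\"]\nif !ok {\nlc.Error(\"ApplicationName not found in ApplicationSettings\")\nos.Exit(-1)\n}\n\nlc.Infof(\"Starting %s\", appName)\n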
Custom Application Services can now define their own custom structured configuration section in the configuration.yaml
file. Any additional sections in the configuration file are ignored by the SDK when it parses the file for the SDK defined sections. See the Custom Configuration section of the SDK documentation for more details.
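A rough sketch of such a custom section is shown below, assuming the SDK's LoadCustomConfig API and a hypothetical AppCustom section name; see the Custom Configuration documentation referenced above for the authoritative pattern.
Example
type AppCustomConfig struct {\nSomeValue int\nOtherValue string\n}\n\ntype ServiceConfig struct {\nAppCustom AppCustomConfig\n}\n\n// UpdateFromRaw allows the SDK to copy the parsed raw configuration into this struct.\nfunc (c *ServiceConfig) UpdateFromRaw(rawConfig interface{}) bool {\nconfiguration, ok := rawConfig.(*ServiceConfig)\nif !ok {\nreturn false\n}\n*c = *configuration\nreturn true\n}\n\n// In the service's main code: load the custom \"AppCustom\" section from configuration.yaml.\nserviceConfig := &ServiceConfig{}\nif err := service.LoadCustomConfig(serviceConfig, \"AppCustom\"); err != nil {\nlc.Errorf(\"unable to load custom configuration: %s\", err.Error())\nos.Exit(-1)\n}\n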
There are two flavors of Applications Service which are configurable
and custom
. This section will describe how and when each flavor should be used.
The App Functions SDK
has a full suite of built-in features that are accessible via configuration when using the App Service Configurable
service. This service is built using the App Functions SDK
and uses configuration profiles to define separate distinct instances of the service. The service comes with a few built-in profiles for common use cases, but custom profiles can also be used. If your use case needs can be met with the built-in functionality then the App Service Configurable
service is right for you. See the App Service Configurable section for more details.
Custom Application Services are needed when use case needs cannot be met with just the built-in functionality. This is when you must develop your own custom Application Service using the App Functions SDK
. Typically this is triggered by the use case needing a custom Pipeline Function
. See the App Functions SDK section for all the details on the features your custom Application Service can take advantage of.
To help accelerate the creation of your custom Application Service the App Functions SDK
contains a template for new custom Application Services. This template has TODO's in the code and a README that walk you through the creation of your new custom Application Service. See the template README for more details.
Triggers
are common to both Configurable
and Custom
Application Services. They are the next logical area to get familiar with. See the Triggers section for more details.
Finally service configuration is very important to understand for both Configurable
and Custom
Application Services. The service configuration documentation is broken into two parts. First is the configuration that is common to all EdgeX services and the second is the configuration that is specific to Application Services. See the Common Configuration and Application Service Configuration sections for more details.
Triggers determine how the App Functions Pipeline begins execution. The trigger is determined by the [Trigger]
configuration section in the configuration.yaml
file.
Edgex 2.0
For Edgex 2.0 the [Binding]
configuration section has been renamed to [Trigger]
. The [MessageBus]
section has been renamed to EdgexMessageBus
and moved under the [Trigger]
section. The [MqttBroker]
section has been renamed to ExternalMqtt
and moved under the [Trigger]
section.
There are 4 types of Triggers
supported in the App Functions SDK which are discussed in this document
An EdgeX MessageBus trigger will execute the pipeline every time data is received from the configured Edgex MessageBus SubscribeTopics
. The EdgeX MessageBus is the central message bus internal to EdgeX and has a specific message envelope that wraps all data published to this message bus.
There currently are four implementations of the EdgeX MessageBus available to be used. Two of these are available out of the box: Redis Pub/Sub
(default) and MQTT
. Additionally NATS (both core and JetStream) options can be made available with the build flag mentioned above. The implementation type is selected via the [Trigger.EdgexMessageBus]
configuration described below.
Example Trigger Configuration
Trigger:\nType: \"edgex-messagebus\"\n
In the above example Type
is set to edgex-messagebus
trigger type so data will be received from the EdgeX MessageBus and may be Published to the EdgeX MessageBus, if configured.
The SubscribeTopics configuration specifies the comma separated list of topics the service will subscribe to.
Note
The default SubscribeTopics
configuration is set in the App Services Common Trigger Configuration.
The PublishTopic configuration specifies the topic published to when the ResponseData
is set via the ctx.SetResponseData([]byte outputData)
API. Nothing will be published if the PublishTopic is not set or the ResponseData
is never set
Note
The default PublishTopic
configuration is set in the App Services Common Trigger Configuration.
See the EdgeX MessageBus section for complete details.
Edgex 3.0
For Edgex 3.0 the MessageBus configuration settings are set in the Common MessageBus Configuration.
"},{"location":"microservices/application/Triggers/#filter-by-topics","title":"Filter By Topics","text":"App services now have the capability to filter by EdgeX MessageBus topics rather than using Filter functions in the functions pipeline. Filtering by topic is more efficient since the App Service never receives the data off the MessageBus. Core Data and/or Device Services now publish to multi-level topics that include the profilename
, devicename
and sourcename
. Sources are the commandname
or resourcename
that generated the Event. The publish topics now look like this:
# From Core Data\nedgex/events/core/<device-service>/<profile-name>/<device-name>/<source-name>\n\n# From Device Services\nedgex/events/device/<device-service>/<profile-name>/<device-name>/<source-name>\n
This, combined with the App Services' capability to have multiple subscriptions, allows for multiple filters by subscriptions. The SubscribeTopics
setting takes a comma separated list of subscribe topics.
Here are a few examples of how to configure the SubscribeTopics
setting under the Trigger.EdgexMessageBus.SubscribeHost
section to filter by subscriptions using the profile
, device
and source
names from the SNMP Device Service file here:
Trigger:\nSubscribeTopics: \"events/#\"\n
Trigger:\nSubscribeTopics: \"events/+/+/trendnet/#\"\n
Trigger:\nSubscribeTopics: \"edgex/events/+/+/+/trendnet01/#\"\n
Trigger:\nSubscribeTopics: \"edgex/events/+/+/+/trendnet01/#, edgex/events/+/+/+/trendnet02/#\"\n
Trigger:\nSubscribeTopics: \"edgex/events/+/+/+/+/Uptime, edgex/events/+/+/+/+/MacAddress\"\n
"},{"location":"microservices/application/Triggers/#external-mqtt-trigger","title":"External MQTT Trigger","text":"An External MQTT trigger will execute the pipeline every time data is received from an external MQTT broker on the configured SubscribeTopics
.
Note
The data received from the external MQTT broker is not wrapped with any metadata known to EdgeX. The data is handled as JSON or CBOR. The data is assumed to be JSON unless the first byte in the data is not a {
or a [
, in which case it is then assumed to be CBOR.
Note
The data received, encoded as JSON or CBOR, must match the TargetType
defined by your application service. The default TargetType
is an Edgex Event
. See TargetType for more details.
Example Trigger Configuration
Trigger:\nType: \"external-mqtt\"\nSubscribeTopics: \"external/#\"\nPublishTopic: \"\"\n...\n
The Type
is set to external-mqtt
. To receive data from the external MQTT Broker you must set your SubscribeTopics
to the appropriate topic(s) that the external publisher is using. You may also designate a PublishTopic
if you wish to publish data back to the external MQTT Broker. The Context function ctx.SetResponseData([]byte outputData)
stores the data to send back to the external MQTT Broker on the topic specified by the PublishTopic
setting.
the PublishTopic
can have placeholders. See Publish Topic Placeholders section below for more details
The other piece of configuration required is the MQTT Broker connection settings:
Trigger:\n...\nExternalMqtt:\nUrl: \"tls://test.mosquitto.org:8884\"\nClientId: \"app-external-mqtt-trigger\"\nQos: 0\nKeepAlive: 10\nRetained: false\nAutoReconnect: true\nConnectTimeout: \"30s\"\nSkipCertVerify: true\nAuthMode: \"clientcert\"\nSecretName: \"external-mqtt\"\nRetryDuration: 600\nRetryInterval: 5\n
"},{"location":"microservices/application/Triggers/#http-trigger","title":"HTTP Trigger","text":"Designating an HTTP trigger will allow the pipeline to be triggered by a RESTful POST
call to http://[host]:[port]/api/v3/trigger/
.
Example Trigger Configuration
Trigger:\nType: \"http\"
The Type=
is set to http
. This will enable listening to the api/v3/trigger/
endpoint. No other configuration is required. The Context function ctx.SetResponseData([]byte outputData)
stores the data to send back as the response to the requestor that originally triggered the HTTP Request.
Note
The HTTP trigger uses the content-type
from the HTTP Header to determine if the data is JSON or CBOR encoded and the optional X-Correlation-ID
to set the correlation ID for the request.
Note
The data received, encoded as JSON or CBOR, must match the TargetType
defined by your application service. The default TargetType
is an Edgex Event
. See TargetType for more details.
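As a rough illustration, the snippet below POSTs a JSON-encoded payload to the HTTP trigger endpoint using Go's standard library; the host, port and payload file are placeholder assumptions for a locally running application service.
Example
payload, err := os.ReadFile(\"event.json\") // JSON that matches the service's TargetType\nif err != nil {\nlog.Fatal(err)\n}\n\nresp, err := http.Post(\"http://localhost:59700/api/v3/trigger\", \"application/json\", bytes.NewReader(payload))\nif err != nil {\nlog.Fatal(err)\n}\ndefer resp.Body.Close()\n\nbody, _ := io.ReadAll(resp.Body)\nlog.Printf(\"status=%s response=%s\", resp.Status, string(body))\n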
It is also possible to define your own trigger and register it with the SDK. You configure the trigger by registering a factory function to build it, along with a name to use in the config file. These triggers can be registered with:
service.RegisterCustomTriggerFactory(\"my-trigger-name\", myFactoryFunc)
Note
You can NOT override trigger names built into the SDK ( \"edgex-messagebus\", \"external-mqtt\", or \"http\") for a custom trigger.
The trigger factory function is bound to an instance of a trigger configuration struct that is provided by the SDK:
type TriggerConfig struct {\nLogger logger.LoggingClient\nContextBuilder TriggerContextBuilder\nMessageReceived TriggerMessageHandler\nConfigLoader TriggerConfigLoader\n}\n
This type carries a pointer to the internal edgex logger, along with three functions:
ContextBuilder
builds an interfaces.AppFunctionContext
from a message envelope you construct.MessageReceived
exposes a function that sends your message envelope and context to any pipelines configured in the EdgeX service. It also takes a function that will be run to process the response for each successful pipeline.Note
The context passed in to Received
will be cloned for each pipeline configured to run. If a nil context is passed a new one will be initialized from the message.
ConfigLoader
exposes a function that loads your custom config struct. By default this is done from the primary EdgeX configuration pipeline, and only loads root-level elements.If you need to override these functions it can be done in the factory function registered with the service.
The custom trigger constructed here will then need to implement the trigger interface so that the SDK can invoke it:
type Trigger interface {\nInitialize(wg *sync.WaitGroup, ctx context.Context, background <-chan BackgroundMessage) (bootstrap.Deferred, error)\n}\n\ntype BackgroundMessage interface {\nMessage() types.MessageEnvelope\nTopic() string\n}\n
This leaves a lot of flexibility for how you want the trigger to behave (for example you could write a trigger to watch for file changes, or run on a timer). Below is a sample implementation of a trigger that reads lines from os.Stdin and passes the captured string through the edgex function pipeline. In this case the target type for the service is set to &[]byte{}
.
type stdinTrigger struct{\ntc appsdk.TriggerConfig\n}\n\nfunc (t *stdinTrigger) Initialize(wg *sync.WaitGroup, ctx context.Context, _ <-chan interfaces.BackgroundMessage) (bootstrap.Deferred, error) {\nmsgs := make(chan []byte)\n\nreceiveMessage := true\n\nresponseHandler := func(ctx AppFunctionContext, pipeline *FunctionPipeline) {\n// do stuff\n}\n\ngo func() {\nfmt.Print(\"> \")\nrdr := bufio.NewReader(os.Stdin)\nfor receiveMessage {\ns, err := rdr.ReadString('\\n')\ns = strings.TrimRight(s, \"\\n\")\n\nif err != nil {\nt.tc.Logger.Error(err.Error())\ncontinue\n}\n\nmsgs <- []byte(s)\n}\n}()\n\ngo func() {\nfor receiveMessage {\nselect {\ncase <-ctx.Done():\nreceiveMessage = false\n\ncase m := <-msgs:\ngo func() {\nenv := types.MessageEnvelope{\nPayload: m,\n}\n\nctx := t.tc.ContextBuilder(env)\n\nerr := t.tc.MessageReceived(ctx, env, responseHandler)\n\nif err != nil {\nt.tc.Logger.Error(err.Error())\n}\n}()\n}\n}\n}()\n\nreturn func() { receiveMessage = false }, nil\n}\n
This trigger can then be registered by calling:
appService.RegisterCustomTriggerFactory(\"custom-stdin\", func(config appsdk.TriggerConfig) (appsdk.Trigger, error) {\nreturn &stdinTrigger{\ntc: config,\n}, nil\n})\n
"},{"location":"microservices/application/Triggers/#type-configuration_3","title":"Type Configuration","text":"Example Trigger Configuration
Trigger:\nType: \"custom-stdin\"
Now the custom trigger is configured to be used rather than one of the built-in triggers.
A complete working example can be found here
"},{"location":"microservices/application/Triggers/#publish-topic-placeholders","title":"Publish Topic Placeholders","text":"Both the EdgeX MessageBus
and the External MQTT
triggers support the new Publish Topic Placeholders capability. The configured PublishTopic
for either of these triggers can contain placeholders for runtime replacements. The placeholders are replaced with values from the new Context Storage
whose key matches the placeholder name. Function pipelines can add values to the Context Storage
which can then be used as replacement values in the publish topic. If an EdgeX Event is received by the configured trigger the Event's profilename
, devicename
and sourcename
will be seeded into the Context Storage
. See the Context Storage documentation for more details.
The Publish Topic Placeholders format is a simple {<key-name>}
that can appear anywhere in the topic multiple times. An error will occur if a specified placeholder does not exist in the Context Storage
.
PublishTopic: \"data/{profilename}/{devicename}/{custom}\"\n
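For example, a custom pipeline function could seed the {custom} placeholder used above by adding a value to the Context Storage with the context AddValue API; this is a minimal sketch and the key only has to match the placeholder name.
Example
func TagForExport(ctx interfaces.AppFunctionContext, data interface{}) (bool, interface{}) {\n// Seeds the {custom} placeholder in the configured PublishTopic.\n// profilename and devicename are seeded automatically when an EdgeX Event is received.\nctx.AddValue(\"custom\", \"floor1\")\nreturn true, data\n}\n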
"},{"location":"microservices/application/Triggers/#received-topic","title":"Received Topic","text":"The topic the data was received on for EdgeX MessageBus
and the External MQTT
triggers is now stored in the new Context Storage
with the key receivedtopic
. This makes it available to pipeline functions via the Context Storage
.
The migration of any Application Service's configuration starts with migrating configuration common to all EdgeX services. See the V3 Migration of Common Configuration section for details including the change from TOML format to YAML format for the configuration file. The remainder of this section focuses on configuration specific to Application Services.
"},{"location":"microservices/application/V3Migration/#common-configuration-removed","title":"Common Configuration Removed","text":"Any configuration that is common to all EdgeX services or all EdgeX Application Services needs to be removed from custom application service's private configuration.
Note
With this change, any custom application service must be run with either the -cp/--configProvider
flag or the -cc/--commonConfig
flag in order for the service to receive the common configuration that has been removed from its private configuration. See Config Provider and Common Config sections for more details on these flags.
The EdgeX MessageBus configuration has been moved out of the Trigger configuration and most values are placed in the common configuration. The only values remaining in the application service's private configuration are:
Disabled
- Used to disable the use of the EdgeX MessageBus when not using metrics and not using edgex-messagebus
Trigger type. Value need to be present so that it can be overridden with environment variable.Optional.ClientId
- Unique name needed for when MQTT or NATS are used as the MessageBus implementation.Example Application Service specific MessageBus section for 3.0
MessageBus:\nDisabled: false # Set to true if not using metrics and not using `edgex-messagebus` Trigger type\nOptional:\nClientId: \"<service-key>\"\n
"},{"location":"microservices/application/V3Migration/#trigger","title":"Trigger","text":""},{"location":"microservices/application/V3Migration/#edgex-messagebus-changes","title":"edgex-messagebus changes","text":"As noted above the EdgeX MessageBus configuration has been removed from the Trigger configuration. In addition the SubscribeTopics
and PublishTopic
settings have been move to the top level of the Trigger configuration. Most application services can simply use the default trigger configuration from application service common configuration.
Example application service Trigger configuration - From Common Configuration
Trigger:\nType: \"edgex-messagebus\"\nSubscribeTopics: \"events/#\" # Base topic is prepended to this topic when using edgex-messagebus\n
Example local application service Trigger configuration - None
# Using default Trigger config from common config\n
Some application services may need to publish results back to the EdgeX MessageBus. In this case the PublishTopic
will remain in the service private configuration.
Example local application service Trigger configuration - PublishTopic
Trigger:\n# Default value for SubscribeTopics is aslo set in common config\nPublishTopic: \"<my-topic>\" # Base topic is prepended to this topic when using edgex-messagebus\n
Note
In EdgeX 3.0 Application services, the base topic in MessageBus common configuration is prepended to the configured SubscribeTopics
and PublishTopic
values. The default base topic is edgex
; thus, all topics start with edgex/
If the common Trigger configuration is what your service needs
If your service publishes back to the EdgeX MessageBus
PublishTopic
to top level in your Trigger configurationedgex/
prefix if usedIf your service uses filter by topic
SubscribeTopics
to top level in your Trigger configurationedgex/
prefix from each topic if used#
between levels with +
. See Multi-level topics and wildcards section for more detailsThe External MQTT trigger configuration remains under Trigger configuration, but the SubscribeTopics
and PublishTopic
setting have been moved to the top level of the Trigger configuration.
Example - External MQTT trigger configuration
Trigger:\nType: \"external-mqtt\"\nSubscribeTopics: \"external-request/#\"\nPublishTopic: \"\" # optional if publishing response back to the the External MQTT Broker\nExternalMqtt:\nUrl: \"tcp://broker.hivemq.com:1883\" # fully qualified URL to connect to the MQTT broker\nClientId: \"app-my-service\"\nConnectTimeout: \"30s\" AutoReconnect: true\nKeepAlive: 10 # Seconds (must be 2 or greater)\nQoS: 0 # Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once)\nRetain: true\nSkipCertVerify: false\nSecretName: \"mqtt-trigger\" AuthMode: \"none\"\n
"},{"location":"microservices/application/V3Migration/#external-mqtt-trigger-migration","title":"external-mqtt Trigger Migration","text":"SubscribeTopics
and PublishTopic
top level of the Trigger configurationThe HTTP trigger configuration has not changed for EdgeX 3.0
"},{"location":"microservices/application/V3Migration/#writable-pipeline","title":"Writable Pipeline","text":"See Pipeline Configuration section below for changes to the Writable Pipeline configuration
"},{"location":"microservices/application/V3Migration/#custom-application-service","title":"Custom Application Service","text":""},{"location":"microservices/application/V3Migration/#code","title":"Code","text":""},{"location":"microservices/application/V3Migration/#dependencies","title":"Dependencies","text":"You first need to update the go.mod
file to specify go 1.20
and the v3 versions of the App Functions SDK and any EdgeX go-mods directly used by your service.
Example go.mod for V3
module <your service>\n\ngo 1.20\n\nrequire (\ngithub.com/edgexfoundry/app-functions-sdk-go/v3 v3.0.0\ngithub.com/edgexfoundry/go-mod-core-contracts/v3 v3.0.0\n)\n
Once that is complete then the import statements for these dependencies must be updated to include the /v3
in the path.
Example import statements for V3
import (\n...\n\n\"github.com/edgexfoundry/app-functions-sdk-go/v3/pkg/interfaces\"\n\"github.com/edgexfoundry/go-mod-core-contracts/v3/dtos\"\n)\n
"},{"location":"microservices/application/V3Migration/#api-changes","title":"API Changes","text":""},{"location":"microservices/application/V3Migration/#applicationservice-api","title":"ApplicationService API","text":"The ApplicationService
API has the following changes:
SetFunctionsPipeline
has been removed. Use SetDefaultFunctionsPipeline
insteadMakeItRun
has been renamed to Run
MakeItStop
has been renamed to Stop
GetSecret
has been removed. Use SecretProvider().GetSecret
StoreSecret
has been removed. Use SecretProvider().StoreSecret
LoadConfigurablePipeline
has been removed. Use LoadConfigurableFunctionPipelines
CommandClient
Get
API's dsPushEvent
and dsReturnEvent
parameters changed to be type bool
See Application Service API section for completed details on this API, including some new capabilities.
"},{"location":"microservices/application/V3Migration/#appfunctioncontext-api","title":"AppFunctionContext API","text":"The AppFunctionContext
API has the following changes:
PushToCore
has been removed. Use WrapIntoEvent function and publishing to the EdgeX MessageBus instead. See Trigger.PublishTopic or Background Publisher sections for more details on publishing data back to the EdgeX MessageBus.GetSecret
has been removed. Use SecretProvider().GetSecret
StoreSecret
has been removed. Use SecretProvider().StoreSecret
SecretsLastUpdated
has been removed. Use SecretProvider().SecretsLastUpdated
CommandClient
Get
API's dsPushEvent
and dsReturnEvent
parameters changed to be type bool
NewAESProtection
signature has changes. secretName
parameter renamed tosecretValueKey
secretPath
parameter renamed to secretName
Encrypt
pipeline function now require a *AESProtection
for the receiverNewAESProtection
now returns a *AESProtection
Compression
pipeline functions now require a *Compression
for the receiverNewCompression
now returns a *Compression
Conversion
pipeline functions now require a *Conversion
for the receiverNewConversion
now returns a *Conversion
PushToCoreData
function has been removed. Use WrapIntoEvent function and publishing to the EdgeX MessageBus instead. See Trigger.PublishTopic or Background Publisher sections for more details on publishing data back to the EdgeX MessageBus.EncryptWithAES
function has been removed, use AESProtection.Encrypt
instead. See AES Protection for more detailsFilter
pipeline functions now requires a *Filter
for the receiverNewFilterFor
and NewFilterOut
now return a *Filter
NewHTTPSenderWithSecretHeader
signature has changedsecretName
parameter renamed tosecretValueKey
secretPath
parameter renamed to secretName
Evaluate
pipeline function now requires a *JSONLogic
for the receiverNewJSONLogic
now returns a *JSONLogic
MQTTSecretConfig
has changedSecretPath
field renamed to SecretName
SetResponseData
pipeline function now requires a *ResponseData
for the receiverNewResponseData
now returns a *ResponseData
NewGenericTags
has been removed and replaced with new version of NewTags
which takes map[string]interface{}
for the tags
parameter.NewTags
now returns a *Tags
PushToCore
profile has been removed. Use WrapIntoEvent function and publishing to the EdgeX MessageBus instead. See Trigger.PublishTopic or Background Publisher sections for more details on publishing data back to the EdgeX MessageBus.Custom profiles for App Service Configurable must be migrated in a similar fashion to the configuration for custom application services. All configuration that is common to all EdgeX services or all EdgeX Application Services needs to be removed from custom profiles. See Common Service Configuration section for details about configuration that is common to all Edgex services. See Application Service Configuration section for details about configuration that is common to all EdgeX Application Services. Use the App Service Configurable provided profiles as examples of what configuration is left after removing the common configuration.
"},{"location":"microservices/application/V3Migration/#pipeline-configuration","title":"Pipeline Configuration","text":"raw
, event
or metric
#
between level has be replaced with +
. See Multi-level topics and wildcards for more details.SecretName
renamed to be SecretValueKey
SecretPath
renamed to be SecretName
SecretName
renamed to be SecretValueKey
SecretPath
renamed to be SecretName
Environment variable overrides must be adjusted appropriately for the above changes. Remove any overrides that apply to common configuration.
"},{"location":"microservices/application/services/AppLLRPInventory/","title":"App RFID LLRP Inventory","text":""},{"location":"microservices/application/services/AppLLRPInventory/#introduction","title":"Introduction","text":"Edgex application service for processing raw LLRP tag reads, producing events [Arrived, Moved, Departed], configure and manage LLRP readers via commands
See README for details
"},{"location":"microservices/application/services/AppRecordReplay/","title":"App Record and Replay","text":""},{"location":"microservices/application/services/AppRecordReplay/#introduction","title":"Introduction","text":"This service is a developer testing tool which will record Events from the EdgeX MessageBus and replay them back to the EdgeX MessageBus at a later time. The value of this is a session with devices present can be recorded for later replay on a system which doesn't have the required devices. This allows for testing of services that receive and process the Events without requiring the devices to be present.
Note
The source device service must be running when data is imported since the devices and device profiles are captured as part of the recorded data will be added to the system during import.
"},{"location":"microservices/application/services/AppRecordReplay/#storage","title":"Storage","text":"Since this is targeted as a developer testing tool, the storage model is kept simple by using in-memory storage for the recorded data. This should be kept in mind when recording or importing a recoding on systems with limited resources.
"},{"location":"microservices/application/services/AppRecordReplay/#rest-api","title":"REST API","text":"Control of this service is accomplished via the following REST API.
"},{"location":"microservices/application/services/AppRecordReplay/#postman-collection","title":"Postman Collection","text":"A sample Postman collection can be found here.
Note
Use the Postman Send and Download
option for the Export recording - JSON
request so that the response can be saved to file. The Send and Download
option is on the Send
button.
Note
Postman automatically un-compresses the responses when requesting GZIB or ZLIB compression. Use the following curl command to save the compressed response to file.
curl localhost:59712/api/v3/data?compression=gzip -o test.gz\ncurl localhost:59712/api/v3/data?compression=zlib -o test.zlib\n
"},{"location":"microservices/application/services/AppServiceConfigurable/","title":"App Service Configurable","text":""},{"location":"microservices/application/services/AppServiceConfigurable/#introduction","title":"Introduction","text":"App-Service-Configurable is provided as an easy way to get started with processing data flowing through EdgeX. This service leverages the App Functions SDK and provides a way for developers to use configuration instead of having to compile standalone services to utilize built in functions in the SDK. Please refer to Available Configurable Pipeline Functions section below for full list of built-in functions that can be used in the configurable pipeline.
To get started with App Service Configurable, you'll want to start by determining which functions are required in your pipeline. Using a simple example, let's assume you wish to use the following functions from the SDK:
Once the functions have been identified, we'll go ahead and build out the configuration in the configuration.yaml
file under the [Writable.Pipeline]
section.
Example - Writable.Pipeline
Writable:\nPipeline:\nExecutionOrder: \"FilterByDeviceName, Transform, HTTPExport\"\nFunctions:\nFilterByDeviceName:\nParameters:\nFilterValues: \"Random-Float-Device, Random-Integer-Device\"\nTransform:\nParameters:\nType: \"xml\"\nHTTPExport:\nParameters:\nMethod: \"post\" MimeType: \"application/xml\" Url: \"http://my.api.net/edgexdata\"\n
The first line of note is ExecutionOrder: \"FilterByDeviceName, Transform, HTTPExport\"
. This specifies the order in which to execute your functions. Each function specified here must also be placed in the Functions:
section.
Next, each function and its required information is listed. Each function typically has associated Parameters that must be configured to properly execute the function as designated by Parameters:
under {FunctionName}
. Knowing which parameters are required for each function, can be referenced by taking a look at the Available Configurable Pipeline Functions section below.
Note
By default, the configuration provided is set to use EdgexMessageBus
as a trigger. This means you must have EdgeX Running with devices sending data in order to trigger the pipeline. You can also change the trigger to be HTTP. For more details on triggers, view the Triggers
documentation located in the Triggers section.
That's it! Now we can run/deploy this service and the functions pipeline will process the data with functions we've defined.
"},{"location":"microservices/application/services/AppServiceConfigurable/#pipeline-per-topics","title":"Pipeline Per Topics","text":"The above pipeline configuration in Introduction section is the preferred way if your use case only requires a single functions pipeline. For use cases that require multiple functions pipelines in order to process the data differently based on the profile
, device
or source
for the Event, there is the Pipeline Per Topics feature. This feature allows multiple pipelines to be configured in the [Writable.Pipeline.PerTopicPipelines]
section. This section is a map of pipelines. The map key must be unique , but isn't used so can be any value. Each pipleline is defined by the following configuration settings:
ExecutionOrder
in the above example in the Introduction sectionExample - Writable.Pipeline.PerTopicPipelines
In this example Events from the device Random-Float-Device
are transformed to JSON and then HTTP exported. At the same time, Events for the source Int8
are transformed to XML and then HTTP exported to same endpoint. Note the custom naming for TransformJson
and TransformXml
. This is taking advantage of the Multiple Instances of a Function described below.
Writable:\nPipeline:\nPerTopicPipelines:\nfloat:\nId: float-pipeline\nTopics: \"edgex/events/device/+/Random-Float-Device/#, edgex/events/device/+/Random-Integer-Device/#\"\nExecutionOrder: \"TransformJson, HTTPExport\"\nint8:\nId: int8-pipeline\nTopic: edgex/events/device/+/+/+/Int8\nExecutionOrder: \"TransformXml, HTTPExport\"\nFunctions:\nFilterByDeviceName:\nParameters:\nFilterValues: \"Random-Float-Device, Random-Integer-Device\"\nTransformJson:\nParameters:\nType: json\nTransformXml:\nParameters:\nType: xml\nHTTPExport:\nParameters:\nMethod: post\nMimeType: application/xml\nUrl: \"http://my.api.net/edgexdata\"\n
Note
The Pipeline Per Topics
feature is targeted for EdgeX MessageBus and External MQTT triggers, but can be used with Custom or HTTP triggers. When used with the HTTP trigger the incoming topic will always be blank
, so the pipeline's topics must contain a single topic set to the #
wildcard so that all messages received are processed by the pipeline.
EdgeX services no longer have docker specific profiles. They now rely on environment variable overrides in the docker compose files for the docker specific differences.
Example - Environment settings required in the compose files for App Service Configurable
EDGEX_PROFILE : [target profile]\nSERVICE_HOST : [services network host name]\nEDGEX_SECURITY_SECRET_STORE: \"false\" # only need to disable as default is true\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\n
Example - Docker compose entry for App Service Configurable in no-secure compose file
app-rules-engine:\ncontainer_name: edgex-app-rules-engine\ndepends_on:\n- consul\n- data\nenvironment:\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nEDGEX_PROFILE: rules-engine\nEDGEX_SECURITY_SECRET_STORE: \"false\"\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nSERVICE_HOST: edgex-app-rules-engine\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\nhostname: edgex-app-rules-engine\nimage: edgexfoundry/app-service-configurable:2.0.0\nnetworks:\nedgex-network: {}\nports:\n- 127.0.0.1:59701:59701/tcp\nread_only: true\nsecurity_opt:\n- no-new-privileges:true\nuser: 2002:2001\n
Note
App Service Configurable is designed to be run multiple times each with different profiles. This is why in the above example the name edgex-app-rules-engine
is used for the instance running the rules-engine
profile.
App Service Configurable was designed to be deployed as multiple instances for different purposes. Since the function pipeline is specified in the configuration.yaml
file, we can use this as a way to run each instance with a different function pipeline. App Service Configurable does not have the standard default configuration at /res/configuration.yaml
. This default configuration has been moved to the sample
profile. This forces you to specify the profile for the configuration you would like to run. The profile is specified using the -p/--profile=[profilename]
command line option or the EDGEX_PROFILE=[profilename]
environment variable override. The profile name selected is used in the service key (app-[profile name]
) to make each instance unique, e.g. app-sample
when specifying sample
as the profile.
Note
If you need to run multiple instances with the same profile, e.g. http-export
, but configured differently, you will need to override the service key with a custom name for one or more of the services. This is done with the -sk/-serviceKey
command-line option or the EDGEX_SERVICE_KEY
environment variable. See the Command-line Options and Environment Overrides sections for more detail.
Note
Functions can be declared in a profile but not used in the pipeline ExecutionOrder
allowing them to be added to the pipeline ExecutionOrder
later at runtime if needed.
The following profiles and their purposes are provided with App Service Configurable.
"},{"location":"microservices/application/services/AppServiceConfigurable/#rules-engine","title":"rules-engine","text":"Profile used to push Event messages to the Rules Engine via the Redis Pub/Sub Message Bus. This is used in the default docker compose files for the app-rules-engine
service
One can optionally add Filter function via environment overrides
WRITABLE_PIPELINE_EXECUTIONORDER: \"FilterByDeviceName, HTTPExport\"
WRITABLE_PIPELINE_FUNCTIONS_FILTERBYDEVICENAME_PARAMETERS_DEVICENAMES: \"[comma separated list]\"
There are many optional functions and parameters provided in this profile. See the complete profile for more details
"},{"location":"microservices/application/services/AppServiceConfigurable/#http-export","title":"http-export","text":"Starter profile used for exporting data via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides
Required:
WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your URL]
There are many more optional functions and parameters provided in this profile. See the complete profile for more details.
Starter profile used for exporting telemetry data from other EdgeX services to InfluxDB via HTTP export. This profile configures the service to receive telemetry data from other services, transform it to Line Protocol syntax, batch the data and then export it to an InfluxDB service via HTTP. Requires further configuration which can easily be accomplished using environment variable overrides.
Required:
WRITABLE_PIPELINE_FUNCTIONS_HTTPEXPORT_PARAMETERS_URL: [Your InfluxDB URL]
`WRITABLE_INSECURESECRETS_INFLUXDB_SECRETS_TOKEN
: [Your InfluxDB Token]
Example value: \"Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ==\"
If using secure mode, store the token in the service's secret store via POST to the service's /secret
endpoint
Example JSON to post to /secret endpoint
{\n\"apiVersion\":\"v2\",\n\"secretName\":\"influxdb\",\n\"secretData\":[\n{\n\"key\":\"Token\",\n\"value\":\"Token 29ER8iMgQ5DPD_icTnSwH_77aUhSvD0AATkvMM59kZdIJOTNoJqcP-RHFCppblG3wSOb7LOqjp1xubA80uaWhQ==\"\n}]\n}\n
Optional Additional Tags:
WRITABLE_PIPELINE_FUNCTIONS_TOLINEPROTOCOL_PARAMETERS_TAGS: <your additional tags>
Optional Batching parameters (see Batch function for more details):
WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_MODE: <your batch mode>
\"bytimecount\"
\"bycount\"
, \"bytime\"
or `\"bytimecount\"```WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_BATCHTHRESHOLD: <your batch threshold count>
100
WRITABLE_PIPELINE_FUNCTIONS_BATCH_PARAMETERS_TIMEINTERVAL: <your batch time interval>
\"60s\"
Starter profile used for exporting data via MQTT. Requires further configuration which can easily be accomplished using environment variable overrides
Required:
WRITABLE_PIPELINE_FUNCTIONS_MQTTEXPORT_PARAMETERS_BROKERADDRESS: [Your Broker Address]
There are many optional functions and parameters provided in this profile. See the complete profile for more details
Sample profile with all available functions declared and a sample pipeline. Provided as a sample that can be copied and modified to create new custom profiles. See the complete profile for more details
"},{"location":"microservices/application/services/AppServiceConfigurable/#functional-tests","title":"functional-tests","text":"Profile used for the TAF functional testing
"},{"location":"microservices/application/services/AppServiceConfigurable/#external-mqtt-trigger","title":"external-mqtt-trigger","text":"Profile used for the TAF functional testing of external MQTT Trigger
"},{"location":"microservices/application/services/AppServiceConfigurable/#what-if-my-input-data-isnt-an-edgex-event","title":"What if my input data isn't an EdgeX Event ?","text":"The default TargetType
for data flowing into the functions pipeline is an EdgeX Event DTO. There are cases when this incoming data might not be an EdgeX Event DTO. There are two setting that configure the TargetType to non-Event data.
In these cases the Pipeline
can be configured using TargetType=\"raw\"
to set the TargetType
to be a byte array/slice, i.e. []byte
. The first function in the pipeline must then be one that can handle the []byte
data. The compression, encryption and export functions are examples of pipeline functions that will take input data that is []byte
.
Example - Configure the functions pipeline to compress, encrypt and then export the []byte
data via HTTP
Writable:\nPipeline:\nTargetType: \"raw\"\nExecutionOrder: \"Compress, Encrypt, HTTPExport\"\nFunctions:\nCompress:\nParameters:\nAlogrithm: \"gzip\"\nEncrypt:\nParameters:\nAlgorithm: \"aes256\" SecretName: \"aes\"\nSecretValueKey: \"key\"\nHTTPExport:\nParameters:\nMethod: \"post\"\nUrl: \"http://my.api.net/edgexdata\"\nMimeType: \"application/text\"\n
If along with this pipeline configuration, you also configured the Trigger
to be http
trigger, you could then send any data to the app-service-configurable' s /api/v3/trigger
endpoint and have it compressed, encrypted and sent to your configured URL above.
Example - HTTP Trigger configuration
Trigger:\nType: \"http\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#metric-targettype","title":"Metric TargetType","text":"This setting when set to true will cause the TargeType
to be &dtos.Metric{}
and is meant to be used in conjunction with the new ToLineProtocol
function. See ToLineProtocol section below for more details. In addition the Trigger
SubscribeTopics
must be set to \"edgex/telemetry/#\"
so that the function receives the metric data from the other services.
Example - Metric TargetType
Writable:\nPipeline:\nTargetType: \"metric\"\nExecutionOrder: \"ToLineProtocol, ...\"\n...\nFunctions:\nToLineProtocol:\nParameters:\nTags: \"\" # optional comma separated list of additional tags to add to the metric in to form \"tag:value,...\"\n...\nTrigger:\nSubscribeTopics: telemetry/#\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#multiple-instances-of-a-function","title":"Multiple Instances of a Function","text":"Now multiple instances of the same configurable pipeline function can be specified, configured differently and used together in the functions pipeline. Previously the function names specified in the [Writable.Pipeline.Functions]
section had to match a built-in configurable pipeline function name exactly. Now the names specified only need to start with a built-in configurable pipeline function name. See the HttpExport section below for an example.
Below are the functions that are available to use in the configurable pipeline function pipeline ([Writable.Pipeline]
) section of the configuration. The function names below can be added to the Writable.Pipeline.ExecutionOrder
setting (comma separated list) and must also be present or added to the [Writable.Pipeline.Functions]
section as {FunctionName}]
. The functions will also have the {FunctionName}.Parameters:
section where the function's parameters are configured. Please refer to the Introduction section above for an example.
Note
The Parameters
section for each function is a key/value map of string
values. So even tough the parameter is referred to as an Integer or Boolean, it has to be specified as a valid string representation, e.g. \"20\" or \"true\".
Please refer to the function's detailed documentation by clicking the function name below.
"},{"location":"microservices/application/services/AppServiceConfigurable/#addtags","title":"AddTags","text":"Parameters
tags
- String containing comma separated list of tag key/value pairs. The tag key/value pairs are colon seperatedExample
AddTags:\nParameters:\ntags: \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#batch","title":"Batch","text":"Parameters
Mode
- The batch mode to use. can be 'bycount', 'bytime' or 'bytimecount'BatchThreshold
- Number of items to batch before sending batched items to the next function in the pipeline. Used with 'bycount' and 'bytimecount' modesTimeInterval
- Amount of time to batch before sending batched items to the next function in the pipeline. Used with 'bytime' and 'bytimecount' modesIsEventData
- If true, specifies that the data being batched is Events
and to un-marshal the batched data to []Event
prior to returning the batched data. By default the batched data returned is [][]byte
MergeOnSend
- If true, specifies that the data being batched is to be merged to a single []byte
prior to returning the batched data. By default the batched data returned is [][]byte
Example
Batch:\nParameters:\nMode: \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\"\nBatchThreshold: \"30\"\nTimeInterval: \"60s\"\nIsEventData: \"false\"\nMergeOnSend: \"false\" or\nBatch:\nParameters:\nMode: \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\"\nBatchThreshold: \"30\"\nTimeInterval: \"60s\"\nIsEventData: \"true\"\nMergeOnSend: \"false\" or\nBatch:\nParameters:\nMode: \"bytimecount\" # can be \"bycount\", \"bytime\" or \"bytimecount\"\nBatchThreshold: \"30\"\nTimeInterval: \"60s\"\nIsEventData: \"false\"\nMergeOnSend: \"true\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#compress","title":"Compress","text":"Parameters
Algorithm
- Compression algorithm to use. Can be 'gzip' or 'zlib'Example
Compress:\nParameters:\nAlgorithm: \"gzip\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#encrypt","title":"Encrypt","text":"Parameters
Algorithm
- AES256SecretName
- (required for AES256) Name of the secret in the Secret Store
where the encryption key is located.SecretValueKey
- (required for AES256) Key of the secret data for the encryption key in the secret's data.Example
# Encrypt with key pulled from Secret Store\nEncrypt:\nParameters:\nAlgorithm: \"aes256\"\nSecretName: \"aes\"\nSecretValueKey: \"key\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbydevicename","title":"FilterByDeviceName","text":"Parameters
DeviceNames
- Comma separated list of device names or regular expressions for filteringFilterOut
- Boolean indicating if the data matching the device names should be filtered out or filtered for.Example
FilterByDeviceName:\nParameters:\nDeviceNames: \"Random-Float-Device,Random-Integer-Device\"\nFilterOut: \"false\"\nor\nFilterByDeviceName:\nParameters:\nDeviceNames: \"[a-zA-Z-]+(Integer-)[a-zA-Z-]+\"\nFilterOut: \"true\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbyprofilename","title":"FilterByProfileName","text":"Parameters
ProfileNames
- Comma separated list of profile names or regular expressions for filteringFilterOut
- Boolean indicating if the data matching the profile names should be filtered out or filtered for.Example
FilterByProfileName:\nParameters:\nProfileNames: \"Random-Float-Device, Random-Integer-Device\"\nFilterOut: \"false\"\nor\nFilterByProfileName:\nParameters:\nProfileNames: \"(Random-)[a-zA-Z-]+\"\nFilterOut: \"false\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbyresourcename","title":"FilterByResourceName","text":"Parameters
ResourceName
- Comma separated list of reading resource names or regular expressions for filteringFilterOut
- Boolean indicating if the readings matching the resource names should be filtered out or filtered for.Example
FilterByResourceName:\nParameters:\nResourceNames: \"Int8, Int64\"\nFilterOut: \"true\"\nor\nFilterByResourceName:\nParameters:\nDeviceNames: \"(Int)[0-9]+\"\nFilterOut: \"false\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#filterbysourcename","title":"FilterBySourceName","text":"Parameters
SourceNames
- Comma separated list of source names or regular expressions for filtering. Source name is either the device command name or the resource name that created the EventFilterOut
- Boolean indicating if the data matching the device names should be filtered out or filtered for.Example
FilterBySourceName:\nParameters:\nSourceNames: \"Bool, BoolArray\"\nFilterOut: \"false\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#httpexport","title":"HTTPExport","text":"Parameters
Method
- HTTP Method to use. Can be post
or put
Url
- HTTP endpoint to POST/PUT the data.MimeType
- Optional mime type for the data. Defaults to application/json
if not set.PersistOnError
- Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\".ContinueOnSendError
- For chained multi destination exports, if true continues after send error so next export function executes.ReturnInputData
- For chained multi destination exports if true, passes the input data to next export function.HeaderName
- (Optional) Name of the header key to add to the HTTP headerSecretName
- (Optional) Name of the secret in the Secret Store
where the header value is stored.SecretValueKey
- (Optional) Key for the header value in the secret data.HttpRequestHeaders
- (Optional) HTTP Request header parameters in json format.Example
# Simple HTTP Export\nHTTPExport:\nParameters:\nMethod: \"post\" MimeType: \"application/xml\" Url: \"http://my.api.net/edgexdata\"
# HTTP Export with multiple HTTP Request header Parameters\nHTTPExport:\nParameters:\nMethod: \"post\" MimeType: \"application/xml\" Url: \"http://my.api.net/edgexdata\"\nHttpRequestHeaders: \"{\"Connection\": \"keep-alive\", \"From\": \"user@example.com\" }\"\n
# HTTP Export with secret header data pull from Secret Store\nHTTPExport:\nParameters:\nMethod: \"post\" MimeType: \"application/xml\" Url: \"http://my.api.net/edgexdata\"\nHeaderName: \"MyApiKey\" SecretName: \"http\"\nSecretValueKey: \"apikey\"\n
# Http Export to multiple destinations\nWritable:\nPipeline:\nExecutionOrder: \"HTTPExport1, HTTPExport2\"\nFunctions:\nHTTPExport1:\nParameters:\nMethod: \"post\" MimeType: \"application/xml\" Url: \"http://my.api1.net/edgexdata2\" ContinueOnSendError: \"true\"\nReturnInputData: \"true\"\nHTTPExport2:\nParameters:\nMethod: \"put\" MimeType: \"application/xml\" Url: \"http://my.api2.net/edgexdata2\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#jsonlogic","title":"JSONLogic","text":"Parameters
Rule
- The JSON formatted rule that will be executed on the data by JSONLogic Example
JSONLogic:\nParameters:\nRule: \"{ \\\"and\\\" : [{\\\"<\\\" : [{ \\\"var\\\" : \\\"temp\\\" }, 110 ]}, {\\\"==\\\" : [{ \\\"var\\\" : \\\"sensor.type\\\" }, \\\"temperature\\\" ]} ] }\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#mqttexport","title":"MQTTExport","text":"Parameters
BrokerAddress
- URL specifying the address of the MQTT BrokerTopic
- Topic to publish the dataClientId
- Id to use when connecting to the MQTT BrokerQos
- MQTT Quality of Service (QOS) setting to use (0, 1 or 2). Please refer here for more details on QOS valuesAutoReconnect
- Boolean specifying if reconnect should be automatic if connection to MQTT broker is lostRetain
- Boolean specifying if the MQTT Broker should save the last message published as the \u201cLast Good Message\u201d on that topic.SkipVerify
- Boolean indicating if the certificate verification should be skipped. PersistOnError
- Indicates to persist the data if the POST fails. Store and Forward must also be enabled if this is set to \"true\".AuthMode
- Mode of authentication to use when connecting to the MQTT Brokernone
- No authentication requiredusernamepassword
- Use username and password authentication. The Secret Store (Vault or InsecureSecrets) must contain the username
and password
secrets.clientcert
- Use Client Certificate authentication. The Secret Store (Vault or InsecureSecrets) must contain the clientkey
and clientcert
secrets.cacert
- Use CA Certificate authentication. The Secret Store (Vault or InsecureSecrets) must contain the cacert
secret.SecretName
- Name of the secret in the SecretStore where authentication secrets are stored.Note
Authmode=cacert
is only needed when client authentication (e.g. usernamepassword
) is not required, but a CA Cert is needed to validate the broker's SSL/TLS cert.
Example
# Simple MQTT Export\nMQTTExport:\nParameters:\nBrokerAddress: \"tcps://localhost:8883\"\nTopic: \"mytopic\"\nClientId: \"myclientid\"\n
# MQTT Export with auth credentials pulled from the Secret Store\nMQTTExport:\nParameters:\nBrokerAddress: \"tcps://my-broker-host.com:8883\"\nTopic: \"mytopic\"\nClientId: \"myclientid\"\nQos: \"2\"\nAutoReconnect: \"true\"\nRetain: \"true\"\nSkipVerify: \"false\"\nPersistOnError: \"true\"\nAuthMode: \"usernamepassword\"\nSecretName: \"mqtt\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#setresponsedata","title":"SetResponseData","text":"Parameters
ResponseContentType
- Used to specify content-type header for response - optionalExample
SetResponseData:\nParameters:\nResponseContentType: \"application/json\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#transform","title":"Transform","text":"Parameters
Type
- Type of transformation to perform. Can be 'xml' or 'json'Example
Transform:\nParameters:\nType: \"xml\"\n
"},{"location":"microservices/application/services/AppServiceConfigurable/#tolineprotocol","title":"ToLineProtocol","text":"Parameters
Tags
- optional comma separated list of additional tags to add to the metric in to form \"tag:value,...\"Example
ToLineProtocol:\nParameters:\nTags: \"\" # optional comma separated list of additional tags to add to the metric in to form \"tag:value,...\"\n
Note
The new TargetType
setting must be set to \"metric\" when using this function. See the Metric TargetType section above for more details.
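For orientation, a minimal pipeline sketch that uses this function; it assumes the TargetType setting sits under Writable.Pipeline as described in the Metric TargetType section, and the tag value is hypothetical.
# Sketch only: metrics pipeline using ToLineProtocol (TargetType placement assumed)\nWritable:\n  Pipeline:\n    TargetType: \"metric\"\n    ExecutionOrder: \"ToLineProtocol\"\n    Functions:\n      ToLineProtocol:\n        Parameters:\n          Tags: \"gateway:my-gateway\"\n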
"},{"location":"microservices/application/services/AppServiceConfigurable/#wrapintoevent","title":"WrapIntoEvent","text":"Parameters
ProfileName
- Profile name to use for the new EventDeviceName
- Device name to use for the new EventResourceName
- Resource name to use for the new Event'sSourceName
and Reading's ResourceName
ValueType
- Value type to use for the new Event Reading's value typeMediaType
- Media type to use for the new Event Reading's value type. Required when the value type is Binary
Example
WrapIntoEvent:\nParameters:\nProfileName: \"MyProfile\"\nDeviceName: \"MyDevice\"\nResourceName: \"SomeResource\"\nValueType: \"String\"\nMediaType: \"\" # Required only when ValueType=Binary\n
"},{"location":"microservices/application/services/AvailableAppServices/","title":"Available Application Services List","text":"The following table lists the available EdgeX Application Services:
Repository Status Comments Documentation app-service-configurable Active App Service which provides configurable function pipelines capability for built-in pipeline functions app-service-configurable docs app-rfid-llrp-inventory Active App Service which generates Inventory movement Events from raw LLRP events produced by device-rfid-llrp app-rfid-llrp-inventory docs app-record-replay Active App Service for Development/Testing with capability to Record and Replay EdgeX Events app-record-replay docs"},{"location":"microservices/configuration/CommonCommandLineOptions/","title":"Command Line Options","text":"This section describes the command line options that are common to all EdgeX services. Some services have additional command line options which are documented in the specific sections for those services.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#config-directory","title":"Config Directory","text":"-cd/--configDir
EdgeX 3.0
The -c/--confdir
command line option is replaced by -cd/--configDir
in EdgeX 3.0.
Specify local configuration directory. Default is ./res
, but will be ignored if Config File parameter refers to a URI beginning with http
or https
.
Can be overridden with EDGEX_CONFIG_DIR environment variable.
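A minimal docker-compose sketch of the equivalent environment variable override (the directory shown is hypothetical):
environment: EDGEX_CONFIG_DIR: \"/custom/res\"\n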
EdgeX 3.0
The EDGEX_CONF_DIR
environment variable is replaced by EDGEX_CONFIG_DIR
in EdgeX 3.0.
-cf/--configFile <name>
EdgeX 3.0
The -f/--file
command line option is replaced by -cf/--configFile
in EdgeX 3.0.
Indicates the name of the local configuration file or the URI of the private configuration. See the URI for Files section for more details. Default is configuration.yaml
.
Can be overridden with EDGEX_CONFIG_FILE environment variable.
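A minimal docker-compose sketch of the equivalent environment variable override; the URI shown is hypothetical and applies to the private configuration URI support noted below:
environment: EDGEX_CONFIG_FILE: \"http://my-config-server:8080/core-data/configuration.yaml\"\n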
EdgeX 3.1
Support for loading private configuration via URI is new in EdgeX 3.1.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#config-provider","title":"Config Provider","text":"-cp/ --configProvider
Indicates the service should use the Configuration Provider service at the specified URL. URL Format: {type}.{protocol}://{host}:{port}
. Default is consul.http://localhost:8500
Can be overridden with EDGEX_CONFIG_PROVIDER environment variable.
EdgeX 3.0
The EDGEX_CONFIGURATION_PROVIDER
environment variable is replaced by EDGEX_CONFIG_PROVIDER
in EdgeX 3.0.
-cc/ --commonConfig
EdgeX 3.0
The Common Config flag is new to EdgeX 3.0
Takes the location where the common configuration is loaded from - either a local file path or a URI when not using the Configuration Provider. See the URI for Files section for more details. Default is blank.
Can be overridden with EDGEX_COMMON_CONFIG environment variable.
EdgeX 3.1
Support for loading common configuration via URI is new in EdgeX 3.1.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#profile","title":"Profile","text":"-p/--profile <name>
Indicates a configuration profile other than the default. Default is no profile name, resulting in using ./res/configuration.yaml
if -f
and -c
are not used.
Can be overridden with EDGEX_PROFILE environment variable.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#registry","title":"Registry","text":"-r/ --registry
Indicates the service should use the Registry. Connection information is pulled from the [Registry]
configuration section.
Can be overridden with EDGEX_USE_REGISTRY environment variable.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#overwrite","title":"Overwrite","text":"-o/--overwrite
Overwrite configuration in provider with local configuration.
Use with caution
This will clobber existing settings in provider, which is problematic if those settings were intentionally edited by hand. Typically only used during development.
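As a sketch only, the flag can be appended to a service's command in a compose file while developing; the rest of the command string mirrors the defaults mentioned elsewhere on this page and may differ in your deployment:
command: \"-cp=consul.http://edgex-core-consul:8500 --registry --overwrite\"\n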
"},{"location":"microservices/configuration/CommonCommandLineOptions/#remote-service-hosts","title":"Remote Service Hosts","text":"EdgeX 3.1
New in EdgeX 3.1
-rsh/--remoteServiceHosts <host names>
Warning
This command line option is intended to be used in non-secure EdgeX deployments that are run with in a secured network. See Remote Device Services in Secure Mode section for details of deploying remote EdgeX services in secure EdgeX deployments.
Sets the three host names required when running the service remotely so that it can connect to the core EdgeX services running on another system and also be connected to from those same core EdgeX services.
<host names>
must contain exactly the following three host names in a comma separated string
Host name of local system where the service is running
Host name of the system where the core EdgeX services are running
Host name to bind to for the internal WebServer for hosting the REST API
This allows the service to be accessed from an external network. When running natively, it can be set to the local system Hostname/IP or 0.0.0.0
When running in docker it must be set to localhost
or 0.0.0.0
and use docker port mapping to expose the service to the external network.
Note
Each host name can be a known DNS host name or the IP address of the host
Example setting Remote Service Hosts
--remoteServiceHosts 172.26.113.174,172.26.113.150,0.0.0.0\nor\n-rsh 172.26.113.174,172.26.113.150,localhost\n
Can be overridden with EDGEX_REMOTE_SERVICE_HOSTS environment variable.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#developer-mode","title":"Developer Mode","text":"EdgeX 3.0
New in EdgeX 3.0
-d/--dev
Indicates the service should run in developer mode. This allows a service running from the command-line to properly communicate with other EdgeX services running in Docker (aka hybrid mode). This flag causes all Host
configuration values pulled from common configuration via the Configuration Provider to be overridden with the value \"localhost\".
Development Only
This flag should only be used for development purposes when running from command-line.
"},{"location":"microservices/configuration/CommonCommandLineOptions/#help","title":"Help","text":"-h/--help
Show the help message
"},{"location":"microservices/configuration/CommonConfiguration/","title":"Service Configuration","text":"The configuration for EdgeX services is broken into multiple layers. The layers are as follows:
Subsequent layers have higher precedence. As a result, the configuration values set in subsequent layers override those of underlying layers.
EdgeX 3.0
This layered configuration is new in EdgeX 3.0
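To illustrate the precedence rule, a small sketch using the LogLevel setting documented below; the values are arbitrary:
# Common configuration layer (applied first)\nWritable:\n  LogLevel: \"INFO\"\n# Service private configuration layer (applied last, so this value wins)\nWritable:\n  LogLevel: \"DEBUG\"\n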
"},{"location":"microservices/configuration/CommonConfiguration/#common-configuration","title":"Common Configuration","text":"EdgeX 3.0
Common configuration is new in Edgex 3.0
The common configuration is divided into 3 sections:
All Services - Configuration that is common to all EdgeX Services. See below for details.
App Services - Configuration that is common to just application services. See App Service Configuration section for more details.
Device Services - Configuration that is common to just device services. See Device Service Configuration section for more details.
When the Configuration Provider is used, the common configuration is seeded by the core-common-config-bootstrapper service, otherwise the common configuration comes from a file specified by the -cc/--commonConfig
command-line option.
Note
Common environment variable overrides set on the core-common-config-bootstrapper service are applied to the common configuration prior to seeding the values into the Configuration Provider. See Common Configuration Overrides section for more details.
"},{"location":"microservices/configuration/CommonConfiguration/#common-configuration-properties","title":"Common Configuration Properties","text":"The tables in each of the tabs below document configuration properties that are common to all services in the EdgeX Foundry platform.
Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It now has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
Edgex 3.0
In EdgeX 3.0, the MessageBus configuration is now common to all services. In addition, the internal MessageBus topic configuration has been replaced by internal constants. The new BaseTopicPrefix setting has been added to allow customization of all topics under a common base prefix. See the new common MessageBus section below.
WritableWritable.TelemetryServiceService.CORSConfigurationRegistryDatabaseMessageBusMessageQueue.Optional Property Default Value Description entries in the Writable section of the configuration can be changed on the fly while the service is running if the service is running with the-cp/--configProvider
flag LogLevel --- log entry severity level. (specific for each service) InsecureSecrets --- This section is a map of secrets which simulates the SecretStore for accessing secrets when running in non-secure mode. All services have a default entry for Redis DB credentials called redisdb
Note
LogLevel is included here for documentation purposes since all services have this setting. Since it should always be set at an individual service level it is not included in the new common configuration file and is present in all the individual service private configuration.
Property Default Value Description Interval 30s The interval in seconds at which to report the metrics currently being collected and enabled. Value of 0s disables reporting. Metrics Boolean map of service metrics that are being collected. The boolean flag for each indicates if the metric is enabled for reporting. i.e.EventsPersisted = true
. The metric name must match one defined by the service. Metrics.SecuritySecretsRequested false Enable/Disable reporting of number of secrets requested Metrics.SecuritySecretsStored false Enable/Disable reporting of number of secrets stored Metrics.SecurityConsulTokensRequested false Enable/Disable reporting of number of Consul token requested Metrics.SecurityConsulTokenDuration false Enable/Disable reporting of duration for obtaining Consul token Tags <Common Tags>
String map of arbitrary tags to be added to every metric that is reported for all services . i.e. Gateway=\"my-iot-gateway\"
. The tag names are arbitrary. Property Default Value Description HealthCheckInterval 10s The interval in seconds at which the service registry(Consul) will conduct a health check of this service. Host localhost Micro service host name Port --- Micro service port number (specific for each service) ServerBindAddr '' (empty string) The interface on which the service's REST server should listen. By default the server is to listen on the interface to which the Host
option resolves (leaving it blank). A value of 0.0.0.0
means listen on all available interfaces. App & Device services do not implement this setting. (specific for each service) StartupMsg --- Message logged when service completes bootstrap start-up MaxResultCount 1024* Read data limit per invocation. *Default value is for core/support services. Application and Device services do not implement this setting. MaxRequestSize 0 Defines the maximum size of http request body in kilobytes. 0 represents default to system max. RequestTimeout 5s Specifies a timeout duration for handling requests EnableNameFieldEscape false The name field escape could allow the system to use special or Chinese characters in the different name fields, including device, profile, and so on. If the EnableNameFieldEscape is false, some special characters might cause a system error. If EnableNameFieldEscape is true, clients of the event or command message bus APIs have to escape the name when subscribing to topics, for example, if the device name is test-device
, the escaped device name should be test%2Ddevice
, and the event topic is similar to edgex/events/device/device%2Dvirtual/test%2Dprofile/test%2Ddevice/test%2Dresource
. Property Default Value Description The settings for controlling CORS http headers EnableCORS false Enable or disable CORS support. CORSAllowCredentials false The value of Access-Control-Allow-Credentials
http header. It appears only if the value is true
. CORSAllowedOrigin \"https://localhost\" The value of Access-Control-Allow-Origin
http header. CORSAllowedMethods \"GET, POST, PUT, PATCH, DELETE\" The value of Access-Control-Allow-Methods
http header. CORSAllowedHeaders \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\" The value of Access-Control-Allow-Headers
http header. CORSExposeHeaders \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\" The value of Access-Control-Expose-Headers
http header. CORSMaxAge 3600 The value of Access-Control-Max-Age
http header. To understand more details about these HTTP headers, please refer to MDN Web Docs, and refer to CORS enabling to learn more. Property Default Value Description configuration that govern how to connect to the registry to register for service registration Host localhost Registry host name Port 8500 Registry port number Type consul Registry implementation type Property Default Value Description configuration that govern database connectivity and the type of database to use. While not all services require DB connectivity, most do and so this has been included in the common configuration docs. Host localhost DB host name Port 6379 DB port number Name ---- Database or document store name (Specific to the service) Timeout 5s DB connection timeout Type redisdb DB type. Redis is the only supported DB Property Default Value Description Entries in the MessageBus section of the configuration allow for connecting to the internal MessageBus and define a common base topic prefix Protocol redis Indicates the connectivity protocol to use when connecting to the bus. Host localhost Indicates the host of the messaging broker, if applicable. Port 6379 Indicates the port to use when publishing a message. Type redis Indicates the type of messaging library to use. Currently this is Redis by default. Refer to the go-mod-messaging module for more information. AuthMode usernamepassword Auth Mode to connect to EdgeX MessageBus. SecretName redisdb Name of the secret in the Secret Store to find the MessageBus credentials. BaseTopicPrefix edgex Indicates the base topic prefix which is prepended to all internal MessageBus topics. Property Default Value Description Configuration and connection parameters for use with MQTT or NATS message bus - in place of Redis ClientId --- Client ID used to put messages on the bus (specific for each service) Qos '0' Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once) KeepAlive '10' Period of time in seconds to keep the connection alive when there are no messages flowing (must be 2 or greater) Retained false Whether to retain messages AutoReconnect true Whether to reconnect to the message bus on connection loss ConnectTimeout 5 Message bus connection timeout in seconds SkipCertVerify false TLS configuration - Only used if Cert/Key file or Cert/Key PEMblock are specified Additional Default NATS Specific options Format nats Format of the actual message published. See NATs section of the MessageBus documentation. RetryOnFailedConnect true Retry on connection failure - expects a string representation of a boolean QueueGroup blank Specifies a queue group to distribute messages from a stream to a pool of worker services Durable blank Specifies a durable consumer should be used with the given name. Note that if a durable consumer with the specified name does not exist it will be considered ephemeral and deleted by the client on drain / unsubscribe (JetStream only) AutoProvision true Automatically provision NATS streams. (JetStream only) Deliver new Specifies delivery mode for subscriptions - options are \"new\", \"all\", \"last\" or \"lastpersubject\". See the NATS documentation for more detail (JetStream only) DefaultPubRetryAttempts 2 Number of times to attempt to retry on failed publish (JetStream only)"},{"location":"microservices/configuration/CommonConfiguration/#private-configuration","title":"Private Configuration","text":"Each EdgeX service has a private configuration with values specific to that service. 
Some of these values may override values found in the common configuration layers described above. This private configuration is initially found in the service's configuration.yaml
file.
When the Configuration Provider is used, the EdgeX services will self-seed their private configuration, with environment variable overrides applied, into the Configuration Provider on first start-up. On restarts, the services will pull their private configuration from the Configuration Provider and apply it over the common configuration previously loaded from the Configuration Provider.
When the Configuration Provider is not used the service's private configuration will be applied over the common configuration loaded via the -cc/--commonConfig
command-line option.
Note
The -cc/--commonConfig
option is not required when the Configuration Provider is not used. If it is not provided, the service's private configuration must be complete for its needs. A complete configuration will have the private configuration settings as well as the necessary common configuration settings. Some of the Security services that do not use the Configuration Provider operate in this manner since they do not have common configuration like other EdgeX services.
The service specific private values and additional settings can be found on the respective documentation page for each service here.
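As a rough sketch of what such a private configuration file can contain (the service name, port and message are hypothetical; the setting names come from the common configuration tables above):
# Hypothetical private configuration.yaml fragment\nWritable:\n  LogLevel: \"INFO\"\nService:\n  Port: 59740\n  StartupMsg: \"This is my custom service\"\nMessageBus:\n  Optional:\n    ClientId: \"my-custom-service\"\n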
"},{"location":"microservices/configuration/CommonConfiguration/#writable-vs-readable-settings","title":"Writable vs Readable Settings","text":"Within each configuration layer, there are settings whose values can be edited via the Configuration Provider and change the behavior of the service while it is running. These writable settings are grouped under Writable
in each layer. Any configuration settings found in a common or private Writable
section may be changed and affect a service's behavior without a restart. Any modifications to the other settings (read-only configuration) require a restart of the service(s).
Note
Runtime changes to a common Writable setting will be ignored by services which have that setting overridden in a subsequent layer, i.e. app/device or private. This is to avoid changing values that have been explicitly overridden in a lower layer Writable section by changing the same setting in a higher layer Writable section. The setting value should be changed at the lowest layer in which it exists for a service.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/","title":"Environment Variables","text":"There are three types of environment variables used by all EdgeX services. They are standard, command-line overrides, and configuration overrides.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#standard-environment-variables","title":"Standard Environment Variables","text":"This section describes the standard environment variables common to all EdgeX services. Standard environment variables do not override any command line flag or service configuration. Some services may have additional standard environment variables which are documented in those service specific sections. See Notable Other Standard Environment Variables below for list of these additional standard environment variables.
Note
All standard environment variables have the EDGEX_
prefix
This environment variable indicates whether the service is expected to initialize the secure SecretStore which allows the service to access secrets from Vault. Defaults to true
if not set or not set to false
. When set to true
the EdgeX security services must be running. If running EdgeX in non-secure
mode you then want this explicitly set to false
.
Example - Using docker-compose to disable secure SecretStore
environment: EDGEX_SECURITY_SECRET_STORE: \"false\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_disable_jwt_validation","title":"EDGEX_DISABLE_JWT_VALIDATION","text":"This environment variable disables, at the microservice-level, validation of the Authorization
HTTP header of inbound REST API requests. (Microservice-level authentication was added in EdgeX 3.0.)
Normally, when EDGEX_SECURITY_SECRET_STORE
is unset or true
, EdgeX microservices authenticate inbound HTTP requests by parsing the Authorization
header, extracting a JWT bearer token, and validating it with the EdgeX secret store, returning an HTTP 401 error if token validation fails.
If for some reason it is not possible to pass a valid JWT to an EdgeX microservice -- for example, the eKuiper rule engine making an unauthenticated HTTP API call, or other legacy code -- it may be necessary to disable JWT validation in the receiving microservice.
Example - Using docker-compose environment variable to disable secure JWT validation
environment: EDGEX_DISABLE_JWT_VALIDATION: \"true\"\n
Regardless of the setting of this variable, the API gateway (and related security-proxy-auth microservice) will always validate the incoming JWT.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_startup_duration","title":"EDGEX_STARTUP_DURATION","text":"This environment variable sets the total duration in seconds allowed for the services to complete the bootstrap start-up. Default is 60 seconds.
Example - Using docker-compose to set start-up duration to 120 seconds
environment: EDGEX_STARTUP_DURATION: \"120\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_startup_interval","title":"EDGEX_STARTUP_INTERVAL","text":"This environment variable sets the retry interval in seconds for the services retrying a failed action during the bootstrap start-up. Default is 1 second.
Example - Using docker-compose to set start-up interval to 3 seconds
environment: EDGEX_STARTUP_INTERVAL: \"3\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#notable-other-standard-environment-variables","title":"Notable Other Standard Environment Variables","text":"This section covers other standard environment variables that are not common to all services.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_secretstore_tokens","title":"EDGEX_ADD_SECRETSTORE_TOKENS","text":"This environment variable tells the Secret Store Setup service which add-on services to generate SecretStore tokens for. See Configure Service's Secret Store section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_known_secrets","title":"EDGEX_ADD_KNOWN_SECRETS","text":"This environment variable tells the Secret Store Setup service which add-on services need which known secrets added to their Secret Stores. See Configure Known Secrets section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_registry_acl_roles","title":"EDGEX_ADD_REGISTRY_ACL_ROLES","text":"This environment variable tells the Consul service entry point script which add-on services need ACL roles created. See Configure ACL Role section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_add_proxy_route","title":"EDGEX_ADD_PROXY_ROUTE","text":"This environment variable tells the Proxy Setup Service which additional routes need to be added for add-on services. See Configure API Gateway Route section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_ikm_hook","title":"EDGEX_IKM_HOOK","text":"This environment variable tells the Secret Store Setup service the path to an executable that implements the IKM interface. See IKM HOOK section for more details.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#command-line-overrides","title":"Command-line Overrides","text":"This section describes the command-line overrides that are common to most services. These overrides allow the use of the specific command-line flag to be overridden each time a service starts up.
Note
All command-line overrides also have the EDGEX_
prefix.
This environment variable overrides the -cd/--configDir
command-line option.
Example - Using docker-compose to override the configuration folder name
environment: EDGEX_CONFIG_DIR: \"/my-config\"\n
EdgeX 3.0
The EDGEX_CONF_DIR
environment variable is replaced by EDGEX_CONFIG_DIR
in EdgeX 3.0.
This environment variable overrides the -cf/--configFile
command-line option.
Example - Using docker-compose to override the configuration file name used
environment: EDGEX_CONFIG_FILE: \"my-config.yaml\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_config_provider","title":"EDGEX_CONFIG_PROVIDER","text":"This environment variable overrides the -cp/--configProvider
command-line option.
Overriding with a value of none
disables the use of the Configuration Provider.
Note
All EdgeX service Docker images have this option set to -cp=consul.http://edgex-core-consul:8500
.
Example - Using docker-compose to override with different port number
environment: EDGEX_CONFIG_PROVIDER: \"consul.http://edgex-consul:9500\"\n\nor\n\nenvironment: EDGEX_CONFIG_PROVIDER: \"none\"\n
EdgeX 3.0
The EDGEX_CONFIGURATION_PROVIDER
environment variable is replaced by EDGEX_CONFIG_PROVIDER
in EdgeX 3.0.
This environment variable overrides the -cc/--commonConfig
command-line option.
Note
The Common Config can only be specified when not using the Configuration Provider.
Example - Override with a common configuration file at the command line
$ export EDGEX_COMMON_CONFIG=./my-common-configuration.yaml\n$ ./core-data\n
EdgeX 3.0
The EDGEX_COMMON_CONFIG
variable is new to EdgeX 3.0.
This environment variable overrides the -p/--profile
command-line option. When non-empty, the value is used in the path to the configuration file. i.e. /res/my-profile/configuration.yaml. This is useful when running multiple instances of a service such as App Service Configurable.
Example - Using docker-compose to override the profile to use
app-service-rules:\nimage: edgexfoundry/docker-app-service-configurable:2.0.0\nenvironment: EDGEX_PROFILE: \"rules-engine\"\n...\n
This sets the profile
so that the App Service Configurable uses the rules-engine
configuration profile which resides at /res/rules-engine/configuration.yaml
This environment variable overrides the -r/--registry
command-line option.
Note
All EdgeX service Docker images have this option set to --registry
.
Example - Using docker-compose to override use of the Registry
environment: EDGEX_USE_REGISTRY: \"false\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#edgex_remote_service_hosts","title":"EDGEX_REMOTE_SERVICE_HOSTS","text":"This environment variable overrides the -rsh/--remoteServiceHosts
command-line option.
Example - Using docker-compose to override Remote Service Hosts
environment: EDGEX_REMOTE_SERVICE_HOSTS: \"172.26.113.174,172.26.113.150,localhost\"\n
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#configuration-overrides","title":"Configuration Overrides","text":"EdgeX 3.0
New in EdgeX 3.0. When used, the Configuration Provider is the System of Record for all configuration. The environment variables for configuration overrides no longer have the highest precedence. However, environment variables for standard and command-line overrides still maintain their role and higher precedence.
Configuration Provider is the System of Record for all configurations
When using the Configuration Provider, it is the System of Record for all configurations. Environment variables are only applied when the configuration is first read from file. These overridden values are used to seed the services' configuration into the Configuration Provider. Once the Configuration Provider has been seeded, services always get their configuration from the Configuration Provider on start up. Any subsequent changes to configuration must be done via the Configuration Provider. Changing an environment variable override for configuration and restarting the service will not impact the service's configuration. The service's configuration must first be removed from the Configuration Provider for any new/updated environment variable override(s) to impact the service's configuration.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#service-configuration-overrides","title":"Service Configuration Overrides","text":"Any configuration setting from a service's configuration.yaml
file can be overridden by environment variables. The environment variable names have the following format:
<SECTION-NAME>_<KEY-NAME>\n<SECTION-NAME>_<SUB-SECTION-NAME>_<KEY-NAME>\n
Example - Environment Variable Overrides of Configuration
Service configuration YAML Environment variable Writable:LogLevel: \"INFO\"WRITABLE_LOGLEVEL=DEBUG Service:
Host: \"localhost\"SERVICE_HOST=edgex-core-data
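The same two overrides expressed as a docker-compose sketch:
environment: WRITABLE_LOGLEVEL: \"DEBUG\"\nSERVICE_HOST: \"edgex-core-data\"\n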
Important
Private configuration overrides are only applied to configuration settings that exist in the service's private configuration file.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#secretstore-configuration-overrides","title":"SecretStore Configuration Overrides","text":"The environment variables overrides for SecretStore configuration follow the same rules as the regular configuration overrides. The following are the SecretStore fields that are commonly overridden.
Example SecretStore Configuration Override
Configuration Setting: SecretStore.Host\nEnvironment Variable Override: SECRETSTORE_HOST=edgex-vault
The complete list of SecretStore fields and defaults can be found in the file here. The defaults for the remaining fields typically do not need to be overridden, but may be overridden if needed using that same naming scheme as above.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#notable-configuration-overrides","title":"Notable Configuration Overrides","text":"This section describes configuration overrides that have special utility, such as enabling a debug capability or facilitating code development.
"},{"location":"microservices/configuration/CommonEnvironmentVariables/#tokenfileprovider_defaulttokenttl-security-secretstore-setup-service","title":"TOKENFILEPROVIDER_DEFAULTTOKENTTL (security-secretstore-setup service)","text":"This configuration override variable controls the TTL of the default SecretStore tokens that are created for EdgeX microservices by the Secret Store Setup service. This variable defaults to 1h
(one hour) if unspecified. It is often useful when developing a new microservice to set this value to a higher value, such as 12h
. This higher value will allow the secret store token to remain valid long enough for a developer to get a new microservice working and into a state where it can renew its own token. (All secret store tokens in EdgeX expire if not renewed periodically.)
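A docker-compose sketch of this override applied to the Secret Store Setup service; the service key shown is the commonly used compose name and may differ in your deployment:
security-secretstore-setup:\n  environment: TOKENFILEPROVIDER_DEFAULTTOKENTTL: \"12h\"\n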
The EdgeX registry and configuration service provides other EdgeX Foundry micro services with information about associated services within EdgeX Foundry (such as location and status) and configuration properties (i.e. - a repository of initialization and operating values). Today, EdgeX Foundry uses Consul by Hashicorp as its reference implementation configuration and registry providers. However, abstractions are in place so that these functions could be provided by an alternate implementation. In fact, registration and configuration could be provided by different services under the covers. For more, see the Configuration Provider and Registry Provider sections in this page.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration","title":"Configuration","text":"Please refer to the following EdgeX Foundry ADRs for details (and design decisions) behind the configuration in EdgeX
EdgeX 3.0
Common configuration in single location is new in Edgex 3.0
Many of EdgeX service's configuration settings are the same as all other services. These common configuration settings have been consolidated into a single common configuration location which is seeded by the core-common-config-bootstrapper service. This service seeds the configuration provider with the common configuration from its local file located in the cmd/res/configuration.yaml
. See the Common Configuration for list of all the common configuration settings.
Because EdgeX Foundry may be deployed and run in several different ways, it is important to understand how configuration is loaded and from where it is sourced. Referring to the cmd directory within the edgex-go repository, each service has its own folder. Inside each service's folder there is a res
directory (short for \"resource\"). There, the configuration files in YAML format define each service's configuration. A service may support several different configuration profiles, such as App Service Configurable does. In this case, the configuration file located directly in the res
directory should be considered the default configuration profile. Sub-directories will contain configurations appropriate to the respective profile.
As of the Geneva release, EdgeX recommends using environment variable overrides instead of creating profiles to override some subset of config values. App Service Configurable is an exception to this as this is how it defines unique instances using the same executable.
If you choose to use profiles as described above, the config profile can be indicated using one of the following command line flags:
--profile / -p
Taking the Core Data
and App Service Configurable
services as examples:
./core-data
starts the service using the default profile found locally./app-service-configurable --profile=rules-engine
starts the service using the rules-engine
profile found locallyNote
Again, utilizing environment variables for configuration overrides is the recommended path. Config profiles, for the most part, are not used.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#seeding-configuration","title":"Seeding Configuration","text":"EdgeX 3.0
Seeding of the new separate common configuration is new in Edgex 3.0
When utilizing the centralized configuration management for the EdgeX Foundry microservices, it is necessary to seed the required configuration before starting the services. The new core-common-config-bootstrapper is responsible for seeding the common configuration that all services now depend on. Each service has the built-in capability to perform the seeding operation for its private configuration. A service will use its local configuration file to seed its private configuration into the configuration provider, if one is being used.
In order for a service to seed/load the configuration to/from the configuration provider, use one of the following flags:
--configProvider / -cp
Again, taking the core-data
service as an example:
./core-data -cp=consul.http://localhost:8500
will start the service using configuration values found in the provider or seed them if they do not exist.
EdgeX 3.0
In EdgeX 3.0, the common environment variable overrides are applied to this common configuration prior to pushing the configuration into the configuration provider. This dramatically reduces the number of duplicate environment variable overrides in the Docker compose files.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#configuration-structure","title":"Configuration Structure","text":"EdgeX 3.0
In EdgeX 3.0, the configuration is no longer organized into a hierarchical structure grouped by service types.
The root namespace separates EdgeX Foundry related configuration information from other applications that may be using the same configuration provider. Below the root is the configuration version and then all the individual services in a flat list. As an example, the nodes shown when one views the configuration provider might be as follows:
Example configuration structure
**edgex/v3** (root namespace)\n - app-* (app services)\n - core-* (core services which includes common config)\n - devices-* (device services)\n - security-* (security services)\n - support-* (support services)\n
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#versioning","title":"Versioning","text":"The version is now part of the root namespace , i.e. edgex/v3
An advantage of grouping all minor/patch versions under a major version involves end-user configuration changes that need to be persisted during an upgrade. A service on startup will not overwrite existing configuration when it runs unless explicitly told to do so via the --overwrite / -o
command line flag. Therefore, if a user leaves their configuration provider running during an EdgeX Foundry upgrade any customization will be left in place. Environment variable overrides such as those supplied in the docker-compose for a given release will always override existing content in the configuration provider.
You can supply and manage configuration in a centralized manner by utilizing the -cp/--configProvider
flag when starting a service. If the flag is provided and points to an application such as HashiCorp's Consul, the service will bootstrap its configuration into the provider, if it doesn't exist. If configuration already exists, it will load the content from the given location, applying any environment variable overrides of which the service is aware. Integration with the configuration provider is handled through the go-mod-configuration module referenced by all services.
The registry refers to any platform you may use for service discovery. For the EdgeX Foundry reference implementation, the default provider for this responsibility is Consul. Integration with the registry is handled through the go-mod-registry module referenced by all services.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#introduction-to-registry","title":"Introduction to Registry","text":"The objective of the registry is to enable micro services to find and to communicate with each other. When each micro service starts up, it registers itself with the registry, and the registry continues checking its availability periodically via a specified health check endpoint. When one micro service needs to connect to another one, it connects to the registry to retrieve the available host name and port number of the target micro service and then invokes the target micro service. The following figure shows the basic flow.
Consul is the default registry implementation and provides native features for service registration, service discovery, and health checking. Please refer to the Consul official web site for more information:
https://www.consul.io
Physically, the \"registry\" and \"configuration\" management services are combined and running on the same Consul server node.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#web-user-interface","title":"Web User Interface","text":"A web user interface is also provided by Consul. Users can view the available service list and their health status through the web user interface. The web user interface is available at the /ui path on the same port as the HTTP API. By default this is http://localhost:8500/ui. For more detail, please see:
https://developer.hashicorp.com/consul/tutorials/certification-associate-tutorials/get-started-explore-the-ui
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#running-on-docker","title":"Running on Docker","text":"For ease of use to install and update, the microservices of EdgeX Foundry are published as Docker images onto Docker Hub and compose files that allow you to run EdgeX and dependent service such as Consul. These compose files can be found here in the edgex-compose repository. See the Getting Started using Docker for more details.
Once the EdgeX stack is running in docker verify Consul is running by going to http://localhost:8500/ui in your browser.
"},{"location":"microservices/configuration/ConfigurationAndRegistry/#running-on-local-machine","title":"Running on Local Machine","text":"To run Consul on the local machine, following these steps:
Execute the following command:
consul agent -data-dir \\${DATA_FOLDER} -ui -advertise 127.0.0.1 -server -bootstrap-expect 1\n\n# ${DATA_FOLDER} could be any folder to put the data files of Consul and it needs the read/write permission.\n
Verify the result: http://localhost:8500/ui
As stated in the top level V3 Migration guide, common configuration has been separated out from each service's private configuration. See the Service Configuration page for more details on the new Common Configuration.
There have also been changes to some sections of the common configuration in order to make them consistent and streamlined for all EdgeX services.
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#messagebus","title":"MessageBus","text":"In EdgeX 3.0 the EdgeX MessageBus configuration has been refactored and renamed to be MessageBus
. Prior to EdgeX 3.0, Core/Support Services and Device services had it as MessageQueue
and Applications Services had it as MessageBus
under the Trigger
configuration. Now all services have it as top level MessageBus
. In addition to the rename, the following fields have been added or removed:
false
. Set to true
by Application Services that don't need the EdgeX MessageBus for Trigger or Metrics. When set to false
this allows for Metrics to still be published to the EdgeX MessageBus when the Trigger is set to http
or external-mqtt
edgex
if not set.BaseTopicPrefix
BaseTopicPrefix
BaseTopicPrefix
PersistData
is set totrue
the Core Data will always subscribe to events from the EdgeX MessageBusIf your deployment has customized any of the EdgeX provided service's MessageBus
configuration, you will need to re-apply your customizations to the EdgeX 3.0 version of the service's MessageBus
configuration in the new separated out common configuration.
Example V3 MessageBus configuration - Common
MessageBus:\nProtocol: \"redis\"\nHost: \"localhost\"\nPort: 6379\nType: \"redis\"\nAuthMode: \"usernamepassword\" # required for redis MessageBus (secure or insecure).\nSecretName: \"redisdb\"\nBaseTopicPrefix: \"edgex\" # prepended to all topics as \"edgex/<additional topic levels>\nOptional:\n# Default MQTT Specific options that need to be here to enable environment variable overrides of them\nQos: \"0\" # Quality of Service values are 0 (At most once), 1 (At least once) or 2 (Exactly once)\nKeepAlive: \"10\" # Seconds (must be 2 or greater)\nRetained: \"false\"\nAutoReconnect: \"true\"\nConnectTimeout: \"5\" # Seconds\nSkipCertVerify: \"false\"\n# Additional Default NATS Specific options that need to be here to enable environment variable overrides of them\nFormat: \"nats\"\nRetryOnFailedConnect: \"true\"\nQueueGroup: \"\"\nDurable: \"\"\nAutoProvision: \"true\"\nDeliver: \"new\"\nDefaultPubRetryAttempts: \"2\"\nSubject: \"edgex/#\" # Required for NATS JetStream only for stream auto-provisioning\n
With the separation of Common Configuration, each service needs set the Optional.ClientId
in their private configuration to a unique value
Example V3 MessageBus configuration - Private
MessageBus:\nOptional:\nClientId: \"core-data\"\n
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#database","title":"Database","text":"In EdgeX 3.0 the database configuration for Core/Support services has changed from Databases map[string]bootstrapConfig.Database
to Database bootstrapConfig.Database
. This aligns it with the database configuration used by Application Services
Example V3 Database configuration
Database:\n Host: \"localhost\"\n Port: 6379\n Timeout: \"5s\"\n Type: \"redisdb\"\n
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#secretstore","title":"SecretStore","text":"In EdgeX 3.0 the SecretStore
settings have been remove from the service configuration and are now controlled via default values and environment variable overrides. The environment variable override names have not changed. See SecretStore Configuration Overrides section for more details.
If you have customized SecretStore
configuration, simply remove the SecretStore
section and use environment variable overrides to apply your customizations.
In EdgeX 3.0 some InsecureSecrets
configuration fields names have changed.
SecretName
SecretData
Example V3 InsecureSecrets configuration
InsecureSecrets:\nDB:\nSecretName: \"redisdb\"\nSecretData:\nusername: \"\"\npassword: \"\"\n
"},{"location":"microservices/configuration/V3MigrationCommonConfig/#custom-insecuresecrets","title":"Custom InsecureSecrets","text":""},{"location":"microservices/configuration/V3MigrationCommonConfig/#in-file","title":"In File","text":"If you have customized InsecureSecrets
in the configuration file you will need to adjust the field names described above.
If you have used Environment Variable Overrides to customize InsecureSecrets
, the Environment Variable names will need to change to account for the new field names above.
Example V3 Environment Variable Overrides for InsecureSecrets
WRITABLE_INSECURESECRETS_<KEY>_SECRETNAME: mySecretName\nWRITABLE_INSECURESECRETS_<KEY>_SECRETDATA_<DATAKEY>: mySecretDataItem\n
"},{"location":"microservices/core/Ch-CoreServices/","title":"Core Services","text":"Core services provide the intermediary between the north and south sides of EdgeX. As the name of these services implies, they are \u201ccore\u201d to EdgeX functionality. Core services is where the innate knowledge of \u201cthings\u201d connected, sensor data collected, and EdgeX configuration resides. Core consists of the following micro services:
The command micro service (often called the command and control micro service) enables the issuance of commands or actions to devices on behalf of:
The command micro service exposes the commands in a common, normalized way to simplify communications with the devices. There are two types of commands that can be sent to a device.
In most cases, GET commands are simple requests for the latest sensor reading from the device. Therefore, the request is often parameter-less (requiring no parameters or body in the request). SET commands require a request body where the body provides a key/value pair array of values used as parameters in the request (i.e. {\"additionalProp1\": \"string\", \"additionalProp2\": \"string\"}
).
The command micro service gets its knowledge about the devices from the metadata service. The command service always relays commands (GET or SET) to the devices through the device service. The command service never communicates directly to a device. Therefore, the command micro service is a proxy service for command or action requests from the north side of EdgeX (such as analytic or application services) to the protocol-specific device service and associated device.
While not currently part of its duties, the command service could provide a layer of protection around device. Additional security could be added that would not allow unwarranted interaction with the devices (via device service). The command service could also regulate the number of requests on a device do not overwhelm the device - perhaps even caching responses so as to avoid waking a device unless necessary.
"},{"location":"microservices/core/command/Ch-Command/#data-model","title":"Data Model","text":""},{"location":"microservices/core/command/Ch-Command/#data-dictionary","title":"Data Dictionary","text":"DeviceProfileDeviceCoreCommandCoreCommandCoreCommandParameters Property Description Id uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources deviceResource collection DeviceCommands collect of deviceCommand Property Description DeviceName reference to a device by name ProfileName reference to a device profile by name CoreCommands array of core commands Property Description Name Get bool indicating a get command Set bool indicating a set command Path Url Parameters array of core command parameters Property Description ResourceName ValueType"},{"location":"microservices/core/command/Ch-Command/#high-level-interaction-diagrams","title":"High Level Interaction Diagrams","text":"The two following High Level Diagrams show:
Command PUT Request
Request for Devices and Available Commands
"},{"location":"microservices/core/command/Ch-Command/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Core Command.
Edgex 3.0
For EdgeX 3.0 the MessageQueue.Internal
configuration has been moved to MessageBus
in Common Configuration and MessageQueue.External
has been moved to ExternalMQTT
below
-cp/--configProvider
flag LogLevel INFO log entry severity level. Log entries not of the default level or higher are ignored. Property Default Value Description .mqtt --- Secrets for when connecting to secure External MQTT when running in non-secure mode Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics <TBD>
Service metrics that Core Command collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary Core Metadata service level tags to included with every metric that is reported. Property Default Value Description Unique settings for Core Command. The common settings can be found at Common Configuration Port 59882 Micro service port number StartupMsg This is the EdgeX Core Command Microservice Message logged when service completes bootstrap start-up Property Default Value Description Protocol http The protocol to use when building a URI to the service endpoint Host localhost The host name or IP address where the service is hosted Port 59881 The port exposed by the target service Property Default Value Description Unique settings for Core Command. The common settings can be found at Common Configuration ClientId \"core-command Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description Enabled false Indicates whether to connect to external MQTT broker for the Commands via messaging Url tcp://localhost:1883
Fully qualified URL to connect to the MQTT broker ClientId core-command
ClientId to connect to the broker with ConnectTimeout 5s Time duration indicating how long to wait before timing out broker connection, i.e \"30s\" AutoReconnect true Indicates whether or not to retry connection if disconnected KeepAlive 10 Seconds between client ping when no active data flowing to avoid client being disconnected. Must be greater then 2 QOS 0 Quality of Service 0 (At most once), 1 (At least once) or 2 (Exactly once) Retain true Retain setting for MQTT Connection SkipCertVerify false Indicates if the certificate verification should be skipped SecretName mqtt
Name of the path in secret provider to retrieve your secrets. Must be non-blank. AuthMode none
Indicates what to use when connecting to the broker. Must be one of \"none\", \"cacert\" , \"usernamepassword\", \"clientcert\". If a CA Cert exists in the SecretPath then it will be used for all modes except \"none\". Property Default Value Description Key-value mappings allow for publication and subscription to the external message bus CommandRequestTopic edgex/command/request/#
For subscribing to 3rd party command requests CommandResponseTopicPrefix edgex/command/response
For publishing responses back to 3rd party systems. /<device-name>/<command-name>/<method>
will be added to this publish topic prefix QueryRequestTopic edgex/commandquery/request/#
For subscribing to 3rd party command query requests QueryResponseTopic edgex/commandquery/response
For publishing command query responses back to 3rd party systems"},{"location":"microservices/core/command/Ch-Command/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"RequireMessageBus
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/core/command/Ch-Command/#commands-via-messaging","title":"Commands via Messaging","text":""},{"location":"microservices/core/command/Ch-Command/#introduction_1","title":"Introduction","text":"Previously, communications from a 3rd party system (enterprise application, cloud application, etc.) to EdgeX in order to actuate a device or get the latest information from a sensor were only accomplished via REST. The 3rd party system makes a REST call to the command service, which then relays the request to a device service, also using REST. There was no built-in means to make a message-based request of EdgeX or the devices/sensors it manages.
Starting with the Levski release, the core command service adds support for an external MQTT connection (in the same manner that app services provide an external MQTT connection), which allows it to act as a bridge between the internal message bus (implemented via either MQTT or Redis Pub/Sub) and an external MQTT message bus.
"},{"location":"microservices/core/command/Ch-Command/#core-command-as-message-bus-bridge","title":"Core Command as Message Bus Bridge","text":"The Core Command service serves as the EdgeX entry point for external command requests made via message bus to the south side.
3rd party systems should not be granted access to the EdgeX internal message bus. Therefore, in order to implement communications via message bus (specifically MQTT), the command service needs to take messages from the 3rd party or external MQTT topics and pass them internally onto the EdgeX internal message bus where they can eventually be routed to the device services and then on to the devices/sensors (southside).
In reverse, response messages from the southside will also be sent through the internal EdgeX message bus to the command service, where they can then be bridged to the external MQTT topics to respond to the 3rd party system requester.
"},{"location":"microservices/core/command/Ch-Command/#message-structure","title":"Message Structure","text":"Since most message bus protocols lack a generic message header mechanism (as in HTTP), providing request/response metadata is accomplished by defining a MessageEnvelope
object associated with each request/response. The message topic names act like the HTTP paths and methods in REST requests. That is, the topic names specify the device receiver of any command request as paths do in the HTTP requests.
Below is an example of the MessageEnvelope
for command query request:
{\n\"apiVersion\" : \"v3\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ContentType\": \"application/json\",\n\"QueryParams\": {\n\"offset\": \"0\",\n\"limit\": \"10\"\n}\n}\n
Below is an example of the MessageEnvelope
of command query response:
{\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ErrorCode\":0,\n\"Payload\":\"...\",\n\"ContentType\":\"application/json\"\n}\n
The formatted request and response messages share a common base structure. The outermost JSON object represents the message envelope, which is used to convey metadata about the request/response, including ApiVersion
, RequestID
, CorrelationID
...etc.
The Payload
field contains the base64-encoded response body. The ErrorCode
field provides the indication of error. The ErrorCode
will be 0 (no error) or 1 (indicating error) as the two enums for error conditions. When there is an error (with ErrorCode
set to 1), the payload contains a message string with more information about the error. When there is no error (ErrorCode 0), there is no message string in the payload.
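To make the payload handling concrete, here is a minimal Go sketch of how a 3rd party consumer might unmarshal a response envelope and base64-decode its payload. The responseEnvelope struct is a simplified, hypothetical stand-in covering only the fields shown in the examples here; it is not the official EdgeX MessageEnvelope type.

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
)

// responseEnvelope is a simplified, illustrative subset of the fields shown
// in the examples in this section; it is not the official EdgeX type.
type responseEnvelope struct {
	ApiVersion    string `json:"ApiVersion"`
	RequestID     string `json:"RequestID"`
	CorrelationID string `json:"CorrelationID"`
	ErrorCode     int    `json:"ErrorCode"`
	Payload       string `json:"Payload"` // base64-encoded response body
	ContentType   string `json:"ContentType"`
}

// decodeResponse unmarshals the envelope and base64-decodes the payload.
// When ErrorCode is 1, the decoded payload is an error message string.
func decodeResponse(raw []byte) ([]byte, error) {
	var env responseEnvelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return nil, fmt.Errorf("invalid envelope: %w", err)
	}
	body, err := base64.StdEncoding.DecodeString(env.Payload)
	if err != nil {
		return nil, fmt.Errorf("invalid payload encoding: %w", err)
	}
	if env.ErrorCode != 0 {
		return nil, fmt.Errorf("command failed: %s", string(body))
	}
	return body, nil
}

func main() {
	raw := []byte(`{"ApiVersion":"v2","RequestID":"e6e8a2f4-eb14-4649-9e2b-175247911369",` +
		`"CorrelationID":"14a42ea6-c394-41c3-8bcd-a29b9f5e6835","ErrorCode":0,` +
		`"Payload":"eyJCb29sIjogImZhbHNlIn0=","ContentType":"application/json"}`)
	body, err := decodeResponse(raw)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // prints: {"Bool": "false"}
}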
Core Command service subscribes to the QueryRequestTopic
and publishes the response to QueryResponseTopic
defined in the configuration file. After receiving the request, Core Command service will try to parse the <device-name>
from the request topic level. The 3rd party system or application must publish command query request messages and subscribe to responses on the same topics. Below is the default topic naming used by Core Command:
edgex/commandquery/request/#
edgex/commandquery/response
The last topic level in the request topic must be either all
or the <device-name>
to query for.
Example of querying device core commands by device name via messaging:
Send query request message to external MQTT broker on topic edgex/commandquery/request/Random-Boolean-Device
:
{\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\"\n}\n
Receive query response message from external MQTT broker on topic edgex/commandquery/response
:
{\n\"ReceivedTopic\":\"\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":\"eyJhcGlWZXJzaW9uIjoidjIiLCJyZXF1ZXN0SWQiOiJlNmU4YTJmNC1lYjE0LTQ2NDktOWUyYi0xNzUyNDc5MTEzNjkiLCJzdGF0dXNDb2RlIjoyMDAsImRldmljZUNvcmVDb21tYW5kIjp7ImRldmljZU5hbWUiOiJSYW5kb20tQm9vbGVhbi1EZXZpY2UiLCJwcm9maWxlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsImNvcmVDb21tYW5kcyI6W3sibmFtZSI6IldyaXRlQm9vbFZhbHVlIiwic2V0Ijp0cnVlLCJwYXRoIjoiL2FwaS92Mi9kZXZpY2UvbmFtZS9SYW5kb20tQm9vbGVhbi1EZXZpY2UvV3JpdGVCb29sVmFsdWUiLCJ1cmwiOiJodHRwOi8vZWRnZXgtY29yZS1jb21tYW5kOjU5ODgyIiwicGFyYW1ldGVycyI6W3sicmVzb3VyY2VOYW1lIjoiQm9vbCIsInZhbHVlVHlwZSI6IkJvb2wifSx7InJlc291cmNlTmFtZSI6IkVuYWJsZVJhbmRvbWl6YXRpb25fQm9vbCIsInZhbHVlVHlwZSI6IkJvb2wifV19LHsibmFtZSI6IldyaXRlQm9vbEFycmF5VmFsdWUiLCJzZXQiOnRydWUsInBhdGgiOiIvYXBpL3YyL2RldmljZS9uYW1lL1JhbmRvbS1Cb29sZWFuLURldmljZS9Xcml0ZUJvb2xBcnJheVZhbHVlIiwidXJsIjoiaHR0cDovL2VkZ2V4LWNvcmUtY29tbWFuZDo1OTg4MiIsInBhcmFtZXRlcnMiOlt7InJlc291cmNlTmFtZSI6IkJvb2xBcnJheSIsInZhbHVlVHlwZSI6IkJvb2xBcnJheSJ9LHsicmVzb3VyY2VOYW1lIjoiRW5hYmxlUmFuZG9taXphdGlvbl9Cb29sQXJyYXkiLCJ2YWx1ZVR5cGUiOiJCb29sIn1dfSx7Im5hbWUiOiJCb29sIiwiZ2V0Ijp0cnVlLCJzZXQiOnRydWUsInBhdGgiOiIvYXBpL3YyL2RldmljZS9uYW1lL1JhbmRvbS1Cb29sZWFuLURldmljZS9Cb29sIiwidXJsIjoiaHR0cDovL2VkZ2V4LWNvcmUtY29tbWFuZDo1OTg4MiIsInBhcmFtZXRlcnMiOlt7InJlc291cmNlTmFtZSI6IkJvb2wiLCJ2YWx1ZVR5cGUiOiJCb29sIn1dfSx7Im5hbWUiOiJCb29sQXJyYXkiLCJnZXQiOnRydWUsInNldCI6dHJ1ZSwicGF0aCI6Ii9hcGkvdjIvZGV2aWNlL25hbWUvUmFuZG9tLUJvb2xlYW4tRGV2aWNlL0Jvb2xBcnJheSIsInVybCI6Imh0dHA6Ly9lZGdleC1jb3JlLWNvbW1hbmQ6NTk4ODIiLCJwYXJhbWV0ZXJzIjpbeyJyZXNvdXJjZU5hbWUiOiJCb29sQXJyYXkiLCJ2YWx1ZVR5cGUiOiJCb29sQXJyYXkifV19XX19\",\n\"ContentType\":\"application/json\",\n\"QueryParams\":{}\n}\n
Base64-decoding the Payload:
{\n\"apiVersion\":\"v2\",\n\"requestId\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"statusCode\":200,\n\"deviceCoreCommand\":{\n\"deviceName\":\"Random-Boolean-Device\",\n\"profileName\":\"Random-Boolean-Device\",\n\"coreCommands\":[\n{\n\"name\":\"WriteBoolValue\",\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/WriteBoolValue\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"Bool\", \"valueType\":\"Bool\"},\n{\"resourceName\":\"EnableRandomization_Bool\",\"valueType\":\"Bool\"}\n]\n},\n{\n\"name\":\"WriteBoolArrayValue\",\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/WriteBoolArrayValue\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"BoolArray\",\"valueType\":\"BoolArray\"},\n{\"resourceName\":\"EnableRandomization_BoolArray\",\"valueType\":\"Bool\"}\n]\n},\n{\n\"name\":\"Bool\",\n\"get\":true,\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/Bool\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"Bool\",\"valueType\":\"Bool\"}\n]\n},\n{\n\"name\":\"BoolArray\",\n\"get\":true,\n\"set\":true,\n\"path\":\"/api/v3/device/name/Random-Boolean-Device/BoolArray\",\n\"url\":\"http://edgex-core-command:59882\",\n\"parameters\":[\n{\"resourceName\":\"BoolArray\",\"valueType\":\"BoolArray\"}\n]\n}\n]\n}\n}\n
"},{"location":"microservices/core/command/Ch-Command/#query-all","title":"Query All","text":"Example of querying all device core commands via messaging:
Send query request message to external MQTT broker on topic edgex/commandquery/request/all
:
{\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"QueryParams\": {\n\"offset\": \"0\",\n\"limit\": \"5\"\n}\n}\n
Receive query response message from external MQTT broker on topic edgex/commandquery/response
:
{\n\"ApiVersion\":\"v2\",\n\"ContentType\":\"application/json\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":\"...\"\n}\n
Core Command service subscribes to the CommandRequestTopic
defined in the configuration file. After receiving the request, Core Command service will try to parse <device-name>
<command-name>
and <method>
from request topic level, and send the response back with <device-name>
, <command-name>
and <method>
appended to CommandResponseTopicPrefix
defined in the configuration file. The 3rd party system or application must publish command request messages and subscribe to responses on the same topics. Below is the default topic naming used by Core Command:
edgex/command/request/#
edgex/command/response/<device-name>/<command-name>/<method>
The last topic level (<method>
) in the request topic must be either get
or set
.
Example of making a get command request via messaging:
Send command request message to external MQTT broker on topic edgex/command/request/Random-Boolean-Device/Bool/get
: {\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"QueryParams\": {\n\"ds-pushevent\": \"false\",\n\"ds-returnevent\": \"true\"\n}\n}\n
Receive command response message from external MQTT broker on topic edgex/command/response/#
: {\n\"ReceivedTopic\":\"edgex/command/response/Random-Boolean-Device/Bool/get\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":\"eyJhcGlWZXJzaW9uIjoidjIiLCJyZXF1ZXN0SWQiOiJlNmU4YTJmNC1lYjE0LTQ2NDktOWUyYi0xNzUyNDc5MTEzNjkiLCJzdGF0dXNDb2RlIjoyMDAsImV2ZW50Ijp7ImFwaVZlcnNpb24iOiJ2MiIsImlkIjoiM2JiMDBlODYtMTZkZi00NTk1LWIwMWEtMWFhNTM2ZTVjMTM5IiwiZGV2aWNlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsInByb2ZpbGVOYW1lIjoiUmFuZG9tLUJvb2xlYW4tRGV2aWNlIiwic291cmNlTmFtZSI6IkJvb2wiLCJvcmlnaW4iOjE2NjY1OTE2OTk4NjEwNzcwNzYsInJlYWRpbmdzIjpbeyJpZCI6IjFhMmM5NTNkLWJmODctNDhkZi05M2U3LTVhOGUwOWRlNDIwYiIsIm9yaWdpbiI6MTY2NjU5MTY5OTg2MTA3NzA3NiwiZGV2aWNlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsInJlc291cmNlTmFtZSI6IkJvb2wiLCJwcm9maWxlTmFtZSI6IlJhbmRvbS1Cb29sZWFuLURldmljZSIsInZhbHVlVHlwZSI6IkJvb2wiLCJ2YWx1ZSI6ImZhbHNlIn1dfX0=\",\n\"ContentType\":\"application/json\",\n\"QueryParams\":{}\n}\n
Base64-decoding the Payload:
{\n\"apiVersion\":\"v2\",\n\"requestId\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"statusCode\":200,\n\"event\":{\n\"apiVersion\":\"v2\",\n\"id\":\"3bb00e86-16df-4595-b01a-1aa536e5c139\",\n\"deviceName\":\"Random-Boolean-Device\",\n\"profileName\":\"Random-Boolean-Device\",\n\"sourceName\":\"Bool\",\n\"origin\":1666591699861077076,\n\"readings\":[\n{\n\"id\":\"1a2c953d-bf87-48df-93e7-5a8e09de420b\",\n\"origin\":1666591699861077076,\n\"deviceName\":\"Random-Boolean-Device\",\n\"resourceName\":\"Bool\",\n\"profileName\":\"Random-Boolean-Device\",\n\"valueType\":\"Bool\",\n\"value\":\"false\"\n}\n]\n}\n}\n
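As a rough illustration of the request/response flow above, the following Go sketch publishes the same get command request to the external MQTT broker and prints any responses it receives. It assumes the Eclipse Paho Go client (github.com/eclipse/paho.mqtt.golang) and the default broker address and topics shown above; it is not part of EdgeX itself, and error handling is kept minimal.

package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://localhost:1883"). // external broker from the ExternalMQTT configuration
		SetClientID("third-party-demo")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}
	defer client.Disconnect(250)

	// Listen for responses published under the response topic prefix.
	client.Subscribe("edgex/command/response/#", 0, func(_ mqtt.Client, msg mqtt.Message) {
		fmt.Printf("response on %s:\n%s\n", msg.Topic(), string(msg.Payload()))
	})

	// Publish the get command request envelope shown above.
	request := `{
  "apiVersion": "v3",
  "ContentType": "application/json",
  "CorrelationID": "14a42ea6-c394-41c3-8bcd-a29b9f5e6835",
  "RequestId": "e6e8a2f4-eb14-4649-9e2b-175247911369",
  "QueryParams": {"ds-pushevent": "false", "ds-returnevent": "true"}
}`
	client.Publish("edgex/command/request/Random-Boolean-Device/Bool/get", 0, false, request)

	time.Sleep(5 * time.Second) // crude wait for the asynchronous response
}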
"},{"location":"microservices/core/command/Ch-Command/#set-command","title":"Set Command","text":"Example of making put command request via messaging:
Send command request message to external MQTT broker on topic edgex/command/request/Random-Boolean-Device/WriteBoolValue/set
: {\n\"apiVersion\" : \"v3\",\n\"ContentType\": \"application/json\",\n\"CorrelationID\": \"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"RequestId\": \"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"Payload\": \"eyJCb29sIjogImZhbHNlIn0=\"\n}\n
The payload is the base64-encoded JSON object:
{\"Bool\": \"false\"}\n
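For reference, a payload like this can be produced with nothing more than standard base64 encoding; the tiny Go sketch below prints exactly the Payload value used in the request above.

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Produces "eyJCb29sIjogImZhbHNlIn0=", the Payload value used in the set request above.
	fmt.Println(base64.StdEncoding.EncodeToString([]byte(`{"Bool": "false"}`)))
}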
Receive command response message from external MQTT broker on topic edgex/command/response/#
{\n\"ReceivedTopic\":\"edgex/command/response/Random-Boolean-Device/WriteBoolValue/set\",\n\"CorrelationID\":\"14a42ea6-c394-41c3-8bcd-a29b9f5e6835\",\n\"ApiVersion\":\"v2\",\n\"RequestID\":\"e6e8a2f4-eb14-4649-9e2b-175247911369\",\n\"ErrorCode\":0,\n\"Payload\":null,\n\"ContentType\":\"application/json\",\n\"QueryParams\":{}\n}\n
Note
There are some cases in which the Core Command service will be unable to publish the response correctly, for example: - The response topic is not specified in the configuration file - Failure to JSON-decode the request MessageEnvelope
- Failure to parse either <device-name>
, <command-name>
or <method>
In the real world, users usually need to provide credentials or certificates to connect to the external MQTT broker. To seed such secrets into the Secret Store for the Command service, you can follow the instructions in the Seeding Service Secrets document.
The following example shows how to set up the Command service to connect to an external MQTT broker with usernamepassword
authentication.
Example - Setting SecretsFile and ExternalMQTT via environment override
environment:\nEXTERNALMQTT_ENABLED: \"true\"\nEXTERNALMQTT_URL: \"<url>\" # e.g. tcps://broker.hivemq.com:8883\nEXTERNALMQTT_AUTHMODE: usernamepassword\nSECRETSTORE_SECRETSFILE: \"/tmp/core-command/secrets.json\"\n...\nvolumes:\n- /tmp/core-command/secrets.json:/tmp/core-command/secrets.json\n
Example - secrets.json
{\n\"secrets\": [\n{\n\"secretName\": \"mqtt\",\n\"imported\": false,\n\"secretData\": [\n{\n\"key\": \"username\",\n\"value\": \"edgexuser\"\n},\n{\n\"key\": \"password\",\n\"value\": \"p@55w0rd\"\n}\n]\n}\n]\n}\n
Note
Since EdgeX 3.0, the SecretPath
configuration property of ExternalMQTT
section is renamed to SecretName
. However, in the source code it is still referred to as SecretPath
and this will break the Command service if ExternalMQTT is enabled. This is a known issue and will be fixed in EdgeX 3.1. Before EdgeX 3.1, to work around this issue you need to manually add SecretPath
to the configuration via the Consul UI and restart the Command service for the change to take effect.
Edgex 3.0
Regex Get Command is new in EdgeX 3.0
The Command service supports regex syntax for the command name. The regex is matched against all DeviceResources in the DeviceProfile.
Consider the following example device profile:
apiVersion: \"v2\"\nname: \"Simple-Device\"\ndeviceResources:\n-\nname: \"Xrotation\"\nisHidden: true\ndescription: \"X axis rotation rate\"\nproperties:\nvalueType: \"Int32\"\nreadWrite: \"RW\"\nunits: \"rpm\"\n-\nname: \"Yrotation\"\nisHidden: true\ndescription: \"Y axis rotation rate\"\nproperties:\nvalueType: \"Int32\"\nreadWrite: \"RW\"\n\"units\": \"rpm\"\n-\nname: \"Zrotation\"\nisHidden: true\ndescription: \"Z axis rotation rate\"\nproperties:\nvalueType: \"Int32\"\nreadWrite: \"RW\"\n\"units\": \"rpm\"\n
The regex command name .rotation
will return an event including Xrotation
, Yrotation
and Zrotation
readings. Note that the RE2 syntax accepted by Go's regexp
package contains characters like .
, *
, +
...etc. These characters need to be URL-encoded before executing the request:
$ curl http://localhost:59882/api/v3/device/name/Simple-Device01/%2Erotation\n\n{\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"event\": {\n\"apiVersion\" : \"v3\",\n \"id\": \"821f9a5d-e521-4ea7-83f9-f6bce6881dce\",\n \"deviceName\": \"Simple-Device01\",\n \"profileName\": \"Simple-Device\",\n \"sourceName\": \".rotation\",\n \"origin\": 1679464105224933600,\n \"readings\": [\n{\n\"id\": \"c008960a-c3cc-4cfc-b9f7-a1f1516168ea\",\n \"origin\": 1679464105224933600,\n \"deviceName\": \"Simple-Device01\",\n \"resourceName\": \"Xrotation\",\n \"profileName\": \"Simple-Device\",\n \"valueType\": \"Int32\",\n \"units\": \"rpm\",\n \"value\": \"0\"\n},\n {\n\"id\": \"7f38677a-aa1f-446b-9e28-4555814ea79d\",\n \"origin\": 1679464105224933600,\n \"deviceName\": \"Simple-Device01\",\n \"resourceName\": \"Yrotation\",\n \"profileName\": \"Simple-Device\",\n \"valueType\": \"Int32\",\n \"units\": \"rpm\",\n \"value\": \"0\"\n},\n {\n\"id\": \"ad72be23-1d0e-40a3-b4ec-2fa0fa5aba58\",\n \"origin\": 1679464105224933600,\n \"deviceName\": \"Simple-Device01\",\n \"resourceName\": \"Zrotation\",\n \"profileName\": \"Simple-Device\",\n \"valueType\": \"Int32\",\n \"units\": \"rpm\",\n \"value\": \"0\"\n}\n]\n}\n}\n
"},{"location":"microservices/core/command/Ch-Command/#api-reference","title":"API Reference","text":"Core Command API Reference
"},{"location":"microservices/core/data/Ch-CoreData/","title":"Core Data","text":""},{"location":"microservices/core/data/Ch-CoreData/#introduction","title":"Introduction","text":"The core data micro service provides centralized persistence for data collected by devices. Device services that collect sensor data call on the core data service to store the sensor data on the edge system (such as in a gateway) until the data gets moved \"north\" and then exported to Enterprise and cloud systems. Core data persists the data in a local database. Redis is used by default, but a database abstraction layer allows for other databases to be used.
Other services and systems, both within EdgeX Foundry and outside of EdgeX Foundry, access the sensor data through the core data service. Core data could also provide a degree of security and protection of the data collected while the data is at the edge.
Note
Core data is completely optional. Device services can send data via message bus directly to application services. If local persistence is not needed, the service can be removed.
If persistence is needed, sensor data can be sent via message bus to core data, which then persists the data. See below for more details.
Sensor data can be sent to core data via two different means:
Services (like device services) and other systems can put sensor data on a message bus topic and core data can be configured to subscribe to that topic. This is the default means of getting data to core data. Any service (like an application service or rules engine service) or 3rd party system could also subscribe to the same topic. If the sensor data does not need to be persisted locally, core data does not have to subscribe to the message bus topic - making core data completely optional. By default, the message bus is implemented using Redis Pub/Sub. MQTT can be used as an alternate message bus implementation.
Services and systems can call on the core data REST API to send data to core data and have the data put in local storage. Prior to EdgeX 2.0, this was the default and only means to send data to core data. Today, it is an alternate means to send data to core data. When data is sent via REST to core data, core data re-publishes the data onto the message bus so that other services can subscribe to it.
Core data moves data to the application service (and edge analytics) via Redis Pub/Sub by default. MQTT or NATS (opt-in at build time) can alternately be used. Use of MQTT requires the installation of a broker such as ActiveMQ. Use of NATS requires all services to be built with NATS enabled and the installation of NATS Server. A messaging infrastructure abstraction is in place that allows for other message bus (e.g., AMQP) implementations to be created and used.
"},{"location":"microservices/core/data/Ch-CoreData/#core-data-streaming","title":"Core Data \"Streaming\"","text":"By default, core data persists all data sent to it by services and other systems. However, when the data is too sensitive to keep at the edge, or there is no use for the data at the edge by other local services (e.g., by an analytics micro service), the data can be \"streamed\" through core data without persisting it. A configuration change to core data (Writable.PersistData=false) has core data send data to the application services without persisting the data. This option has the advantage of reducing latency through this layer and storage needs at the network edge. But the cost is having no historical data to use for analytics that need to look back in time to make a decision.
Note
When persistence is turned off via the PersistData flag, it is off for all devices. At this time, you cannot specify which device data is persisted and which device data is not. Application services do allow filtering of device data before it is exported or sent to another service like the rules engine, but this is not based on whether the data is persisted or not.
Note
As mentioned, core data is completely optional. Therefore, if persistence is not needed, and if sensor data is sent from device services directly to application services via message bus, core data can be removed. In addition to reducing resource utilization (memory and CPU for core data), it also removes latency of throughput as the core data layer can be completely bypassed. However, if device services are still using REST to send data into the system, core data is the central receiving endpoint and must remain in place; even if persistence is turned off.
"},{"location":"microservices/core/data/Ch-CoreData/#events-and-readings","title":"Events and Readings","text":"Data collected from sensors is marshalled into EdgeX event and reading objects (delivered as JSON objects or a binary object encoded as CBOR to core data). An event represents a collection of one or more sensor readings. Some sensors or devices are only providing a single value \u2013 a single reading - at a time. Other sensors spew multiple values whenever they are read.
An event must have at least one reading. Events are associated to a sensor or device \u2013 the \u201cthing\u201d that sensed the environment and produced the readings. Readings represent a sensing on the part of a device or sensor. Readings only exist as part of (are owned by) an event. Readings are essentially a simple key/value pair of what was sensed (the key - called a ResourceName) and the value sensed (the value). A reading may include other bits of information to provide more context (for example, the data type of the value) for the users of that data. Consumers of the reading data could include things like user interfaces, data visualization systems and analytics tools.
In the diagram below, an example event/reading collection is depicted. The event coming from the \u201cmotor123\u201d device has two readings (or sensed values). The first reading indicates that the motor123 device reported the pressure of the motor was 1300 (the unit of measure might be something like PSI).
The value type property (shown as type above) on the reading lets the consumer of the information know that the value is an integer, base 64. The second reading indicates that the motor123 device also reported the temperature of the motor was 120 at the same time it reported the pressure (perhaps in degrees Fahrenheit).
"},{"location":"microservices/core/data/Ch-CoreData/#data-model","title":"Data Model","text":"The following diagram shows the Data Model for core data. Device services send Event objects containing a collection or Readings to core data when a device captures a sensor reading.
"},{"location":"microservices/core/data/Ch-CoreData/#data-dictionary","title":"Data Dictionary","text":"EventReading Property Description Event represents a single measurable event read from a device. Event has a one-to-many relationship with Reading. ID Uniquely identifies an event, for example a UUID. DeviceName DeviceName identifies the source of the event; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resources collected in the readings of the event. SourceName Name of the source request from the device profile (ResourceName or Command) associated to the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. Tags An arbitrary set of labels or additional information associated with the event. It can be used, for example, to add location information (like GPS coordinates) to the event. Readings A collection (one to many) of associated readings of a given event. Property Description ID Uniquely identifies a reading, for example a UUID. DeviceName DeviceName identifies the source of the reading; the device's name. ProfileName Identifies the name of the device profile associated with the device and corresponding resource collected in the reading. Origin A timestamp indicating when the original event/reading took place. Most of the time, this indicates when the device service collected/created the event. ResourceName ResourceName-Value provide the key/value pair of what was sensed by a device. ResourceName specifies what was the value collected. ResourceName should match a device resource name in the device profile. Value The sensor data value ValueType The type of the sensor data - from a list of allowed value types that includes Bool, String, Uint8, Int8, ... BinaryValue Byte array of sensor data when the data captured is not structured; for example an image is captured. This information is not persisted in the Database and is expected to be empty when retrieving a Reading for the ValueType of Binary. MediaType Indicating the type of binary data when collected. ObjectValue Complex value of sensor data when the data captured is structured; for example a BACnet date object:\"date\":{ \"year\":2021, \"month\":8, \"day\":26, \"wday\":4 }
. This is expected to be empty when the Reading for the ValueType is not Object
."},{"location":"microservices/core/data/Ch-CoreData/#high-level-interaction-diagrams","title":"High Level Interaction Diagrams","text":"The two following High Level Interaction Diagrams show:
Core Data Add Sensor Readings
Core Data Request Event / Reading for a Device
"},{"location":"microservices/core/data/Ch-CoreData/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Core Data.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics Service metrics that Core Data collects. Boolean value indicates if reporting of the metric is enabled. Metrics.EventsPersisted false Enable/Disable reporting of number of events persisted. Metrics.ReadingsPersisted false Enable/Disable reporting of number of readings persisted. Tags <empty>
List of arbitrary Core Data service level tags to include with every metric that is reported. Property Default Value Description Unique settings for Core Data. The common settings can be found at Common Configuration Port 59880 Micro service port number StartupMsg This is the EdgeX Core Data Microservice Message logged when service completes bootstrap start-up Property Default Value Description Unique settings for Core Data. The common settings can be found at Common Configuration Name coredata Database or document store name Property Default Value Description Unique settings for Core Data. The common settings can be found at Common Configuration ClientId core-data Id used when connecting to MQTT or NATS based MessageBus Property Default Value Description MaxEventSize 25000 maximum event size in kilobytes accepted via REST or MessageBus. 0 represents default to system max. Property Default Value Description Enabled false Enable or disable data retention. Interval 30s Purging interval defines when the database should be rid of readings above the MaxCap. MaxCap 10000 The maximum capacity defines where the high watermark of readings should be detected for purging the amount of the reading to the minimum capacity. MinCap 8000 The minimum capacity defines where the total count of readings should be returned to during purging.
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/core/data/Ch-CoreData/#api-reference","title":"API Reference","text":"Core Data API Reference
"},{"location":"microservices/core/database/Ch-Redis/","title":"Redis Database","text":"EdgeX Foundry's reference implementation database (for sensor data, metadata and all things that need to be persisted in a database) is Redis.
Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory.
"},{"location":"microservices/core/database/Ch-Redis/#memory-utilization","title":"Memory Utilization","text":"Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details (see the list below) and those strategies has continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual).
Redis supports a number of different levels of on-disk persistence. By default, snapshots of the data are persisted every 60 seconds or after 1000 keys have changed. Beyond increasing the frequency of snapshots, append only files that log every database write are also supported. See https://redis.io/topics/persistence for a detailed discussion on how to balance the options.
Redis supports setting a memory usage limit and a policy on what to do if memory cannot be allocated for a write. See the MEMORY MANAGEMENT section of https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf for the configuration options. Since EdgeX and Redis do not currently communicate on data evictions, you will need to use the EdgeX scheduler to control memory usage rather than a Redis eviction policy.
"},{"location":"microservices/core/metadata/Ch-Metadata/","title":"Core Metadata","text":""},{"location":"microservices/core/metadata/Ch-Metadata/#introduction","title":"Introduction","text":"The core metadata micro service has the knowledge about the devices and sensors and how to communicate with them used by the other services, such as core data, core command, and so forth.
Specifically, metadata has the following abilities:
Although metadata has the knowledge, it does not do the following activities:
To understand metadata, its important to understand the EdgeX data objects it manages. Metadata stores its knowledge in a local persistence database. Redis is used by default, but a database abstraction layer allows for other databases to be used.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-profile","title":"Device Profile","text":"Device profiles define general characteristics about devices, the data they provide, and how to command them. Think of a device profile as a template of a type or classification of device. For example, a device profile for BACnet thermostats provides general characteristics for the types of data a BACnet thermostat sends, such as current temperature and humidity level. It also defines which types of commands or actions EdgeX can send to the BACnet thermostat. Examples might include actions that set the cooling or heating point. Device profiles are typically specified in YAML file and uploaded to EdgeX. More details are provided below.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-profile-details","title":"Device Profile Details","text":"Metadata device profile object model
General PropertiesDevice ResourcesAttributesPropertiesDevice CommandsCore CommandsA device profile has a number of high level properties to give the profile context and identification. Its name field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes:
Here is an example general information section for a sample KMC 9001 BACnet thermostat device profile provided with the BACnet device service (you can find the profile in Github) . Only the name is required in this section of the device profile. The name of the device profile must be unique in any EdgeX deployment. The manufacturer, model and labels are all optional bits of information that allow better queries of the device profiles in the system.
name: \"BAC-9001\"\nmanufacturer: \"KMC\"\nmodel: \"BAC-9001\"\nlabels: - \"B-AAC\"\ndescription: \"KMC BAC-9001 BACnet thermostat\"\n
Labels provided a way to tag, organize or categorize the various profiles. They serve no real purpose inside of EdgeX.
A device resource (in the deviceResources section of the YAML file) specifies a sensor value within a device that may be read from or written to either individually or as part of a device command (see below). Think of a device resource as a specific value that can be obtained from the underlying device or a value that can be set to the underlying device. In a thermostat, a device resource may be a temperature or humidity (values sensed from the devices) or cooling point or heating point (values that can be set/actuated to allow the thermostat to determine when associated heat/cooling systems are turned on or off). A device resource has a name for identification and a description for informational purposes.
The properties section of a device resource has also been greatly simplified. See details below.
Back to the BACnet example, here are two device resources. One will be used to get the temperature (read) the current temperature and the other to set (write or actuate) the active cooling set point. The device resource name must be provided and it must also be unique in any EdgeX deployment.
name: Temperature\ndescription: \"Get the current temperature\"\nisHidden: false\n\nname: ActiveCoolingSetpoint\ndescription: \"The active cooling set point\"\nisHidden: false\n
Note
While made explicit in this example, isHidden
is false by default when not specified. isHidden
indicates whether to expose the device resource to the core command service.
The device service allows access to the device resources via REST endpoint. Values specified in the device resources section of the device profile can be accessed through the following URL patterns:
The attributes associated to a device resource are the specific parameters required by the device service to access the particular value. In other words, attributes are \u201cinward facing\u201d and are used by the device service to determine how to speak to the device to either read or write (get or set) some of its values. Attributes are detailed protocol and/or device specific information that informs the device service how to communication with the device to get (or set) values of interest.
Returning to the BACnet device profile example, below are the complete device resource sections for Temperature and ActiveCoolingSetPoint \u2013 inclusive of the attributes \u2013 for the example device.
-\nname: Temperature\ndescription: \"Get the current temperature\"\nisHidden: false\nattributes: { type: \"analogValue\", instance: \"1\", property: \"presentValue\", index: \"none\" }\n-\nname: ActiveCoolingSetpoint\ndescription: \"The active cooling set point\"\nisHidden: false\nattributes:\n{ type: \"analogValue\", instance: \"3\", property: \"presentValue\", index: \"none\" }\n
The properties of a device resource describe the value obtained or set on the device. The properties can optionally inform the device service of some simple processing to be performed on the value. Again, using the BACnet profile as an example, here are the properties associated to the thermostat's temperature device resource.
name: Temperature\ndescription: \"Get the current temperature\"\nattributes: { type: \"analogValue\", instance: \"1\", property: \"presentValue\", index: \"none\" }\nproperties: valueType: \"Float32\"\nreadWrite: \"R\"\nunits: \"Degrees Fahrenheit\"\n
The 'valueType' property of properties gives more detail about the value collected or set. In this case giving the details of the temperature value to be set. The value provides details such as the type of the data collected or set, whether the value can be read, written or both.
The following fields are available in the value property:
The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI)
Device commands (in the deviceCommands section of the YAML file) define access to reads and writes for multiple simultaneous device resources. Device commands are optional. Each named device command should contain a number of get and/or set resource operations, describing the read or write respectively.
Device commands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes (X, Y and Z) together.
A device command consists of the following properties:
Each resourceOperation will specify:
The device commands can also be accessed through a device service\u2019s REST API in a similar manner as described for device resources.
If a device command and device resource have the same name, it will be the device command which is available.
Device resources or device commands that are not hidden are seen and available via the EdgeX core command service.
Other services (such as the rules engine) or external clients of EdgeX, should make requests of device services through the core command service, and when they do, they are calling on the device service\u2019s unhidden device commands or device resources. Direct access to the device commands or device resources of a device service is frowned upon. Commands, made available through the EdgeX command service, allow the EdgeX adopter to add additional security or controls on who/what/when things are triggered and called on an actual device.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device","title":"Device","text":"Data about actual devices is another type of information that the metadata micro service stores and manages. Each device managed by EdgeX Foundry registers with metadata (via its owning device service. Each device must have a unique name associated to it.
Metadata stores information about a device (such as its address) against the name in its database. Each device is also associated to a device profile. This association enables metadata to apply knowledge provided by the device profile to each device. For example, a thermostat profile would say that it reports temperature values in Celsius. Associating a particular thermostat (the thermostat in the lobby for example) to the thermostat profile allows metadata to know that the lobby thermostat reports temperature value in Celsius.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-service","title":"Device Service","text":"Metadata also stores and manages information about the device services. Device services serve as EdgeX's interfaces to the actual devices and sensors.
Device services are other micro services that communicate with devices via the protocol of that device. For example, a Modbus device service facilitates communications among all types of Modbus devices. Examples of Modbus devices include motor controllers, proximity sensors, thermostats, and power meters. Device services simplify communications with the device for the rest of EdgeX.
When a device service starts, it registers itself with metadata. When EdgeX provisions a new devices the device gets associated to its owning device service. That association is also stored in metadata.
Metadata Device, Device Service and Device Profile Model
Metadata's Device Profile, Device and Device Service object model and the association between them
"},{"location":"microservices/core/metadata/Ch-Metadata/#provision-watcher","title":"Provision Watcher","text":"Device services may contain logic to automatically provision new devices. This can be done statically or dynamically. In static device configuration (also known as static provisioning) the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. The devices and the connection information for those devices is known at the point that the device service starts.
In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there \u2013 and the device may not be there at start up. It must continually scan during its operations (typically on some sort of schedule) for new devices within the guides of the location and device parameters provided by configuration.
Not all device services support dynamic discovery. If it does support dynamic discovery, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher, is specific configuration information provided to a device service (usually at startup) that gets stored in metadata. In addition to providing details about what devices to look for during a scan, a provision watcher may also contain \u201cblocking\u201d indicators, which define parameters about devices that are not to be automatically provisioned. This allows the scope of a device scan to be narrowed or allow specific devices to be avoided.
Metadata's provision watcher object model
"},{"location":"microservices/core/metadata/Ch-Metadata/#data-dictionary","title":"Data Dictionary","text":"EdgeX 3.0
Two fields--LastConnected and LastReported--of Device Service are removed in EdgeX 3.0. A new field Properties is added into Device in EdgeX 3.0, so that device-level properties can be defined and then consumed by the implementation of device services to retrieve extra device-level information. For example, assume a device service may require extra device-level information, such as DeviceInstance
, Firmware
, InstanceID
, and ObjectName
in the runtime, and these extra device-level information can be defined in the properties. A new field Properties is added into ProvisionWatcher in EdgeX 3.0, so that the implementation of device services can retrieve extra information when automatically provisioning a device. For example, assume a device service would like to generate the device name in certain format during auto discovery, a property, e.g. DeviceNameTemplate
with the template format of device name can be defined in the ProvisionWatcher, so that the implementation of device service can generate the device name based on such property.
DeviceInstance
, Firmware
, InstanceID
, and ObjectName
in the runtime, and these extra device-level information can be defined in the properties Property Description Represents the attributes and operational capabilities of a device. It is a template for which there can be multiple matching devices within a given system. Id Uniquely identifies the device, a UUID for example Description Name Name for identifying a device Manufacturer Manufacturer of the device Model Model of the device Labels Labels used to search for groups of profiles DeviceResources DeviceResource collection DeviceCommands Collect of deviceCommand Property Description The atomic description of a particular protocol level interface for a class of Devices; represents a value on a device that can be read or written Description Name Tags Tags for adding additional information on reading level Properties List of associated properties Attributes List of associated attributes Property Description Defines read/write capabilities native to the device Description Name isHidden Indicate the visibility of the DeviceCommand via a CoreCommand. Tags Tags for adding additional information on event level readWrite Read/Write Permissions set for this DeviceCommand. The value can be R, W, or RW. R enables GET command, and W enables SET command. resourceOperations List of associated resources and attributes. Should contain more than one, otherwise it is redundant to the single Resource. Property Description DeviceResource Name of a DeviceResource in this profile to be include in a Device Command DefaultValue Default value set to DeviceResource and it should be compatible with the Type field of the named DeviceResource Mappings Map the GET resourceOperation value to another string value and only valid where the Type of the named DeviceResource is String Property Description Represents a service that is responsible for proxying connectivity between a set of devices and the EdgeX Foundry core services; the current state and reachability information for a registered device service Id Uniquely identifies the device service, a UUID for example Name Labels BaseAddress Address (MQTT topic, HTTP address, serial bus, etc.) for reaching the service AdminState Property Description The transformation and constraint properties for a device resource. ValueType Type of the value ReadWrite Read/Write Permissions set for this property Minimum Minimum value that can be get/set from this property Maximum Maximum value that can be get/set from this property DefaultValue Default value set to this property if no argument is passed Mask Mask to be applied prior to get/set of property Shift Shift to be applied after masking, prior to get/set of property Scale Multiplicative factor to be applied after shifting, prior to get/set of property Offset Additive factor to be applied after multiplying, prior to get/set of property Base Base for property to be applied to, leave 0 for no power operation (i.e. base ^ property: 2 ^ 10) Assertion Required value of the property, set for checking error state. Failing an assertion condition will mark the device with an error state MediaType Property Description The metadata used by a Service for automatically provisioning matching Devices. Id Name Unique name and identifier of the provision watcher Labels Identifiers Set of key value pairs that identify property (MAC, HTTP,...) and value to watch for (00-05-1B-A1-99-99, 10.0.0.1,...) 
BlockingIdentifiers Set of key-values pairs that identify devices which will not be added despite matching on Identifiers ServiceName The base name of the device service that new devices will be associated to AdminState Administrative state for provision watcher - either unlocked or locked DiscoveredDevice A DiscoveredDevice defines the data to be assigned on the new discovered device Property Description A DiscoveredDevice defines the data to be assigned on the new discovered device. ProfileName Name of the device profile that should be applied to the devices available at the identifier addresses AdminState Administrative state for new devices - either unlocked or locked AutoEvents Associated auto events to this new devices Properties A map of extendable properties required by the implementation of device services to retrieve extra information when automatically provisioning a device. For example, assume a device service would like to generate the device name in certain format during auto discovery, a property, e.g. DeviceNameTemplate
with the template format of device name can be defined in the ProvisionWatcher, so that the implementation of device service can generate the device name based on such property"},{"location":"microservices/core/metadata/Ch-Metadata/#high-level-interaction-diagrams","title":"High Level Interaction Diagrams","text":"Sequence diagrams for some of the more critical or complex events regarding metadata. These High Level Interaction Diagrams show:
Add a New Device Profile (Step 1 to provisioning a new device)
Add a New Device (Step 2 to provisioning a new device)
What happens on a device service startup?
"},{"location":"microservices/core/metadata/Ch-Metadata/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Core Metadata.
EdgeX 3.0
Notifications configuration is removed in EdgeX 3.0. Metadata will leverage Device System Events to replace the original device change notifications.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
-cp/--configProvider
flag LogLevel INFO log entry severity level. Log entries not of the default level or higher are ignored. Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics <TBD>
Service metrics that Core Metadata collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary Core Metadata service level tags to include with every metric that is reported. Property Default Value Description StrictDeviceProfileChanges false Whether to allow device profile modifications, set to true
to reject all modifications which might impact the existing events and readings. Thus, changes to fields like manufacturer
, isHidden
, or description
can still be made. StrictDeviceProfileDeletes false Whether to allow device profile deletions, set to true
to reject all deletions. Property Default Value Description Validation false Whether to enable units of measure validation, set to true
to validate all device profile units
against the list of units of measure by core metadata. Property Default Value Description Unique settings for Core Metadata. The common settings can be found at Common Configuration Port 59881 Micro service port number StartupMsg This is the EdgeX Core Metadata Microservice Message logged when service completes bootstrap start-up Property Default Value Description UoMFile './res/uom.yaml' path to the location of units of measure configuration Property Default Value Description Unique settings for Core Metadata. The common settings can be found at Common Configuration Name metadata Database or document store name Property Default Value Description Unique settings for Core Metadata. The common settings can be found at Common Configuration ClientId core-metadata Id used when connecting to MQTT or NATS based MessageBus
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/core/metadata/Ch-Metadata/#device-system-events","title":"Device System Events","text":"Device System Events are events triggered by the add, update or delete of devices. A System Event DTO is published to the EdgeX MessageBus each time a new Device is added, an existing Device is updated or when an existing Device is deleted.
"},{"location":"microservices/core/metadata/Ch-Metadata/#system-event-dto","title":"System Event DTO","text":"Edgex 3.0
System Event types deviceservice
, deviceprofile
and provisionwatcher
are new in EdgeX 3.0
The System Event DTO has the following properties:
Property Description Value Type Type of System Eventdevice
, deviceservice
, deviceprofile
, or provisionwatcher
Action System Event action add
, update
, or delete
in this case Source Source of the System Event core-metadata
in this case Owner Owner of the data in the System Event In this case it is the name of the device service that owns the device or core-metadata
Tags Key value map of additional data empty in this case Details The data object that trigger the System Event the added, updated, or deleted Device/Device Profile/Device Service/Provision Watcher in this case Timestamp Date and time of the System Event timestamp"},{"location":"microservices/core/metadata/Ch-Metadata/#publish-topic","title":"Publish Topic","text":"The System Event DTO for Device System Events is published to the topic specified by the MessageQueue.PublishTopicPrefix
configuration setting above, which has a default of edgex/system-events
, plus the following data items, which are added to allow receivers to filter by subscription.
Example Device System Event publish topics
edgex/system-events/core-metadata/device/add/device-onvif-camera/onvif-camera\nedgex/system-events/core-metadata/device/update/device-rest/sample-numeric\nedgex/system-events/core-metadata/device/delete/device-virtual/Random-Boolean-Device\n
"},{"location":"microservices/core/metadata/Ch-Metadata/#units-of-measure","title":"Units of Measure","text":"Core metadata will read unit of measure configuration (see configuration example below) located in UoM.UoMFile
during startup. The specified configuration may be a local configuration file or the URI of the configuration. See the URI for Files section for more details.
EdgeX 3.1
Support for loading the UoM.UoMFile
configuration via URI is new in EdgeX 3.1.
Sample unit of measure configuration
Source: reference to source for all UoM if not specified below\nUnits:\ntemperature:\nSource: www.weather.com\nValues:\n- C\n- F\n- K\nweights:\nSource: www.usa.gov/federal-agencies/weights-and-measures-division\nValues:\n- lbs\n- ounces\n- kilos\n- grams\n
When validation is turned on (Writable.UoM.Validation
is set to true
), all device profile units
(in device resource, device properties) will be validated against the list of units of measure by core metadata.
In other words, when a device profile is created or updated via the core metadata API, the units specified in the device resource's units
field will be checked against the valid list of UoM provided via core metadata configuration.
If the units
value matches any one of the configuration units of measure, then the device resource is considered valid - allowing the create or update operation to continue. If the units
value does not match any one of the configuration units of measure, then the device profile or device resource operation (create or update) is rejected (error code 500 is returned) and an appropriate error message is returned in the response to the caller of the core metadata API.
Note
The units
field on a profile is and shall remain optional. If the units
field is not specified in the device profile, then it is assumed that the device resource does not have well-defined units of measure. In other words, core metadata will not fail a profile with no units
field specified on a device resource.
Core Metadata API Reference
"},{"location":"microservices/device/Ch-DeviceServices/","title":"Device Services Overview","text":""},{"location":"microservices/device/Ch-DeviceServices/#introduction","title":"Introduction","text":"The Device Services Layer interacts with Device Services.
Device services are the edge connectors interacting with the devices that include, but are not limited to: appliances in your home, alarm systems, HVAC equipment, lighting, machines in any industry, irrigation systems, drones, traffic signals, automated transportation, and so forth.
EdgeX device services translate information coming from devices via hundreds of protocols and thousands of formats and bring them into EdgeX. In other terms, device services ingest sensor data provided by \u201cthings\u201d. When it ingests the sensor data, the device service converts the data produced and communicated by the \u201cthing\u201d into a common EdgeX Foundry data structure, and sends that converted data into the core services layer, and to other micro services in other layers of EdgeX Foundry.
Device services also receive and handle any request for actuation back to the device. A device service takes a general command from EdgeX to perform some sort of action, translates it into a protocol-specific request, and forwards the request to the desired device.
Device services serve as the main means EdgeX interacts with sensors/devices. So, in addition to getting sensor data and actuating devices, device services also:
Device services may service one or a number of devices at one time.
A device that a device service manages could be something other than a simple, single, physical device. The device could be an edge/IoT gateway (and all of that gateway's devices), a device manager, a sensor hub, a web service available over HTTP, or a software sensor that acts as a device, or collection of devices, to EdgeX Foundry.
The device service communicates with the devices through protocols native to each device object. EdgeX comes with a number of device services speaking many common IoT protocols such as Modbus, BACnet, BLE, etc. EdgeX also provides the means to create new device services through device service software development kits (SDKs) when you encounter a new protocol and need EdgeX to communicate with a new device.
"},{"location":"microservices/device/Ch-DeviceServices/#device-service-abstraction","title":"Device Service Abstraction","text":"A device service is really just a software abstraction around a device and any associated firmware, software and protocol stack. It allows the rest of EdgeX (and users of EdgeX) to talk to a device via the abstraction API so that all devices look the same from the perspective of how you communicate with them. Under the covers, the implementation of the device service has some common elements, but can also vary greatly depending on the underlying device, protocol, and associate software.
A device service provides the abstraction between the rest of EdgeX and the physical device. In other terms, the device service "wraps" the protocol communication code, device driver/firmware and actual device.
Each device service in EdgeX is an independent micro service. Device services are typically created using a device service SDK. The SDK is really just a library that provides common scaffolding code and convenience methods that are needed by all device services. While not required, the EdgeX community uses the SDKs as the basis for all the device services the community provides. The SDKs make it easier to create a device service by allowing a developer to focus on device-specific communications, features, etc. versus having to code a lot of EdgeX service boilerplate code. Using the SDKs also helps to ensure the device services adhere to the rules required of device services.
Unless you need to create a new device service or modify an existing device service, you may not ever have to go under the covers, so to speak, to understand how a device service works. However, having some general understanding of what a device service does and how it does it can be helpful in customization, setting configuration and diagnosing problems.
"},{"location":"microservices/device/Ch-DeviceServices/#device-service-functionality","title":"Device Service Functionality","text":"All device services must perform the following tasks:
As you can imagine, many of these tasks (like registering with core metadata) are generic and the same for all device services and are therefore provided by the SDK. Other tasks (like getting sensor data from the underlying device) are quite specific to the underlying device. In these cases, the device service SDK provides empty functions for performing the work, and the developer needs to fill in the function code as it relates to the specific device, the communication protocol, device driver, etc.
"},{"location":"microservices/device/Ch-DeviceServices/#device-service-functional-requirements","title":"Device Service Functional Requirements","text":"Requirements for the device service are provided in this documentation. These requirements are being used to define what functionality needs to be offered via any Device Service SDK to produce the device service scaffolding code. They may also help the reader further understand the duties and role of a device service.
"},{"location":"microservices/device/Ch-DeviceServices/#device-profile","title":"Device Profile","text":"EdgeX comes with a number of existing device services for communicating with devices that speak many IoT protocols \u2013 such as Modbus, BACnet, BLE, etc. While these devices services know how to speak to devices that communicate by the associated protocol, the device service doesn\u2019t know the specifics of all devices that speak that protocol. For example, there are thousands of Modbus devices in the world. It is a common industrial protocol used in a variety of devices. Some Modbus devices measure temperature and humidity and provide thermostatic control over building HVAC systems, while other Modbus devices are used in automation control of flare gas meters in the oil and gas industry. This diversity of devices means that the Modbus device service could never know how to communicate with each Modbus device directly. The device service just knows the Modbus protocol generically and must be informed of how to communicate with each individual device based on what that device knows and communicates. Using an analogy, you may speak a language or two. Just because you speak English, doesn\u2019t mean you know everything about all English-speaking people. For example, just because someone spoke English, you would not know if they could solve a calculus problem for you or if they can sing your favorite song.
Device profiles describe a specific device to a device service. Each device managed by a device service has an associated device profile, which defines that device in terms of the data it reports and operations that it supports. General characteristics about the type of device, the data the device provides, and how to command the device are all provided in a device profile. A device profile is described in YAML, which is a human-readable data serialization language (similar to a markup language like XML). See the page on device profiles to learn more about how they provide the detail EdgeX device services need to communicate with a device.
Info
Device profiles, while normally provided to EdgeX in a YAML file, can also be specified to EdgeX in JSON. See the metadata API for upload via JSON versus upload YAML file.
"},{"location":"microservices/device/Ch-DeviceServices/#device-discovery-and-provision-watchers","title":"Device Discovery and Provision Watchers","text":"Device Services may contain logic to automatically provision new devices. This can be done statically or dynamically.
"},{"location":"microservices/device/Ch-DeviceServices/#static-provisioning","title":"Static Provisioning","text":"In static device configuration (also known as static provisioning) the device service connects to and establishes a new device that it manages in EdgeX (specifically metadata) from configuration the device service is provided. For example, a device service may be provided with the specific IP address and additional device details for a device (or devices) that it is to onboard at startup. In static provisioning, it is assumed that the device will be there and that it will be available at the address or place specified through configuration. The devices and the connection information for those devices is known at the point that the device service starts.
"},{"location":"microservices/device/Ch-DeviceServices/#dynamic-provisioning","title":"Dynamic Provisioning","text":"In dynamic discovery (also known as automatic provisioning), a device service is given some general information about where to look and general parameters for a device (or devices). For example, the device service may be given a range of BLE address space and told to look for devices of a certain nature in this range. However, the device service does not know that the device is physically there \u2013 and the device may not be there at start up. It must continually scan during its operations (typically on some sort of schedule) for new devices within the guides of the location and device parameters provided by configuration.
Not all device services support dynamic discovery. If it does support dynamic discovery, the configuration about what and where to look (in other words, where to scan) for new devices is specified by a provision watcher. A provision watcher is created via a call to the core metadata provision watcher API (and is stored in the metadata database).
A Provision Watcher is a filter which is applied to any new devices found when a device service scans for devices. It contains a set of ProtocolProperty names and values; these values may be regular expressions. If a new device is to be added, each of these must match the corresponding properties of the new device. Furthermore, a provision watcher may also contain "blocking" identifiers; if any of these match the properties of the new device (note that matching here is not regex-based), the device will not be automatically provisioned. This allows the scope of a device scan to be narrowed or specific devices to be avoided.
More than one Provision Watcher may be provided for a device service, and discovered devices are added if they match with any one of them. In addition to the filtering criteria, a Provision Watcher includes specification of various properties to be associated with the new device which matches it: these are the Profile name, the initial AdminState, and optionally any AutoEvents to be applied.
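The matching rules above can be illustrated with a small Go sketch (this is not the SDK's actual implementation): identifier values are treated as regular expressions, while blocking identifiers are compared exactly. The function name and sample values below are illustrative.

package main

import (
	"fmt"
	"regexp"
)

// matches applies the filtering rules described above to a discovered device's
// protocol properties: every identifier must match its property, and no blocking
// identifier may match exactly.
func matches(props map[string]string, identifiers map[string]string, blocking map[string][]string) bool {
	for name, pattern := range identifiers {
		value, found := props[name]
		if !found {
			return false
		}
		matched, err := regexp.MatchString("^"+pattern+"$", value)
		if err != nil || !matched {
			return false
		}
	}
	for name, blockedValues := range blocking {
		for _, blocked := range blockedValues {
			if props[name] == blocked { // blocking comparison is exact, not regex-based
				return false
			}
		}
	}
	return true
}

func main() {
	props := map[string]string{"Address": "simple01", "Port": "300"}
	identifiers := map[string]string{"Address": "simple[0-9]+", "Port": "3[0-9]{2}"}
	blocking := map[string][]string{"Port": {"397", "398", "399"}}
	fmt.Println(matches(props, identifiers, blocking)) // true - this device would be provisioned
}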
"},{"location":"microservices/device/Ch-DeviceServices/#admin-state","title":"Admin State","text":"The adminState is either LOCKED
or UNLOCKED
for each device. This is an administrative condition applied to the device. This state is periodically set by an administrator of the system - perhaps for system maintenance or upgrade of the sensor. When LOCKED
, requests to the device via the device service are stopped and an indication that the device is locked (HTTP 423 status code) is returned to the caller.
Data collected from devices by a device service is marshalled into EdgeX event and reading objects (delivered as JSON objects in service REST calls). This is one of the primary responsibilities of a device service. Typically, a configurable schedule - called an auto event schedule - determines when a device service sends data to core data via core data's REST API (future EdgeX implementations may afford alternate means to send the data to core data or to send sensor data to other services).
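As a rough illustration of the auto event idea, the following Go sketch collects a reading on a fixed interval and hands it off for delivery. In a real EdgeX device service the SDK schedules auto events and builds the event and reading objects for you; the types, device name and values here are purely illustrative.

package main

import (
	"fmt"
	"time"
)

type reading struct {
	DeviceName   string
	ResourceName string
	Value        float64
	Origin       int64 // nanoseconds since the epoch
}

func main() {
	interval := 15 * time.Second // corresponds to an auto event interval of 15s
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		r := reading{
			DeviceName:   "Simple-Device01", // hypothetical device name
			ResourceName: "SwitchButton",
			Value:        42.0,
			Origin:       time.Now().UnixNano(),
		}
		fmt.Printf("sending event with reading %+v\n", r)
	}
}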
"},{"location":"microservices/device/Ch-DeviceServices/#test-and-demonstration-device-services","title":"Test and Demonstration Device Services","text":"Among the many available device services provided by EdgeX, there are two device services that are typically used for demonstration, education and testing purposes only. The random device service (device-random-go) is a very simple device service used to provide device service authors a bare bones example inclusive of a device profile. It can also be used to create random integer data (either 8, 16, or 32 bit signed or unsigned) to simulate integer readings when developing or testing other EdgeX micro services. It was created from the Go-based device service SDK.
The virtual device service (device-virtual-go) is also used for demonstration, education and testing. It is a more complex simulator in that it allows any type of data to be generated on a scheduled basis and uses an embedded SQL database (ql) to provide simulated data. Manipulating the data in the embedded database allows the service to mimic almost any type of sensing device. More information on the virtual device service is available in this documentation.
"},{"location":"microservices/device/Ch-DeviceServices/#running-multiple-instances","title":"Running multiple instances","text":"Device services support one additional command-line argument, --instance
or -i
. This allows for running multiple instances of a device service in an EdgeX deployment, by giving them different names.
For example, running device-modbus -i 1
results in a service named device-modbus_1
, ie the parameter given to the instance
argument is added as a suffix to the device service name. The same effect may be obtained by setting the EDGEX_INSTANCE_NAME
environment variable.
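The naming rule is simple enough to sketch in a few lines of Go. This is only an illustration of the behaviour described above (the SDK applies the suffix internally), and the precedence between the command-line argument and the environment variable shown here is an assumption.

package main

import (
	"fmt"
	"os"
)

// instanceServiceName adds the instance value as a suffix to the base service name.
func instanceServiceName(baseName, instanceFlag string) string {
	instance := instanceFlag
	if env := os.Getenv("EDGEX_INSTANCE_NAME"); env != "" {
		instance = env // assumption: environment variable wins when both are set
	}
	if instance == "" {
		return baseName
	}
	return baseName + "_" + instance
}

func main() {
	fmt.Println(instanceServiceName("device-modbus", "1")) // device-modbus_1
}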
Device services now have the capability to publish Events directly to the EdgeX MessageBus, rather than POST the Events to Core Data via REST. This capability is controlled by the Device.UseMessageBus
configuration property (see below), which is set to true
by default. Core Data is configured by default to subscribe to the EdgeX MessageBus to receive and persist the Events. Application services, as in EdgeX 1.x, subscribe to the EdgeX MessageBus to receive and process the Events.
Edgex 3.0
Upon successful PUT command, Device services will also publish an Event with the updated Resource value(s) to the EdgeX MessageBus as long as the Resource(s) are not write-only.
"},{"location":"microservices/device/Ch-DeviceServices/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services.
EdgeX 3.0
UpdateLastConnected is removed in EdgeX 3.0.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Note
The *
on the configuration section names below denotes that these sections are pulled from the device service common configuration and thus are not in the individual device service's private configuration file.
false
to not include units in the Reading. Property Default Value Description See Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics Service metrics that the device service collects. Boolean value indicates if reporting of the metric is enabled. Common and custom metrics are also included. EventsSent
= false Enable/disable reporting of the built-in EventsSent metric ReadingsSent
= false Enable/disable reporting of the built-in ReadingsSent metric <CustomMetric>
= false Enable/disable reporting of custom device service's custom metric. See Custom Device Service Metrics for more details. Tags <empty>
List of arbitrary service level tags to included with every metric that is reported. Property Default Value Description Protocol http The protocol to use when building a URI to the service endpoint Host localhost The host name or IP address where the service is hosted Port 59881 The port exposed by the target service Property Default Value Description Properties that determine how the device service communicates with a device DataTransform true Controls whether transformations are applied to numeric readings MaxCmdOps 128 Maximum number of resources in a device command (hence, readings in an event) MaxCmdResultLen 256 Maximum JSON string length for command results ProfilesDir './res/profiles' If set, directory or index URI containing profile definition files to upload to core-metadata. See URI for Device Service Files for more information on URI index files. Also may be in device service private config, so it can be overridden with environment variable DevicesDir './res/devices' If set, directory or index URI containing device definition files to upload to core-metadata. See URI for Device Service Files for more information on URI index files. Also may be in device service private config, so it can be overridden with environment variable ProvisionWatchersDir '' If set, directory or index URI containing provision watcher definition files to upload to core-metadata (service specific when needed). See URI for Device Service Files for more information on URI index files. EnableAsyncReadings true Enables/Disables the Device Service ability to handle async readings AsyncBufferSize 16 Size of the buffer for async readings Discovery/Enabled false Controls whether device discovery is enabled Discovery/Interval 30s Interval between automatic discovery runs. Zero means do not run discovery automatically Property Default Value Description MaxEventSize 0 maximum event size in kilobytes sent to Core Data or MessageBus. 0 represents default to system max."},{"location":"microservices/device/Ch-DeviceServices/#uris-for-device-service-files","title":"URIs for Device Service Files","text":"EdgeX 3.1
Support for URIs for Devices, Profiles, and Provision Watchers is new in EdgeX 3.1.
When loading device definitions, device profiles, and provision watchers from a URI, the directory field (ie DevicesDir
, ProfilesDir
, ProvisionWatchersDir
) loads an index file instead of a folder name. The contents of the index file will specify the individual files to load by URI by appending the filenames to the URI as shown in the example below. Any authentication specified in the original URI will be used in subsequent URIs. See the URI for Files section for more details.
Example Device Dir loaded from URI in service configuration
...\nProfilesDir = \"./res/profiles\"\nDevicesDir = \"http://example.com/devices/index.json\"\nProvisionWatchersDir = \"./res/provisionwatchers\"\n...\n
"},{"location":"microservices/device/Ch-DeviceServices/#device-definition-uri-example","title":"Device Definition URI Example","text":"For device definitions, the index file contains the list of references to device files that contain one or more devices.
Example Device Index File at http://example.com/devices/index.json
and resulting URIs
[\n\"device1.yaml\", \"device2.yaml\"\n]\nwhich results in the following URIs:\nhttp://example.com/devices/device1.yaml\nhttp://example.com/devices/device2.yaml\n
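Resolving the index entries is just URI reference resolution against the index file's location. The Go sketch below illustrates the idea under that assumption; the real loader also reuses any authentication from the original URI, which is omitted here.

package main

import (
	"fmt"
	"net/url"
)

// resolveIndexEntries appends each filename listed in an index file to the base of the
// index URI, mirroring the behaviour described above.
func resolveIndexEntries(indexURI string, entries []string) ([]string, error) {
	base, err := url.Parse(indexURI)
	if err != nil {
		return nil, err
	}
	resolved := make([]string, 0, len(entries))
	for _, entry := range entries {
		ref, err := url.Parse(entry)
		if err != nil {
			return nil, err
		}
		resolved = append(resolved, base.ResolveReference(ref).String())
	}
	return resolved, nil
}

func main() {
	uris, _ := resolveIndexEntries("http://example.com/devices/index.json",
		[]string{"device1.yaml", "device2.yaml"})
	fmt.Println(uris) // [http://example.com/devices/device1.yaml http://example.com/devices/device2.yaml]
}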
"},{"location":"microservices/device/Ch-DeviceServices/#device-profile-and-provision-watchers-uri-example","title":"Device Profile and Provision Watchers URI Example","text":"For device profiles and provision watchers, the index file contains a dictionary of key-value pairs that map the name of the profile or provision watcher to its file. The name is mapped so that the resources are only loaded from a URI if a device profile or provision watcher by that name has not been loaded yet.
Example Device Profile Index File at http://example.com/profiles/index.json
and resulting URIs
{\n\"Simple-Device\": \"Simple-Driver.yaml\",\n\"Simple-Device2\": \"Simple-Driver2.yml\"\n}\nwhich results in the following URIs:\nhttp://example.com/profiles/Simple-Driver.yaml\nhttp://example.com/profiles/Simple-Driver2.yml\n
"},{"location":"microservices/device/Ch-DeviceServices/#custom-configuration","title":"Custom Configuration","text":"Device services can have custom configuration in one of two ways. See the table below for details.
DriverCustom Structured ConfigurationDriver
- The Driver section is used for simple custom settings and is accessed via the SDK's DriverConfigs() API. The DriverConfigs API returns a map[string] string
containing the contents of the Driver
section of the configuration.yaml
file.
Driver:\nMySetting: \"My Value\"\n
For Go Device Services see Go Custom Structured Configuration for more details.
For C Device Service see C Custom Structured Configuration for more details.
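For the Go SDK, a minimal sketch of reading the simple Driver setting shown above might look like the fragment below. It assumes the v3 DeviceServiceSDK interface exposes DriverConfigs() and LoggingClient() as described in this documentation; check your SDK version for the exact interface.

// Fragment of a ProtocolDriver implementation - illustrative only.
package driver

import (
	"fmt"

	"github.com/edgexfoundry/device-sdk-go/v3/pkg/interfaces"
)

type Driver struct {
	mySetting string
}

// Initialize is called by the SDK; the Driver section values arrive as plain strings.
func (d *Driver) Initialize(sdk interfaces.DeviceServiceSDK) error {
	configs := sdk.DriverConfigs()
	value, ok := configs["MySetting"]
	if !ok {
		return fmt.Errorf("Driver.MySetting is missing from configuration.yaml")
	}
	d.mySetting = value
	sdk.LoggingClient().Infof("MySetting = %s", d.mySetting)
	return nil
}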
"},{"location":"microservices/device/Ch-DeviceServices/#secrets","title":"Secrets","text":""},{"location":"microservices/device/Ch-DeviceServices/#configuration","title":"Configuration","text":"Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
All instances of Device Services running in secure mode require a SecretStore
to be created for the service by the Security Services. See Configuring Add-on Service for details on configuring a SecretStore
to be created for the Device Service. With the use of Redis Pub/Sub
as the default EdgeX MessageBus all Device Services need the redisdb
known secret added to their SecretStore
so they can connect to the Secure EdgeX MessageBus. See the Secure MessageBus documentation for more details.
Each Device Service also has detailed configuration to enable connection to its exclusive SecretStore
When running a Device Service in secure mode, secrets can be stored in the SecretStore by making an HTTP POST
call to the /api/v3/secret
API route on the Device Service. The secret data POSTed is stored to the service's secureSecretStore
. Once a secret is stored, only the service that added the secret will be able to retrieve it. See the Secret API Reference for more details and example.
When running in insecure mode, the secrets are stored and retrieved from the Writable.InsecureSecrets section of the service's configuration.yaml file. Insecure secrets and their paths can be configured as below.
Example - InsecureSecrets Configuration
Writable:\nInsecureSecrets: DB:\nSecretName: \"redisdb\"\nSecretData:\nusername: \"\"\npassword: \"\"\nMQTT:\nSecretName: \"credentials\"\nSecretData:\nusername: \"mqtt-user\"\npassword: \"mqtt-password\"\n
"},{"location":"microservices/device/Ch-DeviceServices/#retrieving-secrets","title":"Retrieving Secrets","text":"Device Services retrieve secrets from their SecretStore
using the SDK API. See Retrieving Secrets for more details using the Go SDK.
Device Service - SDK- API Reference
"},{"location":"microservices/device/V3Migration/","title":"V3 Device Service Migration Guide","text":""},{"location":"microservices/device/V3Migration/#all-device-services","title":"All Device Services","text":"This section is specific to changes made that impact only and all device services.
See Top Level V3 Migration Guide for details applicable to all EdgeX Services.
"},{"location":"microservices/device/V3Migration/#device-files","title":"Device Files","text":"LastConnected
and LastReported
configs.ProtocolProperties now supports typed values.
ProtocolProperty with typed values
protocols:\nother:\nAddress: simple01\nPort: 300\n
The boolean field notify
has been removed as it is never used.
properties
has been added to Device. See Metadata Dictionary and point to Device tab for complete details.tags
field to Device for event level tagging. See Metadata Dictionary and point to Device tab for complete details.optional
field in ResourceProperties to allow any additional or customized data.Change the data type of mask
, shift
, scale
, base
, offset
, maximum
and minimum
from string to number in ResourceProperties.
NOTE: When the device profile is in JSON format, please ensure that the values for mask
are specified in decimal, as the JSON number type does not support hexadecimal. YAML does not have this limitation.
Added tags
field in DeviceResource for reading level tagging. See Metadata Dictionary and point to DeviceResource tab for complete details.
tags
field in DeviceCommand for event level tagging. See Metadata Dictionary and point to DeviceCommand tab for complete details.DiscoveredDevice
; such as profileName
, Device adminState
, and autoEvents
.properties
field in the DiscoveredDevice
object.adminState
now. The Device adminState
is moved into the DiscoveredDevice
object.ProvisionWatcher can now be added during device service startup by loading the definition files from the ProvisionWatchersDir
configuration.
Example Configuration
Device:\nProvisionWatchersDir: ./res/provisionwatchers\n
ProvisionWatcher definition file is in YAML format.
Pre-defined ProvisionWatcher
name: Simple-Provision-Watcher\nserviceName: device-simple\nlabels:\n- simple\nidentifiers:\nAddress: simple[0-9]+\nPort: 3[0-9]{2}\nblockingIdentifiers:\nPort:\n- 397\n- 398\n- 399\nadminState: UNLOCKED\ndiscoveredDevice:\nprofileName: Simple-Device\nadminState: UNLOCKED\nautoEvents:\n- interval: 15s\nsourceName: SwitchButton\nproperties:\ntestPropertyA: weather\ntestPropertyB: meter\n
An extendable field properties
has been added to ProvisionWatcher. See Metadata Dictionary and point to DiscoveredDevice tab for complete details.
This section is specific to changes made that impact existing custom device services.
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#dependencies","title":"Dependencies","text":"You first need to update the go.mod
file to specify go 1.20
and the V3 versions of the Device SDK and any EdgeX go-mods directly used by your service. Note the extra /v3
for the modules.
Example go.mod for V3
module <your service>\n\ngo 1.20\n\nrequire (\ngithub.com/edgexfoundry/device-sdk-go/v3 v3.0.0\ngithub.com/edgexfoundry/go-mod-core-contracts/v3 v3.0.0\n...\n)\n
Once that is complete then the import statements for these dependencies must be updated to include the /v3
in the path.
Example import statements for V3
import (\n...\n\n\"github.com/edgexfoundry/device-sdk-go/v3/pkg/models\"\n\"github.com/edgexfoundry/go-mod-core-contracts/v3/common\"\n)\n
"},{"location":"microservices/device/V3Migration/#go-device-services","title":"Go Device Services","text":"map[string]any
instead of map[string]string
to support typed values.ProvisionWatchersDir
configuration to support adding provision watchers during device service startup.UpdateLastConnected
from configuration.UseMessageBus
from configuration. MessageBus is always enabled in 3.0 for sending events and receiving system events for callbacks.Start
method. The Start
method is called after the device service is completely initialized, allowing the service to run startup tasks.Discover
method. The Discover
method triggers protocol specific device discovery, asynchronously writes the results to the channel which is passed to the implementation via ProtocolDriver.Initialize()
. The results may be added to the device service based on a set of acceptance criteria (i.e. Provision Watchers).ValidateDevice
method. The ValidateDevice
method triggers device's protocol properties validation, returns error if validation failed and the incoming device will not be added into EdgeX.Initialize
method signature to pass DeviceServiceSDK interface as parameter.ds *DeviceService
in service package. Instead, the DeviceServiceSDK interface introduced in Levski release is passed to ProtocolDriver as the only parameter in Initialize method so that developer can still access, mock and test with it.Run
method.PatchDevice
method.DeviceExistsForName
method.AsyncValuesChannel
method.DiscoveredDeviceChannel
method.UpdateDeviceOperatingState
method to accept a OperatingState
value.AsyncReadings
to AsyncReadingsEnabled
.DeviceDiscovery
to DeviceDiscoveryEnabled
.GetLoggingClient
to LoggingClient
.GetSecretProvider
to SecretProvider
.GetMetricsManager
to MetricsManager
.Stop
method as it should only be called by SDK.SetDeviceOperatingState
method.Service
function that returns the device service SDK instance.RunningService
function that returns the Device Service instance.<PublishTopicPrefix>/<device-service-name>/<device-profile-name>/<device-name>/<source-name>
/validate/device
/callback/service
/callback/watcher
/callback/watcher/name/{name}
/callback/profile
/callback/device
/callback/device/name/{name}
/metrics
endpoint.There is a new dependency on IOTech's C Utilities which should be satisfied by installing the relevant package. Previous versions built the utilities into the SDK library. Installation instructions for the utility package may be found in the C SDK repository.
Configuration file changes:
UseMessageBus
from configuration. MessageBus is always enabled in 3.0 for sending events and receiving system events for callbacks.The type
field in both devsdk_resource_t
and devsdk_device_resources
is now an iot_typecode_t
rather than a pointer to one. Additionally the type
field in edgex_resourceoperation
is an iot_typecode_t
.
The edgex_propertytype
enum and the functions for obtaining one from iot_data_t
have been removed. Instead, first consult the type
field of an iot_typecode_t
. This is an instance of the iot_data_type_t
enumeration, the enumerands of which are similar to the EdgeX types, except that there are some additional values (not used in the C SDK) such as Vectors and Pointers, and there is a singular Array type. The type of array elements is held in the element_type
field of the iot_typecode_t
.
Binary data is now supported directly in the utilities, so instead of allocating an array of uint8, the iot_data_alloc_binary
function is available.
Add additional level in event publish topic for device service name. The topic is now <PublishTopicPrefix>/<device-service-name>/<device-profile-name>/<device-name>/<source-name>
The following REST callback endpoints are removed and replaced by the System Events mechanism:
/validate/device
/callback/service
/callback/watcher
/callback/watcher/name/{name}
/callback/profile
/callback/device
/callback/device/name/{name}
Remove old metrics collection and REST /metrics
endpoint.
This section is specific to changes made only to Device MQTT.
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#metadata-in-mqtt-topics","title":"Metadata in MQTT Topics","text":"For EdgeX 3.0, Device MQTT now only supports the multi-level topics. Publishing the metadata and command/reading data wrapped in a JSON object is no longer supported. The published payload is now always only the reading data.
Example V2 JSON object wrapper no longer used
{\n\"name\": \"<device-name>\",\n\"cmd\": \"<source-name>\",\n\"<source-name>\": Base64 encoded JSON containing\n{\n\"<resource1>\" : value1,\n\"<resource2>\" : value2,\n...\n}\n}\n
Your MQTT based device(s) must be migrated to use this new approach. See below for more details.
"},{"location":"microservices/device/V3Migration/#async-data","title":"Async Data","text":"A sync data is published to the incoming/data/{device-name}/{source-name}
topic where:
device-name is the name of the device sending the reading(s)
source-name is the command or resource name for the published data
If the source-name matches a command name the published data must be JSON object with the resource names specified in the command as field names.
Example async published command data
Topic=incoming/data/MQTT-test-device/allValues
{\n\"randfloat32\" : 3.32,\n\"randfloat64\" : 5.64,\n\"message\" : \"Hi World\"\n}\n
If the source-name only matches a resource name the published data can either be just the reading value for the resource or a JSON object with the resource name as the field name.
Example async published resource data
Topic=incoming/data/MQTT-test-device/randfloat32
5.67\n\nor\n\n{\n\"randfloat32\" : 5.67\n}\n
Commands send to the device will be sent on thecommand/{device-name}/{command-name}/{method}/{uuid}
topic where:
get
or set
If the command method is a set
, the published payload contains a JSON object with the resource names and the values to set those resources.
Example Data for Set Command
{\n\"randfloat32\" : 3.32,\n\"randfloat64\" : 5.64\n}\n
The device is expected to publish an empty response to the topic command/response/{uuid}
where uuid is the unique identifier sent in command request topic.
If the command method is a get
, the published payload is empty and the device is expected to publish a response to the topic command/response/{uuid}
where uuid is the unique identifier sent in command request topic. The published payload contains a JSON object with the resource names for the specified command and their values.
Example Response Data for Get Command
{\n\"randfloat32\" : 3.32,\n\"randfloat64\" : 5.64,\n\"message\" : \"Hi World\"\n}\n
"},{"location":"microservices/device/V3Migration/#device-onvif-camera","title":"Device ONVIF Camera","text":"This section is specific to changes made only to Device ONVIF Camera.
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#configuration","title":"Configuration","text":"DiscoverySubnets
.Some commands have been renamed for clarity. See the latest Swagger API Documentation for full details.
EdgeX v2 Command Name EdgeX v3 Command Name Profiles MediaProfiles Scopes DiscoveryScopes AddScopes AddDiscoveryScopes RemoveScopes RemoveDiscoveryScopes GetNodes PTZNodes GetNode PTZNode GetConfigurations PTZConfigurations Configuration PTZConfiguration GetConfigurationOptions PTZConfigurationOptions AbsoluteMove PTZAbsoluteMove RelativeMove PTZRelativeMove ContinuousMove PTZContinuousMove Stop PTZStop GetStatus PTZStatus SetPreset PTZPreset GetPresets PTZPresets GotoPreset PTZGotoPreset RemovePreset PTZRemovePreset GotoHomePosition PTZGotoHomePosition SetHomePosition PTZHomePosition SendAuxiliaryCommand PTZSendAuxiliaryCommand GetAnalyticsConfigurations Media2AnalyticsConfigurations AddConfiguration Media2AddConfiguration RemoveConfiguration Media2RemoveConfiguration GetSupportedRules AnalyticsSupportedRules Rules AnalyticsRules CreateRules AnalyticsCreateRules DeleteRules AnalyticsDeleteRules GetRuleOptions AnalyticsRuleOptions SetSystemFactoryDefault SystemFactoryDefault GetVideoEncoderConfigurations VideoEncoderConfigurations GetEventProperties EventProperties OnvifCameraEvent CameraEvent GetSupportedAnalyticsModules SupportedAnalyticsModules GetAnalyticsModuleOptions AnalyticsModuleOptionsSnapshot
command requires a media profile token to be sent in the jsonObject parameter, similar to StreamUri
command.Capabilities
command's Category
field format is now an array of strings instead of a single string. This now matches the spec.VideoStream
has been removed. It was never tested, and the same functionality can be done through the use of MediaProfiles
and StreamUri
calls.This section is specific to changes made only to Device USB Camera
See Top Level V3 Migration Guide for details applicable to all EdgeX services and All Device Services section above for details applicable to all EdgeX device services.
"},{"location":"microservices/device/V3Migration/#rtsp-authentication","title":"RTSP Authentication","text":"All USB camera rtsp streams need authentication by default. To properly configure credentials for the stream refer here. This will require the building of custom images. To see how to use this feature once the service is deployed, see here.
"},{"location":"microservices/device/profile/Ch-DeviceProfile/","title":"Device Profile","text":"The device profile describes a type of device within the EdgeX system. Each device managed by a device service has an association with a device profile, which defines that device type in terms of the operations which it supports.
For a full list of device profile fields and their required values see the device profile reference.
For a detailed look at the device profile model and all its properties, see the metadata device profile data model.
"},{"location":"microservices/device/profile/Ch-DeviceProfile/#identification","title":"Identification","text":"The profile contains various identification fields. The Name
field is required and must be unique in an EdgeX deployment. Other fields are optional - they are not used by device services but may be populated for informational purposes:
A deviceResource specifies a sensor value within a device that may be read from or written to either individually or as part of a deviceCommand. It has a name for identification and a description for informational purposes.
The device service allows access to deviceResources via its device
REST endpoint.
The Attributes
in a deviceResource are the device-service-specific parameters required to access the particular value. Each device service implementation will have its own set of named values that are required here, for example a BACnet device service may need an Object Identifier and a Property Identifier whereas a Bluetooth device service could use a UUID to identify a value.
The Properties
of a deviceResource describe the value and optionally request some simple processing to be performed on it. The following fields are available:
Bool
, Int8
- Int64
, Uint8
- Uint64
, Float32
, Float64
, String
, Binary
, Object
and arrays of the primitive types (ints, floats, bool). Arrays are specified as eg. Float32Array
, BoolArray
etc.R
, RW
, or W
indicating whether the value is readable or writable.Binary
value.The processing defined by base, scale, offset, mask and shift is applied in that order. This is done within the SDK. A reverse transformation is applied by the SDK to incoming data on set operations (NB mask transforms on set are NYI)
"},{"location":"microservices/device/profile/Ch-DeviceProfile/#devicecommands","title":"DeviceCommands","text":"DeviceCommands define access to reads and writes for multiple simultaneous device resources. Each named deviceCommand should contain a number of resourceOperations
.
DeviceCommands may be useful when readings are logically related, for example with a 3-axis accelerometer it is helpful to read all axes together.
A resourceOperation consists of the following properties:
The device service allows access to deviceCommands via the same device
REST endpoint as is used to access deviceResources.
This chapter details the structure of a Device Profile and allowable values for its fields.
"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#device-profile","title":"Device Profile","text":"Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://datatracker.ietf.org/doc/html/rfc3986#section-2.3. description String N manufacturer String N model String N labels Array of String N deviceResources Array of DeviceResource Y deviceCommands Array of DeviceCommand N"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#deviceresource","title":"DeviceResource","text":"Field Name Type Required? Notes name String Y Must be unique in the EdgeX deployment. Only allow unreserved characters as defined in https://datatracker.ietf.org/doc/html/rfc3986#section-2.3. description String N isHidden Bool N Expose the DeviceResource to Command Service or not, default false tag String N attributes String-Interface Map N Each Device Service should define required and optional keys properties ResourceProperties Y"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#resourceproperties","title":"ResourceProperties","text":"Field Name Type Required? Notes valueType Enum YUint8
, Uint16
, Uint32
, Uint64
, Int8
, Int16
, Int32
, Int64
, Float32
, Float64
, Bool
, String
, Binary
, Object
, Uint8Array
, Uint16Array
, Uint32Array
, Uint64Array
, Int8Array
, Int16Array
, Int32Array
, Int64Array
, Float32Array
, Float64Array
, BoolArray
readWrite Enum Y R
, W
, RW
units String N Developer is open to define units of value minimum Float64 N Error if SET command value out of minimum range maximum Float64 N Error if SET command value out of maximum range defaultValue String N If present, should be compatible with the Type field mask Uint64 N Only valid where Type is one of the unsigned integer types shift Int64 N Only valid where Type is one of the unsigned integer types scale Float64 N Only valid where Type is one of the integer or float types offset Float64 N Only valid where Type is one of the integer or float types base Float64 N Only valid where Type is one of the integer or float types assertion String N String value to which the reading is compared mediaType String N Only required when valueType is Binary
optional String-Any Map N Optional mapping for the given resource"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#devicecommand","title":"DeviceCommand","text":"Field Name Type Required? Notes name String Y Must be unique in this profile. A DeviceCommand with a single DeviceResource is redundant unless renaming and/or restricting R/W access. For example DeviceResource is RW, but DeviceCommand is read-only. Only allow unreserved characters as defined in https://datatracker.ietf.org/doc/html/rfc3986#section-2.3. isHidden Bool N Expose the DeviceCommand to Command Service or not, default false readWrite Enum Y R
, W
, RW
resourceOperations Array of ResourceOperation Y"},{"location":"microservices/device/profile/Ch-DeviceProfileRef/#resourceoperation","title":"ResourceOperation","text":"Field Name Type Required? Notes deviceResource String Y Must name a DeviceResource in this profile defaultValue String N If present, should be compatible with the Type field of the named DeviceResource mappings String-String Map N Map the GET resourceOperation value to another string value"},{"location":"microservices/device/sdk/Ch-DeviceSDK/","title":"Device Services SDK","text":""},{"location":"microservices/device/sdk/Ch-DeviceSDK/#introduction-to-the-sdks","title":"Introduction to the SDKs","text":"EdgeX provides two software development kits (SDKs) to help developers create new device services. While the EdgeX community and the larger EdgeX ecosystem provide a number of open source and commercially available device services for use with EdgeX, there is no way that every protocol and every sensor can be accommodated and connected to EdgeX with a pre-existing device service. Even if all the device service connectivity were provided, your use case, sensor or security infrastructure may require customization. Therefore, the device service SDKs provide the means to extend or customize EdgeX\u2019s device connectivity.
EdgeX is mostly written in Go and C. There is a device service SDK written in both Go and C to support the more popular languages used in EdgeX today. In the future, alternate language SDKs may be provided by the community or made available by the larger ecosystem.
The SDKs are really libraries to be incorporated into a new micro service. They make writing a new device service much easier. By importing the SDK library of choice into your new device service project, you can focus on the details associated with getting and manipulating sensor data from your device via the specific protocol of your device. Other details, such as initialization of the device service, getting the service configured, sending sensor data to core data, managing communications with core metadata, and much more are handled by the code in the SDK library. The code in the SDK also helps to ensure your device service adheres to rules and standards of EdgeX \u2013 such as making sure the service registers with the EdgeX registry service when it starts up.
The EdgeX Foundry Device Service Software Development Kit (SDK) takes the developer through the step-by-step process to create an EdgeX Foundry device service micro service. Then setup the SDK and execute the code to generate the device service scaffolding to get you started using EdgeX.
The Device Service SDK supports:
This page provides detail on the API provided by the C SDK. A device service implementation will define a number of callback functions, and a main
function which registers these functions with the SDK and uses the SDK lifecycle methods to start the service and shut it down. The implementation may also use some of the helper functions which the SDK provides.
In various places information is passed between the SDK and the DS implementation using the iot_data_t
type. This is a holder for data of different types, and its use is described in its own page : Use of iot_data_t
This struct represents a running device service. An instance of it is created by calling devsdk_service_new
, and this instance should be passed in subsequent sdk function calls.
This struct type holds pointers to the various callback functions which the device service implementor needs to define in order to do the device-specific work of the service
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_address_t","title":"devsdk_address_t","text":"This is an alias to void*
. Implementations should define their own structure for device addresses and cast devsdk_address_t*
to pointers to that structure.
This is an alias to void*
. Implementations should define their own structure for device resource information and cast devsdk_resource_attr_t*
to pointers to that structure.
This is an opaque structure which holds protocol properties. The devsdk_protocols_properties
function is used to find the properties for a particular protocol.
This structure is used to pass errors back from the device service startup and shutdown functions
Field Type Content code uint32_t A numeric code indicating the error. Zero is used for success reason const char * A string describing the errorAn instance of devsdk_error with the code field set to zero should be passed by reference when calling startup and shutdown functions
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_device_t","title":"devsdk_device_t","text":"Specifies a device
Field Type Content name char* The device's name (for logging purposes) address devsdk_address_t Address of the device in parsed form"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_resource_t","title":"devsdk_resource_t","text":"Specifies a resource on a device
Field Type Content name char* The resource name (for logging purposes) attrs devsdk_resource_attr_t Resource attributes in parsed form type iot_typecode_t Expected type of values read from or written to the resource"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_commandrequest","title":"devsdk_commandrequest","text":"Specifies a resource in a get or put request
Field Type Content resource devsdk_resource_t* The resource definition mask uint64_t Mask to be applied (put requests only)"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_commandresult","title":"devsdk_commandresult","text":"Holds a value which has been read from a resource
Field Type Content value iot_data_t* The value which has been read origin uint64_t Timestamp of the valueThe timestamp is specified in nanoseconds past the epoch. It should only be set if one is provided by the device itself. Otherwise the timestamp should be left at zero and the SDK will use the current time.
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_device_resources","title":"devsdk_device_resources","text":"A list of device resources available on a device
Field Type Content resname char* Name of the resource attributes iot_data_t* String-keyed map of the resource attributes type iot_typecode_t Type of the data which may be read or written readable bool Whether this resource is readable writable bool Whether this resource is writable next devsdk_device_resources* The next resource in the list, or NULL if this is the last"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_devices","title":"devsdk_devices","text":"A description of a device or a list of such descriptions
Field Type Content device devsdk_device_t* The device's name and addressing information resources devsdk_device_resources* Information on the device's resources next devsdk_devices* The next device in the list, or NULL if this is the last"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#callbacks","title":"Callbacks","text":"Note that each of the callback functions has as its first parameter a void*
pointer. This pointer is specified by the implementation when the device service is created, and is passed to all callbacks. It may therefore be used to hold whatever state is required by the implementation.
This function is called during the service start operation. Its purpose is to supply the implementation with a logger and configuration.
Parameter Type Description impl void* The context data passed in when the service was created lc iot_logger_t* A logging client for the device service config iot_data_t* A string-keyed map containing the configuration specified in the service's \"Driver\" sectionThe function should return true to indicate that initialization was successful, or false to abort the service startup - eg if the supplied configuration was invalid or resources were not available
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_create_address","title":"devsdk_create_address","text":"This function should take the protocol properties that were specified for a device, and create an object representing the device's address in a form suitable for subsequent access.
Parameter Type Description impl void* The context data passed in when the service was created protocols const devsdk_protocols* The protocol properties for the device exception iot_data_t** Additional information in the event of an errorIf the supplied protocol properties are valid (ie, mandatory elements are supplied and have valid values), the function should return an allocated structure representing the address. Otherwise the function should return NULL, and set *exception
to a string (using eg. iot_data_alloc_string
) containing an error message.
This function should free a structure that was previously allocated in the devsdk_create_address
implementation.
This function should take the attributes that were specified for a deviceResource, and create an object representing these attributes in a form suitable for subsequent access.
Parameter Type Description impl void* The context data passed in when the service was created attributes const iot_data_t* The attributes for the device exception iot_data_t** Additional information in the event of an errorIf the supplied attributes are valid (ie, mandatory elements are supplied and have valid values), the function should return an allocated structure representing the resource within the device. Otherwise the function should return NULL, and set *exception
to a string (using eg. iot_data_alloc_string
) containing an error message.
This function should free a structure that was previously allocated in the devsdk_create_resource_attr
implementation
This function is called when a get (read) request on a deviceResource or deviceCommand is made. In the former case, the request is for a single reading and in the latter, for multiple readings. These readings will be packaged by the SDK into an Event.
Parameter Type Description impl void* The context data passed in when the service was created device devsdk_device_t* The name and address of the device to be queried nreadings uint32_t The number of readings being requested requests devsdk_commandrequest* Array containing details of the resources to be queried readings devsdk_commandresult* Array that the function should populate, with results of this request options iot_data_t* Any options which were specified in this request exception iot_data_t** Additional information in the event of an errorThe readings array will have been allocated in the SDK; the implementation should set the results into readings[0]...readings[nreadings - 1]
.
Options
will be a string-keyed map which contains any options set specifically on this request. In the current implementation these may have been set via query parameters in the URL used to make the request.
The function should return true if all of the requested resources were successfully read. Otherwise, *exception
should be allocated with a string value indicating the problem (this will be logged and returned to the caller), and false returned.
This function is called when a put (write) request on a deviceResource or deviceCommand is made. In the former case, the request is for a single resource and in the latter, for multiple resources.
Parameter Type Description impl void* The context data passed in when the service was created device devsdk_device_t* The name and address of the device to be written to nreadings uint32_t The number of resources to be written requests devsdk_commandrequest* Array containing details of the resources to be written values iot_data_t*[] Array of values to be written options iot_data_t* Any options which were specified in this request exception iot_data_t** Additional information in the event of an errorIf the mask
field in an element of the request array is nonzero, the implementation should implement the following:
new-value = (current-value & mask) | request-value\n
Options
will be a string-keyed map which contains any options set specifically on this request. In the current implementation these may have been set via query parameters in the URL used to make the request.
The function should return true if all of the requested resources were successfully written. Otherwise, *exception
should be allocated with a string value indicating the problem (this will be logged and returned to the caller), and false returned.
The implementation should perform any cleanup necessary before shutdown. At the time that this function is called, the service will be quiescent, ie there will be no new incoming requests.
Parameter Type Description impl void* The context data passed in when the service was created force bool An unclean shutdown may be performed if necessary. Long or indefinite timeouts should not occur."},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_init","title":"devsdk_callbacks_init","text":"Call this function in order to create a devsdk_callbacks object containing the required callback functions. This may then be passed to the SDK when starting the service
Parameter Type init devsdk_initialize gethandler devsdk_handle_get puthandler devsdk_handle_put stop devsdk_stop create_addr devsdk_create_address free_addr devsdk_free_address create_res devsdk_create_resource_attr free_res devsdk_free_resource_attr"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#optional-callback-functions","title":"Optional callback functions","text":""},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_reconfigure","title":"devsdk_reconfigure","text":"Implement this function in order to allow changes in the device-specific configuration to be made without restarting the service.
Parameter Type Description impl void* The context data passed in when the service was created config iot_data_t* The new configuration (contains all elements, not just those which have changed)"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_set_reconfiguration","title":"devsdk_callbacks_set_reconfiguration","text":"Call this to add your reconfiguration function to the callbacks structure
Parameter Type Description cb devsdk_callbacks* structure to be modified reconf devsdk_reconfigure function to add"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_discover","title":"devsdk_discover","text":"This function is called when a request for discovery is made. This may occur automatically at intervals or due to an external request. The SDK implements locking such that multiple invocations of this function will not be made in parallel.
Implementations should perform a scan for devices, and use the devsdk_add_discovered_devices
function to register them.
This is a placeholder function for future use. Its purpose will be to allow automatic generation of device profiles. It is not used in current versions of EdgeX.
Parameter Type Description impl void* The context data passed in when the service was created dev devsdk_device_t* The device which is to be described options iot_data_t* Service specific discovery options map. May be NULL resources devsdk_device_resources** The operations supported by the device exception iot_data_t** Additional information in the event of an errorImplementations should populate the resources
parameter and return true if it is possible to automatically describe the device. Otherwise return false and set exception
.
Call this to add your discovery functions to the callbacks structure
Parameter Type Description cb devsdk_callbacks* structure to be modified discover devsdk_discover device discovery function describe devsdk_describe device description function, may be NULL (currently unused)"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_add_device_callback","title":"devsdk_add_device_callback","text":"To be notified when a device is added to the system (and assigned to this device service), provide an implementation of this function
Parameter Type Description impl void* The context data passed in when the service was created devname char* The name of the new device protocols devsdk_protocols* The protocol properties that comprise the device's address resources devsdk_device_resources* The operations supported by the device adminEnabled bool Whether the device is administratively enabled"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_update_device_callback","title":"devsdk_update_device_callback","text":"To be notified when a device managed by this service is modified, provide an implementation of this function
Parameter Type Description impl void* The context data passed in when the service was created devname char* The name of the updated device protocols devsdk_protocols* The protocol properties that comprise the device's address state bool Whether the device is administratively enabled"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_remove_device_callback","title":"devsdk_remove_device_callback","text":"To be notified when a device managed by this service is removed, provide an implementation of this function
Parameter Type Description impl void* The context data passed in when the service was created devname char* The name of the removed device protocols devsdk_protocols* The protocol properties that comprise the device's address"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_set_listeners","title":"devsdk_callbacks_set_listeners","text":"Call this to add your add, remove and/or update listener functions to the callbacks structure. Any of the functions may be NULL
Parameter Type Description cb devsdk_callbacks* structure to be modified device_added devsdk_add_device_callback device addition listener device_updated devsdk_update_device_callback device update listener device_removed devsdk_remove_device_callback device removal listener"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_autoevent_start_handler","title":"devsdk_autoevent_start_handler","text":"Some device types may be configured to generate readings automatically at intervals. Such behavior may be enabled by providing implementations of this function and the stop handler described below. If \"AutoEvents\" have been defined for a device, this function will be called to request that automatic events should begin. The events when generated should be posted using the devsdk_post_readings
function. In the absence of an implementation of this function, the SDK will poll the device via the get handler.
The function should return a pointer to a data structure that will be provided in a subsequent call to the stop handler when this autoevent is to be stopped
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_autoevent_stop_handler","title":"devsdk_autoevent_stop_handler","text":"This function is called to request that automatic events should cease
Parameter Type Description impl void* The context data passed in when the service was created handle void* The data structure returned by a previous call to the start handler"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_callbacks_set_autoevent_handlers","title":"devsdk_callbacks_set_autoevent_handlers","text":"Call this to add your autoevent management functions to the callbacks structure. Both start and stop handlers are required
Parameter Type Description cb devsdk_callbacks* structure to be modified ae_starter devsdk_autoevent_start_handler Autoevent start handler ae_stopper devsdk_autoevent_stop_handler Autoevent stop handler"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#initialisation-and-shutdown","title":"Initialisation and Shutdown","text":"These functions manage the lifecycle of the device service and should be called in the order presented here
"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_service_new","title":"devsdk_service_new","text":"This function creates a new device service
Parameter Type Description defaultname char* The device service name, used in logging, metadata lookups and to scope configuration. This may be overridden via the commandline version char* The version string for this service. This is for information only, and will be logged during startup impldata void* An object pointer which will be passed back whenever one of the callback functions is invoked implfns devsdk_callbacks* Structure containing the device implementation functions. The SDK will call these functions in order to carry out its various actions argc int* A pointer to argc as passed into main(). This will be adjusted to account for arguments consumed by the SDK argv char** argv as passed into main(). This will be adjusted to account for arguments consumed by the SDK err devsdk_error* Nonzero reason codes will be set here in the event of errors. The newly created service is represented by an object of type devsdk_service_t, which is returned if the service is created successfully
The SDK modifies the commandline argument parameters argc
and argv
, removing those arguments which it supports. The implementation may support additional arguments by inspecting these modified values after the create function has been called
Start the device service. Default values for the implementation-specific configuration are passed in here. These must be provided in a string-keyed iot_data_t map. A value named \"X\" may be over-ridden in the configuration file by an entry for X in the [Driver]
section. For dynamically-updatable configuration, set a value for \"Writable/X\". This will correspond to a configuration file entry in the [Writable.Driver]
section and updates may be received by implementing the devsdk_reconfigure
function
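To illustrate the pattern, here is a minimal sketch of a service main() following this lifecycle, modelled on the SDK's template service. The callback functions (my_init, my_get, my_put, my_stop, my_create_addr, my_free_addr, my_create_res, my_free_res, my_reconfigure), the my_driver.h header, the my_driver_t struct and the "Host"/"Writable/Poll" defaults are hypothetical; the devsdk_service_start parameter order (service, defaults map, error) and the devsdk_error code field are assumptions to verify against devsdk/devsdk.h.

#include <stdbool.h>
#include "devsdk/devsdk.h"
#include "my_driver.h"   /* hypothetical: declares my_driver_t and the callback functions */

int main (int argc, char *argv[])
{
  devsdk_error e;
  e.code = 0;
  my_driver_t driver = {0};

  /* Required callbacks, in the order listed above */
  devsdk_callbacks *cbs = devsdk_callbacks_init
    (my_init, my_get, my_put, my_stop, my_create_addr, my_free_addr, my_create_res, my_free_res);
  /* Optional: accept [Writable.Driver] updates without a restart */
  devsdk_callbacks_set_reconfiguration (cbs, my_reconfigure);

  devsdk_service_t *service = devsdk_service_new
    ("device-example", "1.0.0", &driver, cbs, &argc, argv, &e);
  if (service == NULL)
  {
    return 1;
  }
  driver.service = service;

  /* Driver-specific configuration defaults: a string-keyed map.
     "Host" corresponds to a [Driver] entry; "Writable/Poll" to a [Writable.Driver] entry. */
  iot_data_t *defaults = iot_data_alloc_map (IOT_DATA_STRING);
  iot_data_string_map_add (defaults, "Host", iot_data_alloc_string ("localhost", IOT_DATA_COPY));
  iot_data_string_map_add (defaults, "Writable/Poll", iot_data_alloc_ui32 (5));

  devsdk_service_start (service, defaults, &e);

  /* ... run until a shutdown signal is received ... */

  devsdk_service_stop (service, false, &e);
  devsdk_service_free (service);
  iot_data_free (defaults);
  return 0;
}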
Stop the device service. Any automatic events will be cancelled and the REST API for the device service will be shut down
Parameter Type Description svc devsdk_service_t* The device service force bool Force stop. Currently unused but is passed through to the stop handler err devsdk_error* Nonzero reason codes will be set here in the event of errors"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_service_free","title":"devsdk_service_free","text":"This function disposes of the device service object and all associated resources
Parameter Type Description svc devsdk_service_t* The device service"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#additional-functionality","title":"Additional functionality","text":""},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_usage","title":"devsdk_usage","text":"This function writes out the commandline options supported by the SDK. It may be useful if a --help
option is to be implemented
This function returns a map of properties (keyed on string) for the named protocol.
Parameter Type Description prots devsdk_protocols* The protocols to search name char* The name of the protocol to search for"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_protocols_new","title":"devsdk_protocols_new","text":"This function creates a new protocols object, or adds a property set to an existing one.
Parameter Type Description name char* The name of the new protocol properties iot_data_t* The properties of the new protocol list devsdk_protocols* The protocols object to extend, or NULL to create a new one"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_protocols_dup","title":"devsdk_protocols_dup","text":"This function duplicates a protocols object
Parameter Type Description e devsdk_protocols* object to duplicate"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_protocols_free","title":"devsdk_protocols_free","text":"This function disposes of the memory used by a protocols object
Parameter Type Description e devsdk_protocols* object to free"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_get_secrets","title":"devsdk_get_secrets","text":"This function returns secrets (credentials) for the service. In insecure mode these will be part of the service configuration, in secure mode they will be retrieved from the secret store (eg, Vault).
The secrets are returned as a string-keyed map. This should be disposed after use using iot_data_free
This function posts readings to EdgeX. Depending on configuration this may be via REST to core-data or via the Message Bus to various upstream services. The readings are assembled into an Event and then posted
This function may be used in services which implement the autoevent handlers or by any other service where the natural operation is that readings are generated by the device rather than being explicitly requested
Parameter Type Description svc devsdk_service_t* The device service device_name char* Name of the device that has generated the readings resource_name char* Name of the resource (or command) corresponding to this set of readings values devsdk_commandresult* The readings to be posted. The cardinality of the values
array will depend on the resource - if it is a deviceResource
there should be a single reading; for a deviceCommand
there may be several
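For illustration, a hedged sketch of pushing one device-originated reading with devsdk_post_readings follows. The devsdk_commandresult value field is an assumption based on the SDK's example services, the device and resource names are illustrative, and the ownership of the allocated value after posting is not addressed here; consult devsdk/devsdk.h.

/* Push a single reading for the (hypothetical) "Temperature" deviceResource of "Device001" */
static void post_temperature (devsdk_service_t *service, float celsius)
{
  devsdk_commandresult results[1] = {0};
  results[0].value = iot_data_alloc_f32 (celsius);   /* assumed field name */

  /* The SDK assembles the readings into an Event and delivers it upstream */
  devsdk_post_readings (service, "Device001", "Temperature", results);
}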
This function should be called in response to a request for device discovery, but may be called at any time if for a particular device class immediate automatic discovery is appropriate. The function takes an array of devices in order to allow for batching, but it may be called multiple times during the course of a single invocation of discovery if necessary
Parameter Type Description svc devsdk_service_t* The device service ndevices uint32_t Number of devices discovered devices devsdk_discovered_device* Array of discovered devices"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_set_device_opstate","title":"devsdk_set_device_opstate","text":"This function can be used to indicate that a device has become non-operational or non-responsive, or that a device has returned from such a state. The SDK will return errors for requests for a device marked non-operational without calling the get or set handler
Parameter Type Description svc devsdk_service_t* The device service devname char* The device that has changed state operational bool The new operational state"},{"location":"microservices/device/sdk/Ch-Ref-SDK-C/#devsdk_get_devices","title":"devsdk_get_devices","text":"Returns a list of devices registered with this service
Parameter Type Description svc devsdk_service_t* The device service. The returned list should be disposed after use using devsdk_free_devices
Returns information on a device
Parameter Type Description svc devsdk_service_t* The device service name char* The device to query for. The returned device should be disposed after use using devsdk_free_devices
Frees a devices structure returned by devsdk_get_devices
or devsdk_get_device
The iot_data_t
type is a holder for various types of data, and it is used in the SDK API to hold reading values and name-value collections (maps keyed by string). This chapter describes how to use iot_data_t
in interactions with the SDK. It is not a complete guide to either the type or to the IOT utilities package which includes it
The type of data held in an iot_data_t
object is represented by the iot_typecode_t
type. This has a field type
, which is an iot_data_type_t
, and can take the following values:
IOT_DATA_INT8 IOT_DATA_INT16 IOT_DATA_INT32 IOT_DATA_INT64
for signed integersIOT_DATA_UINT8 IOT_DATA_UINT16 IOT_DATA_UINT32 IOT_DATA_UINT64
for unsigned integersIOT_DATA_FLOAT32 IOT_DATA_FLOAT64
for floating point valuesIOT_DATA_BOOL
for booleansIOT_DATA_STRING
for stringsIOT_DATA_ARRAY
for arraysIOT_DATA_BINARY
for binary dataIOT_DATA_MAP
for maps (used for EdgeX Object type)For the array case, the iot_typecode_t
has an element_type
field, also of type iot_data_type_t
which indicates the type of the array elements - integers, floats and booleans are supported.
Instances of iot_data_t
are created with the iot_data_alloc_*
functions
For primitive types, use
iot_data_alloc_i8 iot_data_alloc_i16 iot_data_alloc_i32 iot_data_alloc_i64
for signed integersiot_data_alloc_ui8 iot_data_alloc_ui16 iot_data_alloc_ui32 iot_data_alloc_ui64
for unsigned integersiot_data_alloc_f32 iot_data_alloc_f64
for floatsiot_data_alloc_bool
for booleansEach takes a single parameter which is the value to hold
"},{"location":"microservices/device/sdk/Ch-Using-iot-data-t/#strings","title":"Strings","text":"Strings are allocated using iot_data_alloc_string
. In addition to the const char*
which specifies the string to hold, a further parameter of type iot_data_ownership_t
must be provided. This sets the ownership semantics for the string, and can take the following values:
iot_data_t
object is freed IOT_DATA_COPY A copy will be made of the string. This copy will be freed when the iot_data_t
object is freed, but the calling code remains responsible for the original"},{"location":"microservices/device/sdk/Ch-Using-iot-data-t/#arrays","title":"Arrays","text":"For array readings use iot_data_alloc_array
For binary data use iot_data_alloc_binary
Object-typed readings are represented by a map. Allocate it using
iot_data_alloc_map (IOT_DATA_STRING)
Values are added to the map using the iot_data_string_map_add
function
The accessors for primitive types are
iot_data_i8 iot_data_i16 iot_data_i32 iot_data_i64
iot_data_ui8 iot_data_ui16 iot_data_ui32 iot_data_ui64
iot_data_f32 iot_data_f64
iot_data_bool
Each function takes an iot_data_t*
as parameter and returns the value in the expected C type
The iot_data_string
function returns the char*
held in the data object
iot_data_array_length
returns the length of an arrayiot_data_address
returns a pointer to the first elementiot_data_array_type
returns the type of the elements (as iot_data_type_t
)iot_data_address
returns a pointer to the binary dataiot_data_array_length
returns the length in bytesUse iot_data_string_map_get
to obtain the iot_data_t
instance representing a field
Instances of iot_data_t
are freed using the iot_data_free
function
The DeviceServiceSDK
API provides the following APIs for the device service developer to use.
type DeviceServiceSDK interface {\nAddDevice(device models.Device) (string, error)\nDevices() []models.Device\nGetDeviceByName(name string) (models.Device, error)\nUpdateDevice(device models.Device) error\nRemoveDeviceByName(name string) error\nAddDeviceProfile(profile models.DeviceProfile) (string, error)\nDeviceProfiles() []models.DeviceProfile\nGetProfileByName(name string) (models.DeviceProfile, error)\nUpdateDeviceProfile(profile models.DeviceProfile) error\nRemoveDeviceProfileByName(name string) error\nAddProvisionWatcher(watcher models.ProvisionWatcher) (string, error)\nProvisionWatchers() []models.ProvisionWatcher\nGetProvisionWatcherByName(name string) (models.ProvisionWatcher, error)\nUpdateProvisionWatcher(watcher models.ProvisionWatcher) error\nRemoveProvisionWatcher(name string) error\nDeviceResource(deviceName string, deviceResource string) (models.DeviceResource, bool)\nDeviceCommand(deviceName string, commandName string) (models.DeviceCommand, bool)\nAddDeviceAutoEvent(deviceName string, event models.AutoEvent) error\nRemoveDeviceAutoEvent(deviceName string, event models.AutoEvent) error\nUpdateDeviceOperatingState(name string, state models.OperatingState) error\nDeviceExistsForName(name string) bool\nPatchDevice(updateDevice dtos.UpdateDevice) error\nRun() error\nName() string\nVersion() string\nAsyncReadingsEnabled() bool\nAsyncValuesChannel() chan *sdkModels.AsyncValues\nDiscoveredDeviceChannel() chan []sdkModels.DiscoveredDevice\nDeviceDiscoveryEnabled() bool\nDriverConfigs() map[string]string\nAddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error\nAddCustomRoute(route string, authenticated Authenticated, handler func(echo.Context) error, methods ...string) error\nLoadCustomConfig(customConfig UpdatableConfig, sectionName string) error\nListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error\nLoggingClient() logger.LoggingClient\nSecretProvider() interfaces.SecretProvider\nMetricsManager() interfaces.MetricsManager\n}\n
"},{"location":"microservices/device/sdk/SDK-Go-API/#apis","title":"APIs","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#auto-event","title":"Auto Event","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#adddeviceautoevent","title":"AddDeviceAutoEvent","text":"AddDeviceAutoEvent(deviceName string, event models.AutoEvent) error
This API adds a new AutoEvent to the Device with given name. An error is returned if not able to add AutoEvent
"},{"location":"microservices/device/sdk/SDK-Go-API/#removedeviceautoevent","title":"RemoveDeviceAutoEvent","text":"RemoveDeviceAutoEvent(deviceName string, event models.AutoEvent) error
This API removes an AutoEvent from the Device with given name. An error is returned if not able to remove AutoEvent
"},{"location":"microservices/device/sdk/SDK-Go-API/#device","title":"Device","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#adddevice","title":"AddDevice","text":"AddDevice(device models.Device) (string, error)
This API adds a new Device to Core Metadata and device service's cache. Returns new Device id or an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#updatedevice","title":"UpdateDevice","text":"UpdateDevice(device models.Device) error
This API updates the Device in Core Metadata and device service's cache. An error is returned if the Device can not be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#updatedeviceoperatingstate","title":"UpdateDeviceOperatingState","text":"UpdateDeviceOperatingState(deviceName string, state models.OperatingState) error
This API updates the Device's operating state for the given name in Core Metadata and device service's cache. An error is return if the operating state can not be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removedevicebyname","title":"RemoveDeviceByName","text":"RemoveDeviceByName(name string) error
This API removes the specified Device by name from Core Metadata and device service cache. An error is return if the Device can not be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#devices","title":"Devices","text":"Devices() []models.Device
This API returns all managed Devices from the device service's cache
"},{"location":"microservices/device/sdk/SDK-Go-API/#getdevicebyname","title":"GetDeviceByName","text":"GetDeviceByName(name string) (models.Device, error)
This API returns the Device by its name if it exists in the device service's cache, or returns an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#patchdevice","title":"PatchDevice","text":"PatchDevice(updateDevice dtos.UpdateDevice) error
This API patches the specified device properties in Core Metadata. Device name is required to be provided in the UpdateDevice.
Note
All properties of UpdateDevice are pointers and anything that is nil
will not modify the device. In the case of Arrays and Maps, the whole new value must be sent, as it is applied as an overwrite operation.
Example - PatchDevice()
service := interfaces.Service()\nlocked := models.Locked\nreturn service.PatchDevice(dtos.UpdateDevice{\nName: &name,\nAdminState: &locked,\n})\n
"},{"location":"microservices/device/sdk/SDK-Go-API/#deviceexistsforname","title":"DeviceExistsForName","text":"DeviceExistsForName(name string) bool
This API returns true if a device exists in cache with the specified name, otherwise it returns false.
"},{"location":"microservices/device/sdk/SDK-Go-API/#device-profile","title":"Device Profile","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#adddeviceprofile","title":"AddDeviceProfile","text":"AddDeviceProfile(profile models.DeviceProfile) (string, error)
This API adds a new DeviceProfile to Core Metadata and device service's cache. Returns new DeviceProfile id or error
"},{"location":"microservices/device/sdk/SDK-Go-API/#updatedeviceprofile","title":"UpdateDeviceProfile","text":"UpdateDeviceProfile(profile models.DeviceProfile) error
This API updates the DeviceProfile in Core Metadata and device service's cache. An error is returned if the DeviceProfile can not be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removedeviceprofilebyname","title":"RemoveDeviceProfileByName","text":"RemoveDeviceProfileByName(name string) error
This API removes the specified DeviceProfile by name from Core Metadata and device service's cache. An error is return if the DeviceProfile can not be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#deviceprofiles","title":"DeviceProfiles","text":"DeviceProfiles() []models.DeviceProfile
This API returns all managed DeviceProfiles from device service's cache.
"},{"location":"microservices/device/sdk/SDK-Go-API/#getprofilebyname","title":"GetProfileByName","text":"GetProfileByName(name string) (models.DeviceProfile, error)
This API returns the DeviceProfile by its name if it exists in the cache, or returns an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#provision-watcher","title":"Provision Watcher","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#addprovisionwatcher","title":"AddProvisionWatcher","text":"AddProvisionWatcher(watcher models.ProvisionWatcher) (string, error)
This API adds a new Watcher to Core Metadata and device service's cache. Returns new ProvisionWatcherid or error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#updateprovisionwatcher","title":"UpdateProvisionWatcher","text":"UpdateProvisionWatcher(watcher models.ProvisionWatcher) error
This API updates the ProvisionWatcherin in Core Metadata and device service's cache. An error is returned if the ProvisionWatcher can not be updated.
"},{"location":"microservices/device/sdk/SDK-Go-API/#removeprovisionwatcher","title":"RemoveProvisionWatcher","text":"RemoveProvisionWatcher(name string) error
This API removes the specified ProvisionWatcherby name from Core Metadata and device service's cache. An error is return if the ProvisionWatcher can not be removed.
"},{"location":"microservices/device/sdk/SDK-Go-API/#provisionwatchers","title":"ProvisionWatchers","text":"ProvisionWatchers() []models.ProvisionWatcher
This API returns all managed ProvisionWatchers from device service's cache.
"},{"location":"microservices/device/sdk/SDK-Go-API/#getprovisionwatcherbyname","title":"GetProvisionWatcherByName","text":"GetProvisionWatcherByName(name string) (models.ProvisionWatcher, error)
This API returns the ProvisionWatcher by its name if it exists in the device service's , or returns an error.
"},{"location":"microservices/device/sdk/SDK-Go-API/#resource-command","title":"Resource & Command","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#deviceresource","title":"DeviceResource","text":"DeviceResource(deviceName string, deviceResource string) (models.DeviceResource, bool)
This API retrieves the specific DeviceResource instance from device service's cache for the specified Device name and Resource name. Returns the DeviceResource and true if found in device service's cache or false if not found.
"},{"location":"microservices/device/sdk/SDK-Go-API/#devicecommand","title":"DeviceCommand","text":"DeviceCommand(deviceName string, commandName string) (models.DeviceCommand, bool)
This API retrieves the specific DeviceCommand instance from device service's cache for the specified Device name and Command name. Returns the DeviceCommand and true if found in device service's cache or false if not found.
"},{"location":"microservices/device/sdk/SDK-Go-API/#custom-configuration","title":"Custom Configuration","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#loadcustomconfig","title":"LoadCustomConfig","text":"LoadCustomConfig(customConfig service.UpdatableConfig, sectionName string) error
This API attempts to load service's custom configuration. It uses the same command line flags to process the custom config in the same manner as the standard configuration. Returns an error is custom configuration can not be loaded. See Custom Structured Configuration section for more details.
"},{"location":"microservices/device/sdk/SDK-Go-API/#listenforcustomconfigchanges","title":"ListenForCustomConfigChanges","text":"ListenForCustomConfigChanges(configToWatch interface{}, sectionName string, changedCallback func(interface{})) error
This API attempts to start listening for changes to the specified custom configuration section. LoadCustomConfig API must be called before this API. See Custom Structured Configuration section for more details.
"},{"location":"microservices/device/sdk/SDK-Go-API/#miscellaneous","title":"Miscellaneous","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#name","title":"Name","text":"Name() string
This API returns the name of the Device Service.
"},{"location":"microservices/device/sdk/SDK-Go-API/#version","title":"Version","text":"Version() string
This API returns the version number of the Device Service.
"},{"location":"microservices/device/sdk/SDK-Go-API/#driverconfigs","title":"DriverConfigs","text":"DriverConfigs() map[string]string
This API returns the driver specific configuration
"},{"location":"microservices/device/sdk/SDK-Go-API/#asyncreadingsenabled","title":"AsyncReadingsEnabled","text":"AsyncReadingsEnabled() bool
This API returns a bool value to indicate whether the asynchronous reading is enabled via configuration.
"},{"location":"microservices/device/sdk/SDK-Go-API/#devicediscoveryenabled","title":"DeviceDiscoveryEnabled","text":"DeviceDiscoveryEnabled() bool
This API returns a bool value to indicate whether the device discovery is enabled via configuration.
"},{"location":"microservices/device/sdk/SDK-Go-API/#addroute-deprecated","title":"AddRoute (Deprecated)","text":"AddRoute(route string, handler func(http.ResponseWriter, *http.Request), methods ...string) error
This API is deprecated in favor of AddCustomRoute()
which has an explicit parameter to indicate whether the route should require authentication.
AddCustomRoute(route string, authenticated interfaces.Authenticated, handler func(echo.Context) error, methods ...string) error
This API allows leveraging the existing internal web server to add routes specific to the Device Service. If the route is marked authenticated, it will require an EdgeX JWT when security is enabled. Returns error is route could not be added.
Note
The handler
function uses the signature of echo.HandlerFunc
which is func(echo.Context) error
. See echo API HandlerFunc section for more details.
LoggingClient() logger.LoggingClient
This API returns the LoggingClient
used to log messages.
SecretProvider() interfaces.SecretProvider
This API returns the SecretProvider used to get/save the service secrets. See Secret Provider API section for more details.
"},{"location":"microservices/device/sdk/SDK-Go-API/#metricsmanager","title":"MetricsManager","text":"MetricsManager () interfaces.MetricsManager
This API returns the MetricsManager used to register custom service metrics. See Service Metrics for more details
"},{"location":"microservices/device/sdk/SDK-Go-API/#asyncvalueschannel","title":"AsyncValuesChannel","text":"AsyncValuesChannel() chan *sdkModels.AsyncValues
This API returns a channel to allow developer send asynchronous reading back to SDK.
"},{"location":"microservices/device/sdk/SDK-Go-API/#discovereddevicechannel","title":"DiscoveredDeviceChannel","text":"DiscoveredDeviceChannel() chan []sdkModels.DiscoveredDevice
This API returns a channel to allow developer send discovered devices back to SDK.
"},{"location":"microservices/device/sdk/SDK-Go-API/#internal","title":"Internal","text":""},{"location":"microservices/device/sdk/SDK-Go-API/#run","title":"Run","text":"Run() error
This internal API call starts this Device Service. It should not be called directly by a device service. Instead, call startup.Bootstrap(...)
.
The following table lists the EdgeX device services and protocols they support.
Device Service Repository Protocol Status Comments Documentation device-bacnet-c BACnet Active Supports BACnet via ethernet (IP) or serial (MSTP). Uses the Steve Karag BACnet stack device-bacnet docs device-coap-c CoAP Active EdgeX device service for CoAP-based REST protocol device-coap docs device-gpio GPIO Active Linux only; uses sysfs ABI device-gpio docs device-modbus-go Modbus Active Supports Modbus over TCP or RTU device-modbus docs device-mqtt-go MQTT Active Two way communications via multiple MQTT topics device-mqtt docs device-onvif-camera ONVIF Active Full implementation of ONVIF spec. Note that not all cameras implement the complete ONVIF spec. device-onvif-camera docs device-rest-go REST Active provides one-way communications only. Allows posting of binary and JSON data via REST. Events are single reading only. device-rest docs device-rfid-llrp-go LLRP Active Communications with RFID readers via LLRP. device-rfid-llrp docs device-snmp-go SNMP Active Basic implementation of SNMP protocol. Async callbacks and traps not currently supported. device-snmp docs device-uart UART Active Linux only; for connecting serial UART devices to EdgeX device-urt docs device-usb-camera USB Active USB using V4L2 API. ONLY works on Linux with kernel v5.10 or higher. Includes RTSP server for video streaming. device-usb-camera docs device-virtual-go Active Simulates sensor readings of type binary, Boolean, float, integer and unsigned integer device-virtual docsNote
Check the above Device Service README(s) for known devices that have been tested with the Device Service. Not all Device Service READMEs will have this information.
"},{"location":"microservices/device/services/device-bacnet/","title":"Device BACNET","text":"Device service for BACnet protocol written in C. This service may be built to support BACnet devices connected via ethernet (/IP) or serial (/MSTP).
See README for more details
"},{"location":"microservices/device/services/device-coap/","title":"Device COAP","text":"Device service for CoAP-based REST protocol
See README for more details
"},{"location":"microservices/device/services/device-gpio/","title":"Device GPIO","text":"Device service for connecting GPIO devices to EdgeX
See README for more details
"},{"location":"microservices/device/services/device-modbus/","title":"Device ModBus","text":"Device service for connecting Modbus devices to EdgeX.
See README for more details
"},{"location":"microservices/device/services/device-mqtt/","title":"Device MQTT","text":"Device service for connecting a MQTT enabled devices to EdgeX.
See README for more details
Also see Adding MQTT Device Tutorial for more details on using Device MQTT.
"},{"location":"microservices/device/services/device-rest/","title":"Device REST","text":"Device service for REST protocol
See README for more details
"},{"location":"microservices/device/services/device-rfid-llrp/","title":"Device RFID LLRP","text":"Device service for communicating with LLRP-based RFID readers.
See README for more details
"},{"location":"microservices/device/services/device-snmp/","title":"Device SNMP","text":"Device service for SNMP protocol
See README for more details
"},{"location":"microservices/device/services/device-uart/","title":"Device UART","text":"Device service to connect serial UART devices to EdgeX
See README for more details
"},{"location":"microservices/device/services/device-onvif-camera/General/","title":"General","text":""},{"location":"microservices/device/services/device-onvif-camera/General/#overview","title":"Overview","text":"The Open Network Video Interface Forum (ONVIF) Device Service is a microservice created to address the lack of standardization and automation of camera discovery and onboarding. EdgeX Foundry is a flexible microservice-based architecture created to promote the interoperability of multiple device interface combinations at the edge. In an EdgeX deployment, the ONVIF Device Service controls and communicates with ONVIF-compliant cameras, while EdgeX Foundry presents a standard interface to application developers. With normalized connectivity protocols and a vendor-neutral architecture, EdgeX paired with ONVIF Camera Device Service, simplifies deployment of edge camera devices.
Use the ONVIF Device Service to streamline and scale your edge camera device deployment.
"},{"location":"microservices/device/services/device-onvif-camera/General/#how-it-works","title":"How It Works","text":"The figure below illustrates the software flow through the architecture components.
Figure 1: Software Flow
A brief video demonstration of building and using the device service:
Get Started>
"},{"location":"microservices/device/services/device-onvif-camera/General/#examples","title":"Examples","text":"To see an example utilizing the ONVIF device service, refer to the camera management example application
"},{"location":"microservices/device/services/device-onvif-camera/General/#security","title":"Security","text":"This software has numerous security features. For production environments, it is recommended to use secure mode when running the EdgeX software stack. This documentation will contain warnings about any known security vulnerabilities or risks. In addition to the security features, it is suggested to use best security practices. These include, but are not limited to:
For more information, please visit the EdgeX Security documentation
"},{"location":"microservices/device/services/device-onvif-camera/General/#resources","title":"Resources","text":"Learn more about EdgeX Core Metadata Learn more about EdgeX Core Command
"},{"location":"microservices/device/services/device-onvif-camera/General/#references","title":"References","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/swagger/","title":"Device ONVIF Swagger API Documentation","text":"Use this RESTful API documentation to learn more about the capabilities of the device service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/","title":"Custom Build","text":"Follow this guide to make custom configurations and build the device service image from the source.
Warning
This is not the recommended method of deploying the service. To use the default images, see here.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#get-the-source-code","title":"Get the Source Code","text":"Clone the device-onvif-camera repository.
git clone https://github.com/edgexfoundry/device-onvif-camera.git\n
Navigate into the directory
cd device-onvif-camera\n
Checkout the latest release (main):
git checkout main\n
Configuring pre-defined devices will allow the service to automatically provision them into core-metadata. Create a list of devices with the appropriate information as outlined below.
Make a copy of the camera.yaml.example
:
cp ./cmd/res/devices/camera.yaml.example ./cmd/res/devices/camera.yaml\n
Warning
Be careful when storing any potentially important information in cleartext on files in your computer. Potentially sensitive information in this case could include the IP address of your ONVIF camera or any custom metadata you configure.
Open the cmd/res/devices/camera.yaml
file using your preferred text editor and update the Address
and Port
fields to match the IP address of the Camera and port used for ONVIF services:
Sample: Snippet from camera.yaml
deviceList:\n- name: Camera001 # Modify as desired\nprofileName: onvif-camera # Default profile\ndescription: onvif conformant camera # Modify as desired\nprotocols:\nOnvif:\nAddress: 191.168.86.34 # Set to your camera IP address\nPort: '2020' # Set to the port your camera uses\nCustomMetadata:\nCommonName: Outdoor camera\n
Optionally, modify the Name
and Description
fields to more easily identify the camera. The Name
is the camera name used when using ONVIF Device Service Rest APIs. The Description
is simply a more detailed explanation of the camera.
You can also optionally configure the CustomMetadata
with custom fields and values to store any extra information you would like.
To add more pre-defined devices, copy the above configuration and edit to match your extra devices.
Open the cmd/res/configuration.yaml
file using your preferred text editor
Make sure secret name
is set to match SecretName
in camera.yaml
. In the sample below, it is \"credentials001\"
. If you have multiple cameras, make sure the secret names match.
Under secretName
, set username
and password
to your camera credentials. If you have multiple cameras copy the Writable.InsecureSecrets
section and edit to include the new information.
Warning
Be careful when storing any potentially important information in cleartext on files in your computer. In this case, the credentials for the camera(s) are stored in cleartext in the configuration.yaml
file on your system. InsecureSecrets
is for non-production use only.
Sample: Snippet from configuration.yaml
Writable:\nLogLevel: INFO\nInsecureSecrets:\ncredentials001:\nSecretName: credentials001\nSecretData:\nusername: <Credentials 1 username>\npassword: <Credentials 1 password>\nmode: usernametoken # assign \"digest\" | \"usernametoken\" | \"both\" | \"none\"\ncredentials002:\nSecretName: credentials002\nSecretData:\nusername: <Credentials 2 username>\npassword: <Credentials 2 password>\nmode: usernametoken # assign \"digest\" | \"usernametoken\" | \"both\" | \"none\"\n
For optional configurations, see here.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#build-the-docker-image","title":"Build the Docker Image","text":"In the device-onvif-camera
directory, run make docker:
make docker\n
[Optional] Build with NATS Messaging Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. This means that the published Docker image and Snaps do not include the NATS messaging capability. To build the docker image using NATS, run make docker-nats: make docker-nats\n
See Compose Builder nat-bus
option to generate compose file for NATS and local dev images. Verify the ONVIF Device Service Docker image was successfully created:
docker images\n
REPOSITORY TAG IMAGE ID CREATED SIZE\nedgexfoundry-holding/device-onvif-camera 0.0.0-dev 75684e673feb 6 weeks ago 21.3MB\n
Navigate to edgex-compose
and enter the compose-builder
directory. bash cd edgex-compose/compose-builder
Update .env
file to add the registry and image version variable for device-onvif-camera: Add the following registry and version information:
DEVICE_ONVIFCAM_VERSION=0.0.0-dev\n
Update the add-device-onvif-camera.yml
to point to the local image.
services:\n device-onvif-camera:\n image: edgexfoundry/device-onvif-camera:${DEVICE_ONVIFCAM_VERSION}\n
Here is some information on how to specially configure parts of the service beyond the provided defaults.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#configure-the-device-profiles","title":"Configure the Device Profiles","text":"The device profile contains general information about the camera and includes all of the device resources and commands that the device resources can use to manage the cameras. The default profile found at cmd/res/devices/camera.yaml
contains all possible resources a camera could implement. Enable and disable supported resources in this file, or create an entirely new profile. It is important to set up the device profile to match the capabilities of the camera. Information on the resources supported by specific cameras can be found here. Learn more about device profiles in EdgeX here.
Sample: Snippet from camera.yaml
name: \"onvif-camera\" # general information about the profile\nmanufacturer: \"Generic\"\nmodel: \"Generic ONVIF\"\nlabels:\n- \"onvif\"\ndescription: \"EdgeX device profile for ONVIF-compliant IP camera.\" deviceResources:\n# Network Configuration\n- name: \"Hostname\" # an example of a resource with get/set values\nisHidden: false\ndescription: \"Camera Hostname\"\nattributes:\nservice: \"Device\"\ngetFunction: \"GetHostname\"\nsetFunction: \"SetHostname\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"RW\"\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#configure-the-provision-watchers","title":"Configure the Provision Watchers","text":"The provision watcher sets up parameters for EdgeX to automatically add devices to core-metadata. They can be configured to look for certain features, as well as block features. The default provision watcher is sufficient unless you plan on having multiple different cameras with different profiles and resources. Learn more about provision watchers here.
Sample: Snippet from generic.provision.watcher.yaml
name: Generic-Onvif-Provision-Watcher\nidentifiers:\nAddress: .\nblockingIdentifiers: {}\nadminState: UNLOCKED\ndiscoveredDevice:\nserviceName: device-onvif-camera\nprofileName: onvif-camera\nadminState: UNLOCKED\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#next-steps","title":"Next Steps","text":"Deploy and Run the Service>
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/custom-build/#license","title":"License","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/","title":"Deployment","text":"Follow this guide to deploy and run the service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#deploy-edgex-and-onvif-device-camera-microservice","title":"Deploy EdgeX and ONVIF Device Camera Microservice","text":"DockerNativeNavigate to the EdgeX compose-builder
directory:
cd edgex-compose/compose-builder/\n
Checkout the latest release (main):
git checkout main\n
Run Edgex with the ONVIF microservice in secure or non-secure mode.
Note
Go version 1.20+ is required to run natively. See here for more information.
Navigate to the EdgeX compose-builder
directory:
cd edgex-compose/compose-builder/\n
Checkout the latest release (main):
git checkout main\n
Run EdgeX:
make run no-secty\n
Navigate out of the edgex-compose
directory to the device-onvif-camera
directory:
cd device-onvif-camera\n
Checkout the latest release (main):
git checkout main\n
Run the service
make run\n
[Optional] Run with NATS
make run-nats\n
make run no-secty ds-onvif-camera\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#secure-mode","title":"Secure mode","text":"Note
Recommended for secure and production level deployments.
make run ds-onvif-camera\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#token-generation-secure-mode-only","title":"Token Generation (secure mode only)","text":"Note
Need to wait for sometime for the services to be fully up before executing the next set of commands. Securely store Consul ACL token and the JWT token generated which are needed to map credentials and execute apis. It is not recommended to store these secrets in cleartext in your machine.
Note
The JWT token expires after 119 minutes, and you will need to generate a new one.
Generate the Consul ACL Token. Use the token generated anywhere you see <consul-token>
in the documentation.
make get-consul-acl-token\n
Example output: 12345678-abcd-1234-abcd-123456789abc\n
Generate the JWT Token. Use the token generated anywhere you see <jwt-token>
in the documentation.
make get-token\n
Example output: eyJhbGciOiJFUzM4NCIsImtpZCI6IjUyNzM1NWU4LTQ0OWYtNDhhZC05ZGIwLTM4NTJjOTYxMjA4ZiJ9.eyJhdWQiOiJlZGdleCIsImV4cCI6MTY4NDk2MDI0MSwiaWF0IjoxNjg0OTU2NjQxLCJpc3MiOiIvdjEvaWRlbnRpdHkvb2lkYyIsIm5hbWUiOiJlZGdleHVzZXIiLCJuYW1lc3BhY2UiOiJyb290Iiwic3ViIjoiMGRjNThlNDMtNzBlNS1kMzRjLWIxM2QtZTkxNDM2ODQ5NWU0In0.oa8Fac9aXPptVmHVZ2vjymG4pIvF9R9PIzHrT3dAU11fepRi_rm7tSeq_VvBUOFDT_JHwxDngK1VqBVLRoYWtGSA2ewFtFjEJRj-l83Vz33KySy0rHteJIgVFVi1V7q5
Note
Secrets such as passwords, certificates, tokens and more in Edgex are stored in a secret store which is implemented using Vault a product of Hashicorp. Vault supports security features allowing for the issuing of consul tokens. JWT token is required for the API Gateway which is a trust boundry for Edgex services. It allows for external clients to be verified when issuing REST requests to the microservices. For more info refer Secure Consul, API Gateway and Edgex Security.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#verify-service-and-device-profiles","title":"Verify Service and Device Profiles","text":"via Command Linevia EdgeX UICheck the status of the container:
docker ps\n
The status column will indicate if the container is running, and how long it has been up.
Example output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n33f9c5ecb70e nexus3.edgexfoundry.org:10004/device-onvif-camera:latest \"/device-onvif-camer\u2026\" 7 weeks ago Up 48 minutes 127.0.0.1:59985->59985/tcp edgex-device-onvif-camera\n
Check whether the device service is added to EdgeX:
Note
If running in secure mode all the api executions need the JWT token generated previously. E.g.
curl --location --request GET 'http://localhost:59881/api/v3/deviceservice/name/device-onvif-camera' \\\n--header 'Authorization: Bearer <jwt-token>' \\\n--data-raw ''\n
curl -s http://localhost:59881/api/v3/deviceservice/name/device-onvif-camera | jq .\n
Good response: {\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"service\": {\n\"created\": 1657227634593,\n\"modified\": 1657291447649,\n\"id\": \"e1883aa7-f440-447f-ad4d-effa2aeb0ade\",\n\"name\": \"device-onvif-camera\",\n\"baseAddress\": \"http://edgex-device-onvif-camera:59984\",\n\"adminState\": \"UNLOCKED\"\n} }\n
Bad response: {\n\"apiVersion\" : \"v3\",\n\"message\": \"fail to query device service by name device-onvif-camer\",\n\"statusCode\": 404\n}\n
Check whether the device profile is added:
curl -s http://localhost:59881/api/v3/deviceprofile/name/onvif-camera | jq -r '\"profileName: \" + '.profile.name' + \"\\nstatusCode: \" + (.statusCode|tostring)'\n
Good response: profileName: onvif-camera\nstatusCode: 200\n
Bad response: profileName: \nstatusCode: 404\n
Note
jq -r
is used to reduce the size of the displayed response. The entire device profile with all resources can be seen by removing -r '\"profileName: \" + '.profile.name' + \"\\nstatusCode: \" + (.statusCode|tostring)', and replacing it with '.'
Note
Secure mode login to Edgex UI requires the JWT token generated in the above step
Entering the JWT token
Visit http://localhost:4000 to go to the dashboard for EdgeX Console GUI:
Figure 1: EdgeX Console Dashboard
To see Device Services, Devices, or Device Profiles, click on their respective tab:
Figure 2: EdgeX Console Device Service List
Figure 3: EdgeX Console Device List
Figure 4: EdgeX Console Device Profile List
Additionally, ensure that the service config has been deployed and that Consul is reachable.
Note
If running in secure mode this command needs the Consul ACL token generated previously.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera?keys=true\"\n
Example output:
[\"edgex/v3/device-onvif-camera/AppCustom/BaseNotificationURL\", \"edgex/v3/device-onvif-camera/AppCustom/CheckStatusInterval\",\n \"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/NoAuth\", ... , \"edgex/v3/device-onvif-camera/Writable/InsecureSecrets/credentials001/SecretData/username\", \"edgex/v3/device-onvif-camera/Writable/InsecureSecrets/credentials001/SecretName\",\n \"edgex/v3/device-onvif-camera/Writable/LogLevel\"]\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#manage-devices","title":"Manage Devices","text":"Follow these instructions to add and update devices manually.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#curl-commands","title":"Curl Commands","text":""},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#add-device","title":"Add Device","text":"Warning
Be careful when storing any potentially important information in cleartext on files in your computer. This includes information such as your camera IP and MAC addresses.
Edit the information to appropriately match the camera. The fields Address
, MACAddress
and Port
should match that of the camera:
Note
If running in secure mode the commands might need the JWT or consul token generated previously.
curl -X POST -H 'Content-Type: application/json' \\\nhttp://localhost:59881/api/v3/device \\\n-d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\":\"Camera001\",\n \"serviceName\": \"device-onvif-camera\",\n \"profileName\": \"onvif-camera\",\n \"description\": \"My test camera\",\n \"adminState\": \"UNLOCKED\",\n \"operatingState\": \"UP\",\n \"protocols\": {\n \"Onvif\": {\n \"Address\": \"10.0.0.0\",\n \"Port\": \"10000\",\n \"MACAddress\": \"aa:bb:cc:11:22:33\",\n \"FriendlyName\":\"Default Camera\"\n },\n \"CustomMetadata\": {\n \"Location\":\"Front door\"\n }\n }\n }\n }\n]'\n
Example output:
[{\"apiVersion\" : \"v3\",\"statusCode\":201,\"id\":\"fb5fb7f2-768b-4298-a916-d4779523c6b5\"}]\n
Update credentials in Secret Store.
Secure modeNon-secure modeNote
If running in secure mode all the api executions need the JWT token generated previously.
Enter your chosen username, password, and authentication mode and credentials name and then execute the command to create the secrets.
Note
The options for authentication mode are: usernametoken
, digest
, or both
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"<creds-name>\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<password>\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"<auth-mode>\"\n }\n ]\n }' --header 'Authorization:Bearer <jwt-token>' -X POST \"http://localhost:59984/api/v3/secret\"\n
Example output: {\"apiVersion\":\"v3\",\"statusCode\":201}\n
Enter your chosen username, password, and authentication mode and credentials name and then execute the command to create the secrets.
Note
The options for authentication mode are: usernametoken
, digest
, or both
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"<creds-name>\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<password>\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"<auth-mode>\"\n }\n ]\n }' -X POST \"http://localhost:59984/api/v3/secret\"\n
Example output:
{\"apiVersion\":\"v3\",\"statusCode\":201}\n
Map credentials to devices.
Secure ModeNon-secure modea. Enter your mac-address(es) and then execute the command to add the mac address(es) to the mapping.
Note
If you want to map multiple mac addresses, enter a comma separated list in the command
curl --data '<mac-address>' -H \"X-Consul-Token:<consul-token>\" -X PUT \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>\"\n
Example output: true\n
b. Check the status of the credentials map.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap?keys=true\" | jq .\n
Example output: [\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/NoAuth\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials001\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials002\"\n]\n
c. Check the mac addresses mapped to a specific credenential name. Insert the credential name in the command to see the mac addresses associated with it.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>?raw=true\"\n
Example output: 11:22:33:44:55:66\n
a. Enter your mac-address(es) and then execute the command to add the mac address(es) to the mapping.
Note
If you want to map multiple mac addresses, enter a comma separated list in the command
curl --data '<mac-address>' -X PUT \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>\"\n
Example output:
true\n
b. Check the status of the credentials map.
curl -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap?keys=true\" | jq .\n
Example output: [\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/NoAuth\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials001\",\n\"edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/credentials002\"\n]\n
c. Check the mac addresses mapped to a specific credenential name. Insert the credential name in the command to see the mac addresses associated with it.
curl -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/CredentialsMap/<creds-name>?raw=true\"\n
Example response: 11:22:33:44:55:66\n
Note
The helper scripts may also be used, but they have been deprecated.
Verify device(s) have been successfully added to core-metadata.
curl -s http://localhost:59881/api/v3/device/all | jq -r '\"deviceName: \" + '.devices[].name''\n
Example output:
deviceName: Camera001\ndeviceName: device-onvif-camera\n
Note
jq -r
is used to reduce the size of the displayed response. The entire device with all information can be seen by removing -r '\"deviceName: \" + '.devices[].name'', and replacing it with '.'
There are multiple commands that can update aspects of the camera entry in meta-data. Refer to the Swagger documentation for Core Metadata for more information. For editing specific fields, see the General Usage tab.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#delete-device","title":"Delete Device","text":"curl -X 'DELETE' \\\n'http://localhost:59881/api/v3/device/name/<device name>' \\\n-H 'accept: application/json'
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#shutting-down","title":"Shutting Down","text":"To stop all EdgeX services (containers), execute the make down
command. This will stop all services but not the images and volumes, which still exist.
edgex-compose/compose-builder
directory.make down\n
make clean\n
Learn how to use the device service>
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/deployment/#license","title":"License","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/","title":"General Usage","text":"This document will describe how to execute some of the most important commands used with the device service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/#execute-getstreamuri-command-through-edgex","title":"Execute GetStreamURI Command through EdgeX","text":"Note
Make sure to replace Camera001
in all the commands below, with the proper deviceName.
Get the profile token by executing the GetProfiles
command:
curl -s http://0.0.0.0:59882/api/v3/device/name/Camera001/MediaProfiles | jq -r '\"profileToken: \" + '.event.readings[].objectValue.Profiles[].Token''\n
Example output: profileToken: profile_1\nprofileToken: profile_2\n
To get the RTSP URI from the ONVIF device, execute the GetStreamURI
command, using a profileToken found in step 1: In this example, profile_1
is the profileToken:
curl -s \"http://0.0.0.0:59882/api/v3/device/name/Camera001/StreamUri?jsonObject=$(base64 -w 0 <<< '{\n \"StreamSetup\" : {\n \"Stream\" : \"RTP-Unicast\",\n \"Transport\" : {\n \"Protocol\" : \"RTSP\"\n }\n },\n \"ProfileToken\": \"profile_1\"\n}')\" | jq -r '\"streamURI: \" + '.event.readings[].objectValue.MediaUri.Uri''\n
Example output: streamURI: rtsp://192.168.86.34:554/stream1\n
Stream the RTSP stream.
Warning
RTSP streams are insecure, as the credentials are included in plaintext. Always keep this in mind when streaming via RTSP.
ffplay can be used to stream. The command follows this format:ffplay -rtsp_transport tcp \"rtsp://<user>:<password>@<IP address>:<port>/<streamname>\"\n
Using the streamURI
returned from the previous step, run ffplay: ffplay -rtsp_transport tcp \"rtsp://admin:Password123@192.168.86.34:554/stream1\"\n
While the streamURI
returned did not contain the username and password, those credentials are required in order to correctly authenticate the request and play the stream. Therefore, it is included in both the VLC and ffplay streaming examples.
If the password uses special characters, you must use percent-encoding.
To shut down ffplay, use the ctrl-c command.
To learn more about the API, see here
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/#troubleshooting-guide","title":"Troubleshooting Guide","text":""},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/general-usage/#axis-camera-authentication-failure","title":"Axis camera authentication failure","text":"If while using Axis cameras you face authentication failure it might help by disabling its replay attack protection
. For doing so please refer to Axis-replay-attack-protection. For more info on this refer to Axis-onvif-stackoverflow.
Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/","title":"Setup","text":"Follow this guide to set up your system to run the ONVIF Device Service.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#system-requirements","title":"System Requirements","text":"Note
The instructions in this guide were developed and tested using Ubuntu 20.04 LTS and the Tapo C200 Pan/Tilt Wi-Fi Camera, referred to throughout this document as the Tapo C200 Camera. However, the software may work with other Linux distributions and ONVIF-compliant cameras. Refer to our list of tested cameras for more information
Other Requirements
You must have administrator (sudo) privileges to execute the user guide commands.
Make sure that the cameras are secured and the computer system running this software is secure.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#dependencies","title":"Dependencies","text":"The software has dependencies, including Git, Docker, Docker Compose, and assorted tools. Follow the instructions below to install any dependency that is not already installed.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#install-git","title":"Install Git","text":"Install Git from the official repository as documented on the Git SCM site.
Update installation repositories:
sudo apt update\n
Add the Git repository:
sudo add-apt-repository ppa:git-core/ppa -y\n
Install Git:
sudo apt install git\n
Install Docker from the official repository as documented on the Docker site.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#verify-docker","title":"Verify Docker","text":"To enable running Docker commands without the preface of sudo, add the user to the Docker group. Then run Docker with the hello-world
test.
Create Docker group:
sudo groupadd docker\n
Note
If the group already exists, groupadd
outputs a message: groupadd: group docker
already exists. This is OK.
Add User to group:
sudo usermod -aG docker $USER\n
Restart your computer for the changes to take effect.
To verify the Docker installation, run hello-world
:
docker run hello-world\n
A Hello from Docker! greeting indicates successful installation. Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n2db29710123e: Pull complete \nDigest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n...\n
Install Docker Compose from the official repository as documented on the Docker Compose site.
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#install-tools","title":"Install Tools","text":"Install the build, media streaming, and parsing tools:
sudo apt install build-essential ffmpeg jq curl\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#tool-descriptions","title":"Tool Descriptions","text":"The table below lists command line tools this guide uses to help with EdgeX configuration and device setup.
curl: Allows the user to connect to services such as EdgeX. Use curl to get transfer information either to or from this service. In this tutorial, use curl to communicate with the EdgeX API; the call will return a JSON object.
jq: Parses the JSON object returned from the curl requests. The jq command includes parameters that are used to parse and format data. In this tutorial, the jq command has been configured to return and format appropriate data for each curl command that is piped into it.
base64: Converts data into the Base64 format.
Table 1: Command Line Tools
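As a small illustration of how base64 is used in this guide (the ProfileToken value is just a placeholder), a short JSON document is encoded and passed to the EdgeX API as the jsonObject query parameter: base64 -w 0 <<< '{\"ProfileToken\": \"profile_1\"}'\n
This is the same pattern used inline (via $(...)) in the GetStreamURI command in the General Usage section.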
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#download-edgex-compose","title":"Download EdgeX Compose","text":"Clone the EdgeX compose repository:
git clone https://github.com/edgexfoundry/edgex-compose.git\n
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#proxy-setup-optional","title":"Proxy Setup (Optional)","text":"Note
These steps are only required if a proxy is present in the user environment.
Setup Docker Daemon or Docker Desktop to use proxied environment.
Follow guide here for Docker Daemon proxy setup (Linux)
Follow guide here for Docker Desktop proxy setup (Windows)
Configuration file to set Docker Daemon proxy via daemon.json
{\n \"proxies\": {\n \"http-proxy\": \"http://proxy.example.com:3128\",\n \"https-proxy\": \"https://proxy.example.com:3129\",\n \"no-proxy\": \"*.test.example.com,.example.org,127.0.0.0/8\"\n }\n }\n
Note if building custom images
If building your own custom images, set environment variables for HTTP_PROXY, HTTPS_PROXY and NO_PROXY
Example
export HTTP_PROXY=http://proxy.example.com:3128\nexport HTTPS_PROXY=https://proxy.example.com:3129\nexport NO_PROXY=*.test.example.com,localhost,127.0.0.0/8\n
Note
Automated discovery of ONVIF device requires updating proper discovery subnets and proper network interface in ONVIF configuration.yaml or setting up EdgeX environment variables
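For example (a sketch only; the subnet and interface values are placeholders that must match your network, and the variable names are the ones used in the Auto Discovery configuration later in this document): APPCUSTOM_DISCOVERYSUBNETS: \"192.168.1.0/24\"\nAPPCUSTOM_DISCOVERYETHERNETINTERFACE: \"eth0\"\n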
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#next-steps","title":"Next Steps","text":"Default Images>
Warning
While not recommended, you can follow the process for manually building the images.
Build Images>
"},{"location":"microservices/device/services/device-onvif-camera/Walkthrough/setup/#license","title":"License","text":"Apache-2.0
"},{"location":"microservices/device/services/device-onvif-camera/assets/onvif-mermaid/","title":"Onvif mermaid","text":"Render
sequenceDiagram Onvif Device Service->>Onvif Camera: WS-Discovery Probe Onvif Camera->>Onvif Device Service: Probe Response Onvif Device Service->>Onvif Camera: GetDeviceInformation Onvif Camera->>Onvif Device Service: GetDeviceInformation Response Onvif Device Service->>Onvif Camera: GetNetworkInterfaces Onvif Camera->>Onvif Device Service: GetNetworkInterfaces Response Onvif Device Service->>EdgeX Core-Metadata: Create Device EdgeX Core-Metadata->>Onvif Device Service: Device AddedRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD %% -------- Node Definitions -------- %% Multicast[/Devices Discoveredvia Multicast/] Netscan[/Devices Discoveredvia Netscan/] DupeFilter[Filter Duplicate Devicesbased on EndpointRef] MACMatches{MAC Addressmatches existingdevice?} RefMatches{EndpointRefmatches existingdevice?} IPChanged{IP AddressChanged?} MACChanged{MAC AddressChanged?} UpdateIP[Update IP Address] UpdateMAC(Update MAC Address) RegisterDevice(Register New DeviceWith EdgeX) DeviceNotRegistered(Device Not Registered) PWMatches{Device matchesProvision Watcher?} %% -------- Graph Definitions -------- %% Multicast --> DupeFilter Netscan --> DupeFilter DupeFilter --> ForEachDevice subgraph ForEachDevice[For Each Unique Device] MACMatches -->|Yes| IPChanged MACMatches -->|No| RefMatches RefMatches -->|Yes| IPChanged RefMatches -->|No| ForEachPW ForEachPW --> PWMatches PWMatches-->|No Matches| DeviceNotRegistered IPChanged -->|No| MACChanged IPChanged -->|Yes| UpdateIP UpdateIP --> MACChanged MACChanged -->|Yes| UpdateMAC PWMatches -->|Yes| RegisterDevice endRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD; %% -------- Node Definitions -------- %% DiscoveredDevice[/Discovered Device/] UseDefault[Use Default Credentials] EndpointRefHasMAC{Does EndpointRefcontainMAC Address?} InNoAuthGroup{MAC Belongsto NoAuth group?} AuthModeNone[Set AuthMode to 'none'] ApplyCreds[Apply Credentials] InSecretStore{Credentials existin SecretStore?} CreateClient[Create Onvif Client] GetDeviceInfo[Get Device Information] GetNetIfaces[Get Network Interfaces] CreateDevice(Create Device:<Mfg>-<Model>-<EndpointRef>) CreateUnknownDevice(Create Device:unknown_unknown_<EndpointRef>) %% -------- Graph Definitions -------- %% DiscoveredDevice --> ForAllMAC subgraph ForAllMAC[For all MAC Addresses in CredentialsMap] EndpointRefHasMAC end EndpointRefHasMAC -->|Yes| InNoAuthGroup EndpointRefHasMAC -- No Matches --> UseDefault InNoAuthGroup -->|Yes| AuthModeNone InNoAuthGroup -->|No| InSecretStore UseDefault --> InSecretStore AuthModeNone --> CreateClient InSecretStore -->|Yes| ApplyCreds InSecretStore -->|No| AuthModeNone ApplyCreds --> CreateClient CreateClient --> GetDeviceInfo GetDeviceInfo -->|Failed| CreateUnknownDevice GetDeviceInfo -->|Success| GetNetIfaces GetNetIfaces ----> CreateDeviceRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD; %% -------- Node Definitions -------- %% ExistingDevice[/Existing Device/] ContainsMAC{Device Metadata containsMAC Address?} ValidMAC{Is it a validMAC Address?} InMap{MAC exists inCredentialsMap?} InNoAuth{MAC Belongsto NoAuth group?} UseDefault[Use Default Credentials] InSecretStore{Credentials existin SecretStore?} AuthModeNone(Set AuthMode to 'none') ApplyCreds(Apply Credentials) CreateClient(Create Onvif Client) %% -------- Edge Definitions -------- %% ExistingDevice --> ContainsMAC ContainsMAC -->|Yes| ValidMAC ValidMAC -->|Yes| InMap ValidMAC -->|No| AuthModeNone InMap -->|Yes| InNoAuth InMap -->|No| AuthModeNone ContainsMAC -->|No| UseDefault InNoAuth -->|Yes| AuthModeNone InNoAuth -->|No| InSecretStore UseDefault --> InSecretStore InSecretStore -->|Yes| ApplyCreds InSecretStore -->|No| AuthModeNone AuthModeNone ----> CreateClient ApplyCreds ----> CreateClientRender
%% Note: The node and edge definitions are split up to make it easier to adjust the %% links between the various nodes. flowchart TD; %% -------- Node Definitions -------- %% CheckDeviceStatus(Check Device Status) UpdateDeviceStatus[Update Device Statusin Core-Metadata] SetLastSeen[Set LastSeen = Now] UpdateMetadata[Update Core-Metadata] CheckNowUpWithAuth{Status Changed&&Status == UpWithAuth?} DeviceHasMAC{Device HasMAC Address?} CreateClient[Create Onvif Client] GetCapabilities[Device::GetCapabilities] CheckUpdatedMAC[Check CredentialsMap forupdated MAC Address] TCPProbe[TCP Probe] GetDeviceInfo[GetDeviceInformation] UpdateDeviceInfo[Update Device Information] UpdateMACAddress[Update MAC Address] UpdateEndpointRef[Update EndpointRefAddress] DeviceUnknown{Device Namebegins withunknown_unknown_?} RemoveDevice[Remove Deviceunknown_unknown_<EndpointRef>] CreateDevice[Create Device<Mfg>-<Model>-<EndpointRef>] %% -------- Graph Definitions -------- %% CheckDeviceStatus --> DeviceHasMAC DeviceHasMAC -->|No| CheckUpdatedMAC DeviceHasMAC -->|Yes| CreateClient CheckUpdatedMAC --> CreateClient subgraph TestConnection[Test Connection Methods] CreateClient --> GetCapabilities GetCapabilities -->|Failed| TCPProbe GetCapabilities -->|Success| GetDeviceInfo GetDeviceInfo -->|Success| UpWithAuth GetDeviceInfo -->|Failed| UpWithoutAuth TCPProbe -->|Failed| Unreachable TCPProbe -->|Success| Reachable end UpWithAuth --> SetLastSeen UpWithoutAuth --> SetLastSeen Reachable --> SetLastSeen Unreachable --> UpdateDeviceStatus UpdateDeviceStatus --> CheckNowUpWithAuth SetLastSeen --> UpdateDeviceStatus CheckNowUpWithAuth -->|Yes| RefreshDevice subgraph RefreshDevice[Refresh Device] UpdateDeviceInfo --> UpdateMACAddress UpdateMACAddress --> UpdateEndpointRef UpdateEndpointRef --> DeviceUnknown DeviceUnknown -->|No| UpdateMetadata DeviceUnknown -->|Yes| RemoveDevice RemoveDevice --> CreateDevice end"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/","title":"Onvif Camera Device Service Specifications","text":"This Onvif Camera Device Service is developed to control/communicate ONVIF-compliant cameras accessible via http in an EdgeX deployment
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#table-of-contents","title":"Table of Contents","text":"The latest version main of the device service API specifications can be found here.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#onvif-device-service-protocol-properties","title":"ONVIF Device Service Protocol Properties","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#onvif-protocol","title":"ONVIF Protocol","text":"All properties in the Onvif
protocol field are defined by internal device information and some user defined information.
All properties in the CustomMetadata
protocol field are user defined. It can hold multiple different entries. For more information, see here
The device service supports the onvif features listed in the following table:
Feature Onvif Web Service Onvif Function EdgeX Value Type User Authentication Core WS-Usernametoken Authentication HTTP Digest Auto Discovery Core WS-Discovery Device GetDiscoveryMode Object SetDiscoveryMode Object GetScopes Object SetScopes Object AddScopes Object RemoveScopes Object Network Configuration Device GetHostname Object SetHostname Object GetDNS Object SetDNS Object GetNetworkInterfaces Object SetNetworkInterfaces Object GetNetworkProtocols Object SetNetworkProtocols Object GetNetworkDefaultGateway Object SetNetworkDefaultGateway Object System Function Device GetDeviceInformation Object GetSystemDateAndTime Object SetSystemDateAndTime Object SetSystemFactoryDefault Object SystemReboot Object User Handling Device GetUsers Object CreateUsers Object DeleteUsers Object SetUser Object Metadata Configuration Media GetMetadataConfiguration Object GetMetadataConfigurations Object GetCompatibleMetadataConfigurations Object GetMetadataConfigurationOptions Object AddMetadataConfiguration Object RemoveMetadataConfiguration Object SetMetadataConfiguration Object Video Streaming Media GetProfiles Object GetStreamUri Object VideoEncoder Config Media GetVideoEncoderConfiguration Object SetVideoEncoderConfiguration Object GetVideoEncoderConfigurationOptions Object PTZ Node PTZ GetNode Object GetNodes Object PTZ Configuration GetConfigurations Object GetConfiguration Object GetConfigurationOptions Object SetConfiguration Object Media AddPTZConfiguration Object Media RemovePTZConfiguration Object PTZ Actuation PTZ AbsoluteMove Object RelativeMove Object ContinuousMove Object Stop Object GetStatus Object GetPresets Object GotoPreset Object RemovePreset Object PTZ Home Position PTZ GotoHomePosition Object SetHomePosition Object PTZ Auxiliary Operations PTZ SendAuxiliaryCommand Object Event Handling Event Notify Object Subscribe Object Renew Object Unsubscribe Object CreatePullPointSubscription Object PullMessages Object TopicFilter Object MessageContentFilter Object Analytics Profile Configuration Media2 GetProfiles Object GetAnalyticsConfigurations Object AddConfiguration Object RemoveConfiguration Object Analytics Module Configuration Analytics GetSupportedAnalyticsModules Object GetAnalyticsModules Object CreateAnalyticsModules Object DeleteAnalyticsModules Object GetAnalyticsModuleOptions Object ModifyAnalyticsModules Object Rule Configuration Analytics GetSupportedRules Object GetRules Object CreateRules Object DeleteRules Object GetRuleOptions Object ModifyRule ObjectNote
The functions in the bold text are mandatory for Onvif protocol.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#custom-features","title":"Custom Features","text":"The device service also include custom function to enhance the usage for the EdgeX user.
Feature Service Function EdgeX Value Type Description System Function EdgeX RebootNeeded Bool Read only. Used to indicate the camera should reboot to apply the configuration change System Function EdgeX CameraEvent Bool A device resource which is used to send the async event to north bound System Function EdgeX SubscribeCameraEvent Bool Create a subscription to subscribe the event from the camera System Function EdgeX UnsubscribeCameraEvent Bool Unsubscribe all subscription from the camera Media EdgeX GetSnapshot Binary Get Snapshot from the snapshot uri Custom Metadata EdgeX CustomMetadata Object Read and write custom metadata to the camera entry in EdgeX Custom Metadata EdgeX DeleteCustomMetadata Object Delete custom metadata fields from the camera entry in EdgeX"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#how-does-the-device-service-work","title":"How does the device service work?","text":"The Onvif camera uses Web Services standards such as XML, SOAP 1.2 and WSDL1.1 over an IP network. - XML is used as the data description syntax - SOAP is used for message transfer - and WSDL is used for describing the services.
The spec can refer to ONVIF-Core-Specification.
For example, we can send a SOAP request to the Onvif camera as below:
curl --request POST 'http://192.168.12.128:2020/onvif/service' \\\n--header 'Content-Type: application/soap+xml' \\\n--data-raw '<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<soap-env:Envelope xmlns:soap-env=\"http://www.w3.org/2003/05/soap-envelope\" xmlns:soap-enc=\"http://www.w3.org/2003/05/soap-encoding\" xmlns:tan=\"http://www.onvif.org/ver20/analytics/wsdl\" xmlns:onvif=\"http://www.onvif.org/ver10/schema\" xmlns:trt=\"http://www.onvif.org/ver10/media/wsdl\" xmlns:timg=\"http://www.onvif.org/ver20/imaging/wsdl\" xmlns:tds=\"http://www.onvif.org/ver10/device/wsdl\" xmlns:tev=\"http://www.onvif.org/ver10/events/wsdl\" xmlns:tptz=\"http://www.onvif.org/ver20/ptz/wsdl\" >\n <soap-env:Header>\n <Security xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\">\n <UsernameToken>\n <Username>myUsername</Username>\n <Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest\">+HKcvc+LCGClVwuros1sJuXepQY=</Password>\n <Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">w490bn6rlib33d5rb8t6ulnqlmz9h43m</Nonce>\n <Created xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">2021-10-21T03:43:21.02075Z</Created>\n </UsernameToken>\n </Security>\n </soap-env:Header>\n <soap-env:Body>\n <trt:GetStreamUri>\n <trt:ProfileToken>profile_1</trt:ProfileToken>\n </trt:GetStreamUri>\n </soap-env:Body>\n </soap-env:Envelope>'\n
And the response should be like the following XML data: <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<SOAP-ENV:Envelope\nxmlns:SOAP-ENV=\"http://www.w3.org/2003/05/soap-envelope\" xmlns:SOAP-ENC=\"http://www.w3.org/2003/05/soap-encoding\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsa=\"http://schemas.xmlsoap.org/ws/2004/08/addressing\"\nxmlns:wsdd=\"http://schemas.xmlsoap.org/ws/2005/04/discovery\" xmlns:chan=\"http://schemas.microsoft.com/ws/2005/02/duplex\"\nxmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\"\nxmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsa5=\"http://www.w3.org/2005/08/addressing\"\nxmlns:xmime=\"http://tempuri.org/xmime.xsd\" xmlns:xop=\"http://www.w3.org/2004/08/xop/include\" xmlns:wsrfbf=\"http://docs.oasis-open.org/wsrf/bf-2\"\nxmlns:wstop=\"http://docs.oasis-open.org/wsn/t-1\" xmlns:wsrfr=\"http://docs.oasis-open.org/wsrf/r-2\" xmlns:wsnt=\"http://docs.oasis-open.org/wsn/b-2\"\nxmlns:tt=\"http://www.onvif.org/ver10/schema\" xmlns:ter=\"http://www.onvif.org/ver10/error\" xmlns:tns1=\"http://www.onvif.org/ver10/topics\"\nxmlns:tds=\"http://www.onvif.org/ver10/device/wsdl\" xmlns:trt=\"http://www.onvif.org/ver10/media/wsdl\"\nxmlns:tev=\"http://www.onvif.org/ver10/events/wsdl\" xmlns:tdn=\"http://www.onvif.org/ver10/network/wsdl\" xmlns:timg=\"http://www.onvif.org/ver20/imaging/wsdl\"\nxmlns:trp=\"http://www.onvif.org/ver10/replay/wsdl\" xmlns:tan=\"http://www.onvif.org/ver20/analytics/wsdl\" xmlns:tptz=\"http://www.onvif.org/ver20/ptz/wsdl\">\n<SOAP-ENV:Header></SOAP-ENV:Header>\n<SOAP-ENV:Body>\n<trt:GetStreamUriResponse>\n<trt:MediaUri>\n<tt:Uri>rtsp://192.168.12.128:554/stream1</tt:Uri>\n<tt:InvalidAfterConnect>false</tt:InvalidAfterConnect>\n<tt:InvalidAfterReboot>false</tt:InvalidAfterReboot>\n<tt:Timeout>PT0H0M2S</tt:Timeout>\n</trt:MediaUri>\n</trt:GetStreamUriResponse>\n</SOAP-ENV:Body>\n</SOAP-ENV:Envelope>\n
Since the SOAP message is an HTTP call, the device service can just do the transformation between REST(JSON) and SOAP(XML).
For the concept of implementation: - The device service accepts the REST request from the client, then transforms the request to SOAP format and forward it to the Onvif camera. - Once the device service receives the response from the Onvif camera, the device service will transform the SOAP response to REST format for the client.
- Onvif Web Service\n\n - Onvif Function \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Input Parameter \u2502 Device Service \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 \u2502 REST request \u2502 \u2502 SOAP request \u2502 \u2502\n\u2502 Client \u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u25ba Transform \u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u25ba Onvif Camera \u2502\n\u2502 \u2502 \u2502 to SOAP request \u2502 \u2502 \u2502\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 Device Service \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 \u2502 REST response \u2502 \u2502 SOAP response \u2502 \u2502\n\u2502 Client \u25c4\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500 Transform \u25c4\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500 Onvif Camera \u2502\n\u2502 \u2502 \u2502 to REST response \u2502 \u2502 \u2502\n\u2502 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
Warning
Both REST and SOAP commands over the network can be subject to attacks while in transit. Please take all necessary precautions to protect network traffic.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ONVIF-protocol/#tested-onvif-cameras","title":"Tested Onvif Cameras","text":"The following table shows the Onvif functions tested for various Onvif cameras:
Use these links to access maufacturer documentation
Warning
Information in this page may be outdated.
The device-onvif-camera implement the Analytic function according to Onvif Profile M
to manage the Analytics Module and Rule configuration.
The spec can refer to
This page uses the BOSCH DINION IP starlight 6000 HD
as the test camera and used the BOSCH Configuration Manager
as the camera viewer. - The product page refer to https://commerce.boschsecurity.com/tw/en/DINION-IP-starlight-6000-HD/p/20827877387/ - The configuration manager can download from https://downloadstore.boschsecurity.com/index.php?type=CM
In the scope of profile M, the device-onvif-camera should be able to manage the Analytics Module
and Rule
configuration, we can illustrate the APIs scope as following example:
For more information, please refer to the Annex D. Radiometry https://www.onvif.org/specs/srv/analytics/ONVIF-Analytics-Service-Spec.pdf
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#manage-the-analytics-module-configuration","title":"Manage the Analytics Module Configuration","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#query-the-analytics-module","title":"Query the Analytics Module","text":"curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsModules?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n...\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n...\n \"objectValue\" : {\n\"AnalyticsModule\" : [\n{\n\"Name\" : \"Viproc\",\n \"Parameters\" : {\n\"SimpleItem\" : [\n{\n\"Name\" : \"Mode\",\n \"Value\" : \"Profile 1\"\n},\n {\n\"Name\" : \"AnalysisType\",\n \"Value\" : \"Intelligent Video Analytics\"\n}\n]\n},\n \"Type\" : \"tt:Viproc\"\n}\n]\n},\n }\n],\n \"sourceName\" : \"AnalyticsModules\"\n},\n \"statusCode\" : 200\n}\n
Note
The jsonObject parameter is encoded from {\"ConfigurationToken\": \"{ANALYTIC_CONFIG_TOKEN}\"}
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/SupportedAnalyticsModules' | jq .\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 692 100 692 0 0 2134 0 --:--:-- --:--:-- --:--:-- 2217\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n...\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"70545263-30e7-4c03-9741-0011300f2f9c\",\n \"objectValue\" : {\n\"SupportedAnalyticsModules\" : {\n\"AnalyticsModuleDescription\" : [\n{\n\"Fixed\" : true,\n \"MaxInstances\" : 1,\n \"Name\" : \"tt:Viproc\",\n \"Parameters\" : {\n\"SimpleItemDescription\" : [\n{\n\"Name\" : \"Mode\",\n \"Type\" : \"xs:string\"\n},\n {\n\"Name\" : \"AnalysisType\",\n \"Type\" : \"xs:string\"\n}\n]\n}\n}\n]\n}\n},\n }\n],\n \"sourceName\" : \"SupportedAnalyticsModules\"\n},\n \"statusCode\" : 200\n}\n
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsModuleOptions?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"deviceName\" : \"Camera003\",\n \"profileName\" : \"onvif-camera\",\n ...\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"43f0e59b-6f3e-4119-978e-299ccd59049d\",\n \"objectValue\" : {\n\"Options\" : [\n{\n\"AnalyticsModule\" : \"tt:Viproc\",\n \"Name\" : \"Mode\",\n \"StringItems\" : {\n\"Item\" : [\n\"Off\",\n \"Silent VCA\",\n \"Profile 1\",\n \"Profile 2\",\n \"Scheduled\",\n \"Event Triggered\"\n]\n}\n},\n {\n\"AnalyticsModule\" : \"tt:Viproc\",\n \"Name\" : \"AnalysisType\",\n \"StringItems\" : {\n\"Item\" : [\n\"MOTION+\",\n \"Intelligent Video Analytics\"\n]\n}\n}\n]\n},\n ...\n \"resourceName\" : \"AnalyticsModuleOptions\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"AnalyticsModuleOptions\"\n},\n \"statusCode\" : 200\n}\n
Note
The jsonObject parameter is encoded from {\"ConfigurationToken\": \"{ANALYTIC_CONFIG_TOKEN}\"}
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#modify-the-analytics-module-options","title":"Modify the Analytics Module Options","text":"
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsModules' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"AnalyticsModules\": {\n \"ConfigurationToken\": \"1\",\n \"AnalyticsModule\": [\n {\n \"Name\": \"Viproc\",\n \"Type\": \"tt:Viproc\",\n \"Parameters\": {\n \"SimpleItem\": [\n {\n \"Name\": \"Mode\",\n \"Value\": \"Profile 1\"\n },\n {\n \"Name\": \"AnalysisType\",\n \"Value\": \"Intelligent Video Analytics\"\n }\n ]\n }\n\n }\n ]\n }\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#manage-the-rule-configuration","title":"Manage the Rule Configuration","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#query-the-rules","title":"Query the Rules","text":"curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsRules?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"profileName\" : \"onvif-camera\",\n ...\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"1abea901-ad51-4a55-b9bb-0b00271307df\",\n \"objectValue\" : {\n\"Rule\" : [\n{\n\"Name\" : \"Detect any object\",\n \"Parameters\" : {\n\"SimpleItem\" : [\n{\n\"Name\" : \"Armed\",\n \"Value\" : \"true\"\n}\n]\n},\n \"Type\" : \"tt:ObjectInField\"\n}\n]\n},\n \"origin\" : 1639480270526564000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"AnalyticsRules\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"AnalyticsRules\"\n},\n \"statusCode\" : 200\n}\n
Note
The jsonObject parameter is encoded from {\"ConfigurationToken\": \"{ANALYTIC_CONFIG_TOKEN}\"}
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsSupportedRules?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 9799 0 9799 0 0 9605 0 --:--:-- 0:00:01 --:--:-- 9740\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"id\" : \"07f7b42e-835b-4ecc-97b1-fe4d5f52575b\",\n \"origin\" : 1639482296788863000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"6fca707b-3c52-4694-be37-2e23ecf65de1\",\n \"objectValue\" : {\n\"SupportedRules\" : {\n\"RuleDescription\" : [\n....\n {\n\"MaxInstances\" : 16,\n \"Messages\" : {\n\"Data\" : {\n\"SimpleItemDescription\" : [\n{\n\"Name\" : \"Count\",\n \"Type\" : \"xs:int\"\n}\n]\n},\n \"IsProperty\" : true,\n \"ParentTopic\" : \"tns1:RuleEngine/CountAggregation/Counter\",\n \"Source\" : {\n\"SimpleItemDescription\" : [\n{\n\"Name\" : \"VideoSource\",\n \"Type\" : \"tt:ReferenceToken\"\n},\n {\n\"Name\" : \"Rule\",\n \"Type\" : \"xs:string\"\n}\n]\n}\n},\n \"Name\" : \"tt:LineCounting\",\n \"Parameters\" : {\n\"ElementItemDescription\" : [\n{\n\"Name\" : \"Segments\"\n}\n],\n \"SimpleItemDescription\" : [\n{\n\"Name\" : \"Armed\",\n \"Type\" : \"xs:boolean\"\n},\n {\n\"Name\" : \"Direction\",\n \"Type\" : \"tt:Direction\"\n},\n {\n\"Name\" : \"MinObjectHeight\",\n \"Type\" : \"xs:int\"\n},\n ...\n {\n\"Name\" : \"ClassFilter\",\n \"Type\" : \"tt:StringList\"\n}\n]\n}\n}\n]\n}\n},\n \"origin\" : 1639482296788863000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"AnalyticsSupportedRules\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"AnalyticsSupportedRules\"\n},\n \"statusCode\" : 200\n}\n
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera003/RuleOptions?jsonObject=eyJDb25maWd1cmF0aW9uVG9rZW4iOiIxIn0=' | jq .\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 1168 100 1168 0 0 755 0 0:00:01 0:00:01 --:--:-- 759\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"id\" : \"3ac81a5c-48f2-46d7-a3f9-d4919f97ae8d\",\n \"origin\" : 1639482979553667000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"6eae2e16-71f7-4b92-95b6-32e398be25ca\",\n \"objectValue\" : {\n\"RuleOptions\" : [\n...\n {\n\"MaxOccurs\" : \"3\",\n \"MinOccurs\" : \"0\",\n \"Name\" : \"Field\",\n \"PolygonOptions\" : {\n\"VertexLimits\" : {\n\"Max\" : 16,\n \"Min\" : 3\n}\n}\n},\n {\n\"IntRange\" : {\n\"Max\" : 16,\n \"Min\" : 2\n},\n \"MaxOccurs\" : \"3\",\n \"MinOccurs\" : \"1\",\n \"Name\" : \"Segments\"\n},\n {\n\"Name\" : \"Direction\",\n \"StringList\" : \"Any Right Left\"\n},\n {\n\"Name\" : \"ClassFilter\",\n \"StringList\" : \"Person Bike Car Truck\"\n}\n]\n},\n \"origin\" : 1639482979553667000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"RuleOptions\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"RuleOptions\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-analytic-support/#add-the-rule","title":"Add the Rule","text":"curl --location --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera003/AnalyticsCreateRules' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"AnalyticsCreateRules\": {\n \"ConfigurationToken\": \"1\",\n \"Rule\": [\n {\n \"Name\": \"Object Counting\",\n \"Type\": \"tt:LineCounting\",\n \"Parameters\": {\n \"SimpleItem\": [\n {\n \"Name\":\"Armed\", \n \"Value\":\"true\"\n }\n ],\n \"ElementItem\": [\n {\n \"Name\":\"Segments\", \n \"Polyline\": {\n \"Point\": [\n {\n \"x\":\"0.16\",\n \"y\": \"0.5\"\n },\n {\n \"x\":\"0.16\",\n \"y\": \"-0.5\"\n }\n ]\n }\n }\n ]\n }\n\n }\n ]\n }\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/","title":"Event Handling","text":"Warning
Information in this page may be outdated.
The device service shall be able to use at least one way to retrieve events out of the following: * PullPoint - \"Pull\" using the CreatePullPointSubscription and PullMessage operations * BaseNotification - \"Push\" using Notify, Subscribe and Renew operations from WSBaseNotification
The spec can refer to https://www.onvif.org/ver10/events/wsdl/event.wsdl and https://docs.oasis-open.org/wsn/wsn-ws_base_notification-1.3-spec-os.pdf
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-the-device-resources-for-event-handling","title":"Define the device resources for Event Handling","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-a-cameraevent-resource-for-device-service-to-publish-the-event","title":"Define a CameraEvent resource for device service to publish the event","text":"Before receiving the event data from the camera, we must define a device resource for the event.
- name: \"CameraEvent\"\nisHidden: true\ndescription: \"This resource is used to send the async event reading to north bound\"\nattributes:\nservice: \"EdgeX\"\ngetFunction: \"CameraEvent\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"R\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-device-resource-for-pullpoint","title":"Define device resource for PullPoint","text":"Define a SubscribeCameraEvent resource with PullPoint subscribeType for creating the subscription
- name: \"SubscribeCameraEvent\"\nisHidden: false\ndescription: \"Create a subscription to subscribe the event from the camera\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"SubscribeCameraEvent\"\n# PullPoint | BaseNotification\nsubscribeType: \"PullPoint\"\ndefaultSubscriptionPolicy: \"\"\ndefaultInitialTerminationTime: \"PT1H\"\ndefaultAutoRenew: true\ndefaultTopicFilter: \"tns1:RuleEngine/TamperDetector\"\ndefaultMessageContentFilter: \"boolean(//tt:SimpleItem[@Name=\u201dIsTamper\u201d])\"\ndefaultMessageTimeout: \"PT5S\"\ndefaultMessageLimit: 10\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Define an UnsubscribeCameraEvent resource for unsubscribing
- name: \"UnsubscribeCameraEvent\"\nisHidden: false\ndescription: \"Unsubscribe all event from the camera\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"UnsubscribeCameraEvent\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Define a SubscribeCameraEvent resource with BaseNotification subscribeType
- name: \"SubscribeCameraEvent\"\nisHidden: false\ndescription: \"Create a subscription to subscribe the event ...\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"SubscribeCameraEvent\"\n# PullPoint | BaseNotification\nsubscribeType: \"BaseNotification\"\ndefaultSubscriptionPolicy: \"\"\ndefaultInitialTerminationTime: \"PT1H\"\ndefaultAutoRenew: true\ndefaultTopicFilter: \"...\"\ndefaultMessageContentFilter: \"...\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Define a driver config BaseNotificationURL to indicate the device service network location
# configuration.yaml\nAppCustom:\n# BaseNotificationURL indicates the device service network location (which should be accessible from onvif devices on the network), when\n# configuring an Onvif Event subscription.\nBaseNotificationURL: 'http://192.168.12.112:59984'\n
Device service will generate the following path for pushing event from Camera to device service: - {BaseNotificationURL}/api/v3/resource/{DeviceName}/{ResourceName} - {BaseNotificationURL}/api/v3/resource/Camera1/CameraEvent
Note
The user can also override the config from the docker-compose environment variable:
export HOST_IP=$(ifconfig eth0 | grep \"inet \" | awk '{ print $2 }')\n
environment:\nDRIVER_BASENOTIFICATIONURL: http://${HOST_IP}:59984\n
Then the device service can be accessed by the external camera from the other subnetwork."},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#define-device-resource-for-unsubscribing-the-event","title":"Define device resource for unsubscribing the event","text":" - name: \"UnsubscribeCameraEvent\"\nisHidden: true\ndescription: \"Unsubscribe all subscription from the camera\"\nattributes:\nservice: \"EdgeX\"\nsetFunction: \"UnsubscribeCameraEvent\"\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#find-the-supported-event-topics","title":"Find the supported Event Topics","text":"Finding out what notifications a camera supports and what information they contain:
curl --request GET 'http://localhost:59882/api/v3/device/name/Camera003/EventProperties'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-event-handling/#create-a-pull-point","title":"Create a Pull Point","text":"User can create pull point with the following command:
curl --request PUT 'http://localhost:59882/api/v3/device/name/Camera003/PullPointSubscription' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"PullPointSubscription\": {\n \"MessageContentFilter\": \"boolean(//tt:SimpleItem[@Name=\\\"Rule\\\"])\",\n \"InitialTerminationTime\": \"PT120S\",\n \"MessageTimeout\": \"PT20S\"\n }\n}'\n
Note
User can create subscription, the InitialTerminationTime is required and should greater than ten seconds:
curl --request PUT 'http://localhost:59882/api/v3/device/name/Camera003/BaseNotificationSubscription' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"BaseNotificationSubscription\": {\n \"TopicFilter\": \"tns1:RuleEngine/TamperDetector/Tamper\",\n \"InitialTerminationTime\": \"PT180S\"\n }\n}'\n
Note
The user can unsubscribe all subscriptions(PullPoint and BaseNotification) from the camera with the following command:
curl --request PUT 'http://localhsot:59882/api/v3/device/name/Camera003/UnsubscribeCameraEvent' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"UnsubscribeCameraEvent\": {\n }\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/","title":"User Handling","text":"Warning
Information in this page may be outdated.
The device service shall be able to create, list, modify and delete users from the device using the CreateUsers, GetUsers, SetUser and DeleteUsers operations.
The spec can refer to https://www.onvif.org/ver10/device/wsdl/devicemgmt.wsdl
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#getusers","title":"GetUsers","text":"This operation lists the registered users and corresponding credentials on a device.
curl --request GET 'http://0.0.0.0:59882/api/v3/device/name/Camera001/Users'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#createusers","title":"CreateUsers","text":"This operation creates new camera users and corresponding credentials on a device for authentication purposes.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/CreateUsers' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"CreateUsers\": {\n \"User\": [\n {\n \"Username\": \"user1\",\n \"Password\": \"Password1\",\n \"UserLevel\": \"User\"\n },\n {\n \"Username\": \"user2\",\n \"Password\": \"Password1\",\n \"UserLevel\": \"User\"\n }\n ]\n }\n }'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#setuser","title":"SetUser","text":"This operation updates the settings for one or several users on a device for authentication purposes.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/Users' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"Users\": {\n \"User\": [\n {\n \"Username\": \"user1\",\n \"UserLevel\": \"Administrator\"\n },\n {\n \"Username\": \"user2\",\n \"UserLevel\": \"Operator\"\n }\n ]\n }\n }'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/api-usage-user-handling/#deleteusers","title":"DeleteUsers","text":"This operation deletes users on a device.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/DeleteUsers' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"DeleteUsers\": {\n \"Username\": [\"user1\",\"user2\"]\n }\n }'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/","title":"Auto Discovery","text":"There are two methods that the device service can use to discover and add ONVIF compliant cameras using WS-Discovery: multicast and netscan.
For more info on how WS-Discovery works, see here.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#how-to","title":"How To","text":"Note
Ensure that the cameras are all installed and configured before attempting discovery.
Device discovery is triggered by the device SDK. Once the device service starts, it will discover the Onvif camera(s) at the specified interval.
Note
You can also manually trigger discovery using this command: curl -X POST http://<service-host>:59984/api/v3/discovery
See Configuration Section for full details
Note
Alternatively, for netscan
you can set the DiscoverySubnets
automatically after the service has been deployed by running the bin/configure-subnets.sh script
Netscan
, there is a one line command to determine the DiscoverySubnets
of your current machine: ip -4 -o route list scope link | sed -En \"s/ dev ($(find /sys/class/net -mindepth 1 -maxdepth 2 -not -lname '*devices/virtual*' -execdir grep -q 'up' \"{}/operstate\" \\; -printf '%f\\n' | paste -sd\\| -)).+//p\" | grep -v \"169.254.0.0/16\" | sort -u | paste -sd, -\n
Example Output: 192.168.1.0/24
Define the following configurations in cmd/res/configuration.yaml
for auto-discovery mechanism:
Device:\n# The location of Provision Watcher yaml files to import when using auto-discovery\nProvisionWatchersDir: ./res/provisionwatchers\nDiscovery:\nEnabled: true\nInterval: 1h\n\n# Custom configs\nAppCustom:\nDefaultSecretName: credentials001\n# Select which discovery mechanism(s) to use\nDiscoveryMode: both # netscan, multicast, or both\n# The target ethernet interface for multicast discovering\nDiscoveryEthernetInterface: eth0\n# List of IPv4 subnets to perform netscan discovery on, in CIDR format (X.X.X.X/Y)\n# separated by commas ex: \"192.168.1.0/24,10.0.0.0/24\"\nDiscoverySubnets: \"192.168.1.0/24\" # Fill in with your actual subnet(s)\n
Define the following environment variables in docker-compose.yaml
:
device-onvif-camera:\nenvironment:\nDEVICE_DISCOVERY_ENABLED: \"true\" # enable device discovery\nDEVICE_DISCOVERY_INTERVAL: \"1h\" # set to desired interval\n\n# The target ethernet interface for multicast discovering\nAPPCUSTOM_DISCOVERYETHERNETINTERFACE: \"eth0\"\n# The Secret Name of the default credentials to use for devices\nAPPCUSTOM_DEFAULTSECRETNAME: \"credentials001\"\n# Select which discovery mechanism(s) to use\nAPPCUSTOM_DISCOVERYMODE: \"both\" # netscan, multicast, or both\n# List of IPv4 subnets to perform netscan discovery on, in CIDR format (X.X.X.X/Y)\n# separated by commas ex: \"192.168.1.0/24,10.0.0.0/24\"\nAPPCUSTOM_DISCOVERYSUBNETS: \"192.168.1.0/24\" # Fill in with your actual subnet(s)\n
Enter the subnet into this command, and execute it to set the DiscoverySubnets
Note
If you are operating in secure mode, you must use the Consul ACL Token generated previously. If not, you can omit the -H \"X-Consul-Token:<consul-token>\"
portion of the command.
curl --data '<subnet>' -H \"X-Consul-Token:<consul-token>\" -X PUT \"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/AppCustom/DiscoverySubnets\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#step-2-set-credentialsmap","title":"Step 2. Set CredentialsMap","text":"See Credentials Guide for more information.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#configuration-guide","title":"Configuration Guide","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#discoverymode","title":"DiscoveryMode","text":"Note
For docker, set the env var APPCUSTOM_DISCOVERYMODE
DiscoveryMode
allows you to select which discovery mechanism(s) to use. The three options are: netscan
, multicast
, and both
.
netscan
works by sending unicast UDP WS-Discovery probes to a set of IP addresses on the CIDR subnet(s) configured via DiscoverySubnets
.
For example, if the provided CIDR is 10.0.0.0/24
, it will probe the all IP addresses from 10.0.0.1
to 10.0.0.254
. This will result in a total of 254 probes on the network.
This method is a little slower and more network-intensive than multicast WS-Discovery, because it has to make individual connections. However, it can reach a much wider set of networks and works better behind NATs (such as docker networks).
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#multicast","title":"multicast","text":"multicast
works by sending a single multicast UDP WS-Discovery Probe to the multicast address 239.255.255.250
on port 3702
. In certain networks this traffic is blocked, and it is also not forwarded across subnets, so it is not compatible with NATs such as docker networks (except in the case of running an Onvif simulator inside the same docker network).
multicast
requires some additional configuration. Edit the add-device-onvif-camera.yml
in the edgex-compose/compose-builder
as follows:
Example
services:\n device-onvif-camera:\n image: edgexfoundry/device-onvif-camera${ARCH}:0.0.0-dev\n container_name: edgex-device-onvif-camera\n hostname: edgex-device-onvif-camera\n read_only: true\n restart: always\n network_mode: \"host\"\n environment:\n SERVICE_HOST: 192.168.93.151 # set to internal ip of your machine\n MESSAGEQUEUE_HOST: localhost\n EDGEX_SECURITY_SECRET_STORE: \"false\"\n REGISTRY_HOST: localhost\n CLIENTS_CORE_DATA_HOST: localhost\n CLIENTS_CORE_METADATA_HOST: localhost\n # Host Network Interface, IP, Subnet\n APPCUSTOM_DISCOVERYETHERNETINTERFACE: wlp1s0 # determine this setting for your machine\n APPCUSTOM_DISCOVERYSUBNETS: 192.168.93.0/24 # determine this setting for your machine\n APPCUSTOM_DISCOVERYMODE: multicast\n depends_on:\n - consul\n - data\n - metadata\n security_opt:\n - no-new-privileges:true\n user: \"${EDGEX_USER}:${EDGEX_GROUP}\"\n command: --cp=consul.http://localhost:8500\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#both","title":"both","text":"This option combines both netscan and multicast.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#discoverysubnets","title":"DiscoverySubnets","text":"Note
For docker, set the env var APPCUSTOM_DISCOVERYSUBNETS
This is the list of IPv4 subnets to perform netscan discovery on, in CIDR format (X.X.X.X/Y) separated by commas ex: \"192.168.1.0/24,10.0.0.0/24\". See how to configure this value here.
Also, the following one-line command can determine the subnets of your machine:
ip -4 -o route list scope link | sed -En \"s/ dev ($(find /sys/class/net -mindepth 1 -maxdepth 2 -not -lname '*devices/virtual*' -execdir grep -q 'up' \"{}/operstate\" \\; -printf '%f\\n' | paste -sd\\| -)).+//p\" | grep -v \"169.254.0.0/16\" | sort -u | paste -sd, -\n
Example Output: 192.168.1.0/24
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#discoveryethernetinterface","title":"DiscoveryEthernetInterface","text":"Note
For docker, set the env var APPCUSTOM_DISCOVERYETHERNETINTERFACE
This is the target Ethernet Interface to use for multicast discovering. Keep in mind this interface is relative to the environment it is being run under. For example, when running in docker, those interfaces are different from your host machine's interfaces.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#probeasynclimit","title":"ProbeAsyncLimit","text":"Note
For docker, set the env var APPCUSTOM_PROBEASYNCLIMIT
This is the maximum simultaneous network probes when running netscan discovery.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#probetimeoutmillis","title":"ProbeTimeoutMillis","text":"Note
For docker, set the env var APPCUSTOM_PROBETIMEOUTMILLIS
This is the maximum amount of milliseconds to wait for each IP probe before timing out. This will also be the minimum time the discovery process can take.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#maxdiscoverdurationseconds","title":"MaxDiscoverDurationSeconds","text":"Note
For docker, set the env var APPCUSTOM_MAXDISCOVERDURATIONSECONDS
This is the maximum amount of seconds the discovery process is allowed to run before it will be cancelled. It is especially important to have this configured in the case of larger subnets such as /16 and /8.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#adding-the-devices-to-edgex","title":"Adding the Devices to EdgeX","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#rediscovery","title":"Rediscovery","text":"The device service is able to rediscover and update devices that have been discovered previously. Nothing additional is needed to enable this. It will run whenever the discover call is sent, regardless of whether it is a manual or automated call to discover.
The following logic is to determine if the device is already registered or not.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#troubleshooting","title":"Troubleshooting","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/auto-discovery/#netscan-discovery-was-called-but-discoverysubnets-are-empty","title":"netscan discovery was called, but DiscoverySubnets are empty!","text":"This message occurs when you have not configured the AppCustom.DiscoverySubnets
configuration. It is required in order to know which subnets to scan for Onvif Cameras. See here
This message occurs when you have multicast discovery enabled, but AppCustom.DiscoveryEthernetInterface
is configured to a network interface that does not exist. See here
Control plane events have been added to enable the Core Metadata to emit events onto the message bus when a device has been added, updated, or deleted.
Refer Device System Events for more information.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/","title":"Credentials","text":"Camera credentials are stored in the EdgeX Secret Store and referenced by MAC Address. All devices by default are configured with credentials from DefaultSecretName
unless configured as part of a group within AppCustom.CredentialsMap
.
Three things must be done in order to add an authenticated camera to EdgeX: - Add device to EdgeX - Manually or via auto-discovery - Add Credentials
to Secret Store
- Manually or via utility scripts - Map Credentials
to devices - Manually or via utility scripts - Configure as DefaultSecretName
Secret Store
under a specific Secret Name
key.Secret
which contains a mapping of username
, password
, and authentication mode
.Secrets
Vault
Consul
). They can be pre-configured via configuration.yaml
's Writable.InsecureSecrets
section.Secret
as they are stored in the Secret Store
.AppCustom.CredentialsMap
) this contains the mappings between Secret Name
and MAC Address
. Each key in the map is a Secret Name
which points to Credentials
in the Secret Store
. The value for each key is a comma separated list of MAC Addresses
which should use those Credentials
.Secret Name
which points to the Credentials
to use as the default for all devices which are not configured in the CredentialsMap
.Secret Name
that does not exist in the Secret Store
. It is pre-configured as Credentials
with Authentication Mode
of none
. NoAuth
can be used most places where a Secret Name
is expected.Camera credentials are stored in the EdgeX Secret Store, which is Vault in secure mode, and Consul in non-secure mode. The term Secret Name
is often used to refer to the name of the credentials as they are stored in the Secret Store. Credentials are then mapped to devices either using the DefaultSecretName
which applies to all devices by default, or by configuring the AppCustom.CredentialsMap
which maps one or more MAC Addresses to the desired credentials.
Credentials
are SecretData
comprised of three fields: - username
: the admin username for the camera - password
: the admin password - mode
: the type of Authentication to use - usernametoken
: use a username and token based authentication - digest
: use a digest based authentication - both
: use both usernametoken
and digest
- none
: do not send any authentication headers
Note
Credentials can be added and modified via utility scripts after the service is running
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#non-secure-mode","title":"Non-Secure Mode","text":"Helper ScriptsManualSee here for the full guide.
Replace <secret-name>
with the name of the secret, <username>
with the username, <password>
with the password, and <mode>
with the auth mode.
Set SecretName to <device-name>
curl -X PUT --data \"<secret-name>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretName\"\n
Set username to <username>
curl -X PUT --data \"<username>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretData/username\"\n
Set password to <password>
curl -X PUT --data \"<password>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretData/password\"\n
Set auth mode to <auth-mode>
curl -X PUT --data \"<auth-mode>\" \\\n\"http://localhost:8500/v1/kv/edgex/v3/device-onvif-camera/Writable/InsecureSecrets/<secret-name>/SecretData/mode\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#secure-mode","title":"Secure Mode","text":"Helper ScriptsManual See here for the full guide.
Credentials can be added via EdgeX Secrets:
Replace <secret-name>
with the name of the secret, <username>
with the username, <password>
with the password, and <mode>
with the auth mode.
curl --location --request POST 'http://localhost:59984/api/v3/secret' \\\n--header 'Content-Type: application/json' \\\n--data-raw '\n{\n \"apiVersion\" : \"v3\",\n \"name\": \"<secret-name>\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<password>\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"<mode>\"\n }\n ]\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#mapping-credentials-to-devices","title":"Mapping Credentials to Devices","text":"Note
Credential mappings can be set via utility scripts after the service is running
The device service supports three types of credential mapping. All three types can be used in conjunction with each other.
1 to All
- All devices are given the default credentials based on DefaultSecretName
1 to Many
- In the CredentialsMap
, one secret name can be assigned multiple MAC addresses. 1 to 1
- In the CredentialsMap
, assign each secret name 1 MAC Address. Note
Any key present in AppCustom.CredentialsMap
must also exist in the secret store!
# AppCustom.CredentialsMap is a map of SecretName -> Comma separated list of mac addresses.\n# Every SecretName used here must also exist as a valid secret in the Secret Store.\n#\n# Note: Anything not defined here will be assigned the default credentials configured via `DefaultSecretName`.\n#\n# Example: (Single mapping for 1 mac address to 1 credential)\n# credentials001 = \"aa:bb:cc:dd:ee:ff\"\n#\n# Example: (Multi mapping for 3 mac address to 1 shared credentials)\n# credentials002 = \"11:22:33:44:55:66,ff:ee:dd:cc:bb:aa,ab:12:12:34:34:56:56\"\n#\n# These mappings can also be referred to as \"groups\". In the above case, the `credentials001` group has 1 MAC\n# Address, and the `credentials002` group has 3 MAC Addresses.\n#\n# The special group 'NoAuth' defines mac addresses of cameras where no authentication is needed.\n# The 'NoAuth' key does not exist in the SecretStore. It is not required to add MAC Addresses in here,\n# however it avoids sending the default credentials to cameras which do not need it.\n#\n# IMPORTANT: A MAC Address may only exist in one credential group. If a MAC address is defined in more\n# than one group, it is unpredictable which group the MAC will end up in! If you wish to change the group a MAC\n# address belongs to, first remove it from its existing group, and then add it to the new one.\nCredentialsMap:\nNoAuth: \"\"\ncredentials001: \"aa:bb:cc:dd:ee:ff\"\ncredentials002: \"11:22:33:44:55:66,ff:ee:dd:cc:bb:aa,ab:12:12:34:34:56:56\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/credentials/#credential-lookup","title":"Credential Lookup","text":"Here is an in-depth look at the logic behind mapping Credentials
to Devices.
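As a rough illustration of that lookup order (an explicit CredentialsMap entry, including the special NoAuth group, wins over the DefaultSecretName), here is a minimal Go sketch; the function and variable names are hypothetical and this is not the service's actual implementation.
package main

import (
	"fmt"
	"strings"
)

// lookupSecretName resolves which Secret Name a camera's MAC address maps to:
// a match in the CredentialsMap (which may be "NoAuth") takes precedence,
// otherwise the DefaultSecretName is returned.
func lookupSecretName(credentialsMap map[string]string, defaultSecretName, mac string) string {
	mac = strings.ToLower(mac)
	for secretName, macs := range credentialsMap {
		for _, m := range strings.Split(macs, ",") {
			if strings.ToLower(strings.TrimSpace(m)) == mac {
				return secretName
			}
		}
	}
	return defaultSecretName
}

func main() {
	credsMap := map[string]string{
		"NoAuth":         "",
		"credentials001": "aa:bb:cc:dd:ee:ff",
		"credentials002": "11:22:33:44:55:66,ff:ee:dd:cc:bb:aa",
	}
	fmt.Println(lookupSecretName(credsMap, "credentials001", "FF:EE:DD:CC:BB:AA")) // credentials002
	fmt.Println(lookupSecretName(credsMap, "credentials001", "00:00:00:00:00:01")) // credentials001 (default)
}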
Custom metadata can be applied and retrieved for each camera added to the service.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#usage","title":"Usage","text":"CustomMetadata
map is an element in the ProtocolProperties
device field. It is initialized to be empty on discovery, so the user can add their desired fields. Otherwise, the user can pre-define this field in a camera.yaml file. If you add pre-defined devices, set up the CustomMetadata
object as shown in the cmd/res/devices/camera.yaml.example
.
deviceList:\n- name: Camera001\nprofileName: onvif-camera\ndescription: onvif conformant camera\nprotocols:\n...\nCustomMetadata:\nLocation: Front door\nColor: Black and white\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#set-custom-metadata","title":"Set Custom Metadata","text":"Use the CustomMetadata resource to set the fields of CustomMetadata
. Choose the key/value pairs to represent your custom fields.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/CustomMetadata' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"CustomMetadata\": {\n \"Location\":\"Front Door\",\n \"Color\":\"Black and white\",\n \"Condition\": \"Good working condition\"\n }\n }' | jq .\n
{\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
Note
Ensure all data is properly formatted json, and that all special characters are escaped if necessary
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#get-custom-metadata","title":"Get Custom Metadata","text":"Use the CustomMetadata resource to get and display the fields of CustomMetadata
.
curl http://localhost:59882/api/v3/device/name/<device name>/CustomMetadata | jq .\n
2. The repsonse from the curl command. {\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"ba3987f9-b45b-480a-b582-f5501d673c4d\",\n \"origin\" : 1655409814077374935,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"cf96e5c0-bde1-4c0b-9fa4-8f765c8be456\",\n \"objectValue\" : {\n\"Color\" : \"Black and white\",\n \"Condition\" : \"Good working condition\",\n \"Location\" : \"Front Door\"\n},\n \"origin\" : 1655409814077374935,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"CustomMetadata\",\n \"value\" : \"\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"CustomMetadata\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/custom-metadata-feature/#get-specific-custom-metadata","title":"Get Specific Custom Metadata","text":"Pass the CustomMetadata
resource a query to get specific field(s) in CustomMetadata. The query must be a base64 encoded json object with an array of fields you want to access.
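For reference, the query parameter can also be built programmatically; a minimal Go sketch using only the standard library (the device name placeholder is kept as-is). The shell walkthrough below shows the same steps by hand.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	// Fields of CustomMetadata to request, as a JSON array.
	fields := []string{"Color", "Location"}
	raw, _ := json.Marshal(fields)

	// Base64-encode the JSON array and pass it as the jsonObject query parameter.
	encoded := base64.StdEncoding.EncodeToString(raw)
	url := fmt.Sprintf(
		"http://localhost:59882/api/v3/device/name/%s/CustomMetadata?jsonObject=%s",
		"<device name>", encoded)
	fmt.Println(url)
}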
Json object holding an array of fields you want to query.
'[\n\"Color\",\n\"Location\"\n]'\n
Use this command to convert the json object to base64.
echo '[\n \"Color\",\n \"Location\"\n]' | base64\n
The json object converted to base64.
WwogICAgIkNvbG9yIiwKICAgICJMb2NhdGlvbiIKXQo=\n
Use this command to query the fields you provided in the json object.
curl http://localhost:59882/api/v3/device/name/<device name>/CustomMetadata?jsonObject=WwogICAgIkNvbG9yIiwKICAgICJMb2NhdGlvbiIKXQo= | jq .\n
Curl response.
{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"24c3eb0a-48b1-4afe-b874-965aeb2e42a2\",\n \"origin\" : 1655410556448058195,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"id\" : \"d0c26303-20b5-4ccd-9e63-fb02b87b8ebc\",\n \"objectValue\" : {\n\"Color\": \"Black and white\",\n \"Location\" : \"Front Door\"\n},\n \"origin\" : 1655410556448058195,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"CustomMetadata\",\n \"value\" : \"\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"CustomMetadata\"\n},\n \"statusCode\" : 200\n}\n
Use the DeleteCustomMetadata resource to delete entries in custom metadata
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/DeleteCustomMetadata' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"DeleteCustomMetadata\": [\n \"Color\", \"Condition\"\n ]\n }' | jq .\n
{\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
The device status goes hand in hand with the rediscovery of the cameras, but goes beyond the scope of just discovery. It is a separate background task running at a specified interval (default 30s) to determine the most accurate operating status of the existing cameras. This applies to all devices regardless of how or where they were added from.
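A minimal Go sketch of such a periodic background check, assuming the default 30 second interval and a placeholder checkAll function; this is illustrative only, not the service's implementation.
package main

import (
	"fmt"
	"time"
)

// runStatusChecks re-evaluates the operating status of every known camera at
// each tick of CheckStatusInterval, until stop is closed.
func runStatusChecks(interval time.Duration, stop <-chan struct{}, checkAll func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			checkAll()
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	// The service default is 30s; a shorter interval is used here so the demo finishes quickly.
	go runStatusChecks(2*time.Second, stop, func() {
		fmt.Println("checking status of all known cameras...")
	})
	time.Sleep(5 * time.Second)
	close(stop)
}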
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/device-status/#states-and-descriptions","title":"States and Descriptions","text":"Currently, there are 4 different statuses that a camera can have
Set EnableStatusCheck to true to enable the device status background service. CheckStatusInterval
is the interval at which the service will determine the status of each camera.# Enable or disable the built in status checking of devices, which runs every CheckStatusInterval.\nEnableStatusCheck: true\n# The interval in seconds at which the service will check the connection of all known cameras and update the device status \n# A longer interval will mean the service will detect changes in status less quickly\n# Maximum 300s (5 minutes)\nCheckStatusInterval: 30\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/device-status/#automatic-triggers","title":"Automatic Triggers","text":"Currently, there are some actions that will trigger an automatic status check: - Any modification to the CredentialsMap
from the config provider (Consul)
Friendly name and MAC address can be set and retrieved for each camera added to the service.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#preset-friendlyname","title":"Preset FriendlyName","text":"FriendlyName
is an element in the Onvif ProtocolProperties
device field. It is initialized to be empty or <Manufacturer+Model>
if credentials are provided on discovery. The user can also pre-define this field in a camera.yaml file.
If you add pre-defined devices, set up the FriendlyName
field as shown in the cmd/res/devices/camera.yaml.example
.
# Pre-defined Devices\ndeviceList:\n- name: Camera001\nprofileName: onvif-camera\ndescription: onvif conformant camera\nprotocols:\nOnvif:\nAddress: 192.168.12.123\nPort: '80'\nFriendlyName: Home camera\nCustomMetadata:\nLocation: Front door\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#set-friendly-name","title":"Set Friendly Name","text":"Friendly name can also be set via Edgex device command. FriendlyName device resource is used to set FriendlyName
of a camera.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/FriendlyName' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"FriendlyName\":\"Home camera\"\n }' | jq .\n
2. The response from the curl command. {\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
Note
Ensure all data is properly formatted json, and that all special characters are escaped if necessary
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#get-friendly-name","title":"Get Friendly Name","text":"Use the FriendlyName device resource to retrieve FriendlyName
of a camera.
curl http://localhost:59882/api/v3/device/name/<device name>/FriendlyName | jq .\n
2. Response from the curl command. FriendlyName value can be found under value
field in the json response. {\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"event\": {\n\"apiVersion\" : \"v3\",\n \"id\": \"5b924351-31c7-469e-a9ba-dea063fdbf3a\",\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"profileName\": \"onvif-camera\",\n \"sourceName\": \"FriendlyName\",\n \"origin\": 1658441317910501400,\n \"readings\": [\n{\n\"id\": \"62a0424b-a3c1-45ea-b640-58c7aa3ea476\",\n \"origin\": 1658441317910501400,\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-cc32e5000688\",\n \"resourceName\": \"FriendlyName\",\n \"profileName\": \"onvif-camera\",\n \"valueType\": \"String\",\n \"value\": \"Home camera\"\n}\n]\n}\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#preset-macaddress","title":"Preset MACAddress","text":"MACAddress
is an element in the Onvif ProtocolProperties
device field. It will be set to empty string if no value is provided, or it will be set with the MAC address value of the camera if valid credentials are provided. The user can pre-define this field in a camera.yaml file.
If you add pre-defined devices, set up the MACAddress
field as shown in the cmd/res/devices/camera.yaml.example
.
MACAddress can also be set via an EdgeX device command. This is useful for setting the MAC Address for devices which do not contain the MAC Address in the Endpoint Reference Address, or have been added manually without a MAC Address. Since the MAC is used to map credentials for cameras, it is important to have this field filled out.
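When filling this field in manually, it can help to validate the MAC Address first. A small illustrative Go example using the standard library's net.ParseMAC; this is not part of the device service API.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Validate MAC Addresses before using them to map credentials.
	for _, s := range []string{"11:22:33:44:55:66", "not-a-mac"} {
		if hw, err := net.ParseMAC(s); err != nil {
			fmt.Println(s, "invalid:", err)
		} else {
			fmt.Println(s, "ok:", hw.String())
		}
	}
}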
Note
When a camera successfully becomes UpWithAuth
, the MAC Address is automatically queried and overridden by the system if available.
The MACAddress device resource is used to set the MACAddress
of a camera.
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/<device name>/MACAddress' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"MACAddress\":\"11:22:33:44:55:66\"\n }' | jq .\n
{\n \"apiVersion\" : \"v3\",\n \"statusCode\": 200\n}\n
Note
Ensure all data is properly formatted json, and that all special characters are escaped if necessary.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/friendlyname-mac/#get-mac-address","title":"Get MAC Address","text":"Use the MACAddress device resource to retrieve MACAddress
of a camera.
curl http://localhost:59882/api/v3/device/name/<device name>/MACAddress | jq .\n
2. Response from the curl command. MACAddress value can be found under value
field in the json response. {\n\"apiVersion\" : \"v3\",\n \"statusCode\": 200,\n \"event\": {\n\"apiVersion\" : \"v3\",\n \"id\": \"c13245b0-397f-47c0-84b2-4de3d2fb891d\",\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-1027f5ea8888\",\n \"profileName\": \"onvif-camera\",\n \"sourceName\": \"MACAddress\",\n \"origin\": 1658441498356294000,\n \"readings\": [\n{\n\"id\": \"7a7735ed-3b61-4426-84df-5e9a524e4022\",\n \"origin\": 1658441498356294000,\n \"deviceName\": \"TP-Link-C200-3fa1fe68-b915-4053-a3e1-1027f5ea8888\",\n \"resourceName\": \"MACAddress\",\n \"profileName\": \"onvif-camera\",\n \"valueType\": \"String\",\n \"value\": \"11:22:33:44:55:66\"\n}\n]\n}\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/","title":"Getting Started With Docker (Security Mode)","text":"Warning
Information in this page may be outdated.
This section describes how to run device-onvif-camera with docker and EdgeX security mode.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#1-build-docker-image","title":"1. Build docker image","text":"Build docker image named edgex/device-onvif-camera:0.0.0-dev with the following command:
make docker\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#2-prepare-edgex-composecompose-builder","title":"2. Prepare edgex-compose/compose-builder","text":"edgex-compose/compose-builder
make run ds-onvif-camera\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#31-check-whether-the-services-are-running-from-consul","title":"3.1 Check whether the services are running from Consul","text":"$ make get-consul-acl-token\n14891947-51b3-603d-9e35-628fb82993f4\n
http://localhost:8500/
curl --location --request POST 'http://0.0.0.0:59984/api/v3/secret' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"bosch\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"administrator\"\n },\n {\n \"key\":\"password\",\n \"value\":\"Password1!\"\n },\n {\n \"key\":\"mode\",\n \"value\":\"digest\"\n }\n ]\n}'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#5-add-the-device-profile-to-edgex","title":"5. Add the device profile to EdgeX","text":"Change directory back to the device-onvif-camera
and add the device profile to core-metadata service with the following command:
curl http://localhost:59881/api/v3/deviceprofile/uploadfile \\\n-F \"file=@./cmd/res/profiles/camera.yaml\"\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#6-add-the-device-to-edgex","title":"6. Add the device to EdgeX","text":"Add the device data to core-metadata service with the following command:
curl -X POST -H 'Content-Type: application/json' \\\nhttp://localhost:59881/api/v3/device \\\n-d '[\n {\n \"apiVersion\" : \"v3\",\n \"device\": {\n \"name\":\"Camera003\",\n \"serviceName\": \"device-onvif-camera\",\n \"profileName\": \"onvif-camera\",\n \"description\": \"My test camera\",\n \"adminState\": \"UNLOCKED\",\n \"operatingState\": \"UNKNOWN\",\n \"protocols\": {\n \"Onvif\": {\n \"Address\": \"192.168.12.148\",\n \"Port\": \"80\",\n \"AuthMode\": \"digest\",\n \"SecretName\": \"bosch\"\n }\n }\n }\n }\n ]'\n
Check the available commands from core-command service:
$ curl http://localhost:59882/api/v3/device/name/Camera003 | jq .\n{\n\"apiVersion\" : \"v3\",\n \"deviceCoreCommand\" : {\n\"coreCommands\" : [\n{\n\"get\" : true,\n \"set\" : true,\n \"name\" : \"DNS\",\n \"parameters\" : [\n{\n\"resourceName\" : \"DNS\",\n \"valueType\" : \"Object\"\n}\n],\n \"path\" : \"/api/v3/device/name/Camera003/DNS\",\n \"url\" : \"http://edgex-core-command:59882\"\n},\n ...\n {\n\"get\" : true,\n \"name\" : \"StreamUri\",\n \"parameters\" : [\n{\n\"resourceName\" : \"StreamUri\",\n \"valueType\" : \"Object\"\n}\n],\n \"path\" : \"/api/v3/device/name/Camera003/StreamUri\",\n \"url\" : \"http://edgex-core-command:59882\"\n}\n],\n \"deviceName\" : \"Camera003\",\n \"profileName\" : \"onvif-camera\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/getting-started-with-docker-security/#7-execute-a-get-command","title":"7. Execute a Get Command","text":"$ curl http://0.0.0.0:59882/api/v3/device/name/Camera003/Users | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera003\",\n \"id\" : \"c0826f49-2840-421b-9474-7ad63a443302\",\n \"origin\" : 1639525215434025100,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera003\",\n \"id\" : \"d4dc823a-d75f-4fe1-8ee4-4220cc53ddc6\",\n \"objectValue\" : {\n\"User\" : [\n{\n\"UserLevel\" : \"Operator\",\n \"Username\" : \"user\"\n},\n {\n\"UserLevel\" : \"Administrator\",\n \"Username\" : \"service\"\n},\n {\n\"UserLevel\" : \"Administrator\",\n \"Username\" : \"administrator\"\n}\n]\n},\n \"origin\" : 1639525215434025100,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"Users\",\n \"valueType\" : \"Object\"\n}\n],\n \"sourceName\" : \"Users\"\n},\n \"statusCode\" : 200\n}\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-footnotes/","title":"ONVIF Footnotes","text":"Warning
Information in this page may be outdated.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-footnotes/#command-support","title":"Command Support","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-footnotes/#tapo-c200-user-management","title":"Tapo C200 - User Management","text":"Tapo returns 200 OK
for all User Management commands, but none of them actually do anything. The only way to modify the users is through the Tapo app.
Tapo does not support setting the DaylightSavings
field to false
. Regardless of the setting, the camera will always use daylight savings time.
You must use Digest Auth
or Both
as the Auth-Mode in order for this to work.
Warning
Information in this page may be outdated.
According to the Onvif user authentication flow, the device service shall: * Implement WS-Usernametoken according to WS-security as covered by the core specification. * Implement HTTP Digest as covered by the core specification.
The spec can refer to https://www.onvif.org/specs/core/ONVIF-Core-Specification.pdf
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-user-authentication/#ws-usernametoken","title":"WS-Usernametoken","text":"When the Onvif camera requires authentication through WS-UsernameToken, the device service must set user information with the appropriate privileges in WS-UsernameToken.
This use case contains an example of setting that user information using GetHostname.
WS-UsernameToken requires the following parameters: * Username \u2013 The user name for a certified user. * Password \u2013 The password for a certified user. According to the ONVIF specification, Password should not be set in plain text. Setting a password generates PasswordDigest, a digest that is calculated according to an algorithm defined in the specification for WS-UsernameToken: Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) ) * Nonce \u2013 A random string generated by a client. * Created \u2013 The UTC Time when the request is made.
For example:
curl --request POST 'http://192.168.56.101:10000/onvif/device_service' \\\n--header 'Content-Type: application/soap+xml' \\\n-d '<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n <soap-env:Envelope xmlns:soap-env=\"http://www.w3.org/2003/05/soap-envelope\" ...>\n <soap-env:Header>\n <Security xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\">\n <UsernameToken>\n <Username>administrator</Username>\n <Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest\">\n +HKcvc+LCGClVwuros1sJuXepQY=\n </Password>\n <Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">\n w490bn6rlib33d5rb8t6ulnqlmz9h43m\n </Nonce>\n <Created xmlns=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">\n 2021-10-21T03:43:21.02075Z\n </Created>\n </UsernameToken>\n </Security>\n </soap-env:Header>\n <soap-env:Body>\n <tds:GetHostname>\n </tds:GetHostname>\n </soap-env:Body>\n </soap-env:Envelope>'\n
The spec can refer to https://www.onvif.org/wp-content/uploads/2016/12/ONVIF_WG-APG-Application_Programmers_Guide-1.pdf
You can inspect the request by network tool like the Wireshark:
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/onvif-user-authentication/#http-digest","title":"HTTP Digest","text":"The Digest scheme is based on a simple challenge-response paradigm and the spec can refer to https://datatracker.ietf.org/doc/html/rfc2617#page-6
The authentication follow can be illustrated as below: 1. The device service sends the request without the acceptable Authorization header. 2. The Onvif camera return the response with a \"401 Unauthorized\" status code, and a WWW-Authenticate header. - The WWW-Authenticate header contains the required data - qop: Indicates what \"quality of protection\" the client has applied to the message. - nonce: A server-specified data string which should be uniquely generated each time a 401 response is made. The onvif camera can limit the time of the nonce's validity. - realm: name of the host performing the authentication - And the device service will put the qop, nonce, realm in the header at next request 3. The device service sends the request again, and the Authorization header must contain: - qop: retrieve from the previous response - nonce: retrieve from the previous response - realm: retrieve from the previous response - username: The user's name in the specified realm. - uri: Request uri - nc: The nc-value is the hexadecimal count of the number of requests (including the current request) that the client has sent with the nonce value in this request. - cnonce: A random string generated by a client. - response: A string of 32 hex digits computed as defined below, which proves that the user knows a password. - MD5( hash1:nonce:nc:cnonce:qop:hash2) - hash1: MD5(username:realm:password) - hash2: MD5(POST:uri)
Inspect the request by the Wireshark:
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/reboot-needed/","title":"RebootNeeded","text":"Warning
Information in this page may be outdated.
Currently, only the SetNetworkInterfaces function returns the RebootNeeded value. If RebootNeeded is true, the user needs to reboot the camera to apply the config changes.
Since the Set command can't return the RebootNeeded value in the command response, the device service will store the value, then the user can use the custom web service EdgeX and function RebootNeeded to check the value.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/reboot-needed/#how-does-the-rebootneeded-work-with-edgex","title":"How does the RebootNeeded work with EdgeX?","text":"curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/NetworkInterfaces' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"NetworkInterfaces\": {\n \"InterfaceToken\": \"eth0\",\n \"NetworkInterface\": {\n \"Enabled\": true,\n \"IPv4\": {\n \"DHCP\": true\n }\n } \n }\n}'\n
Check the RebootNeeded value:
$ curl 'http://0.0.0.0:59882/api/v3/device/name/Camera001/RebootNeeded' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera001\",\n \"id\" : \"e370bbb5-55d2-4392-84ca-8d9e7f097dae\",\n \"origin\" : 1635750695886624000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera001\",\n \"id\" : \"abd5c555-ef7d-44a7-9273-c1dbb4d14de2\",\n \"origin\" : 1635750695886624000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"RebootNeeded\",\n \"value\" : \"true\",\n \"valueType\" : \"Bool\"\n}\n],\n \"sourceName\" : \"RebootNeeded\"\n},\n \"statusCode\" : 200\n}\n
The RebootNeeded value is true which indicates the camera should reboot to apply the necessary changes. Reboot the camera to apply the change:
curl --request PUT 'http://0.0.0.0:59882/api/v3/device/name/Camera001/SystemReboot' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"SystemReboot\": {}\n}'\n
Check The RebootNeeded value:
$ curl 'http://0.0.0.0:59882/api/v3/device/name/Camera001/RebootNeeded' | jq .\n{\n\"apiVersion\" : \"v3\",\n \"event\" : {\n\"apiVersion\" : \"v3\",\n \"deviceName\" : \"Camera001\",\n \"id\" : \"53585696-ec1a-4ac7-9a42-7d480c0a75d9\",\n \"origin\" : 1635750854455262000,\n \"profileName\" : \"onvif-camera\",\n \"readings\" : [\n{\n\"deviceName\" : \"Camera001\",\n \"id\" : \"87819d3a-25d0-4313-b69a-54c4a0c389ed\",\n \"origin\" : 1635750854455262000,\n \"profileName\" : \"onvif-camera\",\n \"resourceName\" : \"RebootNeeded\",\n \"value\" : \"false\",\n \"valueType\" : \"Bool\"\n}\n],\n \"sourceName\" : \"RebootNeeded\"\n},\n \"statusCode\" : 200\n}\n
Because of the reboot, RebootNeeded is now false
. This instruction introduce how to test with the Post REST client tool.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#test-onvif-api","title":"Test ONVIF API","text":"Before using device-onvif-camera
, the user can verify the camera's functionality via ONVIF APIs, we provide the following collections for testing: - Capabilities - Auto Discovery - Network Configuration - System Function - User Handling - Metadata Configuration - Video Streaming - Video Encoder Configuration - PTZ - Event Handling - Analytics
Download and import the following JSON files into Postman REST client tool: - onvif_camera_without_edgex_postman_collection.json - onvif_camera_without_edgex_postman_environment.json
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#set-up-the-authentication-for-onvif-security","title":"Set Up the Authentication for ONVIF security","text":"Replace the following onvif environment variable
on the Postman REST client. - WS_USERNAME - The username for a certified user - WS_NONCE - A random, unique number generated by a client - WS_UTC_TIME - The UtcTime when the request is made. - WS_PASSWORD_DIGEST - a digest that is calculated according to an algorithm defined in the specification for WS-UsernameToken: Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) )
According to the ONVIF spec and programmer guide, the client needs to provide the password digest for WS-UsernameToken. For example, we can generate the password digest in golang:
package main\n\nimport (\n\"crypto/sha1\"\n\"encoding/base64\"\n\"fmt\"\n)\n\nfunc main() {\nnonce := \"abcd\"\npassword := \"Password1!\"\ncreated := \"2022-06-06T12:26:37.769698Z\"\npasswordDigest := generatePasswordDigest(nonce, created, password)\n\nfmt.Println(\"Nonce:\", nonce)\nfmt.Println(\"Created:\", created)\nfmt.Println(\"PasswordDigest:\", passwordDigest)\n}\n\n//Digest = B64ENCODE( SHA1( B64DECODE( Nonce ) + Date + Password ) )\nfunc generatePasswordDigest(Nonce string, Created string, Password string) string {\nsDec, _ := base64.StdEncoding.DecodeString(Nonce)\nhasher := sha1.New()\nhasher.Write([]byte(string(sDec) + Created + Password))\nreturn base64.StdEncoding.EncodeToString(hasher.Sum(nil))\n}\n
The runnable code: https://go.dev/play/p/ZnE2nZYorg9"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#set-up-the-api-endpoint","title":"Set Up the API Endpoint","text":"Generally, the device web service endpoint is http:/${address}:${port}/onvif/device_service, then we can use GetCapabilities
ONVIF function to query other web service's endpoint:
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<env:Envelope ...>\n<env:Body>\n<tds:GetCapabilitiesResponse>\n<tds:Capabilities>\n<tt:Device>\n<tt:XAddr>http://192.168.12.123/onvif/device_service</tt:XAddr>\n...\n </tt:Device>\n<tt:Events>\n<tt:XAddr>http://192.168.12.123/onvif/Events</tt:XAddr>\n...\n </tt:Events>\n...\n </tds:GetCapabilitiesResponse>\n</env:Body>\n</env:Envelope>\n
And we should replace the following onvif environment variable
on the Postman REST client. - DEVICE_ENDPOINT - device web service endpoint - MEDIA_ENDPOINT - media web service endpoint - EVENT_ENDPOINT - event web service endpoint - PTZ_ENDPOINT - ptz web service endpoint
Then we can execute other ONVIF function via Postman REST client tool.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/test-with-postman/#test-device-onvif-camera-api","title":"Test device-onvif-camera API","text":"After adding the device according to the Getting Started Guide, then we can import the following Postman collections for testing the APIs: - onvif_camera_with_edgex_postman_collection.json - onvif_camera_with_edgex_postman_environment.json
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/","title":"Utility Scripts","text":"Note
If running EdgeX in Secure Mode, you will need a Consul ACL Token and JWT Token in order to use these scripts.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#use-cases","title":"Use Cases","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#create-new-credentials-and-assign-mac-addresses","title":"Create new credentials and assign MAC Addresses","text":"bin/map-credentials.sh
(Create New)
Note
Currently EdgeX is unable to provide a way to query the names of existing secrets from the secret store, so this method only works with credentials which have a key in the CredentialsMap. If the credentials were added via these utility scripts, a placeholder key was added for you to the CredentialsMap.
bin/map-credentials.sh
bin/edit-credentials.sh
Warning
This will modify the username/password for ALL devices using these credentials. Proceed with caution!
bin/query-mappings.sh
Output will look something like this:
Credentials Map:\n mycreds = 'aa:bb:cc:dd:ee:ff'\n mycreds2 = ''\n simcreds = 'cb:4f:86:30:ef:19,87:52:89:4d:66:4d,f0:27:d2:e8:9e:e1,9d:97:d9:d8:07:4b,99:70:6d:f5:c2:16'\n tapocreds = '10:27:F5:EA:88:F3'\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#configure-discoverysubnets","title":"Configure DiscoverySubnets","text":"bin/configure-subnets.sh
bin/configure-subnets.sh [-s/--secure-mode] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about","title":"About","text":"The purpose of this script is to make it easier for an end user to configure Onvif device discovery without the need to have knowledge about subnets and/or CIDR format. The DiscoverySubnets
config option defaults to blank in the configuration.yaml
file, and needs to be provided before a discovery can occur. This allows the device-onvif-camera device service to be run in a NAT-ed environment without host-mode networking, because the subnet information is user-provided and does not rely on device-onvif-camera
to detect it.
This script finds the active subnet for any and all network interfaces that are on the machine which are physical (non-virtual) and online (up). It uses this information to automatically fill out the DiscoverySubnets
configuration option through Consul of a deployed device-onvif-camera
instance.
bin/edit-credentials.sh [-s/--secure-mode] [-u <username>] [-p <password>] [--auth-mode {usernametoken|digest|both}] [-P secret-name] [-M mac-addresses] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about_1","title":"About","text":"The purpose of this script is to allow end-users to modify credentials either through EdgeX InsecureSecrets via Consul, or EdgeX Secrets via the device service.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#map-credentialssh","title":"map-credentials.sh","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#usage_2","title":"Usage","text":"bin/map-credentials.sh [-s/--secure-mode] [-u <username>] [-p <password>] [--auth-mode {usernametoken|digest|both}] [-P secret-name] [-M mac-addresses] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about_2","title":"About","text":"The purpose of this script is to allow end-users to add credentials either through EdgeX InsecureSecrets via Consul, or EdgeX Secrets via the device service. It then allows the end-user to add a list of MAC Addresses to map to those credentials via Consul.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#query-mappingssh","title":"query-mappings.sh","text":""},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#usage_3","title":"Usage","text":"bin/query-mappings.sh [-s/--secure-mode] [-u <username>] [-p <password>] [--auth-mode {usernametoken|digest|both}] [-P secret-name] [-M mac-addresses] [-t <consul token>]\n
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/utility-scripts/#about_3","title":"About","text":"The purpose of this script is to allow end-users to see what MAC Addresses are mapped to what credentials.
"},{"location":"microservices/device/services/device-onvif-camera/supplementary-info/ws-discovery/","title":"How does WS-Discovery work?","text":"ONVIF devices support WS-Discovery, which is a mechanism that supports probing a network to find ONVIF capable devices.
Probe messages are sent over UDP to a standardized multicast address and UDP port number.
WS-Discovery is generally faster than netscan becuase it only sends out one broadcast signal. However, it is normally limited by the network segmentation since the multicast packages typically do not traverse routers.
Example: 1. The client sends Probe message to find Onvif camera on the network.
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<soap-env:Envelope\nxmlns:soap-env=\"http://www.w3.org/2003/05/soap-envelope\"\nxmlns:soap-enc=\"http://www.w3.org/2003/05/soap-encoding\"\nxmlns:a=\"http://schemas.xmlsoap.org/ws/2004/08/addressing\">\n<soap-env:Header>\n<a:Action mustUnderstand=\"1\">http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</a:Action>\n<a:MessageID>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</a:MessageID>\n<a:ReplyTo>\n<a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>\n</a:ReplyTo>\n<a:To mustUnderstand=\"1\">urn:schemas-xmlsoap-org:ws:2005:04:discovery</a:To>\n</soap-env:Header>\n<soap-env:Body>\n<Probe\nxmlns=\"http://schemas.xmlsoap.org/ws/2005/04/discovery\"/>\n</soap-env:Body>\n</soap-env:Envelope>\n
The Onvif camera responds the Hello message according to the Probe message > The Hello message from HIKVISION
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<env:Envelope\nxmlns:env=\"http://www.w3.org/2003/05/soap-envelope\"\n...>\n<env:Header>\n<wsadis:MessageID>urn:uuid:cea94000-fb96-11b3-8260-686dbc5cb15d</wsadis:MessageID>\n<wsadis:RelatesTo>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</wsadis:RelatesTo>\n<wsadis:To>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsadis:To>\n<wsadis:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/ProbeMatches</wsadis:Action>\n<d:AppSequence InstanceId=\"1637072188\" MessageNumber=\"17\"/>\n</env:Header>\n<env:Body>\n<d:ProbeMatches>\n<d:ProbeMatch>\n<wsadis:EndpointReference>\n<wsadis:Address>urn:uuid:cea94000-fb96-11b3-8260-686dbc5cb15d</wsadis:Address>\n</wsadis:EndpointReference>\n<d:Types>dn:NetworkVideoTransmitter tds:Device</d:Types>\n<d:Scopes>onvif://www.onvif.org/type/video_encoder onvif://www.onvif.org/Profile/Streaming onvif://www.onvif.org/MAC/68:6d:bc:5c:b1:5d onvif://www.onvif.org/hardware/DFI6256TE http:123</d:Scopes>\n<d:XAddrs>http://192.168.12.123/onvif/device_service</d:XAddrs>\n<d:MetadataVersion>10</d:MetadataVersion>\n</d:ProbeMatch>\n</d:ProbeMatches>\n</env:Body>\n</env:Envelope>\n
The Hello message from Tapo C200
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<SOAP-ENV:Envelope\nxmlns:SOAP-ENV=\"http://www.w3.org/2003/05/soap-envelope\"\n...>\n<SOAP-ENV:Header>\n<wsa:MessageID>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</wsa:MessageID>\n<wsa:RelatesTo>uuid:a86f9421-b764-4256-8762-5ed0d8602a9c</wsa:RelatesTo>\n<wsa:ReplyTo SOAP-ENV:mustUnderstand=\"true\">\n<wsa:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsa:Address>\n</wsa:ReplyTo>\n<wsa:To SOAP-ENV:mustUnderstand=\"true\">urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>\n<wsa:Action SOAP-ENV:mustUnderstand=\"true\">http://schemas.xmlsoap.org/ws/2005/04/discovery/ProbeMatches</wsa:Action>\n</SOAP-ENV:Header>\n<SOAP-ENV:Body>\n<wsdd:ProbeMatches>\n<wsdd:ProbeMatch>\n<wsa:EndpointReference>\n<wsa:Address>uuid:3fa1fe68-b915-4053-a3e1-c006c3afec0e</wsa:Address>\n<wsa:ReferenceProperties></wsa:ReferenceProperties>\n<wsa:PortType>ttl</wsa:PortType>\n</wsa:EndpointReference>\n<wsdd:Types>tdn:NetworkVideoTransmitter</wsdd:Types>\n<wsdd:Scopes>onvif://www.onvif.org/name/TP-IPC onvif://www.onvif.org/hardware/MODEL onvif://www.onvif.org/Profile/Streaming onvif://www.onvif.org/location/ShenZhen onvif://www.onvif.org/type/NetworkVideoTransmitter </wsdd:Scopes>\n<wsdd:XAddrs>http://192.168.12.128:2020/onvif/device_service</wsdd:XAddrs>\n<wsdd:MetadataVersion>1</wsdd:MetadataVersion>\n</wsdd:ProbeMatch>\n</wsdd:ProbeMatches>\n</SOAP-ENV:Body>\n</SOAP-ENV:Envelope>\n
The USB Device Service is a microservice created to address the lack of standardization and automation of camera discovery and onboarding. EdgeX Foundry is a flexible microservice-based architecture created to promote the interoperability of multiple device interface combinations at the edge. In an EdgeX deployment, the USB Device Service controls and communicates with USB cameras, while EdgeX Foundry presents a standard interface to application developers. With normalized connectivity protocols and a vendor-neutral architecture, EdgeX paired with USB Camera Device Service, simplifies deployment of edge camera devices.
Specifically, the device service uses V4L2 API to get camera metadata, FFmpeg framework to capture video frames and stream them to an RTSP server, which is embedded in the dockerized device service. This allows the video stream to be integrated into the larger architecture.
Use the USB Device Service to streamline and scale your edge camera device deployment.
"},{"location":"microservices/device/services/device-usb-camera/General/#how-it-works","title":"How It Works","text":"The figure below illustrates the software flow through the architecture components.
Figure 1: Software Flow
Core Metadata
The device service registers the discovered cameras in Core Metadata. Device information and associated configuration are then read from Core Metadata. Captured video can be sent on to a Video Analytics Pipeline through an HTTP Post Request. Get Started>
"},{"location":"microservices/device/services/device-usb-camera/General/#references","title":"References","text":"Apache-2.0
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/USB-protocol/","title":"USB Camera Device Service Specifications","text":""},{"location":"microservices/device/services/device-usb-camera/supplementary-info/USB-protocol/#usb-protocol-properties","title":"USB Protocol Properties","text":"Property Description EdgeX Value Type Path Specifies the internal /dev/video path for the camera device. DEPRECATED: Path will be removed in the next major release, use Paths. String Paths A list of internal /dev/video paths for the camera device. This list includes all streaming capable video paths for each device. Object SerialNumber The serial number of the camera device. String CardName The manufacturer specified name of the camera device. String AutoStreaming A value indicating if the device should automatically start streaming. String"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/","title":"Advanced Options","text":""},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#rtsp-authentication","title":"RTSP Authentication","text":"The device service allows for rtsp stream authentication using the rtsp-simple-server. Authentication is enabled by default.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#secret-configuration","title":"Secret Configuration","text":"To configure the username and password for rtsp authentication when building your own images, edit the fields in the 'configuration.yaml'.
Note
This should only be used when you are in non-secure mode.
Warning
Be careful when storing any potentially important information in cleartext on files in your computer. In this case, the credentials for the stream are stored in cleartext in the configuration.yaml
file on your system. InsecureSecrets
is for non-production use only.
Note
Leaving the fields blank will NOT disable authentication. The stream will not be able to be authenticated until credentials are provided.
Snippet from configuration.yaml
...\nWritable:\nLogLevel: \"INFO\"\nInsecureSecrets:\nrtspauth:\nSecretName: rtspauth\nSecretData:\nusername: \"<enter-username>\"\npassword: \"<enter-password>\"\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#authentication-server-configuration","title":"Authentication Server Configuration","text":"externalAuthenticationURL line from the Dockerfile
RUN sed -i 's,externalAuthenticationURL:,externalAuthenticationURL: http://localhost:8000/rtspauth,g' rtsp-simple-server.yml\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#set-device-configuration-parameters","title":"Set Device Configuration Parameters","text":""},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#set-frame-rate","title":"Set frame rate","text":"This command sets the frame rate for the capture device.
Before setting the frame rate first execute the DataFormat
command to see the available frame rates of a device for any of its video streaming path or stream format:
Example DataFormat Command with PathIndex
query parameter
curl http://localhost:59882/api/v3/device/name/<device name>/DataFormat?PathIndex=<path_index>\n
OR
Example DataFormat Command with StreamFormat
query parameter
curl http://localhost:59882/api/v3/device/name/<device name>/DataFormat?StreamFormat=<stream_format>\n
Note
The PathIndex
refers to the index of the device video streaming path from the path list. For example if a usb device has one video streaming path such as /dev/video0 the PathIndex
value will be 0. In case of Intel\u2122 RealSense\u00ae cameras there are three video streaming paths, hence the user will have 3 options for PathIndex
which are 0, 1 and 2. The default value is 0 if no PathIndex
input is provided. StreamFormat
refers to different video streaming formats and the formats currently supported by the service are RGB
, Depth
or Greyscale
.
Example DataFormat Response
{\n\"apiVersion\": \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\": \"v3\",\n\"id\": \"bf48b7c6-5e94-4831-a7ba-cea4e9773ae1\",\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"profileName\": \"USB-Camera-General\",\n\"sourceName\": \"DataFormat\",\n\"origin\": 1689621129335558590,\n\"readings\": [\n{\n\"id\": \"7f4918ca-31c9-4bcf-9490-a328eb62beab\",\n\"origin\": 1689621129335558590,\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"resourceName\": \"DataFormat\",\n\"profileName\": \"USB-Camera-General\",\n\"valueType\": \"Object\",\n\"value\": \"\",\n\"objectValue\": {\n\"/dev/video6\": {\n\"BytesPerLine\": 1280,\n\"Colorspace\": \"sRGB\",\n\"Field\": \"none\",\n\"FrameRates\": [\n{\n\"Denominator\": 1,\n\"Numerator\": 30\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 24\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 20\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 15\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 10\n},\n{\n\"Denominator\": 2,\n\"Numerator\": 15\n},\n{\n\"Denominator\": 1,\n\"Numerator\": 5\n}\n],\n\"Height\": 480,\n\"PixelFormat\": \"YUYV 4:2:2\",\n\"Quantization\": \"Limited range\",\n\"SizeImage\": 614400,\n\"Width\": 640,\n\"XferFunc\": \"Rec. 709\",\n\"YcbcrEnc\": \"ITU-R 601\"\n}\n}\n]\n}\n}\n
Use one of the supported FrameRates
value from the previous command output to set the frame rate based on PathIndex
or StreamFormat
.
Example Set FrameRate Command
curl -X PUT -d '{\n \"FrameRate\": {\n \"FrameRateValueDenominator\": \"1\"\n \"FrameRateValueNumerator\": \"10\",\n }\n }' http://localhost:59882/api/v3/device/name/<device name>/FrameRate?PathIndex=<path_index>\n
Example Set FrameRate Response
{\n\"apiVersion\": \"v3\",\n \"statusCode\": 200\n}
The newly set framerate can be verified using a GET request:
Example Get FrameRate command
curl -X GET http://localhost:59882/api/v3/device/name/<device name>/FrameRate?PathIndex=<path_index>\n
Example Get FrameRate response
{\n\"apiVersion\": \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\": \"v3\",\n\"id\": \"8ee12059-fed6-401c-b268-992fede19840\",\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"profileName\": \"USB-Camera-General\",\n\"sourceName\": \"FrameRate\",\n\"origin\": 1692730015347762386,\n\"readings\": [{\n\"id\": \"b991d703-b7ac-4139-a598-87e0f190d617\",\n\"origin\": 1692730015347762386,\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"resourceName\": \"FrameRate\",\n\"profileName\": \"USB-Camera-General\",\n\"valueType\": \"Object\",\n\"value\": \"\",\n\"objectValue\": {\n\"/dev/video6\": {\n\"Denominator\": 1,\n\"Numerator\": 10\n}\n}\n}]\n}\n}\n
Warning
3rd party applications such as vlc or ffplay may overwrite your chosen frame rate value, so make sure to keep that in mind when using other applications.
This command sets the desired pixel format for the capture device.
Before setting the pixel format the ImageFormats
command can be executed to see the available pixel formats for a camera for any of its video streaming path or stream format (RGB, Greyscale or Depth)
Example Get ImageFormats Command
curl -X GET http://localhost:59882/api/v3/device/name/<device name>/ImageFormats?PathIndex=<path_index>\n
Use one of the supported PixelFormat
values to set the pixel format based on PathIndex
or StreamFormat
.
Note
PixelFormat
has to be specified in the set request with a specific code which is acceptable by the v4l2 driver. This service currently supports the formats whose codes are YUYV
,GREY
,MJPG
,Z16
,RGB
,JPEG
,MPEG
,H264
,MPEG4
,UYVY
,BYR2
,Y8I
,Y12I
. Refer to V4l2 Image Formats for more info. The service only supports setting of height, width or pixel format.
Example Set PixelFormat Command
curl -X PUT -d '{\n \"PixelFormat\": {\n \"Width\":\"640\",\n \"Height\":\"480\",\n \"PixelFormat\": \"YUYV\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/PixelFormat?PathIndex=<path_index>\n
Example Set PixelFormat Response
{\n\"apiVersion\": \"v3\",\n \"statusCode\": 200\n}\n
The newly set pixel format can be verified using a GET request:
Example Get PixelFormat command
curl -X GET http://localhost:59882/api/v3/device/name/<device name>/PixelFormat?PathIndex=<path_index>\n
Example Get PixelFormat Response
{\n\"apiVersion\": \"v3\",\n\"statusCode\": 200,\n\"event\": {\n\"apiVersion\": \"v3\",\n\"id\": \"03cc2182-6a48-4869-ac00-52f968850452\",\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"profileName\": \"USB-Camera-General\",\n\"sourceName\": \"PixelFormat\",\n\"origin\": 1692728351448270645,\n\"readings\": [\n{\n\"id\": \"ded64ad7-955a-4979-9acd-ff5f1cbc9e9c\",\n\"origin\": 1692728351448270645,\n\"deviceName\": \"C270_HD_WEBCAM-8184F580\",\n\"resourceName\": \"PixelFormat\",\n\"profileName\": \"USB-Camera-General\",\n\"valueType\": \"Object\",\n\"value\": \"\",\n\"objectValue\": {\n\"BytesPerLine\": 1280,\n\"Colorspace\": \"sRGB\",\n\"Field\": \"none\",\n\"Flags\": 0,\n\"HSVEnc\": \"Default\",\n\"Height\": 480,\n\"PixelFormat\": \"YUYV 4:2:2\",\n\"Priv\": 4276996862,\n\"Quantization\": \"Default\",\n\"SizeImage\": 614400,\n\"Width\": 640,\n\"XferFunc\": \"Default\",\n\"YcbcrEnc\": \"Default\"\n}\n}\n]\n}\n}\n
There are two types of options:
Input
prefix are used for the camera, such as specifying the image size and pixel format. Output
prefix are used for the output video, such as specifying aspect ratio and quality. These options can be passed in through object value when calling the StartStreaming
command.
Query parameter: - device name
: The name of the camera
Example StartStreaming Command
curl -X PUT -d '{\n \"StartStreaming\": {\n \"InputImageSize\": \"640x480\",\n \"OutputVideoQuality\": \"5\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/StartStreaming\n
Supported Input options:
InputFps
: Ignore original timestamps and instead generate timestamps assuming constant frame rate fps. (default - same as source) InputImageSize
: Specifies the image size of the camera. The format is wxh
, for example \"640x480\". (default - automatically selected by FFmpeg) InputPixelFormat
: Set the preferred pixel format (for raw video). (default - automatically selected by FFmpeg)Supported Output options:
OutputFrames
: Set the number of video frames to output. (default - no limitation on frames) OutputFps
: Duplicate or drop input frames to achieve constant output frame rate fps. (default - same as InputFps) OutputImageSize
: Performs image rescaling. The format is wxh
, for example \"640x480\". (default - same as InputImageSize) OutputAspect
: Set the video display aspect ratio specified by aspect. For example \"4:3\", \"16:9\". (default - same as source) OutputVideoCodec
: Set the video codec. For example \"mpeg4\", \"h264\". (default - mpeg4) OutputVideoQuality
: Use fixed video quality level. Range is a integer number between 1 to 31, with 31 being the worst quality. (default - dynamically set by FFmpeg) You can also set default values for these options by adding additional attributes to the device resource StartStreaming
. The attribute name consists of a prefix \"default\" and the option name.
Snippet from device.yaml
deviceResources:\n- name: \"StartStreaming\"\ndescription: \"Start streaming process.\"\nattributes:\n{ command: \"VIDEO_START_STREAMING\",\n defaultInputFrameSize: \"320x240\",\n defaultOutputVideoQuality: \"31\"\n}\nproperties:\nvalueType: \"Object\"\nreadWrite: \"W\"\n
Note
It's NOT recommended to set default video options in the 'cmd/res/profiles/general.usb.camera.yaml' as they may not be supported by every camera.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#keep-the-paths-of-existing-cameras-up-to-date","title":"Keep the paths of existing cameras up to date","text":"The paths (/dev/video*) of the connected cameras may change whenever the cameras are re-connected or the system restarts. To ensure the paths of the existing cameras are up to date, the device service scans all the existing cameras to check whether their serial numbers match the connected cameras. If there is a mismatch between them, the device service will scan all paths to find the matching device and update the existing device with the correct path.
This check can also be triggered by using the Device Service API /refreshdevicepaths
.
curl -X POST http://localhost:59983/api/v3/refreshdevicepaths\n
It's recommended to trigger a check after re-plugging cameras.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#configurable-rtsp-server-hostname-and-port","title":"Configurable RTSP server hostname and port","text":"Enable/Disable RTSP server and set hostname and port of the RTSP server to which the device service publishes video streams can be configured in the [Driver] section of the service configuration located in the cmd/res/configuration.yaml
file. RTSP server is enabled by default.
Snippet from configuration.yaml
Driver:\nEnableRtspServer: \"true\"\nRtspServerHostName: \"localhost\"\nRtspTcpPort: \"8554\"\nRtspAuthenticationServer: \"localhost:8000\"\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/advanced-options/#camerastatus-command","title":"CameraStatus Command","text":"Use the following query to determine the status of the camera. URL parameter:
Example CameraStatus Command
curl -X GET http://localhost:59882/api/v3/device/name/<DeviceName>/CameraStatus?InputIndex=0 | jq -r '\"CameraStatus: \" + (.event.readings[].value|tostring)'\n
Example Output:
CameraStatus: 0\n
Response meanings:
Response Description 0 Ready 1 No Power 2 No Signal 3 No Color"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/","title":"Dynamic Discovery","text":"The device service supports dynamic discovery. During dynamic discovery, the device service scans all connected USB devices and sends the discovered cameras to Core Metadata. The device name of the camera discovered by the device service is comprised of Card Name and Serial Number, and the characters colon, space and dot will be replaced with underscores as they are invalid characters for device names in EdgeX. Take the camera Logitech C270 as an example, it's Card Name is \"C270 HD WEBCAM\" and the Serial Number is \"B1CF0E50\" hence the device name - \"C270_HD_WEBCAM-B1CF0E50\".
Note
Card Name and Serial number are used by the device service to uniquely identify a camera. Some manufactures, however, may not support unique serial numbers for their cameras. Please check with your camera manufacturer.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#dynamic-discovery-function","title":"Dynamic Discovery function","text":"Dynamic discovery is enabled by default to make setup easier. It can be disabled by changing the Enabled
option to false
as shown below.
Snippet from device.yaml
Device: ...\nDiscovery:\nEnabled: false\nInterval: \"1h\"\n
export DEVICE_DISCOVERY_ENABLED=false\nexport DEVICE_DISCOVERY_INTERVAL=1h\n
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#configure-discovery-interval","title":"Configure discovery interval","text":"configuration.yamlDocker / Env Vars Snippet from device.yaml
Device: ...\nDiscovery:\nEnabled: true\nInterval: \"1h\"\n
export DEVICE_DISCOVERY_ENABLED=true\nexport DEVICE_DISCOVERY_INTERVAL=1h\n
To manually trigger a Dynamic Discovery, use this device service API.
curl -X POST http://<service-host>:59983/api/v3/discovery\n
The interval value must be a Go duration.
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#rediscovery","title":"Rediscovery","text":"The device service is able to rediscover and update devices that have been discovered previously. Nothing additional is needed to enable this. It will run whenever the discover call is sent, regardless of whether it is a manual or automated call to discover. The steps to configure discovery or to manually trigger discovery is explained here
"},{"location":"microservices/device/services/device-usb-camera/supplementary-info/discovery/#configure-the-provision-watchers","title":"Configure the Provision Watchers","text":"Note
This section is for manually adding provision watchers; one is already added by default.
The provision watcher sets up parameters for EdgeX to automatically add devices to core-metadata. They can be configured to look for certain features, as well as block features. The default provision watcher is sufficient unless you plan on having multiple different cameras with different profiles and resources. Learn more about provision watchers here. The provision watchers are located at ./cmd/res/provision_watchers
.
Example Command
curl -X POST \\\n-d '[\n{\n \"provisionwatcher\":{\n \"apiVersion\" : \"v3\",\n \"name\":\"USB-Camera-Provision-Watcher\",\n \"adminState\":\"UNLOCKED\",\n \"identifiers\":{\n \"Path\": \".\"\n },\n \"serviceName\": \"device-usb-camera\",\n \"profileName\": \"USB-Camera-General\"\n },\n \"apiVersion\" : \"v3\"\n}\n]' http://localhost:59881/api/v3/provisionwatcher\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/","title":"Custom Build","text":""},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#get-the-device-usb-camera-source-code","title":"Get the Device USB Camera Source Code","text":"Change into the edgex directory:
cd ~/edgex\n
Clone the device-usb-camera repository:
git clone https://github.com/edgexfoundry/device-usb-camera.git\n
Checkout the latest release (main):
git checkout main\n
Each device resource should have a mandatory attribute named command
to indicate what action the device service should take for it.
Commands can be one of two types:
METADATA_
prefix commands are used to get camera metadata. Snippet from general.usb.device.yaml
deviceResources:\n- name: \"CameraInfo\"\ndescription: >-\nCamera information including driver name, device name, bus info, and capabilities.\nSee https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-querycap.html.\nattributes:\n{ command: \"METADATA_DEVICE_CAPABILITY\" }\nproperties:\nvalueType: \"Object\"\nreadWrite: \"R\"\n
VIDEO_
prefix commands are related to the video stream. Snippet from general.usb.device.yaml
deviceResources:\n- name: \"StreamURI\"\ndescription: \"Get video-streaming URI.\"\nattributes:\n{ command: \"VIDEO_STREAM_URI\" }\nproperties:\nvalueType: \"String\"\nreadWrite: \"R\"\n
For all supported commands, refer to the sample at cmd/res/profiles/general.usb.camera.yaml
.
Note
In general, this sample should be applicable to all types of USB cameras.
Note
You don't need to define a device profile yourself unless you want to modify resource names or set default values for video options.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#define-the-device","title":"Define the device","text":"The device's protocol properties contain: * Path
is a file descriptor of the camera created by the OS. You can find the path of the connected USB camera through the v4l2-ctl utility. * AutoStreaming
indicates whether the device service should automatically start video streaming for cameras. Default value is false.
Snippet from general.usb.camera.yaml.example
deviceList:\n- name: \"example-camera\"\nprofileName: \"USB-Camera-General\"\ndescription: \"Example Camera\"\nlabels: [ \"device-usb-camera-example\", ]\nprotocols:\nUSB:\nPath: \"/dev/video0\"\nAutoStreaming: \"false\"\n
See the examples at cmd/res/devices
Note
When a new device is created in Core Metadata, a callback function of the device service will be called to add the device card name and serial number to protocol properties for identification purposes. These two pieces of information are obtained through V4L2
API and udev
utility.
Enable or disable the RTSP server and set its hostname and port in the Driver
section of the device-usb-camera/cmd/res/configuration.yaml
file. The default values can be used in this guide. The RtspAuthenticationServer value indicates the internal hostname and port on which the device service will listen for RTSP authentication requests. If this value is changed, you will also have to change the mediamtx configuration to point to the new hostname/port.
Snippet from configuration.yaml
Driver:\nEnableRtspServer: \"true\"\nRtspServerHostName: \"localhost\"\nRtspTcpPort: \"8554\"\nRtspAuthenticationServer: \"localhost:8000\"\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#configure-rtsp-authentication","title":"Configure RTSP authentication","text":"Set the username and password
Snippet from configuration.yaml
...\nWritable:\nLogLevel: \"INFO\"\nInsecureSecrets:\nrtspauth:\nSecretName: rtspauth\nSecretData:\nusername: \"<set-username>\"\npassword: \"<set-password>\"\n
For more information on rtsp authentication, including how to disable it, see here
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/custom-build/#building-the-docker-image","title":"Building the docker image","text":"Change into newly created directory:
cd ~/edgex/device-usb-camera\n
Build the docker image of the device-usb-camera service:
make docker\n
[Optional] Build with NATS Messaging Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. This means that the published Docker image and Snaps do not include the NATS messaging capability. To build the docker image using NATS, run make docker-nats: make docker-nats\n
See the Compose Builder nats-bus
option to generate a compose file for NATS and local dev images. Navigate to the Edgex compose directory.
cd ~/edgex/edgex-compose/compose-builder\n
Update the .env
file to add the registry and image version variable for device-usb-camera. Add the following registry and version information:
DEVICE_USBCAM_VERSION=0.0.0-dev\n
Update add-device-usb-camera.yml
to point to the local image:
services:\ndevice-usb-camera:\n  image: edgexfoundry/device-usb-camera${ARCH}:${DEVICE_USBCAM_VERSION}\n
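With the .env and add-device-usb-camera.yml changes in place, Compose Builder can generate a compose file that uses the locally built image. A hedged example, assuming the no-secty and ds-usb-camera options used elsewhere in this guide (append nats-bus when the NATS build is required):
make gen no-secty ds-usb-camera\n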
Deploy the device service>
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/","title":"Deployment","text":"Follow this guide to deploy and run the service.
Docker or Native: Navigate to the Edgex compose directory.
cd edgex-compose/compose-builder\n
Checkout the latest release (main):
git checkout main\n
Run EdgeX with the USB microservice in secure or non-secure mode:
Navigate to the Edgex compose directory.
cd edgex-compose/compose-builder\n
Checkout the latest release (main):
git checkout main\n
Run EdgeX:
make run no-secty\n
Navigate out of the edgex-compose
directory to the device-usb-camera
directory:
cd device-usb-camera\n
Checkout the latest release (main):
git checkout main\n
Build the executable
make build\n
[Optional] Build with NATS Messaging Currently, the NATS Messaging capability (NATS MessageBus) is opt-in at build time. To build using NATS, run make build-nats:
make build-nats\n
Deploy the service
cd cmd && EDGEX_SECURITY_SECRET_STORE=false ./device-usb-camera\n
make run ds-usb-camera no-secty\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#secure-mode","title":"Secure mode","text":"Note
Recommended for secure and production level deployments.
make run ds-usb-camera\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#token-generation-secure-mode-only","title":"Token Generation (secure mode only)","text":"Note
Wait for some time for the services to be fully up before executing the next set of commands. Securely store the Consul ACL token and the JWT token generated, which are needed to map credentials and execute APIs. It is not recommended to store these secrets in cleartext on your machine.
Note
The JWT token expires after 119 minutes, and you will need to generate a new one.
Generate the Consul ACL Token. Use the token generated anywhere you see <consul-token>
in the documentation.
make get-consul-acl-token\n
Example output: 12345678-abcd-1234-abcd-123456789abc\n
Generate the JWT Token. Use the token generated anywhere you see <jwt-token>
in the documentation.
make get-token\n
Example output: eyJhbGciOiJFUzM4NCIsImtpZCI6IjUyNzM1NWU4LTQ0OWYtNDhhZC05ZGIwLTM4NTJjOTYxMjA4ZiJ9.eyJhdWQiOiJlZGdleCIsImV4cCI6MTY4NDk2MDI0MSwiaWF0IjoxNjg0OTU2NjQxLCJpc3MiOiIvdjEvaWRlbnRpdHkvb2lkYyIsIm5hbWUiOiJlZGdleHVzZXIiLCJuYW1lc3BhY2UiOiJyb290Iiwic3ViIjoiMGRjNThlNDMtNzBlNS1kMzRjLWIxM2QtZTkxNDM2ODQ5NWU0In0.oa8Fac9aXPptVmHVZ2vjymG4pIvF9R9PIzHrT3dAU11fepRi_rm7tSeq_VvBUOFDT_JHwxDngK1VqBVLRoYWtGSA2ewFtFjEJRj-l83Vz33KySy0rHteJIgVFVi1V7q5
Note
Secrets such as passwords, certificates, tokens and more in EdgeX are stored in a secret store, which is implemented using Vault, a product of HashiCorp. Vault supports security features allowing for the issuing of Consul tokens. The JWT token is required for the API Gateway, which is a trust boundary for EdgeX services. It allows external clients to be verified when issuing REST requests to the microservices. For more info refer to Secure Consul, API Gateway and EdgeX Security.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#verify-service-device-profiles-and-device","title":"Verify Service, Device Profiles, and Device","text":"Check the status of the container:
docker ps -f name=device-usb-camera\n
The status column will indicate if the container is running and how long it has been up.
Example output:
CONTAINER ID   IMAGE                                       COMMAND                  CREATED        STATUS        PORTS                                                  NAMES\nf0a1c646f324   edgexfoundry/device-usb-camera:0.0.0-dev   \"/docker-entrypoint.\u2026\"   26 hours ago   Up 20 hours   127.0.0.1:8554->8554/tcp, 127.0.0.1:59983->59983/tcp   edgex-device-usb-camera\n
Check whether the device service is added to EdgeX:
Note
If running in secure mode, all the API executions need the JWT token generated previously. E.g.
curl --location --request GET 'http://localhost:59881/api/v3/deviceservice/name/device-usb-camera' \\\n--header 'Authorization: Bearer <jwt-token>' \\\n--data-raw ''\n
curl -s http://localhost:59881/api/v3/deviceservice/name/device-usb-camera | jq .\n
Successful:
{\n\"apiVersion\" : \"v3\",\n\"statusCode\": 200,\n\"service\": {\n\"created\": 1658769423192,\n\"modified\": 1658872893286,\n\"id\": \"04470def-7b5b-4362-9958-bc5ff9f54f1e\",\n\"name\": \"device-usb-camera\",\n\"baseAddress\": \"http://edgex-device-usb-camera:59983\",\n\"adminState\": \"UNLOCKED\"\n}\n}\n
Unsuccessful: {\n\"apiVersion\" : \"v3\",\n\"message\": \"fail to query device service by name device-usb-camera\",\n\"statusCode\": 404\n}\n
Verify device(s) have been successfully added to core-metadata.
curl -s http://localhost:59881/api/v3/device/all | jq -r '\"deviceName: \" + '.devices[].name''\n
Example output:
deviceName: NexiGo_N930AF_FHD_Webcam_NexiG-20201217010\n
Note
The jq -r
option is used to reduce the size of the displayed response. The entire device with all information can be seen by removing -r '\"deviceName: \" + '.devices[].name'', and replacing it with '.'
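For example, the full device objects can be displayed with this variant of the command above:
curl -s http://localhost:59881/api/v3/device/all | jq .\n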
Note
If running in secure mode this command needs the Consul ACL token generated previously.
curl -H \"X-Consul-Token:<consul-token>\" -X GET \"http://localhost:8500/v1/kv/edgex/v3/device-usb-camera?keys=true\"\n
Note
If you want to disable rtsp authentication entirely, you must build a custom image.
Non-secure Mode Example credential command
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"rtspauth\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<pick-a-username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<pick-a-secure-password>\"\n }\n ]\n}' -X POST http://localhost:59983/api/v3/secret\n
Secure Mode Navigate to the edgex-compose/compose-builder
directory, then generate a JWT token: make get-token\n
Example credential command
curl --data '{\n \"apiVersion\" : \"v3\",\n \"secretName\": \"rtspauth\",\n \"secretData\":[\n {\n \"key\":\"username\",\n \"value\":\"<pick-a-username>\"\n },\n {\n \"key\":\"password\",\n \"value\":\"<pick-a-secure-password>\"\n }\n ]\n}' -H Authorization:Bearer \"<enter your JWT token here (make get-token)>\" -X POST http://localhost:59983/api/v3/secret\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/deployment/#manage-devices","title":"Manage Devices","text":"Warning
This section only needs to be performed if discovery is disabled. Discovery is enabled by default.
Devices can be added to the service by defining them in a static configuration file, by discovering them dynamically, or via the REST API. For this example, the device will be added using the REST API.
Run the following command to determine the Path
to the usb camera for video streaming:
v4l2-ctl --list-devices\n
The output should look similar to this:
NexiGo N930AF FHD Webcam: NexiG (usb-0000:00:14.0-1):\n /dev/video6\n /dev/video7\n /dev/media2\n
For this example, the Path
is /dev/video6
.
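Optionally, v4l2-ctl can also list the pixel formats and resolutions the camera supports, which helps when choosing streaming options later. A hedged example using the device node found above:
v4l2-ctl -d /dev/video6 --list-formats-ext\n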
Edit the information to appropriately match the camera. Find more information about the device protocol properties here.
Example Command
curl -X POST -H 'Content-Type: application/json' \\\nhttp://localhost:59881/api/v3/device \\\n-d '[\n    {\n        \"apiVersion\" : \"v3\",\n        \"device\": {\n            \"name\": \"Camera001\",\n            \"serviceName\": \"device-usb-camera\",\n            \"profileName\": \"USB-Camera-General\",\n            \"description\": \"My test camera\",\n            \"adminState\": \"UNLOCKED\",\n            \"operatingState\": \"UP\",\n            \"protocols\": {\n                \"USB\": {\n                    \"CardName\": \"NexiGo N930AF FHD Webcam: NexiG\",\n                    \"Paths\": [\"/dev/video6\"],\n                    \"AutoStreaming\": \"false\"\n                }\n            }\n        }\n    }\n]'\n
Example output:
[{\"apiVersion\" : \"v3\",\"statusCode\":201,\"id\":\"fb5fb7f2-768b-4298-a916-d4779523c6b5\"}]\n
Learn how to use the device service>
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/","title":"General Usage","text":"This document will describe how to execute some of the most important types of commands used with the device service.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#start-video-streaming","title":"Start Video Streaming","text":"Unless the device service is configured to stream video from the camera automatically, a StartStreaming
command must be sent to the device service.
Note
Streaming credentials for the rtsp stream must be added prior to starting the stream. Please refer to Deployment for additional information.
There are two types of options: - The options that start with Input
as a prefix are used for camera configuration, such as specifying the image size and pixel format. - The options that start with Output
as a prefix are used for video output configuration, such as specifying aspect ratio and quality.
These options can be passed in through Object value when calling StartStreaming.
Query parameter: - device name
: The name of the camera
Example StartStreaming Command
curl -X PUT -d '{\n \"StartStreaming\": {\n \"InputImageSize\": \"640x480\",\n \"OutputVideoQuality\": \"5\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/StartStreaming\n
Note
If running in secure mode, all the API executions (for this API and subsequent APIs) need the JWT token generated previously. E.g.
curl -X PUT -d '{\n \"StartStreaming\": {\n \"InputImageSize\": \"640x480\",\n \"OutputVideoQuality\": \"5\"\n }\n}' http://localhost:59882/api/v3/device/name/<device name>/StartStreaming \\\n--header 'Authorization: Bearer <jwt-token>'\n
Example output:
{\"apiVersion\":\"v3\",\"statusCode\":200}\n
Supported Input options:
InputFps
: Ignore original timestamps and instead generate timestamps assuming constant frame rate fps. (default - same as source) InputImageSize
: Specifies the image size of the camera. The format is wxh
, for example \"640x480\". (default - automatically selected by FFmpeg) InputPixelFormat
: Set the preferred pixel format (for raw video). (default - automatically selected by FFmpeg) Supported Output options:
OutputFrames
: Set the number of video frames to output. (default - no limitation on frames) OutputFps
: Duplicate or drop input frames to achieve constant output frame rate fps. (default - same as InputFps) OutputImageSize
: Performs image rescaling. The format is wxh
, for example \"640x480\". (default - same as InputImageSize) OutputAspect
: Set the video display aspect ratio specified by aspect. For example \"4:3\", \"16:9\". (default - same as source) OutputVideoCodec
: Set the video codec. For example \"mpeg4\", \"h264\". (default - mpeg4) OutputVideoQuality
: Use fixed video quality level. Range is an integer between 1 and 31, with 31 being the worst quality. (default - dynamically set by FFmpeg) The device service provides a way to determine the stream URI of a camera.
Query parameter: - device name
: The name of the camera
Example StreamURI Command
curl -s http://localhost:59882/api/v3/device/name/<device name>/StreamURI | jq -r '\"StreamURI: \" + '.event.readings[].value''\n
Example output:
StreamURI: rtsp://localhost:8554/stream/NexiGo_N930AF_FHD_Webcam__NexiG-20201217010\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#play-the-rtsp-stream","title":"Play the RTSP stream.","text":"mplayer can be used to stream. The command follows this format:
mplayer rtsp://'<username>:<password>'@<IP address>:<port>/<streamname>\n
Using the streamURI
returned from the previous step, run mplayer:
Example Stream Command
mplayer rtsp://'admin:pass'@localhost:8554/stream/NexiGo_N930AF_FHD_Webcam__NexiG-20201217010\n
To shut down mplayer, use the ctrl-c command.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#stop-video-streaming","title":"Stop Video Streaming","text":"To stop the usb camera from live streaming, use the following command:
Query parameter: - device name
: The name of the camera
Example StopStreaming Command
curl -X PUT -d '{\n \"StopStreaming\": \"true\"\n}' http://localhost:59882/api/v3/device/name/<device name>/StopStreaming\n
Example output:
{\"apiVersion\":\"v3\",\"statusCode\":200}\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#optional-shutting-down","title":"Optional: Shutting Down","text":"To stop all EdgeX services (containers), execute the make down
command:
Navigate to the edgex-compose/compose-builder
directory.
cd ~/edgex/edgex-compose/compose-builder\n
Run this command
make down\n
To shut down and delete all volumes, run this command
Warning
This will delete all edgex-related data.
make clean\n
To verify the usb camera is set to stream video, use the command below
curl http://localhost:59882/api/v3/device/name/<device name>/StreamingStatus | jq -r '\"StreamingStatus: \" + (.event.readings[].objectValue.IsStreaming|tostring)'\n
If the StreamingStatus is false, the camera is not configured to stream video. Please try the Start Video Streaming section again here."},{"location":"microservices/device/services/device-usb-camera/walkthrough/general-usage/#v4l2-error","title":"V4L2 error","text":"If you get an error like this:
.../go4vl@v0.0.2/v4l2/capability.go:48:33: could not determine kind of name for C.V4L2_CAP_IO_MC\n.../go4vl@v0.0.2/v4l2/capability.go:46:33: could not determine kind of name for C.V4L2_CAP_META_OUTPUT\n
You are missing the appropriate kernel headers needed by the github.com/vladimirvivien/go4vl
module. One possible solution is to manually download and install a more recent version of the libc-dev for your OS. In the case of Ubuntu 20.04, one is not available in the normal repositories, so you can get it via these steps:
wget https://launchpad.net/~canonical-kernel-team/+archive/ubuntu/bootstrap/+build/20950478/+files/linux-libc-dev_5.10.0-14.15_amd64.deb\nsudo dpkg -i linux-libc-dev_5.10.0-14.15_amd64.deb\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/","title":"Setup","text":"Follow this guide to set up your system to run the USB Device Service.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#system-requirements","title":"System Requirements","text":"The software has dependencies, including Git, Docker, Docker Compose, and assorted tools. Follow the instructions from the following link to install any dependency that are not already installed.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#install-git","title":"Install Git","text":"Install Git from the official repository as documented on the Git SCM site.
Update installation repositories:
sudo apt update\n
Add the Git repository:
sudo add-apt-repository ppa:git-core/ppa -y\n
Install Git:
sudo apt install git\n
Install Docker from the official repository as documented on the Docker site.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#verify-docker","title":"Verify Docker","text":"To enable running Docker commands without the preface of sudo, add the user to the Docker group. Then run Docker with the hello-world
test.
Create Docker group:
sudo groupadd docker\n
Note
If the group already exists, groupadd
outputs a message: groupadd: group docker already exists
. This is OK.
Add User to group:
sudo usermod -aG docker $USER\n
Please logout or reboot for the changes to take effect.
To verify the Docker installation, run hello-world
:
docker run hello-world\n
A Hello from Docker! greeting indicates successful installation.
Unable to find image 'hello-world:latest' locally\nlatest: Pulling from library/hello-world\n2db29710123e: Pull complete \nDigest: sha256:10d7d58d5ebd2a652f4d93fdd86da8f265f5318c6a73cc5b6a9798ff6d2b2e67\nStatus: Downloaded newer image for hello-world:latest\n\nHello from Docker!\nThis message shows that your installation appears to be working correctly.\n...\n
Install Docker compose from the official repository as documented on the Docker Compose site.
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#install-tools","title":"Install Tools","text":"Install the build, media streaming, and parsing tools:
sudo apt install build-essential jq curl v4l-utils mplayer\n
Note
The device service ONLY works on Linux with kernel v5.10 or higher.
The table below lists command line tools this guide uses to help with EdgeX configuration and device setup.
Tool Description Note build-essential Developer tools such as libc, gcc, g++ and make. jq Parses the JSON object returned from thecurl
requests. The jq
command includes parameters that are used to parse and format data. In this tutorial, the jq
command has been configured to return and format appropriate data for each curl
command that is piped into it. curl Allows the user to connect to services such as EdgeX. Use curl to get transfer information either to or from this service. In the tutorial, use curl
to communicate with the EdgeX API. The call will return a JSON object. v4l-utils USB camera utility tools This will be used to determine camera paths on the system for manual addition of cameras. mplayer Video player Use this to view the camera stream. >Table 1: Command Line Tools"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#download-edgex-compose","title":"Download EdgeX Compose","text":"Clone the EdgeX compose repository:
git clone https://github.com/edgexfoundry/edgex-compose.git\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#proxy-setup-optional","title":"Proxy Setup (Optional)","text":"Note
These steps are only required if a proxy is present in the user environment.
Set up Docker Daemon or Docker Desktop to use the proxied environment.
Follow guide here for Docker Daemon proxy setup (Linux)
Follow guide here for Docker Desktop proxy setup (Windows)
Configuration file to set Docker Daemon proxy via daemon.json
{\n \"proxies\": {\n \"http-proxy\": \"http://proxy.example.com:3128\",\n \"https-proxy\": \"https://proxy.example.com:3129\",\n \"no-proxy\": \"*.test.example.com,.example.org,127.0.0.0/8\"\n }\n }\n
Note if building custom images
If building your own custom images, set environment variables for HTTP_PROXY, HTTPS_PROXY and NO_PROXY
Example
export HTTP_PROXY=http://proxy.example.com:3128\nexport HTTPS_PROXY=https://proxy.example.com:3129\nexport NO_PROXY=*.test.example.com,localhost,127.0.0.0/8\n
"},{"location":"microservices/device/services/device-usb-camera/walkthrough/setup/#next-steps","title":"Next Steps","text":"Deploy the service with default images>
Warning
While not recommended, you can follow the process for manually building the images.
Build a custom image for the service>
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/","title":"Device Virtual","text":""},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#introduction","title":"Introduction","text":"The virtual device service simulates different kinds of devices to generate events and readings to the core data micro service, and users send commands and get responses through the command and control micro service. These features of the virtual device services are useful when executing functional or performance tests without having any real devices.
The virtual device service, built in Go and based on the device service Go SDK, can simulate sensors by generating data of the following data types:
By default, the virtual device service is included and configured to run with all EdgeX Docker Compose files. This allows users to have a complete EdgeX system up and running - with simulated data from the virtual device service - in minutes.
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#using-the-virtual-device-service","title":"Using the Virtual Device Service","text":"The virtual device service contains 4 pre-defined devices as random value generators:
These devices are created by the virtual device service in core metadata when the service first initializes. These devices are defined by device profiles that ship with the virtual device service. Each virtual device causes the generation of one to many values of the type specified by the device name. For example, Random-Integer-Device generates integer values: Int8, Int16, Int32 and Int64. As with all devices, the deviceResources in the associated device profile of the device defind what values are produced by the device service. In the case of Random-Integer-Device, the Int8, Int16, Int32 and Int64 values are defined as deviceResources (see the device profile).
Additionally, there is an accompanying deviceResource for each of the generated value deviceResource. Each deviceResources has an associated EnableRandomization_X deviceResource. In the case of the integer deviceResources above, there are the associated EnableRandomization_IntX deviceResources (see the device profile). The EnableRandomization deviceResources are boolean values, and when set to true, the associated simulated sensor value is generated by the device service. When the EnableRandomization_IntX value is set to false, then the associated simulator sensor value is fixed.
Info
The Enable_Randomization attribute of resource is automatically set to false when you use a PUT
command to set a specified generated value. Furtehr, the minimum and maximum values of generated value deviceResource can be specified in the device profile. Below, Int8 is set to be between -100 and 100.
deviceResources:\n-\nname: \"Int8\"\nisHidden: false\ndescription: \"Generate random int8 value\"\nproperties:\nvalueType: \"Int8\"\nreadWrite: \"RW\"\nminimum: \"-100\"\nmaximum: \"100\"\ndefaultValue: \"0\"\n
For the binary deviceResources, values are generated by the function rand.Read(p []byte) in the Go math/rand package. The []byte size is fixed to MaxBinaryBytes/1000.
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#core-command-and-the-virtual-device-service","title":"Core Command and the Virtual Device Service","text":"Use the following core command service APIs to execute commands against the virtual device service for the specified devices. Both GET
and PUT
commands can be issued with these APIs. GET
command request the next generated value while PUT
commands will allow you to disable randomization (EnableRandomization) and set the fixed values to be returned by the device.
Note
Port 59882 is the default port for the core command service.
"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration properties common to all services.
For each device, the virual device service will contain a DeviceList with associated Protocols and AutoEvents as shown by the example below.
DeviceListDeviceList/DeviceList.Protocols/DeviceList.Protocols.otherDeviceList/DeviceList.AutoEvents Property Example Value Description properties used in defining the static provisioning of each of the virtual devices Name 'Random-Integer-Device' name of the virtual device ProfileName 'Random-Integer-Device' device profile that defines the resources and commands of the virtual device Description 'Example of Device Virtual' description of the virtual device Labels ['device-virtual-example'] labels array used for searching for virtual devices Property Example Value Description Address 'device-virtual-int-01' address for the virtual device Protocol '300' Property Default Value Description properties used to define how often an event/reading is schedule for collection to send to core data from the virtual device Interval '15s' every 15 seconds OnChange false collect data regardless of change SourceName 'Int8' deviceResource to collect - in this case the Int8 resource"},{"location":"microservices/device/services/device-virtual/Ch-VirtualDevice/#api-reference","title":"API Reference","text":"Device Service - SDK- API Reference
"},{"location":"microservices/general/","title":"Cross Cutting Concerns","text":""},{"location":"microservices/general/#event-tagging","title":"Event Tagging","text":"In an edge solution, it is likely that several instances of EdgeX are all sending edge data into a central location (enterprise system, cloud provider, etc.)
In these circumstances, it will be critical to associate the data to its origin. That origin could be specified by the GPS location of the sensor, the name or identification of the sensor, the name or identification of some edge gateway that originally collected the data, or many other means.
EdgeX provides the means to \u201ctag\u201d the event data from any point in the system. The Event object has a Tags
property which is a key/value pair map that allows any service that creates or otherwise handles events to add custom information to the Event in order to help identify its origin or otherwise label it before it is sent to the north side.
For example, a device service could populate the Tags
property with latitude and longitude key/value pairs of the physical location of the sensor when the Event is created to send sensed information to Core Data.
When the Event gets to the Application Service Configurable, for example, the service has an optional function (defined by Writable.Pipeline.Functions.AddTags
in configuration) that will add additional key/value pair to the Event Tags
. The key and value for the additional tag are provided in configuration (as shown by the example below). Multiple tags can be provide separated by commas.
AddTags:\nParameters:\ntags: \"GatewayId:HoustonStore000123,Latitude:29.630771,Longitude:-95.377603\"\n
"},{"location":"microservices/general/#custom-application-service","title":"Custom Application Service","text":"In the case, of a custom application service, an AddTags function can be used to add a collection of specified tags to the Event's Tags collection (see Built in Transforms/Functions)
If the Event already has Tags
when it arrives at the application service, then configured tags will be added to the Tags
map. If the configured tags have the same key as an existing key in the Tags
map, then the configured key/value will override what is already in the Event Tags
map.
All services have the ability to collect Common Service Metrics, only Core Data, Application Services and Device Services are collecting additional service specific metrics. Additional service metrics will be added to all services in future releases. See Writable.Telemetry
at Common Configuration for details on configuring the reporting of service metrics.
See Custom Application Service Metrics for more detail on Application Services capability to collect their own custom service metrics via use of the App SDK API.
See Custom Device Service Metrics for more detail on Go Device Services capability to collect their own custom service metrics via use of the Go Device SDK API.
Each service defines (in code) a set of service metrics that it collects and optionally reports if configured. The names the service gives to its metrics are used in the service's Telemetry
configuration to enable/disable the reporting of those metrics. See Core Data's Writable.Telemetry
at Core Data Configuration as example of the names used for the service metrics that Core Data is currently collecting.
The following metric types are available to be used by the EdgeX services:
counter-count
gauge-value
gaugeFloat64-value
timer-count
, timer-min
, timer-max
, timer-mean
, timer-stddev
and timer-variance
histogram-count
, histogram-min
, histogram-max
, histogram-mean
, histogram-stddev
and histogram-variance
Service metrics which are enabled for reporting are published to the EdgeX MessageBug every configured interval using the configured Telemetry
base topic. See Writable.Telemetry
at Common Configuration for details on these configuration items. The service name
and the metric name
are added to the configured base topic. This allows subscribers to subscribe only for specific metrics or metrics from specific services. Each metric is published (reported) independently using the Metric DTO (Data Transfer Object) define in go-mod-core-contracts.
The aggregation of these service metrics is left to adopters to implement as best suits their deployment(s). This can be accomplished with a custom application service that sets the function pipeline Target Type
to the dtos.Metric
type. Then create a custom pipeline function which aggregates the metrics and provides them to the telemetry dashboard service of choice via push (export) or pull (custom GET endpoint). See App Services here for more details on Target Type
.
Example - DTO from Core Data in JSON format for the EventsPersisted
metric as publish to the EdgeX MessageBus
{\n\"apiVersion\" : \"v3\",\n\"name\": \"EventsPersisted\",\n\"fields\": [\n{\n\"name\": \"counter-count\",\n\"value\": 276\n}\n],\n\"tags\": [\n{\n\"name\": \"service\",\n\"value\": \"core-data\"\n}\n],\n\"timestamp\": 1650301898926166900\n}\n
Note
The service name is added to the tags for every metric reported from each service. Additional tags may be added via the service's Telemetry configuration. See the Writable.Telemetry
at Common Configuration for more details. A service may also add metric specific tags via code when it collects the individual metrics.
All services have the ability to collect the following common service metrics
EdgeX 3.1
Support for loading files from a remote location via URI is new in EdgeX 3.1.
Different files like configurations, units of measurements, device profiles, device definitions, and provision watchers can be loaded either from the local file system or from a remote location. For the remote location, HTTP and HTTPS URIs are supported. When using HTTPS, certificate validation is performed using the system's built-in trust anchors.
"},{"location":"microservices/general/#authentication","title":"Authentication","text":""},{"location":"microservices/general/#username-password-in-uri-not-recommended","title":"username-password in URI (not recommended)","text":"Users can specify the username-password (<username>:<password>@
) in the URI as plain text. This is ok network wise when using HTTPS, but if the credentials are specified in configuration or other service files, this is not a good practice to follow.
Example - configuration file with plain text username-password
in URI
[UoM]\n UoMFile = \"https://myuser:mypassword@example.com/uom.yaml\"\n
"},{"location":"microservices/general/#secure-credentials-preferred","title":"Secure Credentials (preferred)","text":"The edgexSecretName
query parameter can be specified in the URI as a secure way for users to specify credentials. When running in secure mode, this parameter specifies a Secret Name from the service's Secret Store where the credentials must be seeded. If insecure mode is running, edgexSecretName
must be specified in the InsecureSecrets section of the configuration.
Example - configuration file with edgexSecretName
query parameter
[UoM]\nUoMFile = \"https://example.com/uom.yaml?edgexSecretName=mySecretName\"\n
The authentication type and credentials are contained in the secret data specified by the Secret Name. Only httpheader
is currently supported. The headername
specifies the authentication method (ie Basic Auth, API-Key, Bearer)
Example - secret data using httpheader
type=httpheader\nheadername=<name>\nheadercontents=<contents>\n
For a request header set as: GET https://example.com/uom.yaml HTTP/1.1\n<name>: <contents>\n
"},{"location":"microservices/general/messagebus/","title":"EdgeX MessageBus","text":""},{"location":"microservices/general/messagebus/#introduction","title":"Introduction","text":"EdgeX has an internal message bus referred to as the EdgeX MessageBus , which is used for internal communications between EdgeX services. An EdgeX Service is any Core/Support/Application/Device Service from EdgeX or any custom Application or Device Service built with the EdgeX SDKs.
The following diagram shows how each of the EdgeX Service use the EdgeX MessageBus.
The EdgeX MessageBus is meant for internal EdgeX service to service communications. It is not meant as an entry point for external services to communicate with the internal EdgeX services. The eKuiper Rules Engine is an exception to this as it is tightly integrated with EdgeX.
The EdgeX services intended as external entry points are:
REST API on all the EdgeX services - Accessed directly in non-secure mode or via the API Gateway when running in secure mode
App Service using External MQTT Trigger - An App Service configured to use the External MQTT Trigger will accept data from external services on an \"external\" MQTT connection
App Service using HTTP Trigger - An App Service configured to use the HTTP Trigger will accept data from external services on an \"external\" REST connection. Accessed in the same manner as other EdgeX REST APIs.
App Service using Custom Trigger - An App Service configured to use a Custom Trigger can accept data from external services or over additional protocols with few limitations. See Custom Trigger Example for an example.
Core Command External MQTT Connection - Core Command now receives command requests and publishes responses via an external MQTT connection that is separate from the EdgeX MessageBus. The requests are forwarded to the EdgeX MessageBus and the corresponding responses are forwarded back to the external MQTT connection.
Originally, the EdgeX MessageBus was only used to send Event/Readings from Core Data to the Application Services layer. In recent releases, more services use the EdgeX MessageBus rather than REST for inter service communication.
All messages published to the EdgeX MessageBus are wrapped in a MessageEnvelope
. This envelope contains metadata describing the message payload, such as the payload Content Type (JSON or CBOR), Correlation Id, etc.
Note
Unless noted below, the MessageEnvelope
is JSON encoded when publishing it to the EdgeX MessageBus. This does result in the MessageEnvelope
's payload being double encoded.
The EdgeX MessageBus is defined by the message bus abstraction implemented in go-mod-messaging. This module defines an abstract client API which currently has four implementations of the API for the different underlying message bus protocols.
"},{"location":"microservices/general/messagebus/#common-messagebus-configuration","title":"Common MessageBus Configuration","text":"Each service that uses the EdgeX MessageBus has a configuration section which defines the implementation to use, the connection method, and the underlying protocol client. This section is the MessageBus:
section in the service common configuration for all EdgeX services. See the MessageBus tab in Common Configuration for more details.
The common MessageBus configuration elements for each implementation are:
Type=redis
Type=mqtt
Type=nats-core
Type=nats-jetstream
redis
for Redis Pub/Subtcp
for MQTT 3.1tcp
for NATS Coretcp
for NATS JetStreamNote
In general all EdgeX Services running in a deployment must be configured to use the same EdgeX MessageBus implementation. By default all services that use the EdgeX MessageBus are configured to use the Redis Pub/Sub implementation. NATS does support a compatibility mode with MQTT. See the NATS MQTT Mode section below for details.
"},{"location":"microservices/general/messagebus/#redis-pubsub","title":"Redis Pub/Sub","text":"As stated above this is the default implementation that all EdgeX Services are configured to use. It takes advantage of the existing Redis DB instance for the broker. Redis Pub/Sub is a fire and forget protocol, so delivery is not guaranteed. If more robustness is required, use the MQTT or NATS implementations.
"},{"location":"microservices/general/messagebus/#configuration","title":"Configuration","text":"See Common Configuration section above for the common configuration elements for all implementations.
"},{"location":"microservices/general/messagebus/#security-configuration","title":"Security Configuration","text":"Option Default Value Description AuthModeusernamepassword
Mode of authentication to use. Values are none
, usernamepassword
, clientcert
, or cacert
. In secure mode Redis Pub/Sub uses usernamepassword
SecretName redisb
Secret name used to look up credentials in the service's SecretStore"},{"location":"microservices/general/messagebus/#additional-configuration","title":"Additional Configuration","text":"This implementation does not have any additional configuration.
"},{"location":"microservices/general/messagebus/#mqtt-31","title":"MQTT 3.1","text":"Robust message bus protocol, which has additional configuration options for robustness and requires an additional MQTT Broker to be running. See MQTT Spec for more details on this protocol.
"},{"location":"microservices/general/messagebus/#configuration_1","title":"Configuration","text":"See Common Configuration section above for the common configuration elements for all implementations.
"},{"location":"microservices/general/messagebus/#security-configuration_1","title":"Security Configuration","text":"Option Default Value Description AuthModenone
Mode of authentication to use. Values are none
, usernamepassword
, clientcert
, or cacert
. In secure mode the MQTT Broker uses usernamepassword
SecretName blank Secret name used to look up credentials in the service's SecretStore"},{"location":"microservices/general/messagebus/#additional-configuration_1","title":"Additional Configuration","text":"Except where noted default values exist in the service common configuration.
Option Default Value Description ClientId service key Unique name of the client connecting to the MQTT broker (Set in each service's private configuration) Qos0
Quality of Service level 0: At most once delivery1: At least once delivery2: Exactly once deliverySee the MQTT QOS Spec for more details KeepAlive 10
Maximum time interval in seconds that is permitted to elapse between the point at which the client finishes transmitting one control packet and the point it starts sending the next. If exceeded, the broker will close the client connection Retained false
If true, Server MUST store the Application Message and its QoS, so that it can be delivered to future subscribers whose subscriptions match its topic name. See Retained Messages for more details. AutoReconnect true
If true, automatically attempts to reconnect to the broker when connection is lost ConnectTimeout 30
Timeout in seconds for the connection to the broker to be successful CleanSession false
if true, Server MUST discard any previous Session and start a new one. This Session lasts as long as the Network Connection"},{"location":"microservices/general/messagebus/#nats","title":"NATS","text":"NATS is a high performance messaging system that offers some interesting options for local deployments. It uses a lightweight text-based protocol notably similar to http. This protocol includes full header support that can allow conveyance of the EdgeX MessageEnvelope
across service boundaries without the need for double-encoding if all services in the deployment are using NATS. Currently services must be specially built with the include_nats_messaging
tag to enable this option.
An ordinary NATS server uses interest, or existence of a client subscription, as the basis for subject availability on the server. This makes Publish a fire and forget operation much like Redis, and gives the system an at most once
quality of service.
The JetStream persistence layer binds NATS subjects to persistent streams which enables the server to collect messages for subjects that have no registered interest, and allows support for at least once
quality of service. Notably, services running in core-nats
mode can still subscribe and publish to jetstream-enabled subjects without the additional overhead associated with publish acknowledgement.
See Common Configuration section above for the common configuration elements for all implementations.
"},{"location":"microservices/general/messagebus/#security-configuration_2","title":"Security Configuration","text":"Option Default Value Description AuthModenone
Mode of authentication to use. Values are none
, usernamepassword
, clientcert
, or cacert
. The NATS Server is currently not secured in secure mode. SecretName blank Secret name used to look up credentials in the service's SecretStore NKeySeedFile blank Path to a seed file to use for authentication. See the NATS documentation for more detail CredentialsFile blank Path to a credentials file to use for authentication. See the NATS documentation for more detail"},{"location":"microservices/general/messagebus/#additional-configuration_2","title":"Additional Configuration","text":"Except where noted default values exist in the service common configuration.
Option Default Value Description ClientId service key Unique name of the client connecting to the NATS Server (Set in each service's private configuration) Formatnats
Format of the actual message published. Valid values are:- nats : Metadata from the MessageEnvelope
are put into the NATS header and the payload from the MessageEnvelope
is published as is. Preferred format when all services are using NATS- json : JSON encodes the MessageEnvelope
and publish it as the message. Use this format for compatibility when other services are using MQTT 3.1 and running the NATS Server in MQTT mode. ConnectTimeout 30
Timeout in seconds for the connection to the broker to be successful RetryOnFailedConnect false
Retry on connection failure - expects a string representation of a boolean QueueGroup blank Specifies a queue group to distribute messages from a stream to a pool of worker services Durable blank Specifies a durable consumer should be used with the given name. Note that if a durable consumer with the specified name does not exist it will be considered ephemeral and deleted by the client on drain / unsubscribe (JetStream only) Subject blank Specifies the subject for subscribing stream if a Durable is not specified - will also be formatted into a stream name to be used on subscription. This subject is used for auto-provisioning the stream if needed as well and should be configured with the 'root' topic common to all subscriptions (eg edgex/#
) to ensure that all topics on the bus are covered. (JetStream only) AutoProvision false
Automatically provision NATS streams. (JetStream only) Deliver new
Specifies delivery mode for subscriptions - options are \"new\", \"all\", \"last\" or \"lastpersubject\". See the NATS documentation for more detail (JetStream only) DefaultPubRetryAttempts 2
Number of times to attempt to retry on failed publish (JetStream only)"},{"location":"microservices/general/messagebus/#resource-provisioning-with-nats-box","title":"Resource Provisioning with nats-box","text":"While the SDK will attempt to auto-provision streams needed if configured to do so, if you need specific features or policies enabled it is generally best to provision your own. A nats-box docker image is available preloaded with various utilities to make this easier.
For information on stream provisioning using the nats cli see here.
For nkey generation a utility called nk is provided with nats-box. For generating nkey seed files see here.
For credential management a utility called nsc is provided with nats-box. For using credentials files see documentation on resolvers and the companion memory resolver tutorial.
"},{"location":"microservices/general/messagebus/#nats-mqtt-mode","title":"NATS MQTT Mode","text":"A JetStream enabled server can support MQTT connections on the same set of underlying subjects. This can be especially useful if you are using prebuilt EdgeX services like device-onvif-camera but want to transition your system towards using NATS. Note that format=json
must be used so that the NATS messagebus client can read the double-encoded envelopes sent by MQTT clients. For more information see NATS MQTT Documentation.
The EdgeX MessageBus uses multi-level topics and wildcards to allow filtering of data via subscriptions and has standardized on a MQTT like scheme. See MQTT multi-level topics and wildcards for more information.
The Redis implementation converts the Redis Pub/Sub multi-level topic scheme to match that of MQTT. In Redis Pub/Sub the \".\" is used as a level separator, \"*\" followed by a level separator is used as the single level wildcard and \"*\" at the end is used as the multiple level wildcard. These are converted to \"/\" and \"+\" and \"#\" respectively, which are used by MQTT.
The NATS implementations convert the NATS multi-level topic scheme to match that of MQTT. In NATS \".\" is used as a level separator, \"*\" is used as the single level wildcard and \">\" is used for the multi-level wild card. These are converted to \"/\", \"+\" and \"#\" respectively, which are compliant with the MQTT scheme.
Example Multi-level topics and wildcards for EdgeX MessageBus
edgex/events/#
All events coming from any device service or core data for any device profile, device or source
edgex/events/device/#
All events coming from any device service for any device profile, device or source
edgex/events/+/device-onvif-camera/#
Events coming from only device service \"device-onvif-camera\" for any device profile, device and source
edgex/events/+/+/+/camera-001/#
Events coming from any device service or core data for any device profile, but only for the device \"camera-001\" and for any source
edgex/events/device/+/onvif/+/status
Events coming from any device service for only the device profile \"onvif\", and any device and only for the source \"status\"
All EdgeX services are capable of using the Redis Pub/Sub without any changes to configuration. The released compose files and snaps use Redis Pub/Sub.
"},{"location":"microservices/general/messagebus/#mqtt-31_1","title":"MQTT 3.1","text":"All EdgeX services are capable of using MQTT 3.1 by simply making changes to each service's configuration.
Note
As mentioned above, the MQTT 3.1 implementation requires the addition of a MQTT Broker service to be running.
"},{"location":"microservices/general/messagebus/#configuration-changes","title":"Configuration Changes","text":"Edgex 3.0
For EdgeX 3.0 MessageQueue
configuration has been renamed to MessageBus and is now in common configuration.
The MessageBus configuration is in common configuration where the following changes only need to be made once and apply to all services. See the MessageBus tab in Common Configuration for more details.
Example MQTT Configurations changes for all services
The following MessageBus
configuration settings must be changed in common configuration for all EdgeX Services to use MQTT 3.1
MessageBus:\nType: \"mqtt\"\nProtocol: \"tcp\" Host: \"localhost\" # in docker this must be overriden to be the docker host name of the MQTT Broker\nPort: 1883\nAuthMode: \"none\" # set to \"usernamepassword\" when running in secure mode\nSecreName: \"message-bus\"\n...\n
Note
The optional settings that apply to MQTT are already in the common configuration, so are not included above.
"},{"location":"microservices/general/messagebus/#docker","title":"Docker","text":"The EdgeX Compose Builder utility provides an option to easily generate a compose file with all the selected services re-configured for MQTT 3.1 using environment overrides. This is accomplished by using the mqtt-bus
option. See Compose Builder README for details on all available options.
Example Secure mode compose generation for MQTT 3.1
make gen ds-virtual ds-rest mqtt-bus\n
Non-secure mode compose generation for MQTT 3.1
make gen no-secty ds-virtual ds-rest mqtt-bus\n
Note
The run
command can be used to generate and run the compose file in one command, but any changes made to the generated compose file will be overridden the next time run
is used. An alternative is to use the up
command, which runs the latest generated compose file with any modifications that may have been made.
For Snap deployment, each services' configuration has to modified manually or via environment overrides after install. For more details see the Configuration section in the Snaps getting started guide.
"},{"location":"microservices/general/messagebus/#nats_1","title":"NATS","text":"The EdgeX Go based services are not capable of using the NATS implementation without being rebuild using the include_nats_messaging
build tag. Any EdgeX Core/Support/Go Device/Application Service targeted to use NATS in a deployment must have the Makefile modified to add this build flag. The service can then be rebuild for native and/or Docker.
Core Data make target modified to include NATS
cmd/core-data/core-data:\n$(GOCGO) build -tags \"include_nats_messaging $(NON_DELAYED_START_GO_BUILD_TAG_FOR_CORE)\" $(CGOFLAGS) -o $@ ./cmd/core-data\n
Note
The C Device SDK does not currently have a NATS implementation, so C Devices can not be used with the NATS based EdgeX MessageBus.
"},{"location":"microservices/general/messagebus/#configuration-changes_1","title":"Configuration Changes","text":"Edgex 3.0
For EdgeX 3.0 MessageQueue
configuration has been renamed to MessageBus and is now in common configuration.
The MessageBus configuration is in common configuration where the following changes only need to be made once and apply to all services. See the MessageBus tab in Common Configuration for more details.
Example NATS Configurations changes for all services
The following MessageBus
configuration settings must be changed in common configuration for all EdgeX Services to use NATS Jetstream
MessageBus:\nType: \"nats-jetstream\"\nProtocol: \"tcp\" Host: \"localhost\" # in docker this must be overriden to be the docker host name of the NATS server\nPort: 4222\nAuthMode: \"none\" # Currently in secure mode the NATS server is not secured\n
Note
The optional setting that apply to NATS are already in the common configuration, so are not included above.
"},{"location":"microservices/general/messagebus/#docker_1","title":"Docker","text":"The EdgeX Compose Builder utility provides an option to easily generate a compose file with all the selected services re-configured for NATS using environment overrides. This is accomplished by using the nats-bus
option. This option configures the services to use the NATS Jetstream implementation. See Compose Builder README for details on all available options. If NATS Core is preferred, simply do a search and replace of nats-jetstream
with nats-core
in the generated compose file.
Example Secure mode compose generation for NATS
make gen ds-virtual ds-rest nats-bus\n
Non-secure mode compose generation for NATS
make gen no-secty ds-virtual ds-rest nats-bus\n
"},{"location":"microservices/general/messagebus/#snaps_1","title":"Snaps","text":"The published Snaps are built without NATS included, so the use of NATS in those Snaps is not possible. One could modify the Makefiles as described above and then build and install local snap packages. In this case it would be easier to modify each service's configuration as describe above so that the locally built and installed snaps are already configured for NATS.
"},{"location":"microservices/support/Ch-SupportingServices/","title":"Supporting Services","text":"The supporting services encompass a wide range of micro services to include edge analytics (also known as local analytics). Micro services in the supporting services layer perform normal software application duties such as scheduler, and notifications/alerting .
These services often need some amount of core services to function. In all cases, consider supporting services optional; they can be left out of an EdgeX deployment depending on use case needs and system resources.
Supporting services include the rules engine (LF Edge eKuiper), support notifications, and support scheduler, each described below.
LF Edge eKuiper is the EdgeX reference rules engine (or edge analytics) implementation.
"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#what-is-lf-edge-ekuiper","title":"What is LF Edge eKuiper?","text":"LF Edge eKuiper is a lightweight open source software (Apache 2.0 open source license agreement) package for IoT edge analytics and stream processing implemented in Go lang, which can run on various resource constrained edge devices. Users can realize fast data processing on the edge and write rules in SQL. The eKuiper rules engine is based on three components Source
, SQL
and Sink
.
The relationship among Source, SQL and Sink in eKuiper is shown below.
eKuiper runs very efficiently on resource constrained edge devices. For common IoT data processing, the throughput can reach 12k per second. Readers can refer to here to get more performance benchmark data for eKuiper.
"},{"location":"microservices/support/eKuiper/Ch-eKuiper/#ekuiper-rules-engine-of-edgex","title":"eKuiper rules engine of EdgeX","text":"An extension mechanism allows eKuiper to be customized to analyze and process data from different data sources. By default for the EdgeX configuration, eKuiper analyzes data coming from the EdgeX message bus. EdgeX provides an abstract message bus interface, and implements the Redis Pub/Sub, MQTT and NATS protocols respectively to support information exchange between different micro-services. The integration of eKuiper and EdgeX mainly includes the following:
By default, eKuiper subscribes to port 5566, on which the Application Service publishes messages. After the data from the Core Data Service is processed by the Application Service, it flows into the eKuiper rules engine for processing.
Info
The eKuiper tutorials and documentation are available in both English and Chinese.
For more information on the LF Edge eKuiper project, please refer to the following resources.
When another system or a person needs to know that something occurred in EdgeX, the alerts and notifications microservice sends that notification. Examples of alerts and notifications that other services could broadcast include the provisioning of a new device, sensor data detected outside of certain parameters (usually detected by a device service or rules engine), or system or service malfunctions (usually detected by system management services).
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#terminology","title":"Terminology","text":"Notifications are informative, whereas Alerts are typically of a more important, critical, or urgent nature, possibly requiring immediate action.
This diagram shows the high-level architecture of the notifications service. On the left side, the APIs are provided for other microservices, on-box applications, and off-box applications to use. The APIs could be in REST, AMQP, MQTT, or any standard application protocols.
This diagram is drawn by diagrams.net with the source file EdgeX_SupportingServicesAlertsArchitecture.xml
Warning
Currently in EdgeX Foundry, only the RESTful interface is provided.
On the right side, the notifications receiver could be a person or an application system on the Cloud or in a server room. By invoking the Subscription RESTful interface to subscribe to specific types of notifications, the receiver obtains the appropriate notifications through defined receiving channels when events occur. The receiving channels include SMS message, e-mail, REST callback, AMQP, MQTT, and so on.
Warning
Currently in EdgeX Foundry, e-mail and REST callback channels are provided.
When the notifications service receives notifications from any interface, the notifications are passed to the Notifications Handler internally. The Notifications Handler persists the received notifications first, and passes them to the Distribution Coordinator.
When the Distribution Coordinator receives a notification, it first queries the Subscription database to get receivers who need this notification and their receiving channel information. According to the channel information, the Distribution Coordinator passes this notification to the corresponding channel senders. Then, the channel senders send out the notifications to the subscribed receivers.
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#workflow","title":"Workflow","text":""},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#normalminor-notifications","title":"Normal/Minor Notifications","text":"When a client requests a notification to be sent with \"NORMAL\" or \"MINOR\" status, the notification is immediately sent to its receivers via the Distribution Coordinator, and the status is updated to \"PROCESSED\".
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#critical-notifications","title":"Critical Notifications","text":"Notifications with \"CRITICAL\" status are also sent immediately. When encountering any error during sending critical notification, an individual resend task is scheduled, and each transmission record persists. After exceeding the configurable limit (resend limit), the service escalates the notification and create a new notification to notify particular receivers of the escalation subscription (name = \"ESCALATION\") of the failure.
Note
All notifications are processed immediately. The resend feature is only provided for critical notifications. The resendLimit and resendInterval properties can be defined in each subscription; if the properties are not provided, the default values from the configuration properties are used.
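As an illustrative sketch only (the field names are assumptions drawn from the data dictionary below; check the Support Notifications API Reference for the exact DTO), a subscription that overrides the resend settings could be created along these lines:
curl -X POST http://localhost:59860/api/v3/subscription -H 'Content-Type: application/json' -d '[{\"apiVersion\":\"v3\",\"subscription\":{\"name\":\"ops-email\",\"receiver\":\"ops-team\",\"categories\":[\"example\"],\"channels\":[{\"type\":\"EMAIL\",\"emailAddresses\":[\"ops@example.com\"]}],\"resendLimit\":2,\"resendInterval\":\"10s\",\"adminState\":\"UNLOCKED\"}}]'\n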
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#data-model","title":"Data Model","text":"The latest developed data model will be updated in the Swagger API document.
This diagram is drawn by diagrams.net with the source file EdgeX_SupportingServicesNotificationsModel.xml
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#data-dictionary","title":"Data Dictionary","text":"SubscriptionChannelNotificationTransmissionTransmissionRecord Property Description The object used to describe the receiver and the recipient channels ID Uniquely identifies a subscription, for example a UUID Name Uniquely identifies a subscription Receiver The name of the party interested in the notification Description Human readable description explaining the subscription intent Categories Link the subscription to one or more categories of notification. Labels An array of associated means to label or tag for categorization or identification Channels An array of channel objects indicating the destination for the notification ResendLimit The retry limit for attempts to send notifications ResendInterval The interval in ISO 8691 format of resending the notification AdminState An enumeration string indicating the subscription is locked or unlocked Property Description The object used to describe the notification end point. Channel supports transmissions and notifications with fields for delivery via email or REST Type Object of ChannelType - indicates whether the channel facilitates email or REST MailAddress EmailAddress object for an array of string email addresses RESTAddress RESTAddress object for a REST API destination endpoint Property Description The object used to describe the message and sender content of a notification. ID Uniquely identifies a notification, for example a UUID Sender A string indicating the notification message sender Category A string categorizing the notification Severity An enumeration string indicating the severity of the notification - as either normal or critical Content The message sent to the receivers Description Human readable description explaining the reason for the notification or alert Status An enumeration string indicating the status of the notification as new, processed or escalated Labels Array of associated means to label or tag a notification for better search and filtering ContentType String indicating the type of content in the notification message Property Description The object used to group Notifications ID Uniquely identifies a transmission, for example a UUID Created A timestamp indicating when the notification was created NotificationId The notification id to be sent SubscriptionName The name of the subscription interested in the notification Channel A channel object indicating the destination for the notification Status An enumeration string indicating whether the transmission failed, was sent, was resending, was acknowledged, or was escalated ResendCount Number indicating the number of resent attempts Records An array of TransmissionRecords Property Description Information the status and response of a notification sent to a receiver Status An enumeration string indicating whether the transmission failed, was sent, was acknowledged, or escalated Response The response string from the receiver Sent A timestamp indicating when the notification was sent"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Support Notifications.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics TBD
Service metrics that Support Notification collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary service level tags to be included with every metric that is reported. e.g. Gateway=\"my-iot-gateway\"
Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration Port 59860 Micro service port number StartupMsg This is the Support Notifications Microservice Message logged when service completes bootstrap start-up Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration Name 'notifications' Document store or database name Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration ClientId \"support-notifications Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description Config to connect to applicable SMTP (email) service. All the properties with prefix \"smtp\" are for mail server configuration. Configure the mail server appropriately to send alerts and notifications. The correct values depend on which mail server is used. Smtp Host smtp.gmail.com SMTP service host name Smtp Port 587 SMTP service port number Smtp EnableSelfSignedCert false Indicates whether a self-signed cert can be used for secure connectivity. Smtp SecretPath smtp Specify the secret path to store the credential(username and password) for connecting the SMTP server via the /secret API, or set Writable SMTP username and password for insecure secrets Smtp Sender jdoe@gmail.com SMTP service sender/username Smtp Subject EdgeX Notification SMTP notification message subject Property Default Value Description Enabled false Enable or disable notification retention. Interval 30m Purging interval defines when the database should be rid of notifications above the MaxCap. MaxCap 5000 The maximum capacity defines where the high watermark of notifications should be detected for purging the amount of the notification to the minimum capacity. MinCap 4000 The minimum capacity defines where the total count of notifications should be returned to during purging."},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"No configuration updated
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#gmail-configuration-example","title":"Gmail Configuration Example","text":"Before using Gmail to send alerts and notifications, configure the sign-in security settings through one of the following two methods:
Then, use the following settings for the mail server properties:
Smtp Port=25\nSmtp Host=smtp.gmail.com\nSmtp Sender=${Gmail account}\nSmtp Password=${Gmail password or App password}\n
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#yahoo-mail-configuration-example","title":"Yahoo Mail Configuration Example","text":"Similar to Gmail, configure the sign-in security settings for Yahoo through one of the following two methods:
Then, use the following settings for the mail server properties:
Smtp Port=25\nSmtp Host=smtp.mail.yahoo.com\nSmtp Sender=${Yahoo account}\nSmtp Password=${Yahoo password or App password}\n
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#writable","title":"Writable","text":"The Writable.InsecureSecrets.SMTP
section has been added.
Example Writable.InsecureSecrets.SMTP section
Writable:\nInsecureSecrets:\nSMTP:\nSecretName: \"smtp\"\nSecretData:\nusername: \"username@mail.example.com\"\npassword: \"\"\n
"},{"location":"microservices/support/notifications/Ch-AlertsNotifications/#api-reference","title":"API Reference","text":"Support Notifications API Reference
"},{"location":"microservices/support/scheduler/Ch-Scheduler/","title":"Support Scheduler","text":""},{"location":"microservices/support/scheduler/Ch-Scheduler/#introduction","title":"Introduction","text":"The support scheduler microservice provide an internal EdgeX \u201cclock\u201d that can kick off operations in any EdgeX service. At a configuration specified time (called an interval), the service calls on any EdgeX service API URL via REST to trigger an operation (called an interval action). For example, the scheduler service periodically calls on core data APIs to clean up old sensed events that have been successfully exported out of EdgeX.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#default-interval-actions","title":"Default Interval Actions","text":"Scheduled interval actions configured by default with the reference implementation of the service include:
NOTE The removal of stale records occurs on a configurable schedule. By default, the default action above is invoked once a day at midnight.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#scheduler-persistence","title":"Scheduler Persistence","text":"Support scheduler uses a data store to persist the Interval(s) and IntervalAction(s). Persistence is accomplished by the Scheduler DB located in your current configured database for EdgeX.
Info Redis DB is used by default to persist all scheduler service information, including intervals and interval actions.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#iso-8601-standard","title":"ISO 8601 Standard","text":"The times and frequencies defined in the scheduler service's intervals are specified using the international date/time standard - ISO 8601. So, for example, the start of an interval would be represented in YYYYMMDD'T'HHmmss format. 20180101T000000 represents January 1, 2018 at midnight. Frequencies are represented with ISO 8601 durations.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#data-model","title":"Data Model","text":"The latest developed data model will be updated in the Swagger API document.
NOTE Only RESTAddress is supported. The MQTTAddress may be implemented in a future release.
This diagram is drawn by diagrams.net, and the source file is here.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#data-dictionary","title":"Data Dictionary","text":"IntervalsIntervalActionsIntervalActions.Address Property Description An object defining a specific \"period\" in time Id Uniquely identifies an interval, for example a UUID Created A timestamp indicating when the interval was created in the database Modified A timestamp indicating when the interval was last modified Name the name of the given interval - unique for the EdgeX instance Start The start time of the given interval in ISO 8601 format using local system timezone End The end time of the given interval in ISO 8601 format using local system timezone Interval How often the specific resource needs to be polled. It represents as a duration string. The format of this field is to be an unsigned integer followed by a unit which may be \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\" representing nanoseconds, microseconds, milliseconds, seconds, minutes or hours. Eg, \"100ms\", \"24h\" Property Description The action triggered by the service when the associated interval occurs Id Uniquely identifies an interval action, for example a UUID Created A timestamp indicating when the interval action was created in the database Modified A timestamp indicating when the interval action was last modified Name the name of the interval action Interval associated interval that defines when the action occurs AdminState interval action state - either LOCKED or UNLOCKED AuthMethod interval action authentication method - either NONE or JWT (EdgeX microservice authentication JWT) Content The actual content to be sent as the body ContentType Indicates which request contentType should be used (i.e.text/html
, application/json
), the default is application/json
Property Description An object inside IntervalActions
indicating how to contact a specific endpoint by HTTP protocol Type Currently only support REST
Host The host targeted by the action when it activates Port The port on the targeted host HttpMethod Indicates which Http verb should be used for the REST endpoint.(Only using when type is REST Path The HTTP path at the targeted host for fulfillment of the action.(Only using when type is REST) See Interval and IntervalAction for more information, please see Interval and IntervalAction endpoints.
Warning
AuthMethod: JWT
exposes a sensitive credential and should only be used for, and is required to be used for, authenticating to peer EdgeX microservices.
Scheduler interval actions to expunge old and exported (pushed) records from Core Data
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#configuration-properties","title":"Configuration Properties","text":"Please refer to the general Common Configuration documentation for configuration settings common to all services. Below are only the additional settings and sections that are specific to Support Scheduler.
Edgex 3.0
For EdgeX 3.0 the MessageQueue
configuration has been moved to MessageBus
in Common Configuration
Writable.Telemetry
at Common Configuration for the Telemetry configuration common to all services Metrics TBD
Service metrics that Support Scheduler collects. Boolean value indicates if reporting of the metric is enabled. Tags <empty>
List of arbitrary service level tags to be included with every metric that is reported. e.g. Gateway=\"my-iot-gateway\"
Property Default Value Description ScheduleIntervalTime 500 the time, in milliseconds, to trigger any applicable interval actions Property Default Value Description Unique settings for Support Scheduler. The common settings can be found at Common Configuration Port 59861 Micro service port number StartupMsg This is the Support Scheduler Microservice Message logged when service completes bootstrap start-up Property Default Value Description Unique settings for Support Scheduler. The common settings can be found at Common Configuration Name 'scheduler' Document store or database name Property Default Value Description Unique settings for Support Notifications. The common settings can be found at Common Configuration ClientId \"support-scheduler Id used when connecting to MQTT or NATS base MessageBus Property Default Value Description Default intervals for use with default interval actions Name midnight Name of the every day at midnight interval Start 20180101T000000 Indicates the start time for the midnight interval which is a midnight, Jan 1, 2018 which effectively sets the start time as of right now since this is in the past Interval 24h defines a frequency of every 24 hours Property Default Value Description Configuration of the core data clean old events operation which is to kick off every midnight Name scrub-aged-events name of the interval action Host localhost run the request on core data assumed to be on the localhost Port 59880 run the request against the default core data port Protocol http Make a RESTful request to core data Method DELETE Make a RESTful delete operation request to core data Path /api/v3/event/age/604800000000000 request core data's remove old events API with parameter of 7 days Interval midnight run the operation every midnight as specified by the configuration defined interval"},{"location":"microservices/support/scheduler/Ch-Scheduler/#v3-configuration-migration-guide","title":"V3 Configuration Migration Guide","text":"RequireMessageBus
AuthMethod
is added to IntervalActions.ScrubAged
See Common Configuration Reference for complete details on common configuration changes.
"},{"location":"microservices/support/scheduler/Ch-Scheduler/#api-reference","title":"API Reference","text":"Support Scheduler API Reference
"},{"location":"security/Ch-APIGateway/","title":"API Gateway","text":""},{"location":"security/Ch-APIGateway/#introduction","title":"Introduction","text":"EdgeX 3.0
This content is completely new for EdgeX 3.0. EdgeX 3.0 uses a brand new API gateway solution based on NGINX and Hashicorp Vault instead of Kong and Postgres. The new solution means that EdgeX 3.0 will be able to run in security enabled mode on more resource-constrained devices.
API gateways are used in microservice architectures that expose HTTP-accessible APIs to create a security layer that separates internal and external callers. An API gateway accepts client requests, authenticates the client, forwards the request to a backend microservice, and relays the results back to the client.
Although authentication is done at the microservice layer in EdgeX 3.0, EdgeX Foundry has elected to continue to use an API gateway for the following reasons:
It provides a convenient choke point and policy enforcement point for external HTTP requests and enables EdgeX adopters to easily replace the default authentication logic.
It defers the urgency of implementing fine-grained authorization at the microservice layer.
It provides defense-in-depth against microservice authentication bugs and other technical debt that might otherwise put EdgeX microservices at risk.
The API gateway listens on two ports:
8000: This is an unencrypted HTTP port exposed to localhost-only (also exposed to the edgex-network Docker network). When EdgeX is running in security-enabled mode, the EdgeX UI uses port 8000 for authenticated local microservice calls.
8443: This is a TLS 1.3 encrypted HTTP port exposed via the host's network interface to external clients. The default TLS certificate on this port is untrusted by default and should be replaced with a trusted certificate for production usage.
EdgeX 3.0 uses NGINX as the API gateway implementation and delegates to EdgeX's secret store (powered by Hashicorp Vault) for user and JWT authentication.
"},{"location":"security/Ch-APIGateway/#start-the-api-gateway","title":"Start the API Gateway","text":"The API gateway is started by default in either the snap-based EdgeX deployment or the Docker-based EdgeX deployment using the Docker Compose files found at https://github.com/edgexfoundry/edgex-compose/.
In Docker, the command to start EdgeX inclusive of API gateway related services is (where \"somerelease\" denotes the EdgeX release, such as \"jakarta\" or \"minnesota\"):
git clone -b somerelease https://github.com/edgexfoundry/edgex-compose\ncd edgex-compose\nmake run\n
or
git clone -b somerelease https://github.com/edgexfoundry/edgex-compose\ncd edgex-compose\nmake run arm64\n
The API gateway is not started if EdgeX is started with security features disabled by appending no-secty
to the previous make
commands. This disables all EdgeX security features, not just the API gateway.
The API gateway will generate a default self-signed TLS certificate that is used for external communication. Since this certificate is not trusted by client software, it is commonplace to replace this auto-generated certificate with one generated from a known certificate authority, such as an enterprise PKI, or a commercial certificate authority.
The process for obtaining a certificate is out-of-scope for this document. For purposes of the example, the X.509 PEM-encoded certificate is assumed to be called cert.pem
and the unencrypted PEM-encoded private key is called key.pem
. Do not use an encrypted private key as the API gateway will hang on startup in order to prompt for a password.
Run the following command to install a custom certificate using the assumptions above:
docker compose -p edgex -f docker-compose.yml run --rm -v `pwd`:/host:ro --entrypoint /edgex/secrets-config proxy-setup proxy tls --inCert /host/cert.pem --inKey /host/key.pem\n
The following command can verify the certificate installation was successful.
echo \"GET /\" | openssl s_client -showcerts -servername edge001.example.com -connect 127.0.0.1:8443\n
(where edge001.example.com
is the hostname by which the client is externally reachable)
The TLS certificate installed in the previous step should be among the output of the openssl
command.
A standard set of routes are configured statically via the security-proxy-setup
microservice. Additional routes can be added via the EDGEX_ADD_PROXY_ROUTE
environment variable. Here is an example:
security-proxy-setup:\n...\nenvironment:\n...\nEDGEX_ADD_PROXY_ROUTE: \"app-myservice.http://edgex-app-myservice:56789\"\n...\n\n...\n\napp-myservice:\n...\ncontainer_name: app-myservice-container\nhostname: edgex-app-myservice\n...\n
The value of EDGEX_ADD_PROXY_ROUTE
takes a comma-separated list of one or more paired additional prefix and URL for which to create proxy routes. The paired specification is given as the following:
<RoutePrefix>.<TargetRouteURL>\n
where RoutePrefix is the base path that will be created off of the root of the API gateway to route traffic to the target. This should typically be the service key that the app uses to register with the EdgeX secret store and configuration provider, as the name of the service in the docker-compose file has security implications when using delayed-start services.
TargetRouteURL is the fully qualified URL for the target service, like http://edgex-app-myservice:56789
as it is known on the network on which the API gateway is running. For Docker, the hostname should match the hostname specified in the docker-compose.yml
file.
For example, using the above docker-compose.yml
:
EDGEX_ADD_PROXY_ROUTE: \"app-myservice.http://edgex-app-myservice:56789\"\n
When a request to the API gateway is received, such as GET https://localhost:8443/app-myservice/api/v3/ping
, the API gateway will reissue the request as GET http://edgex-app-myservice:56789/api/v3/ping
. Note that the route prefix is stripped from the re-issued request.
If the EdgeX API gateway is not in use, a client can access and use any REST API provided by the EdgeX microservices by sending an HTTP request to the service endpoint. E.g., a client can consume the ping endpoint of the Core Data microservice with curl command like this:
curl http://<core-data-microservice-ip>:59880/api/v3/ping\n
Where <core-data-microservice-ip>
is the Docker IP address of the container running the core-data microservice (if using Docker), or additionally localhost
in the default configuration for snaps and Docker. This means that in the default configuration, EdgeX microservices are only accessible to local host processes.
The API gateway serves as a single external endpoint for all the REST APIs. The curl command to ping the endpoint of the same Core Data service, as shown above, needs to change to:
curl https://<api-gateway-host>:8443/core-data/api/v3/ping\n
Comparing these two curl commands you may notice several differences.
http
is switched to https
as we enable the SSL/TLS for secure communication. This applies to any client side request. (If the certificate is not trusted, the -k
option to curl
may also be required.)/core-data/
path in the URL is used to identify which EdgeX micro service the request is routed to. As each EdgeX micro service has a dedicated service port open that accepts incoming requests, there is a mapping table kept by the API gateway that maps paths to micro service ports. A partial listing of the map between ports and URL paths is shown in the table below.Note that any such request issued will be met with an
401 Not Authorized\n
response due to the lack of an authentication token on the request. Authentication will be explained later.
The EdgeX documentation maintains an up-to-date list of default service ports.
Microservice Host Name Port number Partial URL edgex-core-data 59880 core-data edgex-core-metadata 59881 core-metadata edgex-core-command 59882 core-command edgex-support-notifications 59860 support-notifications edgex-support-scheduler 59861 support-scheduler edgex-kuiper 59720 rules-engine device-virtual 59900 device-virtual"},{"location":"security/Ch-APIGateway/#creating-access-token-for-api-gateway-authentication","title":"Creating Access Token for API Gateway Authentication","text":"Authentication is more fully explained in the authentication chapter.
The authentication chapter goes into detail on:
The TL;DR version to get an API gateway token, for development and test purposes, is
make get-token\n
(in the edgex-compose repository, if using Docker).
The get-token
target will return a JWT in the form
eyJ.... \".\" base64chars \".\" base64chars\n
As a bearer token, it has a limited lifetime for security reasons. The get-token
process should be repeated to obtain fresh tokens periodically. In the long form process described in the authentication chapter, this means re-authenticating to the EdgeX secret store and requesting a fresh JWT.
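For instance, assuming make get-token prints only the JWT on stdout, the token can be captured into a shell variable and used directly (a sketch, not the only way to do this):
id_token=$(make get-token)\ncurl -k -H \"Authorization: Bearer ${id_token}\" \"https://localhost:8443/core-data/api/v3/ping\"\n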
EdgeX versions prior to 3.0 used to support registering a public key with the API gateway, and allowing clients to self-generate their JWT for API gateway authentication. Regrettably, this \"raw key JWT\" authentication method is no longer supported. As consolation, the EdgeX secret store backend, Hashicorp Vault, supports many other authentication backends. EdgeX only enables the userpass
auth engine by default, and only passes the userpass
auth endpoints through the API gateway by default. Customizing an EdgeX implementation to use alternative authentication methods is left as an exercise for the adopter.
Once the resource mapping and access token to API gateway are in place, a client can use the access token to use the protected EdgeX REST API resources behind the API gateway. Again, without the API Gateway in place, here is the sample request to hit the ping endpoint of the EdgeX Core Data microservice using curl:
curl http://<core-data-microservice-ip>:59880/api/v3/ping\n
With the security service and JWT authentication enabled, the command changes to:
curl -k -H 'Authorization: Bearer <JWT>' https://myhostname:8443/core-data/api/v3/ping\n
In summary, the differences between the two commands are listed below:
-k
tells curl to ignore certificate errors. This is for demonstration purposes. In production, a known certificate that the client trusts should be installed on the proxy and this parameter omitted.
to pass the authentication token as part of the request.EdgeX 3.0
Microservice-level authentication is new for EdgeX 3.0.
"},{"location":"security/Ch-Authenticating/#introduction","title":"Introduction","text":"Starting in EdgeX 3.0, when EdgeX is run in secure mode, EdgeX microservices require an authentication token before they will respond to requests issued over the REST API. (These changes are detailed in the EdgeX microservice authentication ADR and were introduced to mitigate against certain threats that originate from behind the API gateway or have somehow bypassed the API gateway.)
Prior to EdgeX 3.0, requests that originated remotely were authenticated at the API gateway via an HTTP Authorization
header that contained a JWT bearer token. Internally-originated requests required no authentication. In EdgeX 3.0, the Authorization
header is additionally checked at the microservice level on a per-route basis, where the majority of URL paths require authentication.
In order to make an authenticated EdgeX service call to a REST API, an appropriate authentication token must be present on the HTTP Authorization
header. To be recognized as valid, these tokens must be issued by EdgeX's secret store.
Built-in EdgeX services already have a token that allows them access to the EdgeX secret store. The Configuring Add-on Services chapter contains details on what is required to enroll a new microservice into EdgeX, for the purpose of obtaining a secret store token. The secret store token is used to obtain a JWT that is used for authenticating EdgeX REST API calls. The service's secret store token is not used directly, as this would enable the receiver to access the senders private slice of the secret store. Instead, the identity of the caller is attested using a JWT authenticator.
Non-services such as interactive users and script clients are also required to obtain a secret store token and exchange it for a JWT authenticator for REST API calls.
There are several possible authentication scenarios:
Authentication for non-service clients (includes EdgeX UI)
Local service-to-service clients using EdgeX service clients
Local service-to-service clients using the SecretProvider interface
The service-to-service scenario using the API gateway is not currently supported. The built-in service clients are not reverse-proxy-aware, and the lack of service prefixes in generated URLs will result in the API gateway blocking requests.
"},{"location":"security/Ch-Authenticating/#authentication-for-non-service-clients","title":"Authentication for Non-service Clients","text":"Non-service clients include interactive users using the EdgeX UI, clients using hand-crafted REST API requests, or other API usages where the caller of an EdgeX microservice is not also an EdgeX microservice.
Authentication consists of three steps:
When running EdgeX in Docker using the edgex-compose
repository, steps 1, 2, and 3 above have been automated by the following command:
make get-token\n
This method should only be used for development and testing: the username is fixed by the script, and the password is reset every time the script is run.
The example will be done in the Docker environment. For snaps, refer here.
The long form of make get-token
is below:
Internally, a user identity is a paring of a Vault identity and an associated userpass
login method bound to that identity. Vault supports many other authentication backends besides userpass
, making it possible to federate with enterprise single sign-on, for example, but userpass
is the only authentication method enabled by default.
The provided secrets-config
tool includes two sub-functions, adduser
and deluser
, for creating user identities.
Let use first set a shell variable to hold a username:
username=exampleuser\n
Optional: Delete existing user
docker exec -ti edgex-security-proxy-setup ./secrets-config proxy deluser --user \"${username}\" --useRootToken\n
Create new user identity, capture the password. In this example, the Vault token has a 60 second time-to-live (TTL), and any JWTs that we create will have a 119 minute TTL. This is set at the time of account creation.
password=$(docker exec -ti edgex-security-proxy-setup ./secrets-config proxy adduser --user \"${username}\" --tokenTTL 60 --jwtTTL 119m --useRootToken | jq -r '.password')\n
The username and password created above should be saved for future use; they will be required in the future to obtain fresh JWT's.
"},{"location":"security/Ch-Authenticating/#2-obtaining-a-temporary-secret-store-token","title":"2. Obtaining a Temporary Secret Store Token","text":"Authenticate to the EdgeX secret store using the username and password generated above to obtain a temporary secret store token. This token must be exchanged for a JWT within the tokenTTL
liveness period.
vault_token=$(curl -ks \"http://localhost:8200/v1/auth/userpass/login/${username}\" -d \"{\\\"password\\\":\\\"${password}\\\"}\" | jq -r '.auth.client_token')\n
This temporary token can be discarded after the next step.
In the microservice-to-microservice authentication scenario, secret store tokens are periodically renewed and used to request further JWTs and access the service's secret store. Tokens associated with user identities, however, only be used to obtain a JWT.
"},{"location":"security/Ch-Authenticating/#3-obtaining-a-jwt-authentication-token","title":"3. Obtaining a JWT authentication token","text":"The token created in the previous step is passed as an authenticator to Vault's identity secrets engine. The output is a JWT that expires after jwtTTL
(see above) has passed.
id_token=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/token/${username}\" | jq -r '.data.token')\n\necho \"${id_token}\"\n
Optionally, if the secret store token (vault_token) isn't expired yet, it can be used to check the validity of an arbitrary JWT. This example checks the validity of the JWT that was issued above. Any JWT that passes this check should suffice for making an authenticated EdgeX microservice call.
introspect_result=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/introspect\" -d \"{\\\"token\\\":\\\"${id_token}\\\"}\" | jq -r '.active')\necho \"${introspect_result}\"\n
"},{"location":"security/Ch-Authenticating/#4-using-the-jwt-to-call-an-edgex-api-or-edgex-ui","title":"4. Using the JWT to Call an EdgeX API or EdgeX UI","text":""},{"location":"security/Ch-Authenticating/#calls-via-edgex-ui","title":"Calls via EdgeX UI","text":"EdgeX UI users should supply the id_token
to the prompt issued by the EdgeX UI. When the token eventually expires, obtain another token using the above process.
To call an EdgeX service directly from host context using a command-line interface, go directly to the service's localhost-mapped port, and pass the JWT as an HTTP Authorization
header:
curl -H\"Authorization: Bearer ${id_token}\" \"http://localhost:59xxx/api/v3/version\"\n
"},{"location":"security/Ch-Authenticating/#remote-calls-to-services-via-api-gateway","title":"Remote Calls to Services via API Gateway","text":"Calling an EdgeX service from a remote machine using the EdgeX API gateway looks similar to the above, with a few minor changes:
The docker network architecture is illustrated below:
In the example below, ca.crt
is the CA certificate that is used to verify the TLS certificate presented by the API gateway, and SERVICENAME
is the name of the EdgeX service that is being proxied by the API gateway, such as core-data
:
curl --cacert ca.crt -H\"Authorization: Bearer ${id_token}\" \"https://`hostname --fqdn`:8443/SERVICENAME/api/v3/version\"\n
This is identical to what was done in EdgeX versions prior to 3.0. The only thing that has changed is the method use to obtain the JWT.
"},{"location":"security/Ch-Authenticating/#local-service-to-service-using-edgex-service-clients","title":"Local Service-to-Service - Using EdgeX Service Clients","text":"The preferred method of making an authenticated call to an EdgeX microservice is to use the service proxies configured by go-mod-bootstrap.
Clients are retrieved from the dependency injection container using the helper functions in clients.go in go-mod-bootstrap. For example:
import \"github.com/edgexfoundry/go-mod-bootstrap/bootstrap/container\"\n\n// ... \n\ncommandClient := container.CommandClientFrom(dic.Get)\n
EdgeX methods invoked via the service proxies automatically authenticate to peer EdgeX microservices with no additional work needed on the part of the developer.
If EdgeX is run in non-secure mode, the built-in service clients that are configured in go-mod-bootstrap gracefully degrade to non-authenticating clients.
"},{"location":"security/Ch-Authenticating/#local-service-to-service-using-the-secretprovider-interface","title":"Local Service-to-Service - Using the SecretProvider interface","text":"In the example where two user-provided services directly invoke one-another, there will be no service client available. In this case, it is necessary to use go-mod-bootstrap's SecretProvider
interface to obtain a JWT.
See the following pseudo-code to add an Authorization
header to an outgoing HTTP request, req.
import (\nbootstrapContainer \"github.com/edgexfoundry/go-mod-bootstrap/v3/bootstrap/container\"\nclientInterfaces \"github.com/edgexfoundry/go-mod-core-contracts/v3/clients/interfaces\"\n\"github.com/edgexfoundry/go-mod-bootstrap/v3/bootstrap/secret\"\n)\n\n\n// Get the SecretProvider from bootstrap's DI container.\n// Internally, this is a wrapper for go-mod-secret's GetSelfJWT()\nsecretProvider := bootstrapContainer.SecretProviderFrom(dic.Get)\n\n// get an instance of the AuthenticationInjector helper\nvar jwtSecretProvider clientInterfaces.AuthenticationInjector\njwtSecretProvider = secret.NewJWTSecretProvider(m.secretProvider)\n\n// Call the AddAuthenticationData helper method\n// internally, this calls GetSelfJWT() on the SecretProvider\n// to obtain a JWT and adds an Authorization header to the HTTP request\nerr := jwtSecretProvider.AddAuthenticationData(req);\n
"},{"location":"security/Ch-Authenticating/#implementation-notes","title":"Implementation Notes","text":"Internally, the receiving microservice will call the secret store's token introspection endpoint to validate incoming JWT's. Note that as in all things dealing with the EdgeX secret store, calling the introspection endpoint is also an authenticated call, and a service must have explicit authorization to invoke this API.
Similarly, explicit authorization is required for a calling microservice to obtain a JWT to pass as an authentication token. In the EdgeX implementation, microservices use the userpass login authentication method to obtain an initial secret store token. This token is explicitly granted the ability to generate a JWT.
In the external user scenario of the API gateway, clients must manually log in to the secret store, and exchange the resulting token for JWT. In the internal usage scenario, EdgeX microservices are typically pre-seeded with a valid JWT, and obtain a fresh JWT for each outbound microservice call.
There are obvious opportunities for caching to reduce round trips to the EdgeX secret store, but none have been implemented at this time.
"},{"location":"security/Ch-CORS-Settings/","title":"CORS settings","text":"The EdgeX microservices provide REST APIs and those services might be called from a GUI through a browser. Browsers prevent service calls from a different origin, making it impossible to host a management GUI on one domain that manages an EdgeX device on a different domain. Thus, EdgeX supports Cross-Origin Resource Sharing (CORS) since Jakarta release (v2.1), and this feature can be controlled by the configurations. The default behavior of CORS is disabled. Here is a good reference to understand CORS.
Note
C Device SDK doesn't support CORS, and enabling CORS in Device Services is not recommended because browsers should not access Device Services directly.
"},{"location":"security/Ch-CORS-Settings/#enabling-cors","title":"Enabling CORS","text":"There are two different ways to enable CORS depending on whether EdgeX is deployed in the security-enabled configuration. In the non-security configuration, EdgeX microservices are directly exposed on host ports. EdgeX microservices receive client requests directly in this configuration, and thus, the EdgeX microservices themselves must respond to CORS requests. In the security-enabled configuration, EdgeX microservices are exposed behind an API gateway that will receive CORS requests first. Only authenticated calls will be forwarded to the EdgeX microservice, but CORS pre-flight requests are always unauthenticated.
CORS can be enabled at the API gateway in a security-enabled configuration, and at the individual microservice level in the non-security configuration. However, implementers should choose one or the other, not both.
"},{"location":"security/Ch-CORS-Settings/#enabling-cors-for-microservices","title":"Enabling CORS for Microservices","text":"There are two different options to enable CORS.
core-common-config-bootstrapper
service section on docker-compose.file. They can be set via SERVICE_CORSCONFIGURATION_*
environment variables. Please refer to the following example:Example - Set EnableCORS
to true
by environment variables override
core-common-config-bootstrapper:\nenvironment: SERVICE_CORSCONFIGURATION_ENABLECORS: \"true\"\n
Service.CORSConfiguration.EnableCORS
via Consul for the targeted service and restart the service.Service.CORSConfiguration.EnableCORS
to each services private configuration file.Please refer to the Common Configuration page to learn the details.
"},{"location":"security/Ch-CORS-Settings/#enabling-cors-for-api-gateway","title":"Enabling CORS for API Gateway","text":"The default CORS settings for the API gateway come from the following section in cat cmd/core-common-config-bootstrapper/res/configuration.yaml
in the edgex-go
repository
all-services:\n Service:\n CORSConfiguration:\n EnableCORS: false\n CORSAllowCredentials: false\n CORSAllowedOrigin: \"https://localhost\"\n CORSAllowedMethods: \"GET, POST, PUT, PATCH, DELETE\"\n CORSAllowedHeaders: \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\"\n CORSExposeHeaders: \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\"\n CORSMaxAge: 3600\n
In the Docker configuration if the EDGEX_SERVICE_CORSCONFIGURATION_*
environment variables are set on the security-proxy-setup
microservice, the CORS configuration will be applied to all microservices (EDGEX_SERVICE_CORSCONFIGURATION_ENABLECORS=true
). There is not a way, when using the API gateway, to turn CORS on for one microservice but not another without writing a custom security-proxy-setup
microservice.
Note
The settings under the CORSConfiguration configuration section are the same as those under the Service.CORSConfiguration so please refer to the Common Configuration page to learn the details. Note that these overrides are prefixed with EDGEX_
.
Note
The name of the configuration sections and environment variable overrides are intentionally different than the API gateway section, in alignment with the guidance that CORS should be enabled at the microservice level or the API gateway level, but not both. Thus, the security-enabled overrides are accomplished with EDGEX_SERVICE_CORSCONFIGURATION_*
overrides, and the no-security overrides with SERVICE_CORSCONFIGURATION_*
.
To enable CORS support in the API gateway in the EdgeX Snap, a slightly different procedure is required.
First, we need to override the EDGEX_SERVICE_CORSCONFIGURATION_*
environment variables like was done in Docker. However, we need to override this in the security-bootstrapper-nginx
service. This service runs before nginx.service
to write the NGINX configuration file. If started prior to this configuration, restart the security-bootstrapper-nginx
service to generate a new configuration, and also restart nginx
to put the new configuration into effect. Otherwise, start the services as usual. Lastly, we send a sample CORS preflight request at the API gateway to make sure everything is working.
Note
Setting CORSAllowedOrigin=\"*\"
is not a security best practice for an authenticated API; rather, it should be set to the domain that is hosting your user interface. The example provided is for illustrative purposes only.
Example, assuming the services are running:
$ sudo snap set edgexfoundry apps.security-bootstrapper-nginx.config.edgex-service-corsconfiguration-corsallowedorigin=\"*\"\n$ sudo snap set edgexfoundry apps.security-bootstrapper-nginx.config.edgex-service-corsconfiguration-enablecors=true\n$ sudo snap restart edgexfoundry.security-bootstrapper-nginx\n$ sudo snap restart edgexfoundry.nginx\n$ curl -ki -X OPTIONS -H\"Origin: http://localhost\" \"https://localhost:8443/core-data/api/v2/ping\"\nHTTP/1.1 204 No Content\nServer: nginx\nDate: Wed, 23 Aug 2023 03:08:18 GMT\nConnection: keep-alive\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET, POST, PUT, PATCH, DELETE\nAccess-Control-Allow-Headers: Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\nAccess-Control-Max-Age: 3600\nVary: origin\nContent-Type: text/plain; charset=utf-8\nContent-Length: 0\n
"},{"location":"security/Ch-Configuring-Add-On-Services/","title":"Configuring Add-on Service","text":"In the current EdgeX security serivces, we set up and configure all security related properties and environments for the existing default serivces like core-data
, core-metadata
, device-virtual
, and so on.
The settings and service environment variables are pre-wired and ready to run in secure mode without any update or modification to the Docker-compose files. However, there are some pre-built add-on services like some device services (e.g.device-camera
, device-modbus
), and some of application services (e.g. app-http-export
, app-mqtt-export
) are not pre-wired for by default. Also if you are adding on your custom application service, there is no pre-wiring for it and thus need some configuration efforts to make them run in secure mode.
EdgeX provides a way for a user to add and configure those add-on services into EdgeX Docker software stack running in secure mode. This can be done vai Docker-compose files with a few additional environment variables and some modification of micro-service's Dockerfile. From edgex-compose
repository, the compose-builder
utility provides some ways to deal with those add-on services like through add-security.yml
via make
targets to generate docker-compose
file for running them in secure mode. For more details, please refer to README documentation of compose-builder.
The above same guidelines can also be applied to custom device and application services, i.e. non-EdgeX built services.
One of the major security features in EdgeX Ireland release is to utilize the service security-bootstrapper
to ensure the right starting sequence so that all services have their needed security dependencies when they start up.
Currently EdgeX uses Vault
as the default implementation for secret store and Consul as the configuration and/or registry server if user chooses to do so. There are some default services pre-configured to have Secret Stores
created by default such as EdgeX core/support services, device-virtual, device-rest, and app-rules-engine services.
For running additional add-on services (e.g. device-camera
, app-http-export
) in secure mode, their Secret Stores
are not generated by default but they can be generated through some configuring steps as shown below.
In the following scenario, we assume the EdgeX services are running in Docker environments, and thus the examples are given in terms of Docker-compose ways. It should not be much or bigger difference for snap
running environment to apply the same steps or concepts if found to do so.
If users want to configure and set up an add-on service, e.g. device-camera
, they can achieve this by following the steps that are outlined below:
To use the Docker entrypoint scripts for gating mechanism from security-bootstrapper
, the Dockerfile of device-camera
should inherit shell scripting capability like alpine
-based as the base Docker image and should install dumb-init
(see details in Why you need an init system) via apk add --update
command.
Dockerfile example using alpine-base image and add dumb-init
:
......\nFROM alpine:3.12\n\n# dumb-init needed for injected secure bootstrapping entrypoint script when run in secure mode.\nRUN apk add --update --no-cache dumb-init\n......\n
and then for the service itself should add /edgex-init/ready_to_run_wait_install.sh
as the entrypoint script for the service in gating fashion and add related Docker volumes for edgex-init
and for Secret Store
token, which will be outlined in the next section.
A good example of this will be like app-service-rules
:
...\napp-service-rules:\nentrypoint: [\"/edgex-init/ready_to_run_wait_install.sh\"]\ncommand: \"/app-service-configurable ${DEFAULT_EDGEX_RUN_CMD_PARMS}\"\nvolumes:\n- edgex-init:/edgex-init:ro,z\n- /tmp/edgex/secrets/app-rules-engine:/tmp/edgex/secrets/app-rules-engine:ro,z\ndepends_on:\n- security-bootstrapper\n...\n
Note that we also add command
directive override in the above example because we override Docker's entrypoint script in the original Dockerfile and Docker ignores the original command when the entrypoint script is overridden. In this case, we also override the command
for app-service-rules
service with arguments to execute.
Secret Store
to use","text":"Edgex 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
Note that the service key , i.e.device-onvif-camera
, must be used for the Path
and in the TokenFile
path to keep it consistent and easier to maintain. These are now part of the built in default values for the SecretStore configuration. Then the add-on service's service key must be added to the EdgeX service secretstore-setup
'sEDGEX_ADD_SECRETSTORE_TOKENS
environment variable in the environment
section of docker-compose
as the example shown below:
...\nsecretstore-setup:\ncontainer_name: edgex-secretstore-setup\ndepends_on:\n- security-bootstrapper\n- vault\nenvironment:\nEDGEX_ADD_SECRETSTORE_TOKENS: 'device-onvif-camera'\n...\n
With that, secretstore-setup
then will generate Secret Store
token from Vault
and store it in the TokenFile
path specified in the SecretStore configuration.
Also note that the value of EDGEX_ADD_SECRETSTORE_TOKENS
can take more than one service in a form of comma separated list like \"device-camera
, device-modbus
\" if needed.
The EDGEX_ADD_KNOWN_SECRETS
environment variable on secretstore-setup
allows for known secrets to be added to an add-on service's Secret Store
.
For the Ireland release, the only known
secret is the Redis DB credentials
identified by the name redisdb
. Any add-on service needing access to the Redis DB
such as App Service HTTP Export with Store and Forward enabled will need the Redis DB credentials
put in its Secret Store
. Also, since the Redis DB
service is now used for the MessageBus implementation, all services that connect to the MessageBus also need the Redis DB credentials
Note that the steps needed for connecting add-on services to the Secure MessageBus
are:
security-bootstrapper
to ensure proper startup sequenceSecret Store
for the add-on serviceredisdb
's known secret to the add-on service's Secret Store
and if the add-on service is not connecting to the bus or the Redis database, then this step can be skipped.
So given an example for service device-virtual
to use the Redis
message bus in secure mode, we need to tell secretstore-setup
to add the redisdb
known secret to Secret Store
for device-virtual
. This can be done through the configuration of adding redisdb[device-virtual]
into the environment variable EDGEX_ADD_KNOWN_SECRETS
in secretstore-setup
service's environment section, in which redisdb
is the name of the known secret
and device-virtual
is the service key of the add-on service.
...\nsecretstore-setup:\ncontainer_name: edgex-secretstore-setup\ndepends_on:\n- security-bootstrapper\n- vault\nenvironment:\nEDGEX_ADD_SECRETSTORE_TOKENS: 'device-onvif-camera, my-service'\nEDGEX_ADD_KNOWN_SECRETS: redisdb[app-rules-engine],redisdb[device-rest],redisdb[device-virtual]\n...\n
In the above docker-compose
section of secretstore-setup
, we specify the known secret of redisdb
to add/copy the Redis database credentials to the Secret Store
for the app-rules-engine
, device-rest
, and device-virtual
services.
We can also use an alternative, simpler form of the EDGEX_ADD_KNOWN_SECRETS
environment variable's value:
EDGEX_ADD_KNOWN_SECRETS: redisdb[app-rules-engine; device-rest; device-virtual]\n
in which all add-on services are grouped, separated by semicolons, into a single entry associated with the known secret redisdb
.
This is a new step introduced by the Consul
security features added as part of the EdgeX Ireland release.
If the add-on service uses Consul
as the configuration and/or registry service, then we also need to configure the environment variable EDGEX_ADD_REGISTRY_ACL_ROLES
to tell security-bootstrapper
to generate an ACL role for Consul
to associate with its token.
An example of configuring ACL roles of the registry Consul
for the add-on services device-modbus
and app-http-export
is shown as follows:
...\nconsul:\ncontainer_name: edgex-core-consul\ndepends_on:\n- security-bootstrapper\n- vault\nentrypoint:\n- /edgex-init/consul_wait_install.sh\nenvironment:\nEDGEX_ADD_REGISTRY_ACL_ROLES: app-http-export,device-modbus\n...\n
Setting the EdgeX consul
service's environment variable EDGEX_ADD_REGISTRY_ACL_ROLES
tells the security-bootstrapper
to set up a Consul
ACL role so that an ACL token is generated, granting that service access to Consul
in secure mode.
Without this step, an add-on service that depends on Consul will get a Forbidden
(HTTP status code 403) error when attempting to access Consul for configuration or the service registry.
If it is desirable to let users or other application services outside EdgeX's Docker network access the endpoint of an add-on service, then we can configure and add it via the proxy-setup
service's EDGEX_ADD_PROXY_ROUTE
environment variable. proxy-setup
adds the services listed in that environment variable to the API gateway routes so that their endpoints are accessible via the gateway.
One example of adding API gateway access routes for both device-camera
and device-modbus
is given as follows:
...\nedgex-proxy:\n...\nenvironment:\n...\nEDGEX_ADD_PROXY_ROUTE: \"device-camera.http://edgex-device-onvif-camera:59984, device-modbus.http://edgex-device-modbus:59901\"\n...\n...\n
where, in the comma-separated list, the first part of each configured value, e.g. device-onvif-camera
, is the service key, and the URL is the service's hostname with its Docker network port number, e.g. 59984
for device-camera
. The same pattern applies to device-modbus
with its values.
With that setup, we can then access the endpoints of device-camera
from outside the Docker network at a URL like https://<HostName>:8443/device-onvif-camera/{device-name}/name
, assuming the caller can resolve <HostName>
via DNS.
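For illustration only (this snippet is not part of the EdgeX documentation), a call through the API gateway to such an added route might look like the following Go sketch. It assumes a gateway access token has already been obtained and that the gateway's TLS certificate is self-signed; the host, route, and token values are placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder values - substitute the real gateway host, route, and access token.
	url := "https://edgex-gateway-host:8443/device-onvif-camera/api/v3/config"
	token := "<gateway-access-token>"

	client := &http.Client{
		Transport: &http.Transport{
			// The example gateway certificate is assumed to be self-signed;
			// supply the proper CA certificate instead in a real deployment.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}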
For more details on the introduction to the API gateway and how it works, please see APIGateway documentation page.
"},{"location":"security/Ch-DelayedStartServices/","title":"Delayed-Start Services","text":"In some use cases, it is not possible to deliver a secret store token to an EdgeX microservice at the time the framework is started. This may be because a service is optional, because it is transient (doesn't run all the time), or because it may be difficult to deliver the token generated by security-secretstore-setup.
To accommodate this use case, EdgeX microservices have an ability to obtain their secretstore tokens via SPIFFE workload attestation. Non-core EdgeX microservices have SPIFFE support compiled into their binaries by default, and core services are compiled with a non_delayedstart
build flag which removes this functionality for space reasons. Note that delayed start can be compiled into the core services as well, if desired, via a Makefile
change.
The article Remote Devices in Secure Mode describes how to use the delayed-start feature in a remote device service scenario. A workload attestation agent must be running on every node in order to use delayed start services.
"},{"location":"security/Ch-DelayedStartServices/#how-to-enable-docker","title":"How to Enable (Docker)","text":""},{"location":"security/Ch-DelayedStartServices/#enable-custom-application-or-device-services-optional","title":"Enable Custom Application or Device Services (Optional)","text":"If using EdgeX with custom Application or Device services in Secure mode, first generate a docker-compose.yml file by running the following command from edgex-compose/compose-builder
$ make gen delayed-start\n
Open the generated docker-compose.yml file and set the EDGEX_SPIFFE_CUSTOM_SERVICES
environment variable. To set multiple custom services, separate them with whitespace.
security-spire-config:\n...\nenvironment:\n...\nEDGEX_SPIFFE_CUSTOM_SERVICES: '<custom-service> <custom-service-2>'\n
Run the modified Docker Compose file
$ docker compose -p edgex up -d\n
Refer to the configuration steps below to finish setting up any custom/non-core services.
"},{"location":"security/Ch-DelayedStartServices/#running-in-delayed-start-mode","title":"Running in Delayed Start Mode","text":"Using the Docker run scripts, start the framework with the delayed-start
option:
$ make run delayed-start\n
This will cause the following microservices to be started:
security-spiffe-token-provider
security-spire-agent
security-spire-config
security-spire-server
Next, pass the following environment variables to any non-core EdgeX microservice that has SPIFFE/SPIRE support compiled-in:
SECRETSTORE_RUNTIMETOKENPROVIDER_ENABLED: \"true\"\nSECRETSTORE_RUNTIMETOKENPROVIDER_HOST: edgex-security-spiffe-token-provider\n
If the configuration is successfully applied, the following log messages should appear in the output (device-virtual
service shown):
level=INFO ts=2023-04-04T01:10:04.805777526Z app=device-virtual source=secret.go:196 msg=\"runtime token provider enabled\"\nlevel=INFO ts=2023-04-04T01:10:04.805811012Z app=device-virtual source=methods.go:138 msg=\"using Unix Domain Socket at unix:///tmp/edgex/secrets/spiffe/public/api.sock\"\nlevel=INFO ts=2023-04-04T01:10:04.860221916Z app=device-virtual source=methods.go:150 msg=\"workload got X509 source\"\nlevel=INFO ts=2023-04-04T01:10:04.999743052Z app=device-virtual source=methods.go:120 msg=\"successfully got token from spiffe-token-provider!\"\nlevel=INFO ts=2023-04-04T01:10:04.999984978Z app=device-virtual source=secret.go:93 msg=\"Attempting to create secret client\"\nlevel=INFO ts=2023-04-04T01:10:05.001185555Z app=device-virtual source=secret.go:104 msg=\"Created SecretClient\"\nlevel=INFO ts=2023-04-04T01:10:05.001261424Z app=device-virtual source=secrets.go:277 msg=\"kick off token renewal with interval: 30m0s\"\n
These messages indicate that the workload has been successfully attested, a SPIFFE SVID obtained, and that SVID has been exchanged with the edgex-security-spiffe-token-provider
service for an EdgeX secret store token.
Workload attestation failures are indicated by a hang in the service's log messages:
level=INFO ts=2023-04-04T01:10:04.805777526Z app=device-virtual source=secret.go:196 msg=\"runtime token provider enabled\"\nlevel=INFO ts=2023-04-04T01:10:04.805811012Z app=device-virtual source=methods.go:138 msg=\"using Unix Domain Socket at unix:///tmp/edgex/secrets/spiffe/public/api.sock\"\n
Workload attestation failures can be confirmed by examining edgex-security-spire-agent
logs:
$ docker logs edgex-security-spire-agent\ntime=\"2023-04-04T21:51:58Z\" level=error msg=\"No identity issued\" method=FetchX509SVID pid=87411 registered=false service=WorkloadAPI subsystem_name=endpoints\n
This message is preceded by a set of key-value pairs collected by the agent to identify the workload:
type:\"docker\" value:\"label:com.docker.compose.image:sha256:9ddd29b3453149a799a0ec3549537fa3f59f8ee85eb0e4e5c54febf1b74f0fc4\"\ntype:\"docker\" value:\"label:com.docker.compose.service:app-http-export\"\ntype:\"unix\" value:\"path:/app-service-configurable\"\ntype:\"unix\" value:\"sha256:2c72b9f4a871ff98ba410c292ee97206df8ee584002b34a4d08b6355e686c3d2\"\n
The agent communicates with the server/controller to authorize the workload. The server/controller consults an authorization database that is seeded with a script: https://github.com/edgexfoundry/edgex-go/blob/main/cmd/security-spire-config/seed_builtin_entries.sh
This authorization database can be dumped with the following command:
$ docker exec -ti edgex-security-spire-server spire-server entry show -socketPath /tmp/edgex/secrets/spiffe/private/api.sock\n\nFound ### entries\n...\nEntry ID : 2034b8d2-fa29-48bc-bce1-4e30ea0b66c2\nSPIFFE ID : spiffe://edgexfoundry.org/service/device-virtual\nParent ID : spiffe://edgexfoundry.org/spire/agent/x509pop/cn/agent0\nRevision : 0\nTTL : default\nSelector : docker:label:com.docker.compose.service:device-virtual\nDNS name : edgex-device-virtual\n
The key-value pairs collected by the agent are matched against the Selector
in the authorization database to determine whether an SVID should be issued. The agent returns the authorization decision to the service, and on failure the service will continue to retry authentication.
Authorization entries may be persistently added to the authorization database by modifying the above script or adding them manually, replacing the CAPITALIZED words with appropriate values:
$ docker exec -ti edgex-security-spire-server spire-server entry create -socketPath /tmp/edgex/secrets/spiffe/private/api.sock -parentID \"spiffe://edgexfoundry.org/spire/agent/x509pop/cn/agent0\" -dns \"SERVICE-DNS-NAME\" -spiffeID \"spiffe://edgexfoundry.org/service/SERVICEKEY\" -selector \"docker:label:com.docker.compose.service:DOCKERCOMPOSESERVICEKEY\"\n
"},{"location":"security/Ch-RemoteDeviceServices/","title":"Remote Device Services in Secure Mode","text":"This page describes the remote device service example in the edgex-examples
GitHub repository.
Running a remote device service poses several problems when EdgeX is running in secure mode:
Network traffic between the primary EdgeX node and the remote device service node is unencrypted.
The remote device service will not have a Consul authentication token that allows it to talk to the registry and configuration services.
The remote device service will not have a secret store token that allows access to the EdgeX secret store (which is also needed to obtain a Consul authentication token).
This example will resolve the above complications by
Creating a secure SSH network tunnel between nodes to encrypt network communication.
Using the delayed-start feature introduced in EdgeX Kamakura to lazily obtain a secret store token that grants the device service access to the EdgeX secret store, the EdgeX registry service, and the EdgeX configuration service.
First, clone the edgex-examples repository
, check out main
and change to the security/remote_devices/spiffe_and_ssh
directory.
Next, run the generate_keys.sh
script to generate an SSH keypair for the SSH tunnel. This keypair is used only for the SSH tunnel and should have no other privileges.
Once the generate_keys.sh
script has been run, copy the remote
folder to the remote device service machine.
Change directories to the local
folder.
Edit docker-compose.yml
and change the TUNNEL_HOST
environment variable to the IP address of the remote node.
Run
$ docker compose build\n$ docker compose up -d\n
After the framework has been built and is running, check the device-ssh-proxy
service
$ docker ps -a | grep device-ssh-proxy\na92ff2d6999c device-ssh-proxy:latest \"/edgex-init\u2026\" 2 minutes ago Restarting (1) 16 seconds ago edgex-device-ssh-proxy\n$ docker logs device-ssh-proxy\n+ scp -p -o 'StrictHostKeyChecking=no' -o 'UserKnownHostsFile=/dev/null' -P 2223 /srv/spiffe/remote-agent/agent.key 192.168.122.193:/srv/spiffe/remote-agent/agent.key\nssh: connect to host 192.168.122.193 port 2223: Connection refused\nlost connection\n
The SSH connection will continue to fail until the remote node is brought up.
Next, authorize the workload running on the remote node.
$ ./add-server-entry.sh\nEntry ID : f62bfec6-b19c-43ea-94b8-975f7e9a258e\nSPIFFE ID : spiffe://edgexfoundry.org/service/device-virtual\nParent ID : spiffe://edgexfoundry.org/spire/agent/x509pop/cn/remote-agent\nRevision : 0\nTTL : default\nSelector : docker:label:com.docker.compose.service:device-virtual\nDNS name : edgex-device-virtual\n
That is all that needs to be done on the local node.
"},{"location":"security/Ch-RemoteDeviceServices/#on-the-remote-machine","title":"On the Remote Machine","text":"Change directories to the remote
folder and run
$ docker compose build\n$ docker compose up -d\n
After the framework has been built and has been running for about a minute, check the device-virtual
service
$ docker logs -f edgex-device-virtual\nlevel=INFO ts=2022-05-05T14:28:30.005673094Z app=device-virtual source=config.go:391 msg=\"Loaded service configuration from ./res/configuration.yaml\"\nlevel=INFO ts=2022-05-05T14:28:30.006211643Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Port' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_PORT=59841\"\nlevel=INFO ts=2022-05-05T14:28:30.006286584Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Protocol' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_PROTOCOL=https\"\nlevel=INFO ts=2022-05-05T14:28:30.006341968Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Clients.core-metadata.Host' by environment variable: CLIENTS_CORE_METADATA_HOST=edgex-core-metadata\"\nlevel=INFO ts=2022-05-05T14:28:30.006382102Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'MessageQueue.Host' by environment variable: MESSAGEQUEUE_HOST=edgex-redis\"\nlevel=INFO ts=2022-05-05T14:28:30.006416098Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.EndpointSocket' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_ENDPOINTSOCKET=/tmp/edgex/secrets/spiffe/public/api.sock\"\nlevel=INFO ts=2022-05-05T14:28:30.006457406Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.RequiredSecrets' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_REQUIREDSECRETS=redisdb\"\nlevel=INFO ts=2022-05-05T14:28:30.006495791Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Enabled' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_ENABLED=true\"\nlevel=INFO ts=2022-05-05T14:28:30.006529808Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.Host' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_HOST=edgex-security-spiffe-token-provider\"\nlevel=INFO ts=2022-05-05T14:28:30.006575741Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Clients.core-data.Host' by environment variable: CLIENTS_CORE_DATA_HOST=edgex-core-data\"\nlevel=INFO ts=2022-05-05T14:28:30.006617026Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.Host' by environment variable: SECRETSTORE_HOST=edgex-vault\"\nlevel=INFO ts=2022-05-05T14:28:30.006650922Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.Port' by environment variable: SECRETSTORE_PORT=8200\"\nlevel=INFO ts=2022-05-05T14:28:30.006691769Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'SecretStore.RuntimeTokenProvider.TrustDomain' by environment variable: SECRETSTORE_RUNTIMETOKENPROVIDER_TRUSTDOMAIN=edgexfoundry.org\"\nlevel=INFO ts=2022-05-05T14:28:30.006729711Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Service.Host' by environment variable: SERVICE_HOST=edgex-device-virtual\"\nlevel=INFO ts=2022-05-05T14:28:30.006764754Z app=device-virtual source=variables.go:352 msg=\"Variables override of 'Registry.Host' by environment variable: REGISTRY_HOST=edgex-core-consul\"\nlevel=INFO ts=2022-05-05T14:28:30.006904867Z app=device-virtual source=secret.go:55 msg=\"Creating SecretClient\"\nlevel=INFO ts=2022-05-05T14:28:30.006953018Z app=device-virtual source=secret.go:62 msg=\"Reading secret store 
configuration and authentication token\"\nlevel=INFO ts=2022-05-05T14:28:30.006994824Z app=device-virtual source=secret.go:165 msg=\"runtime token provider enabled\"\nlevel=INFO ts=2022-05-05T14:28:30.007064786Z app=device-virtual source=methods.go:138 msg=\"using Unix Domain Socket at unix:///tmp/edgex/secrets/spiffe/public/api.sock\"\n
If the workload was not authorized on the local side, the output will stop as shown above, with the service hung waiting for a SPIFFE authentication token.
Since the local side was stuck in a retry loop trying to establish an SSH connection to the remote node, the service may stay in this state for several minutes until the network tunnels are established.
Otherwise the log would continue as follows:
level=INFO ts=2022-05-05T14:29:25.078483584Z app=device-virtual source=methods.go:150 msg=\"workload got X509 source\"\nlevel=INFO ts=2022-05-05T14:29:25.168325689Z app=device-virtual source=methods.go:120 msg=\"successfully got token from spiffe-token-provider!\"\nlevel=INFO ts=2022-05-05T14:29:25.169095621Z app=device-virtual source=secret.go:80 msg=\"Attempting to create secret client\"\nlevel=INFO ts=2022-05-05T14:29:25.172259336Z app=device-virtual source=secret.go:91 msg=\"Created SecretClient\"\nlevel=INFO ts=2022-05-05T14:29:25.172359472Z app=device-virtual source=secret.go:96 msg=\"SecretsFile not set, skipping seeding of service secrets.\"\nlevel=INFO ts=2022-05-05T14:29:25.172539631Z app=device-virtual source=secrets.go:276 msg=\"kick off token renewal with interval: 30m0s\"\nlevel=INFO ts=2022-05-05T14:29:25.172433598Z app=device-virtual source=config.go:551 msg=\"Using local configuration from file (14 envVars overrides applied)\"\nlevel=INFO ts=2022-05-05T14:29:25.172916142Z app=device-virtual source=httpserver.go:131 msg=\"Web server starting (edgex-device-virtual:59900)\"\nlevel=INFO ts=2022-05-05T14:29:25.172948285Z app=device-virtual source=messaging.go:69 msg=\"Setting options for secure MessageBus with AuthMode='usernamepassword' and SecretName='redisdb\"\nlevel=INFO ts=2022-05-05T14:29:25.174321296Z app=device-virtual source=messaging.go:97 msg=\"Connected to redis Message Bus @ redis://edgex-redis:6379 publishing on 'edgex/events/device' prefix topic with AuthMode='usernamepassword'\"\nlevel=INFO ts=2022-05-05T14:29:25.174585076Z app=device-virtual source=init.go:135 msg=\"Check core-metadata service's status by ping...\"\nlevel=INFO ts=2022-05-05T14:29:25.176202842Z app=device-virtual source=init.go:54 msg=\"Service clients initialize successful.\"\nlevel=INFO ts=2022-05-05T14:29:25.176377929Z app=device-virtual source=clients.go:124 msg=\"Using configuration for URL for 'core-metadata': http://edgex-core-metadata:59881\"\nlevel=INFO ts=2022-05-05T14:29:25.176559116Z app=device-virtual source=clients.go:124 msg=\"Using configuration for URL for 'core-data': http://edgex-core-data:59880\"\nlevel=INFO ts=2022-05-05T14:29:25.176806351Z app=device-virtual source=restrouter.go:55 msg=\"Registering v2 routes...\"\nlevel=INFO ts=2022-05-05T14:29:25.192658275Z app=device-virtual source=service.go:230 msg=\"device service device-virtual exists, updating it\"\nlevel=INFO ts=2022-05-05T14:29:25.195403199Z app=device-virtual source=profiles.go:54 msg=\"Loading pre-defined profiles from /res/profiles\"\nlevel=INFO ts=2022-05-05T14:29:25.197297762Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Binary-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.240099318Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Boolean-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.24221092Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Float-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.245516797Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-Integer-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.250310838Z app=device-virtual source=profiles.go:88 msg=\"Profile Random-UnsignedInteger-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.250961547Z app=device-virtual source=devices.go:49 msg=\"Loading pre-defined devices from /res/devices\"\nlevel=INFO ts=2022-05-05T14:29:25.252216571Z app=device-virtual 
source=devices.go:85 msg=\"Device Random-Boolean-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252274853Z app=device-virtual source=devices.go:85 msg=\"Device Random-Integer-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252290321Z app=device-virtual source=devices.go:85 msg=\"Device Random-UnsignedInteger-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252297541Z app=device-virtual source=devices.go:85 msg=\"Device Random-Float-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252304305Z app=device-virtual source=devices.go:85 msg=\"Device Random-Binary-Device exists, using the existing one\"\nlevel=INFO ts=2022-05-05T14:29:25.252698155Z app=device-virtual source=autodiscovery.go:33 msg=\"AutoDiscovery stopped: disabled by configuration\"\nlevel=INFO ts=2022-05-05T14:29:25.252726349Z app=device-virtual source=autodiscovery.go:42 msg=\"AutoDiscovery stopped: ProtocolDiscovery not implemented\"\nlevel=INFO ts=2022-05-05T14:29:25.252736451Z app=device-virtual source=message.go:50 msg=\"Service dependencies resolved...\"\nlevel=INFO ts=2022-05-05T14:29:25.252804946Z app=device-virtual source=message.go:51 msg=\"Starting device-virtual main \"\nlevel=INFO ts=2022-05-05T14:29:25.252817404Z app=device-virtual source=message.go:55 msg=\"device virtual started\"\nlevel=INFO ts=2022-05-05T14:29:25.252880346Z app=device-virtual source=message.go:58 msg=\"Service started in: 55.248960914s\"\n
At this point, the remote device service is up and running in secure mode.
"},{"location":"security/Ch-RemoteDeviceServices/#ssh-tunneling-explained","title":"SSH Tunneling Explained","text":"In this example, SSH port forwarding is used to establish an encrypted network channel between the local and remote nodes. The local machine as the primary host is running the whole EdgeX core services including core services and security services but without any device service. The device services are running on the remote machine.
The SSH communication is established by introducing some extra SSH-related services:
1) device-ssh-proxy
. This service runs on the local machine and is an SSH client that initiates communication with the remote node. The device-ssh-proxy
service has the private key needed to establish the network connection and also authorizes the network tunnels.
2) sshd-remote
. This service runs on the remote machine and provides an SSH server for the purpose of establishing network communication with the remote device service.
Running sshd
in Docker is a container anti-pattern, as one can enter a container for remote administration using docker exec
. In this use case, however, we are not using sshd
for remote administration, but instead to set up a network tunnel.
For an example of how to run an SSH server in Docker, see the SPIFFE and SSH example for detailed instructions.
The generate-keys.sh
helper script generates an RSA keypair, and copies the authorized_keys
file into the remote/sshd-remote
folder. The sample's Dockerfile
will then build this key into the remote sshd
container image and use it for authentication. The private key remains on the local machine and is bind-mounted from the host into the device-ssh-proxy
service.
In this use case, we want to impersonate a device service that is running on a remote machine. We use local port forwarding to receive inbound requests on the device service's port, and ask that the traffic be forwarded through the SSH tunnel to a remote host and a remote port. The -L flag of the ssh command is important here.
ssh -N \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-L *:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \\\n-p $TUNNEL_SSH_PORT \\\n$TUNNEL_HOST
where environment variables are:
TUNNEL_HOST
is the remote host name or IP address that the SSH daemon or server is running on;
TUNNEL_SSH_PORT
is the port number to be used for the SSH tunnel communication between the local machine and the remote machine;
SERVICE_PORT
is the port number on the local (primary) machine to be forwarded to the remote machine; without loss of generality, the port number on the remote machine is the same as the local one;
SERVICE_HOST
is the service host name or IP address of the Docker containers that are running on the remote machine
In order to make the other containers aware of the port forwarding, the docker-compose.yml
is configured so that the device-ssh-proxy
service impersonates edgex-device-virtual
on the local Docker network.
device-ssh-proxy:\nimage: device-ssh-proxy:latest\nnetworks:\nedgex-network:\naliases:\n- edgex-device-virtual\n
The port-forwarding is transparent to the EdgeX services running on the local machine.
"},{"location":"security/Ch-RemoteDeviceServices/#remote-port-forwarding","title":"Remote Port Forwarding","text":"This step is to show the reverse direction of SSH tunneling: from the remote back to the local machine.
The reverse SSH tunneling is also needed because the device services depends on the core services like core-data
, core-metadata
, Redis (for message queuing), Vault (for the secret store), and Consul (for registry and configuration). These core services are running on the local machine and should be reverse tunneled back from the remote machine. Essentially, the sshd
container will impersonate these services on the remote side. This can be achieved by using -R
flag of ssh command. Extending the previous example:
ssh -N \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-L *:$SERVICE_PORT:$SERVICE_HOST:$SERVICE_PORT \\\n-R 0.0.0.0:$SECRETSTORE_PORT:$SECRETSTORE_HOST:$SECRETSTORE_PORT \\\n-R 0.0.0.0:6379:$MESSAGEQUEUE_HOST:6379 \\\n-R 0.0.0.0:8500:$REGISTRY_HOST:8500 \\\n-R 0.0.0.0:5563:$CLIENTS_CORE_DATA_HOST:5563 \\\n-R 0.0.0.0:59880:$CLIENTS_CORE_DATA_HOST:59880 \\\n-R 0.0.0.0:59881:$CLIENTS_CORE_METADATA_HOST:59881 \\\n-R 0.0.0.0:$SECURITY_SPIRE_SERVER_PORT:$SECURITY_SPIRE_SERVER_HOST:$SECURITY_SPIRE_SERVER_PORT \\\n-R 0.0.0.0:$SECRETSTORE_RUNTIMETOKENPROVIDER_PORT:$SECRETSTORE_RUNTIMETOKENPROVIDER_HOST:$SECRETSTORE_RUNTIMETOKENPROVIDER_PORT \\\n-p $TUNNEL_SSH_PORT \\\n$TUNNEL_HOST
As was done on the local side, the remote side does the same in reverse, masquerading on the network as the core services needed by the device services:
sshd-remote:\nimage: edgex-sshd-remote:latest\nnetworks:\nedgex-network:\naliases:\n- edgex-core-consul\n- edgex-core-data\n- edgex-core-metadata\n- edgex-redis\n- edgex-security-spire-server\n- edgex-security-spiffe-token-provider\n- edgex-vault\n
"},{"location":"security/Ch-RemoteDeviceServices/#security-edgex-secret-store-token","title":"Security: EdgeX Secret Store Token","text":"Beyond port forwarding, extra steps need to be taken to enable the remote device service to use SPIFFE/SPIRE to obtain a token for the EdgeX secret store.
"},{"location":"security/Ch-RemoteDeviceServices/#local-side","title":"Local side","text":"On the local machine side, the device-ssh-proxy
service has some initialization code inserted into its entrypoint script. It is done this way to facilitate ease-of-use for the example. In a production deployment this should be done out-of-band.
# Wait for agent CA creation\n\nwhile test ! -f \"/srv/spiffe/ca/public/agent-ca.crt\"; do\necho \"Waiting for /srv/spiffe/ca/public/agent-ca.crt\"\nsleep 1\ndone\n\n# Pre-create remote agent certificate\n\nif test ! -f \"/srv/spiffe/remote-agent/agent.crt\"; then\nopenssl ecparam -genkey -name secp521r1 -noout -out \"/srv/spiffe/remote-agent/agent.key\"\nSAN=\"\" openssl req -subj \"/CN=remote-agent\" -config \"/usr/local/etc/openssl.conf\" -key \"/srv/spiffe/remote-agent/agent.key\" -sha512 -new -out \"/run/agent.req.$$\"\nSAN=\"\" openssl x509 -sha512 -extfile /usr/local/etc/openssl.conf -extensions agent_ext -CA \"/srv/spiffe/ca/public/agent-ca.crt\" -CAkey \"/srv/spiffe/ca/private/agent-ca.key\" -CAcreateserial -req -in \"/run/agent.req.$$\" -days 3650 -out \"/srv/spiffe/remote-agent/agent.crt\"\nrm -f \"/run/agent.req.$$\"\nfi\n\n\nwhile true; do\nscp -p \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-P $TUNNEL_SSH_PORT \\\n/srv/spiffe/remote-agent/agent.key $TUNNEL_HOST:/srv/spiffe/remote-agent/agent.key\n scp -p \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-P $TUNNEL_SSH_PORT \\\n/srv/spiffe/remote-agent/agent.crt $TUNNEL_HOST:/srv/spiffe/remote-agent/agent.crt\n scp -p \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-P $TUNNEL_SSH_PORT \\\n/tmp/edgex/secrets/spiffe/trust/bundle $TUNNEL_HOST:/tmp/edgex/secrets/spiffe/trust/bundle ssh \\\n-o StrictHostKeyChecking=no \\\n-o UserKnownHostsFile=/dev/null \\\n-p $TUNNEL_SSH_PORT \\\n$TUNNEL_HOST -- \\\nchown -Rh 2002:2001 /tmp/edgex/secrets/spiffe\n\n ...\n
The one-time setup generates a new agent key and a certificate signed by the agent CA; this enables the SPIRE server to trust the new agent. There is also automation to copy the certificate and private key to the remote node as part of SSH session establishment. This entire flow could be done as an out-of-band process.
The last part, which is to copy the current trust bundle to the remote node as part of SSH session establishment, should be left as-is, as the trust bundle is on a temp file system and might be cleaned between reboots.
"},{"location":"security/Ch-RemoteDeviceServices/#remote-side","title":"Remote side","text":"On the remote side, the SPIRE agent looks mostly like the local side SPIRE agent, except that the paths are different, and there is a delay loop waiting for the agent key and certificate to be copied to the node via the above process.
The requirements for the remote side are:
The SPIRE server must be able to establish trust in the agent. There are many mechanisms available to do this. The example uses a public key infrastructure to establish trust.
The SPIRE agent must have network connectivity with the SPIRE server. This is provided by the SSH reverse proxy tunnel.
The easiest way to test the setup is to make a call from the local machine to the remote device-virtual
service:
$ curl -s http://127.0.0.1:59900/api/v3/config | jq\n{\n\"apiVersion\" : \"v3\",\n \"config\": {\n\"Writable\": {\n\"LogLevel\": \"INFO\",\n \"InsecureSecrets\": {\n\"DB\": {\n\"Path\": \"redisdb\",\n \"Secrets\": {\n\"password\": \"\",\n \"username\": \"\"\n}\n}\n},\n \"Reading\": {\n\"ReadingUnits\": true\n}\n},\n \"Clients\": {\n\"core-data\": {\n\"Host\": \"edgex-core-data\",\n \"Port\": 59880,\n \"Protocol\": \"http\"\n},\n \"core-metadata\": {\n\"Host\": \"edgex-core-metadata\",\n \"Port\": 59881,\n \"Protocol\": \"http\"\n}\n},\n \"Registry\": {\n\"Host\": \"edgex-core-consul\",\n \"Port\": 8500,\n \"Type\": \"consul\"\n},\n \"Service\": {\n\"HealthCheckInterval\": \"10s\",\n \"Host\": \"edgex-device-virtual\",\n \"Port\": 59900,\n \"ServerBindAddr\": \"\",\n \"StartupMsg\": \"device virtual started\",\n \"MaxResultCount\": 0,\n \"MaxRequestSize\": 0,\n \"RequestTimeout\": \"5s\",\n \"CORSConfiguration\": {\n\"EnableCORS\": false,\n \"CORSAllowCredentials\": false,\n \"CORSAllowedOrigin\": \"https://localhost\",\n \"CORSAllowedMethods\": \"GET, POST, PUT, PATCH, DELETE\",\n \"CORSAllowedHeaders\": \"Authorization, Accept, Accept-Language, Content-Language, Content-Type, X-Correlation-ID\",\n \"CORSExposeHeaders\": \"Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, Pragma, X-Correlation-ID\",\n \"CORSMaxAge\": 3600\n}\n},\n \"Device\": {\n\"DataTransform\": true,\n \"MaxCmdOps\": 128,\n \"MaxCmdValueLen\": 256,\n \"ProfilesDir\": \"./res/profiles\",\n \"DevicesDir\": \"./res/devices\",\n \"Discovery\": {\n\"Enabled\": false,\n \"Interval\": \"30s\"\n},\n \"AsyncBufferSize\": 16,\n \"EnableAsyncReadings\": true,\n \"Labels\": [],\n \"UseMessageBus\": true\n},\n \"Driver\": {},\n \"SecretStore\": {\n\"Type\": \"vault\",\n \"Host\": \"edgex-vault\",\n \"Port\": 8200,\n \"Path\": \"device-virtual/\",\n \"Protocol\": \"http\",\n \"Namespace\": \"\",\n \"RootCaCertPath\": \"\",\n \"ServerName\": \"\",\n \"Authentication\": {\n\"AuthType\": \"X-Vault-Token\",\n \"AuthToken\": \"\"\n},\n \"TokenFile\": \"/tmp/edgex/secrets/device-virtual/secrets-token.json\",\n \"SecretsFile\": \"\",\n \"DisableScrubSecretsFile\": false,\n \"RuntimeTokenProvider\": {\n\"Enabled\": true,\n \"Protocol\": \"https\",\n \"Host\": \"edgex-security-spiffe-token-provider\",\n \"Port\": 59841,\n \"TrustDomain\": \"edgexfoundry.org\",\n \"EndpointSocket\": \"/tmp/edgex/secrets/spiffe/public/api.sock\",\n \"RequiredSecrets\": \"redisdb\"\n}\n},\n \"MessageQueue\": {\n\"Type\": \"redis\",\n \"Protocol\": \"redis\",\n \"Host\": \"edgex-redis\",\n \"Port\": 6379,\n \"PublishTopicPrefix\": \"edgex/events/device\",\n \"SubscribeTopic\": \"\",\n \"AuthMode\": \"usernamepassword\",\n \"SecretName\": \"redisdb\",\n \"Optional\": {\n\"AutoReconnect\": \"true\",\n \"ClientId\": \"device-virtual\",\n \"ConnectTimeout\": \"5\",\n \"KeepAlive\": \"10\",\n \"Password\": \"(redacted)\",\n \"Qos\": \"0\",\n \"Retained\": \"false\",\n \"SkipCertVerify\": \"false\",\n \"Username\": \"redis5\"\n},\n \"SubscribeEnabled\": false\n},\n \"MaxEventSize\": 0\n},\n \"serviceName\": \"device-virtual\"\n}\n
"},{"location":"security/Ch-SecretProviderApi/","title":"Secret Provider API","text":""},{"location":"security/Ch-SecretProviderApi/#introduction","title":"Introduction","text":"The SecretProvider API is available to custom Application and Device Services to access the service's Secret Store. This API is available in both secure and non-secure modes. When in secure mode, it provides access to the service's Secret Store in Vault, otherwise it uses the service's [InsecureSecrets]
configuration section as the Secret Store. See InsecureSecrets section here for more details.
type SecretProvider interface {\nStoreSecret(secretName string, secrets map[string]string) error\nGetSecret(secretName string, keys ...string) (map[string]string, error)\nHasSecret(secretName string) (bool, error)\nListSecretNames() ([]string, error)\nSecretsLastUpdated() time.Time\nRegisterSecretUpdatedCallback(secretName string, callback func(secretName string)) error\nDeregisterSecretUpdatedCallback(secretName string)\n}\n
"},{"location":"security/Ch-SecretProviderApi/#storesecret","title":"StoreSecret","text":"StoreSecret(secretName string, secrets map[string]string) error
Stores new secrets into the service's SecretStore at the specified secretName
. An error is returned if the secrets can not be stored.
Note
This API is only valid to call when in secure mode. It will return an error when in non-secure mode. Insecure Secrets should be added/updated directly in the configuration file or via the Configuration Provider (aka Consul).
"},{"location":"security/Ch-SecretProviderApi/#getsecret","title":"GetSecret","text":"GetSecret(secretName string, keys ...string) (map[string]string, error)
Retrieves the secrets from the service's SecretStore for the specified secretName
. The list of keys is optional and limits the secret data returned to just those keys specified, otherwise all keys are returned. An error is returned if the secretName
doesn't exist in the service's Secret Store or if one or more of the optional keys specified are not present.
HasSecret(secretName string) (bool, error)
Returns true if the service's Secret Store contains a secret at the specified secretName
. An error is returned if the Secret Store can not be accessed.
ListSecretNames() ([]string, error)
Returns a list of secret names from the current service's Secret Store. An error is returned if the Secret Store can not be accessed.
"},{"location":"security/Ch-SecretProviderApi/#secretslastupdated","title":"SecretsLastUpdated","text":"SecretsLastUpdated() time.Time
Returns the timestamp of the last time the service's secrets were updated in its Secret Store. This is useful when using an external client that is initialized with a secret and needs to be recreated if the secret has changed.
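A short, hypothetical sketch of that pattern: the service records when its external client was built and rebuilds it if the Secret Store reports a later update time. The holder type and rebuild function are placeholders, not part of the documented API.
package example

import "time"

type lastUpdatedProvider interface {
	SecretsLastUpdated() time.Time
}

// clientHolder is a hypothetical application-side wrapper around a client
// (for example, a database connection) that was built from a secret.
type clientHolder struct {
	builtAt time.Time
}

// refreshIfStale recreates the external client when the secrets were
// updated after the client was last built.
func (h *clientHolder) refreshIfStale(provider lastUpdatedProvider, rebuild func() error) error {
	if provider.SecretsLastUpdated().After(h.builtAt) {
		if err := rebuild(); err != nil {
			return err
		}
		h.builtAt = time.Now()
	}
	return nil
}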
"},{"location":"security/Ch-SecretProviderApi/#registersecretupdatedcallback","title":"RegisterSecretUpdatedCallback","text":"RegisterSecretUpdatedCallback(secretName string, callback func(secretName string)) error\n
Registers a callback for when the specified secretName
is added or updated. The secretName
that changed is provided as an argument to the callback so that the same callback can be utilized for multiple secrets if desired.
Note
The constant value secret.WildcardName
can be used to register a callback for when any secret has changed. The actual secretName
that changed will be passed to the callback. Note that the callbacks set for a specific secretName
take precedence over wildcard ones and will be called instead of the wildcard one if both are present.
Note
This function will return an error if there is already a callback registered for the specified secretName
. Please call DeregisterSecretUpdatedCallback
first before attempting to register a new one.
DeregisterSecretUpdatedCallback(secretName string)\n
Removes the registered callback for the specified secretName
. If none exist, this is a no-op.
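The following sketch (hypothetical names, not from the docs) registers a callback for the redisdb secret and returns a cleanup function that deregisters it on shutdown.
package example

import "log"

type callbackProvider interface {
	RegisterSecretUpdatedCallback(secretName string, callback func(secretName string)) error
	DeregisterSecretUpdatedCallback(secretName string)
}

// watchRedisCredentials registers a callback for the redisdb secret and
// returns a cleanup function that deregisters it on shutdown.
func watchRedisCredentials(provider callbackProvider) (func(), error) {
	onChanged := func(secretName string) {
		// Recreate any connections that were built from the old credentials.
		log.Printf("secret %q changed; recreating Redis connection", secretName)
	}

	// Only one callback may be registered per secret name, so a prior
	// registration must be removed with DeregisterSecretUpdatedCallback first.
	if err := provider.RegisterSecretUpdatedCallback("redisdb", onChanged); err != nil {
		return nil, err
	}

	return func() { provider.DeregisterSecretUpdatedCallback("redisdb") }, nil
}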
There are many kinds of secrets used within EdgeX Foundry microservices, such as tokens, passwords, certificates, etc. The secret store serves as the central repository for these secrets. The developers of other EdgeX Foundry microservices utilize the secret store to create, store, and retrieve secrets relevant to their corresponding microservices.
Currently the EdgeX Foundry secret store is implemented with Vault, a HashiCorp open source software product.
Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, database credentials, service credentials, or certificates. Vault provides a unified interface to any secret, while providing tight access control and multiple authentication mechanisms (token, LDAP, etc.). Additionally, Vault supports pluggable \"secrets engines\". EdgeX uses the Consul secrets engine to allow Vault to issue Consul access tokens to EdgeX microservices.
In EdgeX, Vault's storage backend is the host file system.
"},{"location":"security/Ch-SecretStore/#start-the-secret-store","title":"Start the Secret Store","text":"The EdgeX secret store is started by default when using the secure version of the Docker Compose scripts found at https://github.com/edgexfoundry/edgex-compose/tree/ireland.
The command to start EdgeX with the secret store enabled is:
git clone -b ireland https://github.com/edgexfoundry/edgex-compose\nmake run\n
or
git clone -b ireland https://github.com/edgexfoundry/edgex-compose\nmake run arm64\n
The EdgeX secret store is not started if EdgeX is started with security features disabled by appending no-secty
to the previous commands. This disables all EdgeX security features, not just the API gateway.
Documentation on how the EdgeX security store is sequenced with respect to all of the other EdgeX services is covered in the Secure Bootstrapping of EdgeX Architecture Decision Record(ADR).
"},{"location":"security/Ch-SecretStore/#using-the-secret-store","title":"Using the Secret Store","text":""},{"location":"security/Ch-SecretStore/#preferred-approach","title":"Preferred Approach","text":"The preferred approach for interacting with the EdgeX secret store is to use the SecretClient
interface in go-mod-secrets.
Each EdgeX microservice has access to a StoreSecrets()
method that allows setting of per-microservice secrets, and a GetSecrets()
method to read them back.
If manual \"super-user\" to the EdgeX secret store is required, it is necesary to obtain a privileged access token, called the Vault root token.
"},{"location":"security/Ch-SecretStore/#obtaining-the-vault-root-token","title":"Obtaining the Vault Root Token","text":"For security reasons (the Vault production hardening guide recommends revokation of the root token), the Vault root token is revoked by default. EdgeX automatically manages the secrets required by the framework, and provides a programmatic interface for individual microservices to interact with their partition of the secret store.
If global access to the secret store is required, it is necessary to obtain a copy of the Vault root token using the below recommended procedure. Note that following this procedure directly contradicts the Vault production hardening guide. Since the root token cannot be un-revoked, the framework must be started for the first time with root token revocation disabled.
Shut down the entire framework and remove the Docker persistent volumes using make clean
in edgex-compose
or docker volume prune
after stopping all the containers. Optionally remove /tmp/edgex
as well to clean the shared secrets volume.
Edit docker-compose.yml
and add an environment variable override for SECRETSTORE_REVOKEROOTTOKENS
secretstore-setup:\nenvironment:\nSECRETSTORE_REVOKEROOTTOKENS: \"false\"\n
Start EdgeX using make run
or some other mechanism.
Reveal the contents of the resp-init.json
file stored in a Docker volume.
docker run --rm -ti -v edgex_vault-config:/vault/config:ro alpine:latest cat /vault/config/assets/resp-init.json\n
Note the root_token
field value in the resulting JSON output. As an alternative to overriding SECRETSTORE_REVOKEROOTTOKENS
from the beginning, it is possible to regenerate the root token from the Vault unseal keys in resp-init.json
using the Vault's documented procedure. The EdgeX framework executes this process internally whenever it requires root token capability. Note that a token created in this manner will again be revoked the next time EdgeX is restarted if SECRETSTORE_REVOKEROOTTOKENS
remains set to its default value: all root tokens are revoked every time the framework is started if SECRETSTORE_REVOKEROOTTOKENS
is true
.
Execute a shell session in the running Vault container:
docker exec -it edgex-vault sh -l\n
Login to Vault using Vault CLI and the gathered Root Token:
edgex-vault:/# vault login s.ULr5bcjwy8S0I5g3h4xZ5uWa\nSuccess! You are now authenticated. The token information displayed below\nis already stored in the token helper. You do NOT need to run \"vault login\"\nagain. Future Vault requests will automatically use this token.\n\nKey Value\n--- -----\ntoken s.ULr5bcjwy8S0I5g3h4xZ5uWa\ntoken_accessor Kv5FUhT2XgN2lLu8XbVxJI0o\ntoken_duration \u221e\ntoken_renewable false\ntoken_policies [\"root\"]\nidentity_policies []\npolicies [\"root\"]\n
Perform an introspection lookup
on the current token login. This proves the token works and is valid.
edgex-vault:/# vault token lookup\nKey Value\n--- -----\naccessor Kv5FUhT2XgN2lLu8XbVxJI0o\ncreation_time 1623371879\ncreation_ttl 0s\ndisplay_name root\nentity_id n/a\nexpire_time <nil>\nexplicit_max_ttl 0s\nid s.ULr5bcjwy8S0I5g3h4xZ5uWa\nmeta <nil>\nnum_uses 0\norphan true\npath auth/token/root\npolicies [root]\nttl 0s\ntype service\n
Note
The Root Token is the only token that has no expiration enforcement rules (Time To Live (TTL) counter).
As an example, let's poke around and spy on the Redis database password:
edgex-vault:/# vault list secret \n\nKeys\n----\nedgex/\n\nedgex-vault:/# vault list secret/edgex\nKeys\n----\napp-rules-engine/\ncore-command/\ncore-data/\ncore-metadata/\ndevice-rest/\ndevice-virtual/\nsecurity-bootstrapper-redis/\nsupport-notifications/\nsupport-scheduler/\n\nedgex-vault:/# vault list secret/edgex/core-data\nKeys\n----\nredisdb\n\nedgex-vault:/# vault read secret/edgex/core-data/redisdb\nKey Value\n--- -----\nrefresh_interval 168h\npassword 9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3\nusername redis5\n
With the root token, it is possible to modify any Vault setting. See the Vault manual for available commands.
"},{"location":"security/Ch-SecretStore/#use-the-vault-rest-api","title":"Use the Vault REST API","text":"Vault also supports a REST API with functionality equivalent to the command line interface:
The equivalent of the
vault read secret/edgex/core-data/redisdb\n
command looks like the following using the REST API:
Displaying (GET) the redis credentials from Core Data's secret store:
curl -s -H 'X-Vault-Token: s.ULr5bcjwy8S0I5g3h4xZ5uWa' http://localhost:8200/v1/secret/edgex/core-data/redisdb | python -m json.tool\n{\n \"request_id\": \"9d28ffe0-6b25-c0a8-e395-9fbc633f20cc\",\n \"lease_id\": \"\",\n \"renewable\": false,\n \"lease_duration\": 604800,\n \"data\": {\n \"password\": \"9/crBba5mZqAfAH8d90m7RlZfd7N8yF2IVul89+GEaG3\",\n \"username\": \"redis5\"\n },\n \"wrap_info\": null,\n \"warnings\": null,\n \"auth\": null\n}\n
See HashiCorp Vault API documentation for further details on syntax and usage (https://developer.hashicorp.com/vault/api-docs).
"},{"location":"security/Ch-SecretStore/#using-the-vault-web-ui","title":"Using the Vault Web UI","text":"The Vault Web UI is not exposed via the API gateway. It must therefore be accessed via localhost
or a network tunnel of some kind.
Open a browser session on http://localhost:8200
and sign-in with the Root Token.
Upper left corner of the current Vault UI session, the sign-out menu displaying the current token name:
Select the Vault secret backend, and navigate to any secret that is of interest:
The Vault UI also allows entering Vault CLI commands (see above 1st alternative) using an embedded console:
"},{"location":"security/Ch-SecretStore/#see-also","title":"See also","text":"Some of the command used in implementing security services have man-style documentation:
In the current EdgeX architecture, Consul
is pre-wired as the default agent service for Service Configuration
, Service Registry
, and Service Health Check
purposes. Prior to EdgeX's Ireland release, the communication to Consul
uses plain HTTP calls without any access control (ACL) token header and thus are insecure. With the Ireland release, that situation is now improved by adding required ACL token header X-Consul-Token
in any HTTP calls. Moreover, Consul
itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. In other words, with the required Consul's ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in Key-Value (KV) store are now better protected.
In this documentation, we highlight some major features incorporated into the EdgeX framework for securing Consul
, including how the Consul
token is generated via the integration of secret store management system Vault
with Consul
via Vault's Consul Secrets Engine APIs. Also a brief overview on how Consul token is governed by Vault using Consul's ACL policy associated with a Vault role for that token is given. Finally, EdgeX provides an easy way for getting Consul token from edgex-compose
's compose-builder
utility for better developer experiences.
To avoid maintaining another token generation system, we utilize Vault's feature of the Consul Secrets Engine
APIs, governed by Vault itself, and integrated with Consul. Consul service itself provides ACL system and is enabled via Consul's configuration settings like:
acl = {\nenabled = true\ndefault_policy = \"deny\"\nenable_token_persistence = true\n}\n
and this is set as part of EdgeX security-bootstrapper
service's process. Note that the default ACL policy is set to \"deny\" so that anything not listed in the ACL list is denied access by default. The flag enable_token_persistence
is related to the persistence of Consul's agent token and is set to true so as to re-use the same agent token when EdgeX system restarts again.
During the process of Consul bootstrapping, the first main step of security-bootstrapper
for Consul is to bootstrap Consul's ACL system with Consul's API endpoint /acl/bootstrap
.
Once Consul's ACL is successfully bootstrapped, security-bootstrapper
stores the Consul's ACL bootstrap token onto the pre-configured folder under /tmp/edgex/secrets/consul-acl-token
.
As part of security-bootstrapper
process for Consul, Consul service's agent token is also set via Consul's sub-command: consul acl set-agent-token agent
or Consul's HTTP API endpoint /agent/token/<agent_token>
using Consul's ACL bootstrap token for the authentication. This agent token provides the identity for Consul service itself and access control for any agent-based API calls from client and thus provides better security.
The management token provides the identity for Consul service itself and access control for remote configuration from client and thus provides better security. It's created and stored onto the pre-configured folder under /tmp/edgex/secrets/consul-acl-token
.
security-bootstrapper
service also uses Consul's bootstrap token to generate Vault's role based from Consul Secrets Engine API /consul/role/<role_name>
for all internal default EdgeX services and add-on services via environment variable EDGEX_ADD_REGISTRY_ACL_ROLES
. Please see more details and some examples in Configuring Add-on Service documentation section for how to configure add-on services' ACL roles.
security-bootstrapper
then automatically associates Consul ACL policy rules with the provided ACL role so that the Consul token is generated with those ACL rules, and access controls are thus enforced by Consul when the service communicates with it.
Note that Consul token is generated via Vault's /consul/creds/<role_name>
API with Vault's secret store token, and hence the generated Consul token inherits the time-restriction behavior of Vault itself. Thus the Consul token will be revoked by Vault if the Vault token used to generate it expires or is revoked. Currently EdgeX utilizes the auto-renewal feature for Vault tokens implemented in go-mod-secrets
to keep the Consul token alive so that it does not expire.
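For illustration only (EdgeX performs this exchange internally on behalf of each service), a direct read of Vault's Consul Secrets Engine credentials endpoint looks roughly like the Go sketch below; the Vault token and role name are placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder role name and Vault token; in EdgeX this exchange is
	// performed internally on behalf of each service.
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:8200/v1/consul/creds/core-data", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Vault-Token", "<vault-token-authorized-for-this-role>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body struct {
		Data struct {
			Token string `json:"token"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	fmt.Println("generated Consul ACL token:", body.Data.Token)
}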
Consul's access token can be obtained from the compose-builder
of edgex-compose
repository via command make get-consul-acl-token
. One example of this will be like:
$ make get-consul-acl-token ef4a0580-d200-32bf-17ba-ba78e3a546e7\n
This output token is Consul's ACL management token and thus one can use it to login and access Consul service's features from Consul's GUI on http://localhost:8500/ui.
From the upper right-hand corner of Consul's GUI or the \"Log in\" button in the center, one can login with the obtained Consul token in order to access Consul's GUI features:
If the end user wants to access Consul from the command line, then, because Consul now runs in ACL-enabled mode by default, any API call to Consul's endpoints requires the access token, which must be supplied in the header X-Consul-Token
of HTTP calls.
One example using curl
command with a Consul access token to update the local Consul KV store is given as follows:
curl -v -H \"X-Consul-Token:8775c1db-9340-d07b-ac95-bc6a1fa5fe57\" -X PUT --data 'TestKey=\"My key values\"' \\\n http://localhost:8500/v1/kv/my-test-key\n
where the Consul access token is passed in the header X-Consul-Token
and is assumed to have write permission for accessing and updating data in Consul's KV store.
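As a further illustration (not part of the EdgeX docs), the same KV entry can be read back programmatically by passing the token in the X-Consul-Token header; the token and key below are placeholders, and note that Consul returns the stored value base64-encoded.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
)

// kvEntry mirrors the fields of interest in Consul's KV API response.
type kvEntry struct {
	Key   string `json:"Key"`
	Value string `json:"Value"` // base64-encoded by the Consul API
}

func main() {
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:8500/v1/kv/my-test-key", nil)
	if err != nil {
		panic(err)
	}
	// Placeholder ACL token, e.g. obtained via `make get-consul-acl-token`.
	req.Header.Set("X-Consul-Token", "<consul-acl-token>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var entries []kvEntry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		raw, _ := base64.StdEncoding.DecodeString(e.Value)
		fmt.Printf("%s = %s\n", e.Key, string(raw))
	}
}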
All the default services (Core Data, App Service Rules, Device Virtual, eKuiper, etc.) that utilize the MessageBus
are configured out of the box to connect securely.
Additional add-on services that require Secure MessageBus
access (App and/or Device services) need to follow the steps outlined in the Configuring Add-On Services for Security section.
Security elements, both inside and outside of EdgeX Foundry, protect the data and control of devices, sensors, and other IoT objects managed by EdgeX Foundry. Because EdgeX is a \"vendor-neutral open source software platform at the edge of the network\", the EdgeX security features are also built on a foundation of open interfaces and pluggable, replaceable modules. With the security services enabled, the administrator of EdgeX is able to initialize the security components, set up the running environment for the security services, manage user access control, and create JWTs (JSON Web Tokens) for resource access by other EdgeX business services. There are two major EdgeX security components. The first is a security store, which is used to provide a safe place to keep the EdgeX secrets. The second is an API gateway, which is used as a reverse proxy to restrict access to EdgeX REST resources and perform access-control-related work. In summary, the current features are as follows:
This page describes how to report EdgeX Foundry security issues and how they are handled.
"},{"location":"security/Ch-SecurityIssues/#security-announcements","title":"Security Announcements","text":"Join the edgexfoundry-announce group at: https://groups.google.com/d/forum/edgexfoundry-announce) for emails about security and major API announcements.
"},{"location":"security/Ch-SecurityIssues/#vulnerability-reporting","title":"Vulnerability Reporting","text":"The EdgeX Foundry Open Source Community is grateful for all security reports made by users and security researchers. All reports are thoroughly investigated by a set of community volunteers.
To make a report, please email the private list: security-issues@edgexfoundry.org, providing as much detail as possible. Use the security issue template: security_issue_template.
At this time we do not yet offer an encrypted bug reporting option.
"},{"location":"security/Ch-SecurityIssues/#when-to-report-a-vulnerability","title":"When to Report a Vulnerability?","text":"Each report is acknowledged and analyzed by Security Issue Review (SIR) team within one week.
Any vulnerability information shared with SIR stays private, and is shared with sub-projects as necessary to get the issue fixed.
As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated.
In the case of 3rd-party dependency (code or library not managed and maintained by the EdgeX community) related security issues, while the issue report triggers the same response workflow, the EdgeX community will defer to the owning community for fixes.
On receipt of a security issue report, SIR:
7. Uploads a Common Vulnerabilities and Exposures (CVE) style report of the issue and associated threat
The issue reporter will be kept in the loop as appropriate. Note that a critical or high severity issue can delay a scheduled release to incorporate a fix or mitigation.
"},{"location":"security/Ch-SecurityIssues/#public-disclosure-timing","title":"Public Disclosure Timing","text":"A public disclosure date is negotiated by the EdgeX Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible AFTER a mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure may be immediate (especially publicly known issues) to a few weeks. The EdgeX Foundry Product Security Committee holds the final say when setting a disclosure date.
"},{"location":"security/SeedingServiceSecrets/","title":"Seeding Service Secrets","text":"All EdgeX services now have the capability to specify a JSON file that contains the service's secrets which are seeded into the service's SecretStore
during service start-up. This allows the secrets to be present in the service's SecretStore
when the service needs to use them.
Note
The service must already have a SecretStore
configured. This is done by default for the Core/Support services. See the Configure the service's Secret Store section for details for add-on App and Device services.
EdgeX 3.0
For EdgeX 3.0 the SecretStore configuration has been removed from each service's configuration files. It has default values which can be overridden with environment variables. See the SecretStore Overrides section for more details.
"},{"location":"security/SeedingServiceSecrets/#secrets-file","title":"Secrets File","text":"The new SecretsFile
setting on the SecretStore
configuration allows the service to specify the fully-qualified path to the location of the service's secrets file. Normally this setting is left blank when a service has no secrets to be seeded.
This setting can be overridden with the SECRETSTORE_SECRETSFILE
environment variable. When EdgeX is deployed using Docker/docker-compose the setting can be overridden in the docker-compose file and the file can be volume mounted into the service's container.
Example - Setting SecretsFile via environment override
environment:\nSECRETSTORE_SECRETSFILE: \"/tmp/my-service/secrets.json\"\n...\nvolumes:\n- /tmp/my-service/secrets.json:/tmp/my-service/secrets.json\n
During service start-up, after SecretStore
initialization, the service's secrets JSON file is read, validated, and the secrets stored into the service's SecretStore
. The file is then scrubbed of the secret data, i.e., rewritten without the sensitive secret data that was successfully stored. See the Disable Scrubbing section below for details on disabling the scrubbing of the secret data.
Example - Initial service secrets JSON
{\n\"secrets\": [\n{\n\"secretName\": \"credentials001\",\n\"imported\": false,\n\"secretData\": [\n{\n\"key\": \"username\",\n\"value\": \"my-user-1\"\n},\n{\n\"key\": \"password\",\n\"value\": \"password-001\"\n}\n]\n},\n{\n\"secretName\": \"credentials002\",\n\"imported\": false,\n\"secretData\": [\n{\n\"key\": \"username\",\n\"value\": \"my-user-2\"\n},\n{\n\"key\": \"password\",\n\"value\": \"password-002\"\n}\n]\n}\n]\n}\n
Example - Re-written service secrets JSON after seeding complete
{\n\"secrets\": [\n{\n\"secretName\": \"credentials001\",\n\"imported\": true,\n\"secretData\": []\n},\n{\n\"secretName\": \"credentials002\",\n\"imported\": true,\n\"secretData\": []\n}\n]\n}\n
The secrets marked with imported=true
are ignored the next time the service starts up since they are already in the service's SecretStore
. If the Secret Store service's persistence is cleared, the original version of the service's secrets file will need to be provided the next time the service starts up.
Note
The secrets file must have write permissions for the file to be scrubbed of the secret data. If not, the service will fail to start up with an error re-writing the file.
"},{"location":"security/SeedingServiceSecrets/#disable-scrubbing","title":"Disable Scrubbing","text":"Scrubbing of the secret data can be disabled by setting SecretStore.DisableScrubSecretsFile
to true
. This can be done by using the SECRETSTORE_DISABLESCRUBSECRETSFILE
environment variable override.
Example - Set DisableScrubSecretsFile via environment variable
environment:\nSECRETSTORE_DISABLESCRUBSECRETSFILE: \"true\"\n
"},{"location":"security/V3Migration/","title":"V3 Security Migration Guide","text":""},{"location":"security/V3Migration/#whats-changed-in-edgex-30-security","title":"What's Changed in EdgeX 3.0 Security","text":"EdgeX 3.0 (\"Minnesota\") release implements a significant change to its security architecture.
In EdgeX \"Fuji\" release, EdgeX introduced an opt-in secure mode that featured a secret store capability based on Hashicorp Vault and an API gateway based on Kong. The API gateway served to separate the outside Internet-facing network, which was \"untrusted\", from the internally-facing network, which was a \"trusted\".
EdgeX 3.0 takes significant steps to put limits on that trust. Whereas in EdgeX 1.0 and 2.0, microservice security was enforced at the API gateway, in EdgeX 3.0 microservice security is now also enforced at the individual microservice level. EdgeX 2.0 already enabled authentication for third-party components such as the EdgeX database, the EdgeX service registry, the EdgeX configuration provider, the EdgeX secret store, the EdgeX API gateway and the EdgeX message bus, but the EdgeX microservices themselves did not require authentication if the request originated from behind the API gateway. In EdgeX 3.0, even internal calls to EdgeX microservices now require an authentication token.
Compared to EdgeX 2.0, the security footprint of EdgeX 3.0 is reduced through the removal of the third-party Postgres and Kong components and using a minimally-configured NGINX gateway instead. Measurements taken before and after show a ~300 MB savings in downloaded Docker images in the container version of EdgeX, and a ~150 MB reduction in memory usage. Achieving these impressive improvements to the EdgeX footprint unfortunately means that there are some breaking changes to API gateway authentication that will be detailed later.
Although not a functional change, a significant addition to EdgeX 3.0 has been made in the form of a STRIDE Threat Model contributed by IOTech. This threat model takes an outside-in view of EdgeX, treating the EdgeX services together as a unit. The STRIDE threat model should serve as a good starting point for EdgeX adopters' own threat models in which EdgeX is a component in the overall architecture. It should be noted, however, that since EdgeX services are taken together as a unit, the impact of the recent microservice authentication changes, which primarily affect EdgeX internals, is not reflected in the threat model.
"},{"location":"security/V3Migration/#api-gateway-breaking-authentication-changes","title":"API Gateway Breaking Authentication Changes","text":"In EdgeX 2.0, the secrets-config
utility was used to create a user account in the API gateway (Kong) and associate it to a user-specified public key. A user would then self-create a JWT, and use it for authentication against the API gateway. These tokens were opaque to EdgeX microservices because their contents were controlled by the user, and only the API gateway had the information needed to validate them.
In EdgeX 3.0, the secrets-config
utility is still used to create a user account, but instead of creating it in the API gateway, the user account is created in the EdgeX secret store, and the Vault identity secrets engine is used to generate and verify JWTs. All EdgeX services implicitly trust the EdgeX secret store and have a secret store token issued to them at startup that can be used to request a JWT from Vault.
Externally-originated requests are performed similarly to how they were done before: provide the JWT in the Authorization
header and direct the request at the API gateway with a path prefix denoting the desired service. The key difference is in obtaining the JWT. In EdgeX 2.0, the client simply generated the JWT using its private key. In EdgeX 3.0, obtaining a JWT is a two-step process. First, authenticate to the EdgeX secret store (Vault) to obtain a secret store token. Second, exchange the secret store token for a JWT. This process is described in detail in the authenticating chapter of the EdgeX documentation. Due to these changes, the secrets-config proxy jwt
helper command has been removed. This same chapter also explains that, similar to Kong, Vault has an extensible authentication mechanism, although only username/password (with a randomized strong password) is enabled out of the box.
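For illustration, a minimal externally-originated request through the gateway might look like the following sketch, assuming a default single-node Docker deployment (the localhost host name, port 8443, the core-data path prefix, and the ${JWT} variable holding a token obtained via the two-step exchange are all assumptions):
curl -k -H \"Authorization: Bearer ${JWT}\" \"https://localhost:8443/core-data/api/v3/ping\"\n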
As was before, all requests (with the exception of a passthrough for Vault authentication) are checked at the API gateway prior to forwarding to the backend service for fulfillment.
"},{"location":"security/V3Migration/#microservice-level-breaking-authentication-changes","title":"Microservice-level Breaking Authentication Changes","text":"EdgeX microservices in EdgeX 3.0 will now require authentication on a per-route basis, even for requests that originate behind the API gateway. Peer-to-peer service requests (such as a device service calling core-metadata, or core-command forwarding a request to a device service) are authenticated automatically. This new behavior may create compatibility issues for custom components that worked fine in EdgeX 2.0 that may suddenly experience authentication failures in EdgeX 3.0. This new behavior may also create issues for 3rd party components, such as the eKuiper rules engine, because of its ability to issue ad-hoc HTTP requests in response to certain events. The main V3 migration guide contains specific guidance for handling eKuiper rules that call back in to EdgeX.
To revert to legacy EdgeX 2.0 behavior (no authentication at the microservice level), set the environment variable EDGEX_DISABLE_JWT_VALIDATION
to true
. JWT validation must be disabled on a per-microservice basis. This will not stop EdgeX microservices from sending JWTs to peer EdgeX microservices; it only disables validation on the receiving side, allowing unauthenticated requests.
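Example - disabling JWT validation for a single service via a compose file override (a sketch only, mirroring the environment override examples elsewhere in this documentation; which services to relax is a deployment decision):
environment:\nEDGEX_DISABLE_JWT_VALIDATION: \"true\"\n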
For sending JWTs, custom EdgeX services have two basic choices. The first is to use one of the pre-built service clients in go-mod-core-contracts
. The other is to use the GetSelfJWT()
method of the SecretProviderExt
interface. The authenticating chapter of the EdgeX documentation explains in greater detail how to use these two methods.
Some minor changes have been made to the secrets-config proxy tls
command:
--snis
argument is no longer supported: the supplied TLS certificate and key will be used for all TLS connections.--incert
option is renamed to --inCert
, and--inkey
option is renamed to --inKey
for consistency of flag names.Several security-related environment variables have been renamed in EdgeX 3.0:
Old Name New Name ADD_KNOWN_SECRETS EDGEX_ADD_KNOWN_SECRETS ADD_PROXY_ROUTE EDGEX_ADD_PROXY_ROUTE ADD_REGISTRY_ACL_ROLES EDGEX_ADD_REGISTRY_ACL_ROLES ADD_SECRETSTORE_TOKENS EDGEX_ADD_SECRETSTORE_TOKENS IKM_HOOK EDGEX_IKM_HOOK"},{"location":"security/V3Migration/#references","title":"References","text":"% secrets-config-proxy(1) User Manuals secrets-config-proxy(1)
"},{"location":"security/secrets-config-proxy/#name","title":"NAME","text":"secrets-config-proxy \u2013 Configure EdgeX API gateway service
"},{"location":"security/secrets-config-proxy/#synopsis","title":"SYNOPSIS","text":"secrets-config proxy SUBCOMMAND [OPTIONS]
"},{"location":"security/secrets-config-proxy/#description","title":"DESCRIPTION","text":"Configures the EdgeX API gateway service.
This command is used to configure the TLS certificate for external connections, create authentication tokens for inbound proxy access, and other related utility functions.
Proxy configuration commands (listed below) require access to the secret store master key in order to generate temporary secret store access credentials.
"},{"location":"security/secrets-config-proxy/#options","title":"OPTIONS","text":"--configDir /path/to/directory/with/configuration.yaml (optional)
Points to directory containing a configuration.yaml file.
EdgeX 3.0
The --confdir
command line option is replaced by --configDir
in EdgeX 3.0.
tls
Configure inbound TLS certificate. This command will replace the default TLS certificate created with EdgeX is started for the first time. Requires additional arguments:
Path to TLS leaf certificate (PEM-encoded x.509) (the file extension is arbitrary). If intermediate certificates are required to chain to a certificate authority, these should also be included. The root certificate authority should not be included.
Path to TLS private key (PEM-encoded).
--keyFilename filename (optional)
Filename of private key file (on target (default \"nginx.key\")
--targetFolder directory-path (optional)
Path to TLS key file (default \"/etc/ssl/nginx\")
adduser
Create an API gateway user by creating a user identity the EdgeX secret store. Requires additional arguments:
Username of the user to add.
--jwtTTL duration (optional)
JWT created by vault identity provider lasts this long (_s, _m, _h, or _d, seconds if no unit) (default \"1h\")
Clients have up to tokenTTL
time available to exchange the secret store token for a signed JWT. The validity period of that JWT is governed by jwtTTL
.
--tokenTTL duration (optional)
Vault token created as a result of vault login lasts this long (_s, _m, _h, or _d, seconds if no unit) (default \"1h\")
The adduser
command creates a credential that enables a use to request a token for the secret store. The intended purpose of this token is to exchange it for a signed JWT. The duration specified here governs the time period within which a signed JWT can be requested.
Note that although these tokens are renewable, there is nothing to be done with the token except for requesting a JWT. Thus, the token renew endpoint is not currently exposed externally.
Normally, secrets-config
uses a service token in the secret store token file. As this token expires from inactivity an hour after it is created, it is possible to point secrets-config
at a resp-init.json
and a root token will be created afresh from the key shares in that file. The --useRootToken
flag is used to tell secrets-config
to use this authentication method to talk to the EdgeX secret store.
Upon completion, adduser
returns a JSON object with a random password
field set. This password is generated from the kernel random source and overwrites any previous password set on the account.
A sample shell script to turn this into an token that can be used for API gateway authentication is as follows:
username=example\npassword=password-from-above\n\nvault_token=$(curl -ks \"http://localhost:8200/v1/auth/userpass/login/${username}\" -d \"{\\\"password\\\":\\\"${password}\\\"}\" | jq -r '.auth.client_token')\n\nid_token=$(curl -ks -H \"Authorization: Bearer ${vault_token}\" \"http://localhost:8200/v1/identity/oidc/token/${username}\" | jq -r '.data.token')\n\necho \"${id_token}\"\n
It is expected the the username/password returned from adduser will be saved for later use. However, if the password is lost, adduser can be run a second time to reset the password.
deluser
Delete a API gateway user. Requires additional arguments:
Username of the user to delete.
jwt
EdgeX 3.0
The jwt
sub-command is no longer supported in EdgeX 3.0.
IKM_HOOK
Enables decryption of an encrypted secret store master key by pointing at an executable that returns an encryption seed that is formatted as a hex-encoded (typically 32-byte) string to its stdout. This optional feature, if enabled, requires pointing at the same executable that was used by security-secretstore-setup to provision and unlock the EdgeX the secret store.
secrets-config(1)
EdgeX Foundry Last change: 2023
"},{"location":"security/secrets-config/","title":"Secrets config","text":"% edgex-secrets-config(1) User Manuals edgex-secrets-config(1)
"},{"location":"security/secrets-config/#name","title":"NAME","text":"edgex-secrets-config \u2013 Perform post-installation EdgeX secrets configuration
"},{"location":"security/secrets-config/#synopsis","title":"SYNOPSIS","text":"edgex-secrets-config [OPTIONS] COMMAND [ARG...]
"},{"location":"security/secrets-config/#description","title":"DESCRIPTION","text":"edgex-secrets-config performs post-installation EdgeX secrets configuration. edgex-secrets-config takes a command that specifies which module is being configured, and module-specific arguments thereafter.
"},{"location":"security/secrets-config/#commands","title":"COMMANDS","text":"help
Return a list of available commands. Use edgex-secrets-config help (command)
for an overview of available subcommands.
proxy
Configure secrets related to the EdgeX reverse proxy. Use edgex-secrets-config help proxy
for an overview of available subcommands.
edgex-secrets-config-proxy(1)
EdgeX Foundry Last change: 2021
"},{"location":"security/security-file-token-provider.1/","title":"NAME","text":"security-file-token-provider -- Generate Vault tokens for EdgeX services
"},{"location":"security/security-file-token-provider.1/#synopsis","title":"SYNOPSIS","text":"security-file-token-provider [-h--configDir \\<configDir>] [-p|--profile \\<name>]
EdgeX 3.0
The --confdir
command line option is replaced by --configDir
in EdgeX 3.0.
security-file-token-provider generates per-service Vault tokens for EdgeX services so that they can make authenticated connections to Vault to retrieve application secrets. security-file-token-provider implements a generic secret seeding mechanism based on pre-created files and is designed for maximum portability. security-file-token-provider takes a configuration file that specifies the services for which tokens shall be generated and the Vault access policy that shall be applied to those tokens. security-file-token-provider assumes that there is some underlying protection mechanism that will be used to prevent EdgeX services from reading each other's tokens.
"},{"location":"security/security-file-token-provider.1/#options","title":"OPTIONS","text":"-h, --help
: Display help text
-cd, --configDir \\<configDir>
: Look in this directory for configuration.yaml instead.
-p, --profile \\<name>
: Indicate configuration profile other than default
EdgeX 3.0
The -c, --confdir
command line option is replaced by -cd, --configDir
in EdgeX 3.0.
This file specifies the TCP/IP location of the Vault service and parameters used for Vault token generation.
SecretService:\nScheme: \"https\"\nServer: \"localhost\"\nPort: 8200\n\nTokenFileProvider:\nPrivilegedTokenPath: \"/run/edgex/secrets/security-file-token-provider/secrets-token.json\"\nConfigFile: \"token-config.json\"\nOutputDir: \"/run/edgex/secrets/\"\nOutputFilename: \"secrets-token.json\"\n
"},{"location":"security/security-file-token-provider.1/#secrets-tokenjson","title":"secrets-token.json","text":"This file contains a token used to authenticate to Vault. The filename is customizable via OutputFilename.
{\n \"auth\": {\n \"client_token\": \"s.wOrq9dO9kzOcuvB06CMviJhZ\"\n }\n}\n
"},{"location":"security/security-file-token-provider.1/#token-configjson","title":"token-config.json","text":"This configuration file tells security-file-token-provider which tokens to generate.
In order to avoid a directory full of .hcl
files, this configuration file uses the JSON serialization of HCL, documented at https://github.com/hashicorp/hcl/blob/master/README.md.
Note that all paths are keys under the \"path\" object.
{\n \"service-name\": {\n \"edgex_use_defaults\": true,\n \"custom_policy\": {\n \"path\": {\n \"secret/non/standard/location/*\": {\n \"capabilities\": [ \"list\", \"read\" ]\n }\n }\n },\n \"custom_token_parameters\": { }\n }\n}\n
When edgex-use-default is true (the default), the following is added to the policy specification for the auto-generated policy. The auto-generated policy is named edgex-secrets-XYZ
where XYZ
is service-name
from the JSON key above. Thus, the final policy created for the token will be the union of the policy below (if using the default policy) plus the custom_policy
defined above.
{\n \"path\": {\n \"secret/edgex/service-name/*\": {\n \"capabilities\": [ \"create\", \"update\", \"delete\", \"list\", \"read\" ]\n }\n }\n}\n
When edgex-use-default is true (the default), the following is inserted (if not overridden) to the token parameters for the generated token. (See https://developer.hashicorp.com/vault/api-docs/auth/token#create-token.)
\"display_name\": token-service-name\n\"no_parent\": true\n\"policies\": [ \"edgex-service-service-name\" ]\n
Note that display_name
is set by vault to be \"token-\" + the specified display name. This is hard-coded in Vault from versions 0.6 to 1.2.3 and cannot be changed.
Additionally, a meta property, edgex-service-name
is set to service-name
. The edgex-service-name property may be used by clients to infer the location in the secret store where service-specific secrets are held.
\"meta\": {\n \"edgex-service-name\": service-name\n}\n
"},{"location":"security/security-file-token-provider.1/#outputdirservice-nameoutputfilename","title":"{OutputDir}/{service-name}/{OutputFilename}","text":"For example: /run/edgex/secrets/edgex-security-proxy-setup/secrets-token.json
For each \"service-name\" in {ConfigFile}
, a matching directory is created under {OutputDir}
and the corresponding Vault token is stored as {OutputFilename}
. This file contains the authorization token generated to allow the indicated EdgeX service to retrieve its secrets.
PrivilegedTokenPath
points to a non-expired Vault token that the security-file-token-provider will use to install policies and create per-service tokens. It will create policies with the naming convention \"edgex-service-service-name\"
where service-name
comes from JSON keys in the configuration file and the Vault policy will be configured to allow creation and modification of policies using this naming convention. This token must have the following policy (edgex-privileged-token-creator
) configured.
path \"auth/token/create\" {\n capabilities = [\"create\", \"update\", \"sudo\"]\n}\n\npath \"auth/token/create-orphan\" {\n capabilities = [\"create\", \"update\", \"sudo\"]\n}\n\npath \"auth/token/create/*\" {\n capabilities = [\"create\", \"update\", \"sudo\"]\n}\n\npath \"sys/policies/acl/edgex-service-*\"\n{\n capabilities = [\"create\", \"read\", \"update\", \"delete\" ]\n}\n\npath \"sys/policies/acl\"\n{\n capabilities = [\"list\"]\n}\n
"},{"location":"security/security-file-token-provider.1/#author","title":"AUTHOR","text":"EdgeX Foundry \\<info@edgexfoundry.org>
"},{"location":"threat-models/secret-store/","title":"EdgeX Foundry Secret Management Threat Model","text":""},{"location":"threat-models/secret-store/#table-of-contents","title":"Table of Contents","text":"The secret management components comprise a very small portion of the EdgeX framework. Many components of an actual system are out-of-scope including the underlying hardware platform, the operating system on which the framework is running, the applications that are using it, and even the existence of workload isolation technologies, although the reference code does support deployment as Docker containers or Snaps.
The goal of the EdgeX secret store is to provide general-purpose secret management to EdgeX core services and applications.
"},{"location":"threat-models/secret-store/background/#motivation","title":"Motivation","text":"The EdgeX Foundry security roadmap is published on the Security WG Wiki:
The security roadmap establishes the requirement for a secret storage engine at the edge, and that furthermore that hardware secure storage should be supported:
Initial EdgeX secrets (needed to start Vault/Kong) will be encrypted on the file system using a secure storage abstraction layer \u2013 allowing other implementations to store these in hardware stores (based on hardware root of trust systems)
The current state of secret storage is described in the Hardware Secure Storage Draft.
The AS-IS architecture resembles the following diagram:
As the diagram notes, the critical secrets for securing the entire on-device infrastructure sit unencrypted on bulk storage media. While the deptiction that the Vault contents are encrypted is true, the key needed to decrypt it is in plaintext nearby.
The Hardware Secure Storage Draft proposes the following future state:
This future state proposes a security service that can encrypt the currently unencrypted data items.
A number of problems must be resolved to make this future state a reality:
Initialization order of containers: containers must block until their prerequisites have been satisfied. It is not sufficient to have only start-ordering, as initialization can take a variable amount of time, and the initialization tasks of a previous step are not necessarily completed before the next step is initiated.
Allowing for variability in the hardware encryption component. A simple bulk encryption/decryption interface does not allow for interesting scenarios based on local attestation, for example.
Distribution of Vault tokens to services.
When using Vault at the edge, there are a number of general problems that must be solved as illustrated in the below diagram:
Working top to bottom and left to right:
The secret management design for EdgeX can be said to be finished when there is a sufficiently secure solution to the above challenges for the supported execution models.
"},{"location":"threat-models/secret-store/background/#next-steps-for-edgex","title":"Next Steps for EdgeX","text":"All parts of the system must collaborate in order to ensure a robust secret management design. What is needed is a systematic approach to secret management that will close the gaps between the AS-IS and TO-BE future state. This systematic approach is based on formal threat model with the aim that the system will meet some critical security objectives. The threat model is built against a proposed design and validates the security architecture of the design. Through threat modeling, we can identify assets, adversaries, threats, and mitigations against those threats. We can then make a prioritized implementation plan to address those threats. More importantly, for someone adopting EdgeX, the documented threat model outlines the threats that the framework has been designed to protect against and by omission, the threats that it has not.
"},{"location":"threat-models/secret-store/high_level_design/","title":"Detailed Design","text":"This document gets into the design details of the proposed secret management architecture, starting with a design overview and going into greater detail for each subsystem.
"},{"location":"threat-models/secret-store/high_level_design/#design-overview","title":"Design Overview","text":"In context of the stated future goal to support hardware-based secret storage, it is important to note that in a Vault-based design, not every secret is actually wrapped by a hardware-backed key. Instead, the secrets in Vault are wrapped by a single master key, and the encryption and decryption of secrets are done in a user-level process in software. The Vault master key is then wrapped by one more additional keys, ultimately to a root key that is hardware-based using some authorization mechanism. In a PKCS#11 hardware token, authorization is typically a PIN. In a TPM, authorization is typically a set of PCR values and an optional password. The idea is that the Vault master key is eventually protected by some uncopyable unique secret attached to physical hardware.
The hardware may or may not have non-volatile tamper-resistant storage. Non-volatile storage is useful for integrity protection as well as in pre-OS scenarios. An example of the former would be to store a hash value for HTTP Public Key Pinning (HPKP) in a manner that makes it difficult for an attacker to pin a different key. An example of the latter would be storing a LUKS disk encryption key that can decrypt a root file system when normal file system storage is not yet available. If non-volatile storage is available, it is often available only in very limited quantity.
Obvious with the above design is that at some point along the line, the Vault master key or a wrapping key is observably exposed to user-mode software. In fact, the number two recommendation for Vault hardening is \"single tenancy\" which is further explained, in priority order, as (a) giving Vault its own physical machine, (b) giving Vault its own virtual machine, or (c) giving Vault its own container. The general solution to the exposure of the Vault master key or a wrapping key is to use a Trusted Execution Environment (TEE) to limit observability. There is currently no platform- and architecture-independent TEE solution.
"},{"location":"threat-models/secret-store/high_level_design/#high-level-design","title":"High-level design","text":"Figure 1: High-level design.
The secrets to be protected are the application secrets (P-1). The application secrets are protected with a per-service Vault service token (S-1). The Vault service token is delivered by a \"token server\" running in the security service to a pre-agreed rendezvous location, where mandatory access control, namespaces, or file system permissions constrain path accessibility. Vault access tokens are simply 128-bit random handles that are renewed at the Vault server. They can be shared across multiple instances of a load-balanced service, and unlike a JWT there is no need to periodically re-issue them if they have not expired.
The token server has its own non-root token-issuing token (S-3) that is created by the security service with the root token after it has initialized or unlocked the vault but before the root token is revoked. (S-4) Because of the sensitive nature of this token, it is co-located in the security service, and revoked immediately after use.
The actual application secrets are stored in the Vault encrypted data store (S-6) that is logically stored in Consul's data store (S-7). The vault data store is encrypted with a master key (S-5) that is held in Vault memory and forgotten across Vault restarts. The master key must be resupplied whenever Vault is restarted. The security service encrypts the master key using AES-256-GCM where the key (S-13) is derived using an RFC5869 key derivation function (KDF). The input key material for the KDF originates from a vendor-defined plugin that interfaces with a hardware security mechanism such as a TPM, PKCS11-compatible HSM, trusted execution environments (TEE), or enclave. An encrypted Vault master key is what is ultimately saved to storage.
Confidentiality of the secret management APIs is established using server-side TLS. The PKI initialization component is responsible for generating a root certificate authority (S-8), one or more intermediate certificate authorities (S-9), and several leaf certificates (S-10) needed for initialization of the core services. The PKI can be generated afresh every boot, or installed during initial provisioning and cached. PKI intialization is covered next.
"},{"location":"threat-models/secret-store/high_level_design/#pki-initialization","title":"PKI Initialization","text":"Figure 2: PKI initialization.
PKI initialization must happen before any other component in the secret management architecture is started because Vault requires a PKI to be in place to protect its HTTP API. Creation of a PKI is a multi-stage operation and care must be taken to ensure that critical secrets, such as the the CA private keys, are not written to a location where they can be recovered, such as bulk storage devices. The PKI can be created on-device at every boot, at device provisioning time, or created off-device and imported. Caching of the PKI is optional if the PKI is created afresh every boot, but required otherwise.
If the implementation allows, the private keys for certificate authorities should be destroyed after PKI generation to prevent unauthorized issuance of new leaf certificates, except where the certificate authority is stored in Vault and controlled with an appropriate policy. Following creation of the PKI, or retrieving it from cache, the PKI initialization is responsible for distributing keying material to pre-agreed per-service drop locations that service configuration files expect to find them.
PKI initialization is not instantaneous. Even if PKI initialization is started first, dependent services may also be started before PKI initialization is completed. It is necessary to implement init-blocking code in dependent services that delays service startup until PKI assets have been delivered to the service.
Most dependent services do not support encrypted TLS private keys. File access controls offered by the underlying execution environment are their only protection. A potential future enhancement might be to re-use the key derivation strategy used earlier to generate additional keys to encrypt the cached PKI keying material at rest.
(Update: ADR 0015, adopted after this threat model was written, stipulates that TLS will not be used for single-node deployments of EdgeX.)
"},{"location":"threat-models/secret-store/high_level_design/#vault-initialization-and-unsealing-flow","title":"Vault initialization and unsealing flow","text":"Figure 3: Vault initialization and unsealing flow
When the security service starts the first thing that it does is check to see if a hardware security hook has been defined. The presence of a hardware security hook is indicated by an environment variable, IKM_HOOK, that points to an executable program. The security service will run the program and look for a hex-encoded key on its standard output. If a key is found, it will be used as the input key material for the HMAC key deriviation function, otherwise, hardware security will not be used. The input key material is combined with a random salt that is also saved to disk for later retrieval. The salt ensures that unique encryption keys will be used each time EdgeX is installed on a platform, even if the underlying input key material does not change. The salt also defends against weak input key material.
"},{"location":"threat-models/secret-store/high_level_design/#initialization-flow","title":"Initialization flow","text":"Next, the security service will determine if Vault has been initialized. In the case that Vault is uninitialized, Vault's initialization API will be invoked, which results in a set of keys that can be used to reconstruct a Vault master key. When hardware security is enabled, the input key material and salt are fed into the key derivation function to generate a unique AES-256-GCM encryption key for each key shard. The encrypted keys along with nonces will be persisted to disk. AES-GCM protects against padding oracle attacks, but is sensitive to re-use of the salt value. This weakness is addressed both by using a unique encryption key for each shard, as well as the expectation that encryption is performed exactly once: when Vault is initialized. The Vault response is saved to disk directly in the case that hardware security is not enabled.
"},{"location":"threat-models/secret-store/high_level_design/#unseal-flow","title":"Unseal flow","text":"If Vault is found to be in an initialized and sealed state, the Vault master key shards are retrieved from disk. If they are encrypted, they will be encrypted by reversing the process performed during initialization. The key shards are then fed back to Vault until the Vault is unsealed and operational.
"},{"location":"threat-models/secret-store/high_level_design/#token-issuing-flow","title":"Token-issuing flow","text":"Figure 7: Token-issuing flow.
"},{"location":"threat-models/secret-store/high_level_design/#client-side","title":"Client side","text":"Every service that wants to query Vault must link to a secrets module either directly (go-mod-secrets) or indirectly (go-mod-bootstrap) or implement their own Vault interface. The module must take as input a path to a file that contains a Vault access token specific to that service. There is currently no secrets module for the C SDK.
Clients must be prepared to handle a number of error conditions while attempting to access the secret store:
Judicious use of retry loops should be sufficient to handle most of the above issues.
"},{"location":"threat-models/secret-store/high_level_design/#server-side","title":"Server side","text":"On the server side, the Vault master key will be used to generate a fresh \"root token\". The root token will generate a special \"token-issuing token\" that will generate tokens for the EdgeX microservices. The root token will then be revoked, and a \"token provider\" process with access to the token-issuing token will be launched in the background.
EdgeX will provide a single reference implementation for the token provider: * security-file-token-provider: This token provider will consume a list of services that require tokens, along with a set of customizable parameters. At startup, the service tokens are created in bulk and delivered via the host file system on a per-service basis.
The token-issuing token will be revoked upon termination of the token provider.
"},{"location":"threat-models/secret-store/high_level_design/#token-revocation","title":"Token revocation","text":"Vault tokens are persistent. Although they will automatically expire if they are not renewed, inadvertent disclosure of a token would be difficult to detect. This condition could allow an attacker to maintain an unauthorized connection to Vault indefinitely. Since tokens do expire if not renewed, it is necessary to generate fresh tokens on startup. Therefore, part of the startup process is the revokation of all previously Vault tokens, as a mitigation against token disclosure as well as garbage collection of obsolete tokens.
"},{"location":"threat-models/secret-store/threat_model/","title":"Threat Model","text":""},{"location":"threat-models/secret-store/threat_model/#historical-context","title":"Historical Context","text":"This threat model was written in the EdgeX Fuji timeframe. Significant changes have occured to EdgeX since that time. This document serves as a historical record of motification for security changes that occured in the Fuji, Geneva, Hanoi, and Ireland releases of EdgeX.
This threat model also covers ONLY THE EDGEX SECRET STORE and not the EdgeX project as a whole.
"},{"location":"threat-models/secret-store/threat_model/#assumptions","title":"Assumptions","text":"The EdgeX Framework is a API-based software framework that strives to be platform and architecture-independent. The threat model considers only the following two deployment scenarios:
The threat model presented in this document analyzes the secret management subsystem of EdgeX, and has considerations for both of the above runtime environments, both of which implement protections beyond a stock user/process runtime environment. In generic terms, the secret management threat model assumes:
Any particular of implementation of EdgeX should perform its own threat modeling activity as part of securing the implementation, and may use this document to supplement analysis of the secret management subsystem of EdgeX.
"},{"location":"threat-models/secret-store/threat_model/#recommended-hardening","title":"Recommended Hardening","text":"Physical security and hardening of the underlying platform is out-of-scope for implementation by the EdgeX reference code. But since the privileged administrator can bypass all access controls, such hardening is nevertheless recommended: the threat model assumes that there are no unauthorized privileged administrators. One should look to industry standard hardening guides, such as CIS Benchmarks for hardening operating system and container runtimes. Additionally, typical EdgeX base platforms are likely to support the following types of hardening out-of-the-box(1), and these should be enabled where possible.
The EdgeX secret store provides hooks for utilizing hardware secure storage to ensure that secrets stored on the device can only be decrypted on that device. Implementations should use hardware security features where a suitable plug-in is available. For maximum benefit, hardware security should be combined with verified/secure boot, file system protection, and other software-level hardening.
Lastly, due consideration should be given to the security of the software supply chain: it is important to ensure that code deployed to a device is what is expected and free of known vulnerabilities. This implies an ability to update a device in the field to ensure that it remains free of known vulnerabilities.
Footnotes:
(1) Most Linux distributions support verified/secure boot. Microsoft Windows enables verified/secure boot by default, and can automatically use TPM hardware if full disk encryption is enabled and will fail to decrypt if verified/secure boot is disabled.
"},{"location":"threat-models/secret-store/threat_model/#protections-afforded-by-modeled-runtime-environments","title":"Protections afforded by modeled runtime environments","text":"The threat model considers Docker-based and Snap-based deployments. Each of these deployment environments offer sandboxing protections that go beyond a standard Unix user and process model. As mentioned earlier, the threat model assumes the sandboxing protections:
In the Linux environment, most of these protections are based on a combination of two technologies: Linux namespaces and mandatory access control (MAC) based on Linux Security Module (LSM).
"},{"location":"threat-models/secret-store/threat_model/#docker-based-runtimes","title":"Docker-based runtimes","text":"All services running within a single container are assumed to be within the same trust boundary. Docker-based runtimes are expected to provide the following properties:
"},{"location":"threat-models/secret-store/threat_model/#general-protections","title":"General protections","text":"root
user in a container is subject to namespace constraints and restricted set of capabilities./var/lib/docker
where they are observable on the host and stored persistently.All services running within a single snap are assumed to be within the same trust boundary. However, even in a snap, due to the use of mandatory access control, there are stronger-than-normal process isolation policies in place, as documented below.
"},{"location":"threat-models/secret-store/threat_model/#general-protections_1","title":"General protections","text":"root
user in a snap is subject to namespace constraints and MAC rules enforced by Linux Security Modules (LSMs) configured as part of the snap.$XDG_RUNTIME_DIR
which is a user-private user-writable-directory that is also per-snap. Snaps can write persistent data local to the snap to the $SNAP_DATA
folder.mount(2)
, capability./proc/mem
or to ptrace(2)
other processes.The security objectives call out the security goals of the architecture/design. They are:
Primary assets are the assets at the level of the conceptual data model of the system and primarily represent \"real-world\" things.
AssetId Name Description Attack Points P-1 Application secrets The things we are trying to protect In use, in transit, in storage"},{"location":"threat-models/secret-store/threat_model/#secondary-assets","title":"Secondary Assets","text":"Secondary assets are assets are used to support or protect the primary assets and are usually implementation details versus being part of the conceptual data model.
AssetId Name Description Attack Points S-1 Vault service token Vault service tokens are issued per-service and used by services to authenticate to vault and retrieve per-service application secrets. In-flight via API, at rest S-3 Vault token-issuing-token Used by the token issuing service to create vault service tokens for other services. (Called out separately from S-1 due to its high privilege.) In-flight via API, at rest S-4 Vault root token A special token created at Vault initialization time that has all capabilities and never expires. In-flight via API, at rest S-5 Vault master key A root secret that encrypts all of Vault's other secrets. In-flight via API, at rest, in-use. S-6 Vault data store A data store encrypted with the Vault master key that contains the contents of the vault. In storage S-7 Consul data store Back-end storage engine for vault data store. In storage S-8 CA key Private keys for on-device PKI certificate authority. In use, in transit, in storage S-9 Issuing CA key Private keys for on-device PKI issuing authorities. In use, in transit, in storage S-10 Leaf TLS key Private keys for TLS server authentication for on-device services (e.g. Vault service, Consul service) In use, in transit, in storage S-13 IKM Initial keying material as input to HMAC KDF In use, in transit, in storageNote that asset S-9 (issuing CA key) is not currently implemented: in all current EdgeX releases all TLS leaf certificates are derived from the root CA.
"},{"location":"threat-models/secret-store/threat_model/#attack-surfaces","title":"Attack Surfaces","text":"This table lists components in the system architecture that have assets of potential value to an attacker and how a potential attacker may attempt to gain access to those components.
System Element Compromise Type Assets Exposed Attack Method Consul API IA Vault data store, service location data/registry, settings Data modification, DoS against API Vault API CIA All application secrets, all vault tokens Data channel snooping or data modification, DoS against API Host file system CIA PKI private keys, Vault tokens, Vault master key, Vault store, Consul store Snooping or data modification, deletion of critical files PKI initiazation agent CI Private keys for on-device PKI Snooping generation of assets or forcing predictable PKI Vault initialization agent CI Vault master key, Vault root token, token-issuing-token, encryption key for Vault master key Snooping generation of assets or tampering with assets Token server API CIA Token issuing token, service tokens Data channel snooping, tampering with asset policies, or forcing service down Process memory CIA Most assets excluding hardware and storage media Read or modify process memory through /proc or related IPC mechanisms"},{"location":"threat-models/secret-store/threat_model/#adversaries","title":"Adversaries","text":"The adversary model is use-case specific, but for the sake of discussion assume the following simplistic list:
Persona Motivation Starting Access Skill / Effort Thief (Larceny) Quick cash by reselling stolen components. None Low Remote hacker Financial gain by harvesting resellable information or performing ransomware attacks via exploitable vulnerabilities. Network Medium Malicious administrator Out of scope. Cannot defend against attacks originating at level of system software. N/A N/A Malicious non-privileged service Escalation of privilege and data exfiltration. Malicious services includes software supply chain attackers. User mode access Medium Industrial espionage / Malicious developer Financial gain or harm by obtaining access to back-end systems and/or competitive data. Unknown HighThe malicious administrator is out of scope: the threat model assumes that there are no unauthorized privileged administrators on the device. This must be ensured through hardening of the underlying platform, which is out of scope.
Malicious non-privileged services are a concern. This can occur through a wide variety of software supply chain attacks, as well as implementation bugs that permit a service to exhibit unintended functionality.
The industrial espionage or malicious developer adversary deserves some explanation. Whereas the remote hacker adversary is primarily motivated by a one-time attack, the industrial espionage attacker seeks to maintain a persistent foothold or to insert back-doors into an entire fleet of devices. Making each device unique (e.g. device-unique secrets) helps to mitigate against break-once-run-everywhere (BORE) attacks.
"},{"location":"threat-models/secret-store/threat_model/#threat-matrix","title":"Threat Matrix","text":"The threat matrix indicates what assets are at risk for the various attack surfaces in the system.
Consul API Vault API Host FS PKI agent Vault agent Token svc /proc /mem Application secrets *a *p Vault service token *bd *b *bd *p Token-issuing-token *e *e *e *e *p Vault root token *f *f *f *p Vault master key *g *g *g *p Vault DS *hi Consul DS *j *j PKI CA *m *k *p PKI intermediate *m *l *p PKI leaf *m *m *p IKM *q *p"},{"location":"threat-models/secret-store/threat_model/#threats-and-mitigations","title":"Threats and Mitigations","text":"Format:
(identifier) Threat name
The EdgeX secret store threat model calls out a particular aspect of the Vault-based secret store architecture upon which the whole EdgeX secret store depends: the Vault master key. Because plaintext storage of the Vault master key at rest would be a known security weakness, the high level design calls for the Vault master key to be encrypted on storage.
One way of doing this would be to simply encrypt the whole drive upon which the Vault master key is stored. This is a good solution: it would encrypt not only the Vault master key, but also other part of the system to harden them against offline tampering and information disclosure risks. This solution also has drawbacks as well: whole volume encryption may slow down boot times and have a runtime performance impact on constrained devices without hardware-accelerated crypto.
The Vault Master Key Encryption feature of EdgeX enables a system designer to specifically target encryption of the Vault master key, and enables a variety of flexible use cases that are not tied to volume encryption such as key escrow (where a key is stored on another machine on the network), smart cards or USB HSMs (where a key us stored in a dongle or chip card), or TPM (security hardware found on many PC-class motherboards).
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#internal-design","title":"Internal design","text":"As stated in the high level design, an RFC-5869 key derivation function (KDF) is used to produce a set of wrapping keys that are used by the vault-worker process to encrypt the Vault master key.
An RFC-5869 KDF requires three inputs. A change to any input results in a different output key:
Input keying material (IKM). It need not be (but should be) cryptographically strong, and is the \"secret\" part of the KDF.
A salt. A non-secret random number that adds to the strength of the KDF.
An \"info\" argument. The info argument allows multiple keys to be generated from the same IKM and salt. This allows the same KDF to generate multiple keys each used for a different purpose. For instance, the same KDF can be used to generate an encryption key to protect the PKI at-rest.
The Vault Master Key Encryption feature consumes the IKM from a Unix-style pipe. The IKM is provided by a vendor-defined mechanism, and is intended to be tied into security hardware on the device, be device-unique, and explicitly not stored in the file system.
To further strengthen the solution, an implementation could choose to engineer a solution whereby the IKM is only released a configurable number of times per boot, so that malware that runs on the system post-boot cannot retrieve it.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#ikm-hook","title":"IKM HOOK","text":"The Vault Master Key Encryption feature is embedded into the EdgeX security-secretsetore-setup
utility. It is enabled by setting an environment variable, EDGEX_IKM_HOOK
, containing the path to an executable that implements the IKM interface, described below, when the security-secretstore-setup
executable is run in early boot to initialize or unseal the EdgeX secret store.
When this feature is enabled, the Vault master key is encrypted at rest, and cannot be recovered unless the same IKM is provided as when the secretstore was initialized.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#ikm-interface","title":"IKM interface","text":""},{"location":"threat-models/secret-store/vault_master_key_encryption/#name","title":"NAME","text":"ikm - Return input key material for a hash-based KDF.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#synopsis","title":"SYNOPSIS","text":"ikm
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#description","title":"DESCRIPTION","text":"ikm outputs initial keying material to stdout as a lowercase hex string to be used for the default EdgeX software implementation of an RFC-5869 KDF.
The ikm can output any number of octets. Typically, the KDF will pad the ikm if it is shorter than hashlen, and hash the ikm if it is longer than hashlen. Thus, if ikm returns variable-length output it is advantageous to ensure that the output is always greater than hashlen, where hashlen depends on the hash function used by the KDF.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#example","title":"EXAMPLE","text":"ikm\n64acd82883269a5e46b8b0426d5a18e2b006f7d79041a68a4efa5339f25aba80\n
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#sample-implementations","title":"Sample implementations","text":"This section lists example implementations of the EdgeX Hardware Security Hook.
"},{"location":"threat-models/secret-store/vault_master_key_encryption/#tutorial-configuring-edgex-hardware-security-hooks-to-use-a-tpm-on-intel-developer-zone","title":"Tutorial: Configuring EdgeX Hardware Security Hooks to use a TPM on Intel\u00ae Developer Zone","text":"There is a tutorial published on Intel\u00ae Developer Zone that uses TPM hardware through a device driver interface to encrypt the Vault master key shares. The sample uses TPM-based local attestation to attest the system state prior to releasing the IKM. The sample is based on the tpm2-software project in GitHub and is specifically designed to run as a statically-linked executable that could be injected into a Docker container. Although not a complete solution, it is an illustrative sample that demonstrates in concrete terms how to use the TSS C API to access TPM functionality.
"},{"location":"threat-models/stride-model/EdgeX-STRIDE/","title":"EdgeX Foundry STRIDE Threat Model","text":"STRIDE is an acroymn standing for:
STRIDE is a type of security threat modeling to identify security vulnerabilities and risks associated with IT systems and then put methods (mitigation) in place to protect against the vulnerabilities and risks. Specifically, the STRIDE approach to threat modeling looks for common threats as represented in the acroymn in a consistent and methodical way.
"},{"location":"threat-models/stride-model/EdgeX-STRIDE/#report","title":"Report","text":"There are many tools to help create STRIDE threat models. Many of these tools will allow the developer to visually diagram the system and then automatically analyze the diagram and generate STRIDE risks which the developer must then explore and mitigate.
This EdgeX STRIDE model was created using Microsoft's Threat Modeling Tool (MTMT). It is available for free. Documentation on the product is available here.
If you wish to use the tool, make changes and/or generate your own reports you will need to import the following files into the Microsoft TMT:
Created on 12/27/2022 3:06:56 PM
generated from HTML by https://www.convertsimple.com/convert-html-to-markdown/ embedded images extracted with Pandoc https://pandoc.org (Pandoc did not do well with tables so just used for image extraction) using the command below
pandoc -o EdgeXFoundryThreatReportV2.2.md -t markdown -f markdown EdgeXFoundryThreatReportV2.2-original.md --extract-media=./images\n
Threat Model Name: EdgeX Foundry Threat Model
Owner: Jim White (IOTech Systems)
Reviewer: Bryon Nevis, Lenny Goodell, Jim Wang (all from Intel), Farshid Tavakolizadeh (Canonical), Rodney Hess (Beechwoods)
Contributors:
Description: General Threat Model for EdgeX Foundry - inclusive of security elements (Kong, Vault, etc).
Assumptions: EdgeX is platform agnostic, but this Threat model assumes the underlying OS is a Linux distribution. EdgeX can run containerized or non-containerized (natively). This Threat Model assumes EdgeX is running in a containerized environment (Docker). EdgeX micro services can run distributed, but this Threat Model assumes EdgeX is running on a single host (single Docker deamon with a single Docker network unless otherwise specified). Many different devices/sensors can be connected to EdgeX via its device services. This Threat model treats all sensors/devices the same (which is not always the case given the varoius protocols of support). Per https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/, additional hardening such as secure boot with hardware root of trust, and secure disk encryption are outside of EdgeX control but would greatly improve the threat mitigation.
External Dependencies: Operating system and hardware (including devices/sensors) Device/sensor drivers Possibly a cloud system or external enterprise system that EdgeX gets data to A message bus broker (such as an MQTT broker)
"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#notes","title":"Notes:","text":"Id Note Date Added By 1 Tampering with Data - This is a threat where information in the system is changed by an attacker. For example, an attacker changes an account balance Unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet 8/25/2022 6:40:40 PM DESKTOP-SL3KKHH\\jpwhi 2 XSS protections: filter input on arrival (don't do), encode data on oputput (don't do), use appropriate headers (do), use CSP (dont do) 8/25/2022 6:54:16 PM DESKTOP-SL3KKHH\\jpwhi 3 priority is determined by the likelihood of a threat occuring and the severity of the impact of its occurance 8/25/2022 7:11:40 PM DESKTOP-SL3KKHH\\jpwhi 4 Repudiation - don't track and log users actions; can't prove a transaction took place 8/25/2022 7:13:14 PM DESKTOP-SL3KKHH\\jpwhi 5 Elevation of privil - authorized or unauthorized user gains access to info not authorized 8/25/2022 7:16:24 PM DESKTOP-SL3KKHH\\jpwhi 6 Remote code execution: https://www.comparitech.com/blog/information-security/remote-code-execution-attacks/ buffer overflow sanitize user inputs proper auth use a firewall 8/25/2022 7:21:28 PM DESKTOP-SL3KKHH\\jpwhi 7 Privilege escalation attacks occur when bad actors exploit misconfigurations, bugs, weak passwords, and other vulnerabilities 8/27/2022 1:57:18 PM DESKTOP-SL3KKHH\\jpwhi"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#threat-model-summary","title":"Threat Model Summary:","text":"Not Started 0 Not Applicable 27 Needs Investigation 14 Mitigation Implemented 100 Total 141 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-edgex-foundry-big-picture","title":"Diagram: EdgeX Foundry (Big Picture)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#edgex-foundry-big-picture-diagram-summary","title":"EdgeX Foundry (Big Picture) Diagram Summary:","text":"Not Started 0 Not Applicable 20 Needs Investigation 3 Mitigation Implemented 96 Total 119 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-config","title":"Interaction: config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#1-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"1. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Consul (configuration) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: EdgeX services that use Consul must use a Vault access token provided in bootstrapping of the service. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/. There is also per service ACL rules in place to limit Consul access. As of the Ireland release, access of Consul requires ACL token header X-Consul-Token in any HTTP calls. Moreover, Consul itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. 
In other words, with the required Consul ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in the Key-Value (KV) store are now better protected. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#2-spoofing-of-source-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"2. Spoofing of Source Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case, Consul) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-configuration","title":"Interaction: configuration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#3-spoofing-of-source-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"3. Spoofing of Source Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed the EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#4-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"4. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Configuration Files can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Disclosure of configuration files is not important. Configuration data is not considered sensitive. As long as the configuration files are not manipulated, access to configuration files is not deemed a threat. All secret configuration is made available through Vault. 
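To illustrate the point that secrets live in Vault rather than in configuration files, the sketch below reads a secret over Vault's HTTP API using a service token. The address, token, and secret path are illustrative assumptions; EdgeX services normally go through the go-mod-secrets client rather than calling Vault directly.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical values: a real EdgeX service receives its Vault token at
	// bootstrapping and uses the go-mod-secrets client instead of raw HTTP.
	const vaultURL = "http://localhost:8200/v1/secret/edgex/core-data/redisdb"
	const vaultToken = "REPLACE_WITH_VAULT_SERVICE_TOKEN"

	req, err := http.NewRequest(http.MethodGet, vaultURL, nil)
	if err != nil {
		panic(err)
	}
	// Vault rejects requests that do not carry a valid token.
	req.Header.Set("X-Vault-Token", vaultToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```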
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-data","title":"Interaction: data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#5-spoofing-of-source-data-store-redis-state-mitigation-implemented-priority-low","title":"5. Spoofing of Source Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case, Redis) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#6-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"6. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Redis can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Access control credentials for Redis are secured in Vault (provided to EdgeX services at bootstrapping but otherwise unknown). Access without credentials is denied. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#7-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"7. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-published-message","title":"Interaction: published message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#8-potential-excessive-resource-consumption-for-edgex-foundry-or-message-bus-broker-state-mitigation-implemented-priority-medium","title":"8. 
Potential Excessive Resource Consumption for EdgeX Foundry or Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Message Bus Broker take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: The EdgeX message broker is either Redis Pub/Sub or an MQTT broker like Mosquitto and runs as a container in a Docker network that, by default with security on, does not allow direct access to the broker. Access to publish or subscribe to cause it to use excessive resources would require authorized access to the host as the port to the internal message broker is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service producing more messages than the broker can handle) that result in a DoS event. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#9-spoofing-of-destination-data-store-message-bus-state-mitigation-implemented-priority-low","title":"9. Spoofing of Destination Data Store Message Bus\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Message Bus. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: The message bus when requiring a broker (MQTT broker for example) is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileged access to the host. Message broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-queries-data","title":"Interaction: queries & data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#10-spoofing-of-destination-data-store-redis-state-mitigation-implemented-priority-low","title":"10. Spoofing of Destination Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Redis. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case, Redis) would not have the appropriate cert in place to participate in the communications. 
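As a sketch of the adopter-side hardening mentioned above (credentials plus optional TLS toward Redis), a Go client might be configured as follows. The address, password source, and TLS settings are illustrative assumptions, not the wiring EdgeX services use internally.

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	// Hypothetical values: in EdgeX the Redis password is retrieved from
	// Vault at service bootstrapping, not hard-coded.
	client := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Password: "REPLACE_WITH_PASSWORD_FROM_VAULT",
		// Optional adopter mitigation: TLS, so a spoofed Redis without the
		// expected certificate cannot participate in the communication.
		TLSConfig: &tls.Config{MinVersion: tls.VersionTLS12},
	})

	if err := client.Ping(context.Background()).Err(); err != nil {
		fmt.Println("redis not reachable or credentials rejected:", err)
		return
	}
	fmt.Println("authenticated connection to Redis established")
}
```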
Database host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#11-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"11. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. Docker containers do not share the host's network interface by default and instead use virtual ethernet adapters and bridges. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#12-potential-excessive-resource-consumption-for-edgex-foundry-or-redis-state-mitigation-implemented-priority-low","title":"12. Potential Excessive Resource Consumption for EdgeX Foundry or Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Redis take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Redis runs as a container in a Docker network that, by default with security on, does not allow direct access to the database. Access to query or push data into it to cause it to use excessive resources would require authorized access to the host as the port to the database is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing too much data into it) that result in a DoS event. EdgeX does have a routine with customizable configuration that \"cleans up\" and removes older data so that \"normal\" or otherwise expected use of the database for persistence does not result in a DoS. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#13-spoofing-of-destination-data-store-vault-state-mitigation-implemented-priority-low","title":"13. Spoofing of Destination Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Vault. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case, Vault) would not have the appropriate cert in place to participate in the communications. EdgeX services that use Vault must use the go-mod-secrets client or a Vault service token to access its secrets (which is revoked by default). See https://docs.edgexfoundry.org/2.3/security/Ch-SecretStore/#using-the-secret-store Vault host and port are configured from static configuration or environment overrides (trusted input) and not Consul, making it difficult to misdirect services' access to Vault. See EdgeX Threat Model documentation (https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/#threat-matrix) for additional considerations and mitigation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#14-potential-excessive-resource-consumption-for-edgex-foundry-or-vault-state-mitigation-implemented-priority-low","title":"14. Potential Excessive Resource Consumption for EdgeX Foundry or Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Vault take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Vault runs as a container in a Docker network that, by default with security on, does not allow direct access to the secret store. Access to query or push data into it to cause it to use excessive resources would require authorized access to the host as the port to the secret store is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing too many secrets into it) that result in a DoS event. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query_1","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#15-spoofing-of-destination-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"15. Spoofing of Destination Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (REST authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the REST caller would not be properly authenticated by a spoofed Kong, and any query request would therefore be denied. 
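A minimal sketch of an authenticated call through the API gateway is shown below. The gateway URL, port, and route are assumptions for illustration, and the JWT would be one issued for the gateway rather than the placeholder string used here.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical gateway route and token: adjust to your deployment.
	const gatewayURL = "https://localhost:8443/core-data/api/v2/ping"
	const jwt = "REPLACE_WITH_GATEWAY_JWT"

	// The gateway typically fronts EdgeX with a deployment-specific
	// certificate; verify it properly in production.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}

	req, err := http.NewRequest(http.MethodGet, gatewayURL, nil)
	if err != nil {
		panic(err)
	}
	// Requests without a valid token are rejected at the gateway,
	// which is what blocks an unauthenticated or spoofed caller.
	req.Header.Set("Authorization", "Bearer "+jwt)

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```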
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#16-the-devicesensor-rest-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"16. The Device/Sensor (REST authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (REST authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#17-data-store-denies-devicesensor-rest-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"17. Data Store Denies Device/Sensor (REST authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (REST authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#18-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-medium","title":"18. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#19-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"19. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. 
Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for values outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query_2","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#20-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"20. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or subscriber to the broker. Physical and system security is required to protect these and mitigate this threat. Query requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#21-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"21. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or publisher to the broker. Physical and system security is required to protect these and mitigate this threat. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#22-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"22. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data that cause the broker or subscriber to go offline or appear unresponsive - depending on the capabilities of the broker or subscribing application. 
In the opposite direction, an MQTT publisher could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system or MQTT broker) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#23-data-flow-sniffing-state-mitigation-implemented-priority-high","title":"23. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#24-data-store-denies-devicesensor-via-external-mqtt-broker-authenticated-potentially-writing-data-state-mitigation-implemented-priority-high","title":"24. Data Store Denies Device/Sensor (via external MQTT broker - authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Device/Sensor (via external MQTT broker - authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#25-the-devicesensor-via-external-mqtt-broker-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"25. The Device/Sensor (via external MQTT broker - authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (via external MQTT broker - authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#26-spoofing-of-destination-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"26. 
Spoofing of Destination Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT query sender (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to publish. The EdgeX framework has support for storing secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#27-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"27. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT publisher of a query (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to make its request. The EdgeX framework has support for storing secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query-or-actuation","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#28-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"28. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#29-spoofing-of-destination-data-store-devicesensor-state-needs-investigation-priority-high","title":"29. Spoofing of Destination Data Store Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. 
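For the external-MQTT-broker interactions discussed above (threats 22-27), the MQTTS-plus-credentials mitigation might look roughly like the following sketch. The broker URL, credentials, and topic are placeholders; a real device service would pull them from its configuration and the EdgeX secret store rather than hard-coding them.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Hypothetical broker and credentials: a real device service reads these
	// from configuration and the secret store.
	opts := mqtt.NewClientOptions().
		AddBroker("ssl://broker.example.com:8883"). // MQTTS endpoint
		SetClientID("edgex-device-mqtt-example").
		SetUsername("REPLACE_USER").
		SetPassword("REPLACE_PASSWORD").
		SetTLSConfig(&tls.Config{MinVersion: tls.VersionTLS12})

	client := mqtt.NewClient(opts)
	token := client.Connect()
	if !token.WaitTimeout(10*time.Second) || token.Error() != nil {
		fmt.Println("could not connect to broker:", token.Error())
		return
	}
	defer client.Disconnect(250)

	// An unauthenticated or spoofed publisher would fail before reaching here.
	client.Publish("edgex/command/response/example", 0, false, `{"ok":true}`)
}
```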
Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#30-the-devicesensor-data-store-could-be-corrupted-state-not-applicable-priority-high","title":"30. The Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor. Ensure the integrity of the data flow to the data store. For example: a man-in-the-middle attack on the wire between EdgeX and the wired device/sensor, or an attack on the sensor (jiggling a vibration sensor) Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device or intercept/use of the data to the device/sensor is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Additional optional mitigation ideas require modifications to the EdgeX device service. The device service could be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only prevent the data from being used in the rest of EdgeX once received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#31-data-store-denies-devicesensor-potentially-writing-data-state-mitigation-implemented-priority-low","title":"31. 
Data Store Denies Device/Sensor Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#32-data-flow-sniffing-state-not-applicable-priority-high","title":"32. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#33-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-state-not-applicable-priority-high","title":"33. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor / device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#34-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"34. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. 
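The "filter data" and min/max-range mitigations referenced in the device/sensor threats above (threats 30 and 33) are ultimately custom device-service logic. A minimal, hypothetical sketch of such a guard is shown below; the limits and reading type are invented for illustration and do not come from a real device profile or the device SDK.

```go
package main

import (
	"errors"
	"fmt"
)

// reading is a simplified stand-in for a device reading; a real device
// service would use the SDK's value types and the profile's min/max attributes.
type reading struct {
	Resource string
	Value    float64
}

// validate rejects readings outside the expected range so obviously wrong or
// tampered values are not forwarded into the rest of EdgeX.
func validate(r reading, min, max float64) error {
	if r.Value < min || r.Value > max {
		return errors.New("reading outside expected range; discarding and alerting")
	}
	return nil
}

func main() {
	// Hypothetical limits for a temperature resource.
	r := reading{Resource: "Temperature", Value: 250.0}
	if err := validate(r, -40.0, 125.0); err != nil {
		fmt.Println(r.Resource+":", err)
		return
	}
	fmt.Println("reading accepted:", r.Value)
}
```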
Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#35-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"35. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query-config","title":"Interaction: query & config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#36-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-configuration-state-mitigation-implemented-priority-low","title":"36. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (configuration) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Consul runs as a container in a Docker network that, by default with security on, does not allow direct access to the APIs and UI without the Consul access token (see https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/#how-to-get-consul-acl-token). A rogue authorized user or someone who illegally obtained the Consul token could force Consul to use too many resources by invoking its API or stuffing too much configuration in the system (or impact it enough to disrupt its ability to service the EdgeX services). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#37-spoofing-of-destination-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"37. 
Spoofing of Destination Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (configuration). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Replacing/spoofing the Consul container would require administrative access to the Docker socket. EdgeX services will talk to any service that answers on the configured Consul hostname. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-query-or-actuation_1","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#38-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"38. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#39-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"39. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#40-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"40. 
Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (physically connected authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor / device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system) causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#41-data-flow-sniffing-state-not-applicable-priority-high","title":"41. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. In cases where there is no way to secure the communications with the device/sensor under that protocol, mitigation is via physical security of the device/sensor (along with their connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#42-data-store-denies-devicesensor-physically-connected-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"42. Data Store Denies Device/Sensor (physically connected authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (physically connected authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. 
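The elevated-log-level mitigation that appears throughout these repudiation threats amounts to changing the service's writable LogLevel. One way to do that at runtime is a Consul KV write, sketched below; the key path is an assumption about the deployment's configuration layout, and the X-Consul-Token header is still required when ACLs are on.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical key path: adjust to the service name and configuration
	// layout actually used in your deployment.
	const key = "http://localhost:8500/v1/kv/edgex/devices/2.0/device-virtual/Writable/LogLevel"
	const aclToken = "REPLACE_WITH_CONSUL_ACL_TOKEN"

	// Setting the writable log level to DEBUG makes the device service log
	// all data communications, supporting the audit/repudiation mitigation.
	req, err := http.NewRequest(http.MethodPut, key, strings.NewReader("DEBUG"))
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Consul-Token", aclToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("log level update:", resp.Status)
}
```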
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#43-the-devicesensor-physically-connected-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"43. The Device/Sensor (physically connected authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor (physically connected authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: With authentication and encryption of the data between EdgeX and the device/sensor (ex: using TLS), the data on the wire can be protected. The physical security of the device/sensor still needs to be achieved to protect against someone tampering with the device/sensor (ex: holding a match to a thermostat). As with device/sensors that are not authenticated, additional optional mitigation ideas to mitigate unprotected devices/sensors require modifications to the EdgeX device service. The device service could be constructed to filter data or report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only prevent the data from being used in the rest of EdgeX once received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#44-spoofing-of-destination-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"44. Spoofing of Destination Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secure or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has support for storing secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#45-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"45. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secure or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-read","title":"Interaction: read","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#46-spoofing-of-destination-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"46. Spoofing of Destination Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Configuration Files. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed the EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#47-potential-excessive-resource-consumption-for-edgex-foundry-or-configuration-files-state-mitigation-implemented-priority-low","title":"47. Potential Excessive Resource Consumption for EdgeX Foundry or Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Configuration Files take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: The configuration file does not consume resources other than file space. The configuration file is deployed with the service container and therefore, without access to the host and Docker, its size is controlled. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#48-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"48. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. 
Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). The JWT token expires in one hour by default. The EdgeX UI today does not have the notion of \"users\" or \"permissions\"; it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request_1","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#49-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"49. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX Foundry may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: There is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, impersonating EdgeX would require access to the host system and the Docker network. With this access, many other severe issues could occur (stopping the system, sending incorrect data, etc.). Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#50-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"50. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX Foundry. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Kong, the service would not know that the response came from something other than Kong. I.e. - there is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, Kong is run as a container on the EdgeX Docker network. Replacing/spoofing the Kong container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Kong (with TLS cert in place). A spoofing service (in this case, Kong) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request_2","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#51-elevation-by-changing-the-execution-flow-in-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"51. 
Elevation by Changing the Execution Flow in EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX UI - Web Application in order to change the flow of program execution within EdgeX UI - Web Application to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. In order to use the Web UI (with EdgeX in secure mode), authentication is required via Kong. With proper authentication, a rogue user could invoke commands, change the rules engine rules (and alter workflows), stop services (and alter workflows), etc. - but these could also be accomplished directly with EdgeX. If the GUI is of extreme concern, it can be removed or turned off as it is a convenience mechanism and is not required for EdgeX operation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#52-edgex-ui-web-application-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-medium","title":"52. EdgeX UI - Web Application May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: Browser/API Caller may be able to remotely execute code for EdgeX UI - Web Application. Justification: <no mitigation provided> Possible Mitigation: Possible protections to be implemented: buffer overflow protection, sanitizing user inputs, and use of a firewall Mitigator: EdgeX Foundry Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#53-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"53. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Browser/API Caller in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. The EdgeX GUI is deployed as a container as part of the EdgeX application set. Impersonation of the Web Application would require access to the host (with privilege) and changing or removing the existing GUI Web application. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#54-data-flow-request-is-potentially-interrupted-state-not-applicable-priority-low","title":"54. Data Flow request Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. 
Kong can also be used to throttle requests coming from the GUI or other callers (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. If it is a high-risk target, it can be removed without affecting the rest of EdgeX. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#55-potential-process-crash-or-stop-for-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"55. Potential Process Crash or Stop for EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: EdgeX UI - Web Application crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Other mechanisms exist to work with EdgeX (such as the service APIs). As with other EdgeX services, stopping the service requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges. The GUI service can be removed for extra security. The GUI is a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#56-data-flow-sniffing-state-mitigation-implemented-priority-medium","title":"56. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Data flowing across request may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: A VPN or HTTPS can be used to secure the communications with the EdgeX UI. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#57-potential-data-repudiation-by-edgex-ui-web-application-state-not-applicable-priority-low","title":"57. Potential Data Repudiation by EdgeX UI - Web Application\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX UI - Web Application claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web UI can use elevated logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#58-cross-site-scripting-state-mitigation-implemented-priority-low","title":"58. 
Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: X-XSS-Protection is enabled on all pages to protect against detected XSS. In environments where cross site scripting is a huge concern, the EdgeX UI Web application can be removed with no effect on the rest of the system. The UI is offered as a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#59-potential-lack-of-input-validation-for-edgex-ui-web-application-state-needs-investigation-priority-medium","title":"59. Potential Lack of Input Validation for EdgeX UI - Web Application\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Tampering Description: Data flowing across request may be tampered with by an attacker. This may lead to a denial of service attack against EdgeX UI - Web Application or an elevation of privilege attack against EdgeX UI - Web Application or an information disclosure by EdgeX UI - Web Application. Failure to verify that input is as expected is a root cause of a very large number of exploitable issues. Consider all paths and the way they handle data. Verify that all input is verified for correctness using an approved list input validation approach. Justification: <no mitigation provided> Possible Mitigation: Input validation should be added to the GUI. However, access to the Web GUI (and then EdgeX) requires the API gateway token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). If this threat is likely, the Web GUI can be removed as this does not impact the remainder of EdgeX operations. Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#60-spoofing-the-browserapi-caller-external-entity-state-not-applicable-priority-low","title":"60. Spoofing the Browser/API Caller External Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#61-spoofing-the-edgex-ui-web-application-process-state-mitigation-implemented-priority-low","title":"61. Spoofing the EdgeX UI - Web Application Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: EdgeX UI - Web Application may be spoofed by an attacker and this may lead to information disclosure by Browser/API Caller. Consider using a standard authentication mechanism to identify the destination process. 
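The XSS mitigations above reference the X-XSS-Protection response header. As a rough illustration only (not the EdgeX UI's actual implementation), a Go web handler or reverse proxy could attach that header, along with other defensive headers, through a small middleware; all names and the port below are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
)

// securityHeaders wraps an http.Handler and adds defensive response headers.
// The exact header set the EdgeX UI uses is not shown here; this only sketches
// the X-XSS-Protection mitigation described in the report.
func securityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-XSS-Protection", "1; mode=block")
		w.Header().Set("X-Content-Type-Options", "nosniff")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	// Port 8080 is arbitrary for this example.
	http.ListenAndServe(":8080", securityHeaders(mux))
}
```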
Justification: <no mitigation provided> Possible Mitigation: As one of the services deployed as a container of EdgeX, spoofing of the EdgeX GUI would require replacing the container (requiring host access and elevated privileges) and/or intercepting and rerouting traffic. Further, the GUI must obtain and use a Kong JWT token to access the EdgeX APIs, which a spoofer would not have. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-request_3","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#62-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"62. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. EdgeX UI today does not have the notion of \"users\" or \"permissions\"; it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#63-data-flow-request-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"63. Data Flow request Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#64-external-entity-kong-potentially-denies-receiving-data-state-not-applicable-priority-low","title":"64. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#65-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"65. 
Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response_1","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#66-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"66. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Kong is run as a container on the EdgeX Docker network. Replacing/spoofing Kong would require privileged access to the host. Kong is exposed via TLS and we provide a CLI tool to install a custom certificate that the web UI can validate if the CA is trusted. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#67-cross-site-scripting-state-mitigation-implemented-priority-low","title":"67. Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: Because the Web application is running as a container on the Docker network with Kong, access to the response traffic via Kong would require access to the Docker network (requiring access to the host with elevated privilege). The EdgeX Web GUI has X-XSS-Protection enabled. In environments where cross site scripting is a concern, the EdgeX UI Web application can be removed with no effect on the rest of the system. The UI is offered as a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#68-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"68. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: The Web GUI must authenticate with Kong using a JWT token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). Without the proper JWT token access, the Web GUI cannot get elevated privilege to EdgeX as a whole. 
An impersonating Web GUI might be used to have a user provide their JWT token, which could then be used to perform other operations in EdgeX. If this is a real threat, the GUI can be removed and not used without other impacts to EdgeX. The GUI is a convenience tool. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response_2","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#69-data-flow-response-is-potentially-interrupted-state-not-applicable-priority-low","title":"69. Data Flow response Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Kong can also be used to throttle requests coming from the GUI or other callers (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. If it is a high-risk target, it can be removed without affecting the rest of EdgeX. Mitigator: Third Party Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#70-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"70. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web GUI can use elevated log level to log all requests. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#71-spoofing-of-the-browserapi-caller-external-destination-entity-state-not-applicable-priority-low","title":"71. Spoofing of the Browser/API Caller External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Browser/API Caller. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. 
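Many of the mitigations in this report come down to callers presenting the API gateway (Kong) JWT. As a minimal sketch only, assuming a gateway listening on https://localhost:8443, a route path following the usual service-name pattern, and a previously issued token held in an environment variable (all illustrative), an authenticated call from Go looks roughly like this; TLS verification is relaxed here only for a self-signed development certificate:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Illustrative values: the gateway URL and the JWT come from your own
	// deployment (e.g., a token issued when the gateway was configured).
	gatewayURL := "https://localhost:8443/core-data/api/v2/ping"
	token := os.Getenv("EDGEX_GATEWAY_JWT")

	// InsecureSkipVerify is shown only for a self-signed dev certificate;
	// production deployments should trust the gateway's CA instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, err := http.NewRequest(http.MethodGet, gatewayURL, nil)
	if err != nil {
		panic(err)
	}
	// The gateway expects the JWT as a bearer token.
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```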
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-response_3","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#72-data-flow-response-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"72. Data Flow response Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#73-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"73. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging to document all requests. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#74-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"74. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#75-spoofing-of-source-data-store-devicesensor-state-not-applicable-priority-high","title":"75. Spoofing of Source Data Store Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could pose as a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. 
Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#76-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-low","title":"76. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#77-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"77. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#78-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"78. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#79-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"79. 
Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#80-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"80. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#81-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"81. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor (as if it were from a web application with unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and, if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#82-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"82. 
Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_1","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#83-external-entity-megaservice-cloud-or-enterprise-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"83. External Entity Megaservice - Cloud or Enterprise Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Megaservice - Cloud or Enterprise claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#84-spoofing-of-the-megaservice-cloud-or-enterprise-external-destination-entity-state-not-applicable-priority-low","title":"84. Spoofing of the Megaservice - Cloud or Enterprise External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Megaservice - Cloud or Enterprise may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Megaservice - Cloud or Enterprise. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. 
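The range-check mitigation above (min/max attributes on device profiles) amounts to rejecting readings that fall outside an expected window before they become EdgeX events. A minimal, generic sketch of that idea in Go follows; the struct and function names are illustrative, not the device SDK's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// rangeRule mirrors the idea of minimum/maximum attributes on a device
// profile resource; the struct itself is illustrative, not an SDK type.
type rangeRule struct {
	Min, Max float64
}

// validateReading rejects values outside the profile's expected window so
// obviously tampered or nonsensical sensor data never becomes an EdgeX event.
func validateReading(value float64, rule rangeRule) error {
	if value < rule.Min || value > rule.Max {
		return errors.New("reading outside expected range; dropping")
	}
	return nil
}

func main() {
	tempRule := rangeRule{Min: -40, Max: 125} // e.g., a temperature sensor's plausible range
	for _, v := range []float64{21.5, 999.0} {
		if err := validateReading(v, tempRule); err != nil {
			fmt.Printf("value %.1f rejected: %v\n", v, err)
			continue
		}
		fmt.Printf("value %.1f accepted\n", v)
	}
}
```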
In the case of a megacloud or enterprise, most communication is from EdgeX to that system versus sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#85-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"85. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the cloud). If the data is deemed critical and if by some means the data flow was interrupted, then store and forward mechanisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_2","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#86-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"86. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the external message bus). If the data is deemed critical and if by some means the data flow was interrupted, store and forward mechanisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#87-external-entity-message-topic-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"87. External Entity Message Topic Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Message Topic claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#88-spoofing-of-the-message-topic-external-destination-entity-state-not-applicable-priority-low","title":"88. 
Spoofing of the Message Topic External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Topic may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Message Topic. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. In the case of an external message bus, most communication is from EdgeX to that system versus sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_3","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#89-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"89. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the device would not receive properly authenticated requests and would therefore deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#90-spoofing-of-source-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"90. Spoofing of Source Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has support for storing secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#91-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"91. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. 
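The repudiation mitigations in this report (including the one continuing below) repeatedly suggest raising a service's writable log level to DEBUG. The writable configuration lives in the registry (Consul), whose standard KV HTTP API accepts a plain PUT; the key path below is illustrative and depends on the release and service name, so confirm it against your own Consul KV tree before relying on it. A rough sketch:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Illustrative key path: check the exact stem (version segment and
	// service name) in your own Consul KV tree before using it.
	key := "edgex/devices/2.0/device-virtual/Writable/LogLevel"
	url := "http://localhost:8500/v1/kv/" + key

	// Consul's KV API takes the new value as the raw body of a PUT.
	req, err := http.NewRequest(http.MethodPut, url, strings.NewReader("DEBUG"))
	if err != nil {
		panic(err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // Consul returns "true" on success
}
```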
Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#92-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"92. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (physically connected authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. In cases where there is no way to secure the communications with the device/sensor under that protocol, mitigation is via physical security of the device/sensor (along with their connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#93-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"93. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#94-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"94. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. 
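Several availability mitigations in this report suggest watching a device's "last connected" timestamp for values outside its normal reporting window. How that timestamp is obtained is deployment-specific and not shown here; once it is available, the staleness check itself is trivial, as in this generic sketch (the threshold multiplier and sample values are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// isStale reports whether a device has gone quiet for longer than its
// expected reporting interval (plus some slack), which is the signal the
// mitigation above suggests monitoring.
func isStale(lastConnected time.Time, reportInterval time.Duration) bool {
	return time.Since(lastConnected) > 3*reportInterval
}

func main() {
	// Illustrative values: in a real deployment lastConnected would come
	// from device metadata or service metrics, not a hard-coded time.
	lastConnected := time.Now().Add(-10 * time.Minute)
	reportInterval := 2 * time.Minute

	if isStale(lastConnected, reportInterval) {
		fmt.Println("device has not reported within the expected window; raise an alert")
	} else {
		fmt.Println("device is reporting normally")
	}
}
```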
Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#95-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"95. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#96-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"96. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (physically connected authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor (as if it were from a web application with unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and, if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#97-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"97. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. 
Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_4","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#98-spoofing-of-source-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"98. Spoofing of Source Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to Kong. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the REST caller would not be properly authenticated by a spoofed Kong, and any query request would thereby be denied. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#99-external-entity-kong-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"99. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#100-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"100. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (REST authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#101-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"101. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. 
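The throttling mitigation above (and the linked article) refers to Kong's rate-limiting plugin. One way to enable it is a POST to Kong's admin API; the admin address, Kong service name, and per-minute limit below are illustrative, and in a locked-down deployment the admin API may not be reachable at all. A rough sketch:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustrative values: adjust the admin API address, the Kong service
	// name, and the limit for your own deployment.
	adminURL := "http://localhost:8001/services/core-data/plugins"

	payload, _ := json.Marshal(map[string]interface{}{
		"name": "rate-limiting",
		"config": map[string]interface{}{
			"minute": 100,     // allow at most 100 requests per minute
			"policy": "local", // count per Kong node
		},
	})

	// Enable the plugin for the named service via the admin API.
	resp, err := http.Post(adminURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```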
See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#102-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"102. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#103-weakness-in-sso-authorization-state-mitigation-implemented-priority-high","title":"103. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-sensor-data_5","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#104-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-low","title":"104. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Access to publish data through the external MQTT broker is protected with authentication. Wrong data can also be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#105-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-mitigation-implemented-priority-low","title":"105. 
EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (via external MQTT broker - authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor via MQTT (as if it were from a web application with unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and, if used, help to mitigate RCE. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#106-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"106. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the publisher to the broker. Physical and system security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#107-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"107. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the publisher to the broker. Physical and system security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#108-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"108. 
Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with eleveated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#109-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"109. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (via external MQTT broker - authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#110-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"110. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#111-spoofing-of-source-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"111. Spoofing of Source Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT publisher of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby deny any request. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#112-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"112. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT receiver of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby be unable to receive. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-service-registration","title":"Interaction: service registration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#113-spoofing-of-destination-data-store-consul-registry-state-mitigation-implemented-priority-low","title":"113. Spoofing of Destination Data Store Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (registry) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (registry). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#114-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-registry-state-mitigation-implemented-priority-low","title":"114. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (registry) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX services and Consul run as containers in a Docker network that, by default with security on, does not allow direct access to the service APIs. During the process of Consul bootstrapping, the EdgeX security bootstrapper ensures that the Consul APIs and GUI cannot be accessed without an ACL token (see https://docs.edgexfoundry.org/2.2/security/Ch-Secure-Consul/). Therefore, using the Consul APIs to cause a DoS attack would require access tokens. 
A rogue authorized user or someone able to illegally obtain the Consul token could cause excessive use of resources that bring the services or Consul down. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#115-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"115. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then TLS or overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-service-secrets","title":"Interaction: service secrets","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#116-weak-access-control-for-a-resource-state-mitigation-implemented-priority-medium","title":"116. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Improper data protection of Vault can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: The Vault root and service level tokens are revoked after setup and then all interactions are via the programmatic interface (with a properly authenticated token). There are additional options for Vault Master Key encryption provided here: https://docs.edgexfoundry.org/2.2/threat-models/secret-store/vault_master_key_encryption/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#117-spoofing-of-source-data-store-vault-state-mitigation-implemented-priority-low","title":"117. Spoofing of Source Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case Vault) would not have the appropriate cert in place to participate in the communications. 
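To make the TLS mitigation just described more concrete, the following minimal Go sketch shows how an adopter might build an HTTP client that only trusts a specific CA, so a spoofed Vault (or Consul) container without a certificate from that CA fails the TLS handshake. The CA file path and URL are illustrative assumptions, not EdgeX defaults.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

// newVerifyingClient returns an HTTP client that only trusts servers presenting
// a certificate signed by the given CA. A spoofed Vault (or Consul) container
// without such a certificate fails the TLS handshake.
func newVerifyingClient(caFile string) (*http.Client, error) {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no certificates found in %s", caFile)
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:    pool,
				MinVersion: tls.VersionTLS12,
			},
		},
	}, nil
}

func main() {
	// Path and URL are hypothetical; adjust to the adopter's deployment.
	client, err := newVerifyingClient("/etc/edgex/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	resp, err := client.Get("https://vault:8200/v1/sys/health")
	if err != nil {
		log.Fatal(err) // the handshake fails if the endpoint is spoofed
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```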
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-subscribed-message","title":"Interaction: subscribed message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#118-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"118. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Message Bus Broker can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: When running EdgeX in secure mode the Redis database service is secured with a username/password. Redis Pub/Sub utilizes the existing Redis database service so that no additional broker service is required. This in turn creates a Secure MessageBus. See https://docs.edgexfoundry.org/2.2/security/Ch-Secure-MessageBus/. MQTTS can used for internal message bus communications but not provided by EdgeX Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#119-spoofing-of-source-data-store-message-bus-broker-state-mitigation-implemented-priority-low","title":"119. Spoofing of Source Data Store Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus Broker may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: The message bus when requiring a broker (MQTT broker for example) is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileaged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-edgex-service-to-service-http-comms","title":"Diagram: EdgeX Service to Service HTTP comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#edgex-service-to-service-http-comms-diagram-summary","title":"EdgeX Service to Service HTTP comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 2 Mitigation Implemented 0 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-http","title":"Interaction: HTTP","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#120-edgex-service-a-process-memory-tampered-state-needs-investigation-priority-high","title":"120. EdgeX Service A Process Memory Tampered\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. 
Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#121-elevation-using-impersonation-state-needs-investigation-priority-high","title":"121. Elevation Using Impersonation\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service APIs are restricted except through Kong. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternately, TLS can be used to encrypt all traffic. Service-to-service calls behind Kong are unauthenticated in the current implementation. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-edgex-service-to-service-message-bus-comms","title":"Diagram: EdgeX Service to Service message bus comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#edgex-service-to-service-message-bus-comms-diagram-summary","title":"EdgeX Service to Service message bus comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 2 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-message-bus-mqtt-redis-pubsub-nats","title":"Interaction: message bus (MQTT, Redis Pub/Sub, NATS)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#122-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"122. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: All services are required to authenticate to the message bus, but all services authorized on the message bus have equal privilege to send and receive messages. Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service message bus are restricted to internal communications only. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternately, secure MQTT (MQTTS) message bus communications can be used. 
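As an illustration of the MQTTS option mentioned above, a service could connect to the broker with its own credentials over TLS, so an impersonating client without valid credentials is refused. This is a minimal sketch using the Eclipse Paho Go client as an assumption; the broker address, credentials and topic are hypothetical and this is not EdgeX's internal message bus abstraction.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Broker URL, client ID, credentials and topic are illustrative only.
	opts := mqtt.NewClientOptions().
		AddBroker("ssl://edgex-mqtt-broker:8883").
		SetClientID("device-virtual").
		SetUsername("device-virtual").
		SetPassword("per-service-secret-from-vault").
		SetTLSConfig(&tls.Config{MinVersion: tls.VersionTLS12})

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error()) // authentication or TLS failure
	}
	defer client.Disconnect(250)

	// Publish a reading; the broker only accepts authenticated clients.
	token := client.Publish("edgex/events/device-virtual", 0, false, `{"reading":42}`)
	if !token.WaitTimeout(5 * time.Second) {
		log.Fatal("publish timed out")
	}
	if err := token.Error(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("published")
}
```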
Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#123-edgex-service-a-process-memory-tampered-state-mitigation-implemented-priority-high","title":"123. EdgeX Service A Process Memory Tampered\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-access-via-vpn","title":"Diagram: Access via VPN","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#access-via-vpn-diagram-summary","title":"Access via VPN Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-host-access","title":"Diagram: Host Access","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#host-access-diagram-summary","title":"Host Access Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-open-port-protections","title":"Diagram: Open Port Protections","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#open-port-protections-diagram-summary","title":"Open Port Protections Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#diagram-device-protocol-threats-modbus-example","title":"Diagram: Device Protocol Threats - Modbus example","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#device-protocol-threats-modbus-example-diagram-summary","title":"Device Protocol Threats - Modbus example Diagram Summary:","text":"Not Started 0 Not Applicable 7 Needs Investigation 9 Mitigation Implemented 2 Total 18 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-binary-rtu-get-or-set","title":"Interaction: Binary RTU (GET or SET)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#124-spoofing-of-destination-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"124. Spoofing of Destination Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to ensure no spoofing or unauthorized collection of data or actuation with the device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#125-potential-excessive-resource-consumption-for-modbus-device-service-or-modbus-devicesensor-state-needs-investigation-priority-high","title":"125. Potential Excessive Resource Consumption for Modbus Device Service or Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does Modbus Device Service or Modbus Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DoS attack (sending too much data), send erroneous/faulty data, or be disrupted/cut off and therefore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help prevent onboarding of an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#126-spoofing-the-modbus-device-service-process-state-needs-investigation-priority-high","title":"126. Spoofing the Modbus Device Service Process\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to unauthorized access to Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#127-the-modbus-devicesensor-data-store-could-be-corrupted-state-needs-investigation-priority-high","title":"127. The Modbus Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across Binary RTU (GET or SET) may be tampered with by an attacker. This may lead to corruption of Modbus Device/Sensor. Ensure the integrity of the data flow to the data store. 
Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DoS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#128-data-store-denies-modbus-devicesensor-potentially-writing-data-state-not-applicable-priority-high","title":"128. Data Store Denies Modbus Device/Sensor Potentially Writing Data\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: It is unlikely that a Modbus device/sensor has a log to provide an audit of requests. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#129-data-flow-sniffing-state-not-applicable-priority-high","title":"129. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across Binary RTU (GET or SET) may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized nor encrypted by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#130-weak-credential-transit-state-needs-investigation-priority-high","title":"130. Weak Credential Transit\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Credentials on the wire are often subject to sniffing by an attacker. Are the credentials re-usable/re-playable? Are credentials included in a message? For example, sending a zip file with the password in the email. Use strong cryptography for the transmission of credentials. Use the OS libraries if at all possible, and consider cryptographic algorithm agility, rather than hardcoding a choice. Justification: <no mitigation provided> Possible Mitigation: Modbus does not support any type of authentication/authorization in communications. Physical security of the device and wire are the only ways to thwart information disclosure. 
Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#131-data-flow-binary-rtu-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"131. Data Flow Binary RTU (GET or SET) Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DoS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#132-data-store-inaccessible-state-needs-investigation-priority-high","title":"132. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DoS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#interaction-binary-rtu-response-get-or-se","title":"Interaction: Binary RTU Response (GET or SET)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#133-spoofing-of-source-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"133. Spoofing of Source Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to Modbus Device Service. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DoS attack (sending too much data), send erroneous/faulty data, or be disrupted/cut off and therefore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help prevent onboarding of an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#134-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"134. 
Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Modbus Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: As Modbus is a simple protocol (reporting data or reacting to actuation requests), it is not possible for the device or sensor to gain other data from the device service (or EdgeX as a whole). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#135-spoofing-the-modbus-device-service-process-state-not-applicable-priority-high","title":"135. Spoofing the Modbus Device Service Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to information disclosure by Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to ensure no spoofing or unauthorized collection of data or actuation with the device. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#136-potential-data-repudiation-by-modbus-device-service-state-mitigation-implemented-priority-high","title":"136. Potential Data Repudiation by Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device Service claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level can be used to log all data communications from a device/sensor. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#137-potential-process-crash-or-stop-for-modbus-device-service-state-mitigation-implemented-priority-medium","title":"137. Potential Process Crash or Stop for Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Modbus Device Service crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#138-data-flow-binary-rtu-response-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"138. 
Data Flow Binary RTU Response (GET or SET) Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DoS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#139-data-store-inaccessible-state-needs-investigation-priority-high","title":"139. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DoS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#140-modbus-device-service-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-high","title":"140. Modbus Device Service May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Modbus Device/Sensor may be able to remotely execute code for Modbus Device Service. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. 
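A device service could implement the filtering, range checking and offline detection described above with a small guard around incoming readings. The Go sketch below is illustrative only; the type, field names and thresholds are hypothetical and it is not the EdgeX device service SDK's API.

```go
package main

import (
	"fmt"
	"time"
)

// readingGuard applies the mitigations described above: reject values outside
// the profile's min/max range, throttle a device that reports too often, and
// flag a device that has gone silent for too long.
type readingGuard struct {
	min, max      float64       // expected value range (e.g. from a device profile)
	minInterval   time.Duration // fastest acceptable reporting rate
	offlineAfter  time.Duration // silence longer than this raises an alert
	lastConnected time.Time
}

func (g *readingGuard) accept(value float64, now time.Time) error {
	if value < g.min || value > g.max {
		return fmt.Errorf("value %v outside expected range [%v, %v]", value, g.min, g.max)
	}
	if !g.lastConnected.IsZero() && now.Sub(g.lastConnected) < g.minInterval {
		return fmt.Errorf("reporting faster than %v: possible flood", g.minInterval)
	}
	g.lastConnected = now
	return nil
}

func (g *readingGuard) offline(now time.Time) bool {
	return !g.lastConnected.IsZero() && now.Sub(g.lastConnected) > g.offlineAfter
}

func main() {
	g := &readingGuard{min: -40, max: 125, minInterval: time.Second, offlineAfter: 5 * time.Minute}
	for _, v := range []float64{21.5, 900.0} {
		if err := g.accept(v, time.Now()); err != nil {
			fmt.Println("rejected:", err)
			continue
		}
		fmt.Println("accepted:", v)
	}
}
```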
Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2-original/#141-elevation-by-changing-the-execution-flow-in-modbus-device-service-state-not-applicable-priority-high","title":"141. Elevation by Changing the Execution Flow in Modbus Device Service\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into Modbus Device Service in order to change the flow of program execution within Modbus Device Service to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Physical security of the sensor and communications (wire) offer the best hope to mitigate this threat. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/","title":"Threat Modeling Report","text":"Created on 12/27/2022 3:06:56 PM
generated from HTML by https://www.convertsimple.com/convert-html-to-markdown/ with embedded images extracted by Pandoc https://pandoc.org (Pandoc did not handle tables well, so it was used only for image extraction) using the command below
pandoc -o EdgeXFoundryThreatReportV2.2.md -t markdown -f markdown EdgeXFoundryThreatReportV2.2-original.md --extract-media=./images\n
Threat Model Name: EdgeX Foundry Threat Model
Owner: Jim White (IOTech Systems)
Reviewer: Bryon Nevis, Lenny Goodell, Jim Wang (all from Intel), Farshid Tavakolizadeh (Canonical), Rodney Hess (Beechwoods)
Contributors:
Description: General Threat Model for EdgeX Foundry - inclusive of security elements (Kong, Vault, etc).
Assumptions: EdgeX is platform agnostic, but this Threat Model assumes the underlying OS is a Linux distribution. EdgeX can run containerized or non-containerized (natively). This Threat Model assumes EdgeX is running in a containerized environment (Docker). EdgeX micro services can run distributed, but this Threat Model assumes EdgeX is running on a single host (single Docker daemon with a single Docker network unless otherwise specified). Many different devices/sensors can be connected to EdgeX via its device services. This Threat Model treats all sensors/devices the same (which is not always the case given the various protocols supported). Per https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/, additional hardening such as secure boot with hardware root of trust, and secure disk encryption are outside of EdgeX's control but would greatly improve the threat mitigation.
External Dependencies: Operating system and hardware (including devices/sensors) Device/sensor drivers Possibly a cloud system or external enterprise system that EdgeX sends data to A message bus broker (such as an MQTT broker)
"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#notes","title":"Notes:","text":"Id Note Date Added By 1 Tampering with Data - This is a threat where information in the system is changed by an attacker. For example, an attacker changes an account balance Unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet 8/25/2022 6:40:40 PM DESKTOP-SL3KKHH\\jpwhi 2 XSS protections: filter input on arrival (don't do), encode data on oputput (don't do), use appropriate headers (do), use CSP (dont do) 8/25/2022 6:54:16 PM DESKTOP-SL3KKHH\\jpwhi 3 priority is determined by the likelihood of a threat occuring and the severity of the impact of its occurance 8/25/2022 7:11:40 PM DESKTOP-SL3KKHH\\jpwhi 4 Repudiation - don't track and log users actions; can't prove a transaction took place 8/25/2022 7:13:14 PM DESKTOP-SL3KKHH\\jpwhi 5 Elevation of privil - authorized or unauthorized user gains access to info not authorized 8/25/2022 7:16:24 PM DESKTOP-SL3KKHH\\jpwhi 6 Remote code execution: https://www.comparitech.com/blog/information-security/remote-code-execution-attacks/ buffer overflow sanitize user inputs proper auth use a firewall 8/25/2022 7:21:28 PM DESKTOP-SL3KKHH\\jpwhi 7 Privilege escalation attacks occur when bad actors exploit misconfigurations, bugs, weak passwords, and other vulnerabilities 8/27/2022 1:57:18 PM DESKTOP-SL3KKHH\\jpwhi"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#threat-model-summary","title":"Threat Model Summary:","text":"Not Started 0 Not Applicable 27 Needs Investigation 14 Mitigation Implemented 100 Total 141 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-edgex-foundry-big-picture","title":"Diagram: EdgeX Foundry (Big Picture)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#edgex-foundry-big-picture-diagram-summary","title":"EdgeX Foundry (Big Picture) Diagram Summary:","text":"Not Started 0 Not Applicable 20 Needs Investigation 3 Mitigation Implemented 96 Total 119 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-config","title":"Interaction: config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#1-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"1. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Consul (configuration) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: EdgeX services that use Consul must use a Vault access token provided in bootstrapping of the service. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/. There is also per service ACL rules in place to limit Consul access. As of the Ireland release, access of Consul requires ACL token header X-Consul-Token in any HTTP calls. Moreover, Consul itself is now bootstrapped and started with its ACL system enabled and thus provides better authentication and authorization security features for services. In other words, with the required Consul's ACL token for accessing Consul, assets inside Consul like EdgeX's configuration items in Key-Value (KV) store are now better protected. 
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#2-spoofing-of-source-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"2. Spoofing of Source Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-configuration","title":"Interaction: configuration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#3-spoofing-of-source-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"3. Spoofing of Source Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileaged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#4-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"4. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Configuration Files can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Disclosure of configuration files is not important. Configuration data is not considered sensitive. As long as the configuration files are not manipulated, then access to configuration files is not deemed a threat. All secret configuration is made available through Vault. 
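To illustrate how secret configuration stays out of the configuration files, a service retrieves its credentials from Vault at runtime using its own service token. The Go sketch below calls Vault's HTTP KV API directly for clarity; the URL, secret path and environment variable are assumptions, and EdgeX services normally obtain secrets through go-mod-secrets rather than calling Vault directly.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// Vault KV (v1) responses wrap the secret key/value pairs in a "data" field.
type vaultSecret struct {
	Data map[string]string `json:"data"`
}

func main() {
	// URL, secret path and environment variable name are illustrative only.
	req, err := http.NewRequest(http.MethodGet,
		"http://edgex-vault:8200/v1/secret/edgex/core-data/redisdb", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("X-Vault-Token", os.Getenv("VAULT_SERVICE_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("vault refused the request: %s", resp.Status) // e.g. missing or expired token
	}

	var secret vaultSecret
	if err := json.NewDecoder(resp.Body).Decode(&secret); err != nil {
		log.Fatal(err)
	}
	// The database credentials never appear in configuration files.
	fmt.Println("username:", secret.Data["username"])
}
```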
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-data","title":"Interaction: data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#5-spoofing-of-source-data-store-redis-state-mitigation-implemented-priority-low","title":"5. Spoofing of Source Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case Redis), would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#6-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"6. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Redis can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Access control credentials for Redis are secured in Vault (provided to EdgeX services at bootstrapping but otherwise unknown). Access without credentials is denied. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#7-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"7. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileages on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-published-message","title":"Interaction: published message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#8-potential-excessive-resource-consumption-for-edgex-foundry-or-message-bus-broker-state-mitigation-implemented-priority-medium","title":"8. 
Potential Excessive Resource Consumption for EdgeX Foundry or Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Message Bus Broker take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: The EdgeX message broker is either Redis Pub/Sub or an MQTT broker like Mosquitto and runs as a container in a Docker network that, by default with security on, does not allow direct access to the broker. Access to publish or subscribe to cause it to use excessive resources would require authorized access to the host as the port to the internal message broker is protected. In other words, EdgeX mitigates unauthorized attacks resulting in DoS event, but would not mitigate authorized attacks (such as a service producing too many message than the broker can handle) that result in a DoS event. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#9-spoofing-of-destination-data-store-message-bus-state-mitigation-implemented-priority-low","title":"9. Spoofing of Destination Data Store Message Bus\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Message Bus. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: The message bus when requiring a broker (MQTT broker for example) is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileaged access to the host. Message broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-queries-data","title":"Interaction: queries & data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#10-spoofing-of-destination-data-store-redis-state-mitigation-implemented-priority-low","title":"10. Spoofing of Destination Data Store Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Redis may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Redis. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Redis, the service would not know that the response came from something other than Redis. However, Redis is run as a container on the EdgeX Docker network. Replacing/spoofing the Redis container would require privileaged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Redis (with TLS cert in place). A spoofing service (in this case Redis), would not have the appropriate cert in place to participate in the communications. 
Database host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#11-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"11. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. Docker containers do not share the host's network interface by default and are instead based on virtual ethernet adapters and bridges. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#12-potential-excessive-resource-consumption-for-edgex-foundry-or-redis-state-mitigation-implemented-priority-low","title":"12. Potential Excessive Resource Consumption for EdgeX Foundry or Redis\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Redis take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: Redis runs as a container in a Docker network that, by default with security on, does not allow direct access to the database. Access to query or push data into it to cause it to use excessive resources would require authorized access to the host as the port to the database is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing too much data into it) that result in a DoS event. EdgeX does have a routine with customizable configuration that \"cleans up\" and removes older data so that \"normal\" or otherwise expected use of the database for persistence does not result in DoS. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#13-spoofing-of-destination-data-store-vault-state-mitigation-implemented-priority-low","title":"13. Spoofing of Destination Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Vault. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with a TLS cert in place). A spoofing service (in this case Vault) would not have the appropriate cert in place to participate in the communications. EdgeX services that use Vault must use the go-mod-secrets client or a Vault service token to access its secrets (which is revoked by default). See https://docs.edgexfoundry.org/2.3/security/Ch-SecretStore/#using-the-secret-store Vault host and port are configured from static configuration or environment overrides (trusted input) and not Consul, making it difficult to misdirect services' access to Vault. See the EdgeX Threat Model documentation (https://docs.edgexfoundry.org/2.0/threat-models/secret-store/threat_model/#threat-matrix) for additional considerations and mitigation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#14-potential-excessive-resource-consumption-for-edgex-foundry-or-vault-state-mitigation-implemented-priority-low","title":"14. Potential Excessive Resource Consumption for EdgeX Foundry or Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Vault take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: Vault runs as a container in a Docker network that, by default with security on, does not allow direct access to the secret store. Access to query or push data into it to cause it to use excessive resources would require authorized access to the host as the port to the database is protected. In other words, EdgeX mitigates unauthorized attacks resulting in a DoS event, but would not mitigate authorized attacks (such as a service making too many queries or pushing too many secrets into it) that result in a DoS event. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query_1","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#15-spoofing-of-destination-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"15. Spoofing of Destination Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (REST authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the REST caller would not be properly authenticated by a spoofed Kong, and any query request would thereby be denied. 
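To make the token-based Vault access pattern above more concrete, here is a minimal sketch that reads a secret from Vault's HTTP KV API using a per-service token. It is a stand-in for the go-mod-secrets client, not its implementation; the mount path, secret path, and VAULT_* environment variables are illustrative assumptions.

```go
// Minimal sketch of token-based access to Vault's KV HTTP API, standing in for
// the go-mod-secrets client mentioned above. Paths and env vars are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	vaultAddr := os.Getenv("VAULT_ADDR") // e.g. http://edgex-vault:8200 (assumed)
	token := os.Getenv("VAULT_TOKEN")    // per-service token issued during secret store setup

	// Assumed secret path for a device service's database credentials.
	req, err := http.NewRequest(http.MethodGet,
		vaultAddr+"/v1/secret/edgex/device-mqtt/redisdb", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("X-Vault-Token", token) // a spoofed Vault cannot mint valid tokens

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("vault returned %s", resp.Status)
	}

	var body struct {
		Data map[string]interface{} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		log.Fatal(err)
	}
	fmt.Println("retrieved secret with", len(body.Data), "keys")
}
```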
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#16-the-devicesensor-rest-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"16. The Device/Sensor (REST authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (REST authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#17-data-store-denies-devicesensor-rest-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"17. Data Store Denies Device/Sensor (REST authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (REST authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#18-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-medium","title":"18. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection, causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for lapses outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#19-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"19. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection, causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. 
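The "last connected" monitoring idea mentioned above can be illustrated with a small sketch: flag a device whose last report falls outside its normal reporting window. The Device struct, field names, and thresholds below are illustrative assumptions, not the EdgeX contracts or device service SDK types.

```go
// Minimal sketch, assuming an adopter tracks a last-connected timestamp per device
// and wants to alert when a device goes quiet for longer than expected.
package main

import (
	"fmt"
	"time"
)

type Device struct {
	Name          string
	LastConnected time.Time     // updated whenever the device service hears from the device
	ReportEvery   time.Duration // expected reporting interval
}

// stale reports true when the device has been silent for more than
// `grace` multiples of its expected reporting interval.
func stale(d Device, now time.Time, grace float64) bool {
	allowed := time.Duration(float64(d.ReportEvery) * grace)
	return now.Sub(d.LastConnected) > allowed
}

func main() {
	d := Device{
		Name:          "temp-probe-01",
		LastConnected: time.Now().Add(-10 * time.Minute),
		ReportEvery:   2 * time.Minute,
	}
	if stale(d, time.Now(), 2.0) {
		fmt.Printf("ALERT: %s has not reported within its expected window\n", d.Name)
	}
}
```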
The device service does track \"last connected\" and that timestamp could be monitored for lapses outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query_2","title":"Interaction: query","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#20-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"20. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the subscriber to the broker. Physical and system security is required to protect these and mitigate this threat. Query requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#21-data-flow-query-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"21. Data Flow query Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or the publisher to the broker. Physical and system security is required to protect these and mitigate this threat. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#22-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"22. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (via external MQTT broker - authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data that cause the broker or subscriber to go offline or appear unresponsive - depending on the capabilities of the broker or subscribing application. In the opposite direction, an MQTT publisher could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system or MQTT broker), causing a DoS. 
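One way a device service could guard against the "too much data" scenario just described is to rate-limit inbound MQTT messages and drop the excess. The sketch below uses the Eclipse Paho Go client and a token-bucket limiter; the broker address, topic, credentials, and limits are assumptions for illustration, not the device-mqtt service's actual behavior.

```go
// Minimal sketch, assuming an adopter-written MQTT subscriber that drops
// messages exceeding a per-second budget. Broker, topic, and limits are assumed.
package main

import (
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
	"golang.org/x/time/rate"
)

func main() {
	// Allow a sustained 10 messages/second with a burst of 20; excess is dropped.
	limiter := rate.NewLimiter(rate.Limit(10), 20)

	opts := mqtt.NewClientOptions().
		AddBroker("tls://external-broker.example.com:8883"). // assumed external broker
		SetUsername("edgex-device-mqtt").                    // credentials would come from the secret store
		SetPassword("from-secret-store")

	client := mqtt.NewClient(opts)
	if tok := client.Connect(); tok.Wait() && tok.Error() != nil {
		log.Fatal(tok.Error())
	}

	handler := func(_ mqtt.Client, msg mqtt.Message) {
		if !limiter.Allow() {
			log.Printf("dropping message on %s: rate limit exceeded", msg.Topic())
			return
		}
		// Normal processing (e.g., converting the payload to an EdgeX event) would go here.
		log.Printf("accepted %d bytes on %s", len(msg.Payload()), msg.Topic())
	}

	if tok := client.Subscribe("sensors/+/data", 1, handler); tok.Wait() && tok.Error() != nil {
		log.Fatal(tok.Error())
	}
	time.Sleep(time.Hour) // keep the subscriber alive in this sketch
}
```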
Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#23-data-flow-sniffing-state-mitigation-implemented-priority-high","title":"23. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends), which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#24-data-store-denies-devicesensor-via-external-mqtt-broker-authenticated-potentially-writing-data-state-mitigation-implemented-priority-high","title":"24. Data Store Denies Device/Sensor (via external MQTT broker - authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Device/Sensor (via external MQTT broker - authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. The log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#25-the-devicesensor-via-external-mqtt-broker-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"25. The Device/Sensor (via external MQTT broker - authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query may be tampered with by an attacker. This may lead to corruption of Device/Sensor (via external MQTT broker - authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends), which is not in place by default. MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#26-spoofing-of-destination-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"26. 
Spoofing of Destination Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT query sender (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to publish. The EdgeX framework has support for storing secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#27-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"27. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, the spoofing MQTT publisher of a query (or the spoofed external message broker) would not be properly authenticated and would thereby be unable to make its request. The EdgeX framework has support for storing secrets to authenticate devices. Broker host and port are part of services' configuration (covered under threats against configuration) Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query-or-actuation","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#28-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"28. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#29-spoofing-of-destination-data-store-devicesensor-state-needs-investigation-priority-high","title":"29. Spoofing of Destination Data Store Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. 
Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#30-the-devicesensor-data-store-could-be-corrupted-state-not-applicable-priority-high","title":"30. The Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor. Ensure the integrity of the data flow to the data store. For example: a man-in-the-middle attack on the wire between EdgeX and the wired device/sensor, or an attack on the sensor itself (jiggling a vibration sensor). Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device or intercept/use of the data to the device/sensor is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Additional optional mitigation ideas require modifications to the EdgeX device service. The device service could be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only prevent the data from being used in the rest of EdgeX once it is received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#31-data-store-denies-devicesensor-potentially-writing-data-state-mitigation-implemented-priority-low","title":"31. 
Data Store Denies Device/Sensor Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#32-data-flow-sniffing-state-not-applicable-priority-high","title":"32. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of simpler and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#33-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-state-not-applicable-priority-high","title":"33. Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor/device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system), causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#34-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"34. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. 
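The "expected ranges of values" check mentioned in threat 30 above (min/max attributes on device profiles) can be sketched in a few lines: reject any reading outside the declared range before it is sent on to the rest of EdgeX. The ResourceProperties struct below is an illustrative stand-in for the min/max attributes in a device profile, not the EdgeX profile schema.

```go
// Minimal sketch, assuming a device service validates readings against
// declared min/max bounds before forwarding them.
package main

import (
	"errors"
	"fmt"
)

type ResourceProperties struct {
	Name string
	Min  float64
	Max  float64
}

var ErrOutOfRange = errors.New("reading outside expected range")

func validate(p ResourceProperties, value float64) error {
	if value < p.Min || value > p.Max {
		return fmt.Errorf("%w: %s=%v (expected %v..%v)", ErrOutOfRange, p.Name, value, p.Min, p.Max)
	}
	return nil
}

func main() {
	temp := ResourceProperties{Name: "Temperature", Min: -40, Max: 125}
	for _, v := range []float64{21.5, 300} {
		if err := validate(temp, v); err != nil {
			fmt.Println("rejected:", err) // tampered or wrong data never reaches core-data
			continue
		}
		fmt.Println("accepted:", v)
	}
}
```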
Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#35-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"35. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query-config","title":"Interaction: query & config","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#36-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-configuration-state-mitigation-implemented-priority-low","title":"36. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (configuration) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: Consul runs as a container in a Docker network that, by default with security on, does not allow direct access to the APIs and UI without the Consul access token (see https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/#how-to-get-consul-acl-token). A rogue authorized user or someone who illegally obtained the Consul token could force Consul to use too many resources by invoking its API or stuffing too much configuration into the system (or impact it enough to disrupt its ability to serve the EdgeX services). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#37-spoofing-of-destination-data-store-consul-configuration-state-mitigation-implemented-priority-low","title":"37. 
Spoofing of Destination Data Store Consul (configuration)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (configuration) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (configuration). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Replacing/spoofing the Consul container would require administrative access to the Docker socket. EdgeX services will talk to any service that answers on the configured Consul hostname. See https://docs.edgexfoundry.org/2.3/security/Ch-Secure-Consul/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-query-or-actuation_1","title":"Interaction: query or actuation","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#38-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"38. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#39-data-flow-query-or-actuation-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"39. Data Flow query or actuation Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor, causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and mitigate this threat. Query or actuation requests that do not receive a response would result in an error that could be responded to. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#40-potential-excessive-resource-consumption-for-edgex-foundry-or-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"40. 
Potential Excessive Resource Consumption for EdgeX Foundry or Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Device/Sensor (physically connected authenticated) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: EdgeX could send too many requests for data or actuation requests that cause the sensor/device to go offline or appear unresponsive - depending on the sophistication of the device/sensor. In the opposite direction, a device/sensor could be tampered with or improperly configured to send too much data (overwhelming the EdgeX system), causing a DoS. Other than writing the device service to filter data to avoid the \u201ctoo much\u201d data DoS, this threat is not mitigated. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#41-data-flow-sniffing-state-not-applicable-priority-high","title":"41. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across query or actuation may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet Secure Connect (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. In cases where there is no way to secure the communications with the device/sensor under that protocol, mitigation is via physical security of the device/sensor (along with its connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#42-data-store-denies-devicesensor-physically-connected-authenticated-potentially-writing-data-state-mitigation-implemented-priority-low","title":"42. Data Store Denies Device/Sensor (physically connected authenticated) Potentially Writing Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Device/Sensor (physically connected authenticated) claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level (set the writable configuration log level to DEBUG in the device service) can be used to log all data communications. 
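The elevated-log-level mitigation above is applied through the writable configuration held in Consul. The sketch below flips a device service's LogLevel to DEBUG using the HashiCorp Consul Go client; the KV key prefix and the way the ACL token is supplied are assumptions that depend on the EdgeX release in use, so treat this as an illustration rather than the exact EdgeX key layout.

```go
// Minimal sketch, assuming the writable LogLevel for a device service lives
// under an EdgeX prefix in Consul's KV store. Key path and token are assumed.
package main

import (
	"log"
	"os"

	"github.com/hashicorp/consul/api"
)

func main() {
	cfg := api.DefaultConfig()
	cfg.Address = "localhost:8500"            // Consul exposed locally (assumed)
	cfg.Token = os.Getenv("CONSUL_ACL_TOKEN") // ACL token obtained per the secure-Consul docs

	client, err := api.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Assumed writable-configuration key for a device service; the exact prefix
	// varies by EdgeX release.
	key := "edgex/devices/2.0/device-mqtt/Writable/LogLevel"
	if _, err := client.KV().Put(&api.KVPair{Key: key, Value: []byte("DEBUG")}, nil); err != nil {
		log.Fatal(err)
	}
	log.Printf("set %s=DEBUG; writable settings are picked up without a service restart", key)
}
```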
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#43-the-devicesensor-physically-connected-authenticated-data-store-could-be-corrupted-state-mitigation-implemented-priority-high","title":"43. The Device/Sensor (physically connected authenticated) Data Store Could Be Corrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across query or actuation may be tampered with by an attacker. This may lead to corruption of Device/Sensor (physically connected authenticated). Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: With authentication and encryption of the data between EdgeX and the device/sensor (ex: using TLS), the data on the wire can be protected. The physical security of the device/sensor still needs to be achieved to protect against someone tampering with the device/sensor (ex: holding a match to a thermostat). As with device/sensors that are not authenticated, additional optional mitigation ideas to mitigate unprotected devices/sensors require modifications to the EdgeX device service. The device service could be constructed to filter data or to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). All of these have limits and only prevent the data from being used in the rest of EdgeX once it is received by the device service. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could also be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#44-spoofing-of-destination-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"44. Spoofing of Destination Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has support for storing secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#45-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"45. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to unauthorized access to Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-read","title":"Interaction: read","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#46-spoofing-of-destination-data-store-configuration-files-state-mitigation-implemented-priority-low","title":"46. Spoofing of Destination Data Store Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Configuration Files may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Configuration Files. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: Configuration files are used to seed the EdgeX configuration service (Consul) before the services are started. Configuration files are made part of the service container (deployed with the container image). The only way to spoof the file is to replace the entire service container with new configuration or to transplant new configuration in the container - both require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#47-potential-excessive-resource-consumption-for-edgex-foundry-or-configuration-files-state-mitigation-implemented-priority-low","title":"47. Potential Excessive Resource Consumption for EdgeX Foundry or Configuration Files\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Configuration Files take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: A config file does not consume resources other than file space. The configuration file is deployed with the service container and therefore, without access to the host and Docker, its size is controlled. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#48-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"48. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. 
Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). The JWT token expires in one hour by default. EdgeX UI today does not have the notion of \"users\" or \"permissions\" and it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request_1","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#49-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"49. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX Foundry may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: There is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, impersonating EdgeX would require access to the host system and the Docker network. With this access, many other severe issues could occur (stopping the system, sending incorrect data, etc.). Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#50-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"50. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX Foundry. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing Kong, the service would not know that the response came from something other than Kong. I.e. - there is no current ability to authenticate Kong as a caller of EdgeX services from any other local process on the system. However, Kong is run as a container on the EdgeX Docker network. Replacing/spoofing the Kong container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Kong (with a TLS cert in place). A spoofing service (in this case Kong) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request_2","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#51-elevation-by-changing-the-execution-flow-in-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"51. Elevation by Changing the Execution Flow in EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX UI - Web Application in order to change the flow of program execution within EdgeX UI - Web Application to the attacker's choosing. 
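The one-hour JWT expiry mentioned in the mitigation above can be illustrated with a short sketch using the community golang-jwt library: a token minted with an "exp" claim one hour out validates now and stops validating once that hour has passed. The shared secret and claims are illustrative assumptions; Kong's actual JWT plugin configuration uses its own keys and consumers.

```go
// Minimal sketch of expiry-bound JWT validation, assuming an HMAC-signed token
// with a one-hour "exp" claim. Not Kong's configuration; illustrative only.
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	secret := []byte("illustrative-shared-secret") // assumption, not an EdgeX value

	// Issue a token that expires in one hour.
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"iss": "edgex-ui",
		"exp": time.Now().Add(time.Hour).Unix(),
	})
	signed, err := token.SignedString(secret)
	if err != nil {
		panic(err)
	}

	// Verification fails once the "exp" claim is in the past.
	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
		return secret, nil
	})
	fmt.Println("valid now:", err == nil && parsed.Valid)
}
```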
Justification: <no mitigation provided> Possible Mitigation: The EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. In order to use the Web UI (with EdgeX in secure mode), authentication is required via Kong. With proper authentication, a rogue user could invoke commands, change the rules engine rules (and alter workflows), stop services (and alter workflows), etc. - but these could also be accomplished directly with EdgeX. If the GUI is of extreme concern, it can be removed or turned off as it is a convenience mechanism and is not required for EdgeX operation. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#52-edgex-ui-web-application-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-medium","title":"52. EdgeX UI - Web Application May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: Browser/API Caller may be able to remotely execute code for EdgeX UI - Web Application. Justification: <no mitigation provided> Possible Mitigation: Possible protections to be implemented: buffer overflow protection, sanitization of user inputs, and use of a firewall. Mitigator: EdgeX Foundry Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#53-elevation-using-impersonation-state-mitigation-implemented-priority-low","title":"53. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Browser/API Caller in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: The EdgeX UI just uses the JWT given to it. The browser cannot forge a new JWT or elevate its own privilege as it has no more privilege than a normal API caller. The EdgeX GUI is deployed as a container as part of the EdgeX application set. Impersonation of the Web Application would require access to the host (with privilege) and require changing or removing the existing GUI Web application. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#54-data-flow-request-is-potentially-interrupted-state-not-applicable-priority-low","title":"54. Data Flow request Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Kong can also be used to throttle requests coming from the GUI or other callers (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. If it is a high-risk target, it can be removed without affecting the rest of EdgeX. 
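The Kong throttling mentioned above is typically enabled with Kong's rate-limiting plugin. The sketch below turns it on globally through the Kong admin API from Go; the admin URL and limits are assumptions for illustration, and in a hardened deployment the admin API would not be reachable this casually.

```go
// Minimal sketch, assuming the Kong admin API is reachable at localhost:8001,
// of enabling the rate-limiting plugin to throttle requests and blunt a DoS.
package main

import (
	"log"
	"net/http"
	"net/url"
)

func main() {
	form := url.Values{}
	form.Set("name", "rate-limiting")
	form.Set("config.minute", "120") // at most 120 requests per minute (assumed budget)
	form.Set("config.policy", "local")

	resp, err := http.PostForm("http://localhost:8001/plugins", form) // Kong admin API (assumed reachable)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("kong admin responded:", resp.Status)
}
```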
Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#55-potential-process-crash-or-stop-for-edgex-ui-web-application-state-mitigation-implemented-priority-low","title":"55. Potential Process Crash or Stop for EdgeX UI - Web Application\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: EdgeX UI - Web Application crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Other mechanisms exist to work with EdgeX (such as the service APIs). As with other EdgeX services, stopping the service requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges. The GUI service can be removed for extra security. The GUI is a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#56-data-flow-sniffing-state-mitigation-implemented-priority-medium","title":"56. Data Flow Sniffing\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Data flowing across request may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: A VPN or HTTPS can be used to secure the communications with the EdgeX UI. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#57-potential-data-repudiation-by-edgex-ui-web-application-state-not-applicable-priority-low","title":"57. Potential Data Repudiation by EdgeX UI - Web Application\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX UI - Web Application claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web UI can use elevated logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#58-cross-site-scripting-state-mitigation-implemented-priority-low","title":"58. Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: X-XSS-Protection is enabled on all pages to protect against detected XSS. In environments where cross site scripting is a huge concern, the EdgeX UI Web application can be removed with no effect on the rest of the system. The UI is offered as a convenience. 
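As a generic illustration of the header-based XSS mitigation mentioned above, the sketch below is a small Go HTTP middleware that sets X-XSS-Protection and related response headers. It is not the EdgeX UI's implementation; it simply shows the idea, and modern hardening leans more on Content-Security-Policy than on X-XSS-Protection alone.

```go
// Minimal sketch of security-header middleware, assuming a plain net/http server.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func securityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-XSS-Protection", "1; mode=block")
		w.Header().Set("X-Content-Type-Options", "nosniff")
		w.Header().Set("Content-Security-Policy", "default-src 'self'")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a header-hardened handler")
	})
	log.Fatal(http.ListenAndServe(":8080", securityHeaders(mux)))
}
```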
Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#59-potential-lack-of-input-validation-for-edgex-ui-web-application-state-needs-investigation-priority-medium","title":"59. Potential Lack of Input Validation for EdgeX UI - Web Application\u00a0 [State: Needs Investigation]\u00a0 [Priority: Medium]","text":"Category: Tampering Description: Data flowing across request may be tampered with by an attacker. This may lead to a denial of service attack against EdgeX UI - Web Application or an elevation of privilege attack against EdgeX UI - Web Application or an information disclosure by EdgeX UI - Web Application. Failure to verify that input is as expected is a root cause of a very large number of exploitable issues. Consider all paths and the way they handle data. Verify that all input is verified for correctness using an approved list input validation approach. Justification: <no mitigation provided> Possible Mitigation: Input validation should be added to the GUI. However, access to the Web GUI (and then EdgeX) requires the API gateway token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). If this threat is likely, the Web GUI can be removed as this does not impact the remainder of EdgeX operations. Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#60-spoofing-the-browserapi-caller-external-entity-state-not-applicable-priority-low","title":"60. Spoofing the Browser/API Caller External Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing the browser or any tool or system that calls EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#61-spoofing-the-edgex-ui-web-application-process-state-mitigation-implemented-priority-low","title":"61. Spoofing the EdgeX UI - Web Application Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: EdgeX UI - Web Application may be spoofed by an attacker and this may lead to information disclosure by Browser/API Caller. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: As one of the services deployed as a container of EdgeX, spoofing of the EdgeX GUI would require either replacing the container (requiring host access and elevated privileges) and/or intercepting and rerouting traffic. Further, the GUI must obtain and use a Kong JWT token to access the EdgeX APIs, which a spoofer would not have. 
Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-request_3","title":"Interaction: request","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#62-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"62. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). The JWT token expires in one hour by default. EdgeX UI today does not have the notion of \"users\" or \"permissions\" and it just takes the JWT that is supplied to it, rather than running any sort of SSO login flow. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#63-data-flow-request-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"63. Data Flow request Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#64-external-entity-kong-potentially-denies-receiving-data-state-not-applicable-priority-low","title":"64. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging, but if it did not see a request from a browser or API caller like Postman, then nothing gets issued to EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#65-weakness-in-sso-authorization-state-mitigation-implemented-priority-low","title":"65. Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). 
JWT token expires in one hour by default. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response_1","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#66-spoofing-the-kong-external-entity-state-mitigation-implemented-priority-low","title":"66. Spoofing the Kong External Entity\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Kong may be spoofed by an attacker and this may lead to unauthorized access to EdgeX UI - Web Application. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Kong is run as a container on the EdgeX Docker network. Replacing/spoofing Kong would require privileaged access to the host. Kong is exposed via TLS and we provide a cli tool to install a custom certificate that the web UI can validate if the CA is trusted. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#67-cross-site-scripting-state-mitigation-implemented-priority-low","title":"67. Cross Site Scripting\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: The web server 'EdgeX UI - Web Application' could be a subject to a cross-site scripting attack because it does not sanitize untrusted input. Justification: <no mitigation provided> Possible Mitigation: Because the Web application is running as a container on the Docker network with Kong, access to the response traffic via Kong would require access to the Docker network (requiring access to the host with elevated privilege). The EdgeX Web GUI has X-XSS-Protection enabled. In environments where cross site scripting is a concern, the EdgeX UI Web application can be removed with no effect to the rest of the system. The UI is offered as a convenience. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#68-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"68. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX UI - Web Application may be able to impersonate the context of Kong in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: The Web GUI must authenticate with Kong using a JWT token (see https://docs.edgexfoundry.org/2.2/getting-started/tools/Ch-GUI/#secure-mode-with-api-gateway-token). Without the proper JWT token access, the Web GUI cannot get eleveated privilege to EdgeX as a whole. An impersonating Web GUI might be used to have a user provide their JWT token which could be used to then perform other operations in EdgeX. If this is a real threat, the GUI can be removed and not used without other impacts to EdgeX. The GUI is a convenience tool. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response_2","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#69-data-flow-response-is-potentially-interrupted-state-not-applicable-priority-low","title":"69. 
Data Flow response Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: While a DoS on the GUI is possible (its endpoint is accessible on the Docker network), the GUI would not prevent the critical work of EdgeX from continuing. Kong prevents unauthorized access beyond the GUI. Kong can also be used to throttle requests coming from the GUI or other caller (see https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/). Other mechanisms exist to work with EdgeX (such as the service APIs). The GUI is a convenience. If it is a high-risk target, it can be removed without affecting the rest of EdgeX. Mitigator: Third Party Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#70-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"70. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: The Web GUI can use elevated log level to log all requests. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#71-spoofing-of-the-browserapi-caller-external-destination-entity-state-not-applicable-priority-low","title":"71. Spoofing of the Browser/API Caller External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Browser/API Caller may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Browser/API Caller. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-response_3","title":"Interaction: response","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#72-data-flow-response-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"72. Data Flow response Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. 
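One way to realize the throttling mentioned above is Kong's bundled rate-limiting plugin. The hedged Go sketch below enables it through the Kong Admin API; the admin address (localhost:8001), the Kong service name (edgex-core-data), and the limit values are assumptions for illustration, and in a given deployment the Admin API may be bound to loopback only or otherwise restricted.

```go
// Sketch: enabling Kong's rate-limiting plugin on one proxied service via the
// Kong Admin API. Assumed values: Admin API at http://localhost:8001 and a
// Kong service named "edgex-core-data"; adjust for the actual deployment.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	form := url.Values{}
	form.Set("name", "rate-limiting")  // Kong's bundled rate-limiting plugin
	form.Set("config.minute", "100")   // at most 100 requests per minute (illustrative)
	form.Set("config.policy", "local") // keep counters in Kong's local memory

	resp, err := http.PostForm("http://localhost:8001/services/edgex-core-data/plugins", form)
	if err != nil {
		fmt.Println("failed to reach Kong Admin API:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Kong responded with status:", resp.Status)
}
```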
See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#73-external-entity-browserapi-caller-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"73. External Entity Browser/API Caller Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Browser/API Caller claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Kong provides logging to document all requests. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#74-spoofing-the-edgex-foundry-process-state-not-applicable-priority-high","title":"74. Spoofing the EdgeX Foundry Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: Without an authentication protocol, there is no mitigation for this threat. The device would not be able to determine that the spoofing EdgeX caller is not EdgeX. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#75-spoofing-of-source-data-store-devicesensor-state-not-applicable-priority-high","title":"75. Spoofing of Source Data Store Device/Sensor\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: Due to the nature of many protocols, an outside agent could spoof as a legitimate device/sensor. This is of particular concern if the device service auto provisions the devices/sensors without any authentication. Auto provisioning should be limited to picking up trusted devices. Protocols such as BACnet do allow for authentication with the device/sensor. Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system, but there is no ability in EdgeX directly to protect against a spoofed device/sensor that does not authenticate (which is the norm in some older OT protocols). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#76-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-low","title":"76. 
Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#77-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"77. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of most simple and typically older OT protocols (Modbus or GPIO as examples), there is no way to secure the communications with the device/sensor under that protocol. Critical sensors/devices of this nature should be physically secured (along with their physical connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#78-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"78. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#79-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"79. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. 
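The "last connected" monitoring suggested above could be scripted along the following lines. This is a minimal Go sketch in which the fetch of the timestamp is deliberately left abstract (getLastConnected is a hypothetical helper, not an EdgeX API), and the device name and reporting interval are assumed values.

```go
// Sketch: flag a device whose "last connected" timestamp falls outside the
// expected reporting range. getLastConnected is a hypothetical helper that
// would query core-metadata (or whatever source an adopter uses) for the
// device's last-connected epoch in milliseconds.
package main

import (
	"fmt"
	"time"
)

// getLastConnected is a placeholder; wire it to the real metadata source.
func getLastConnected(deviceName string) int64 {
	return time.Now().Add(-10 * time.Minute).UnixMilli() // canned value for the sketch
}

// stale reports whether the device has been silent longer than maxInterval.
func stale(lastConnectedMs int64, maxInterval time.Duration) bool {
	return time.Since(time.UnixMilli(lastConnectedMs)) > maxInterval
}

func main() {
	const device = "critical-temp-sensor" // assumed device name
	if stale(getLastConnected(device), 5*time.Minute) {
		fmt.Printf("ALERT: %s has not reported within the expected interval\n", device)
	}
}
```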
Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#80-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"80. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#81-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"81. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack, sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example; see the sketch following this entry). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#82-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"82. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. 
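A minimal sketch of the oversized-request rejection referenced in threat 81 is given below, using only the Go standard library; the 64 KB limit, route, and port are illustrative assumptions, not values taken from any actual EdgeX device service.

```go
// Sketch: a device-service-style HTTP handler that refuses request bodies over
// a fixed size, so an oversized payload cannot exhaust the service.
package main

import (
	"io"
	"net/http"
)

const maxBody = 64 * 1024 // 64 KB cap (illustrative)

func handler(w http.ResponseWriter, r *http.Request) {
	// MaxBytesReader makes reading the body fail once the cap is exceeded.
	r.Body = http.MaxBytesReader(w, r.Body, maxBody)
	if _, err := io.ReadAll(r.Body); err != nil {
		http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/api/v2/resource", handler) // illustrative route
	http.ListenAndServe(":59999", nil)           // illustrative port
}
```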
Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_1","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#83-external-entity-megaservice-cloud-or-enterprise-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"83. External Entity Megaservice - Cloud or Enterprise Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Megaservice - Cloud or Enterprise claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#84-spoofing-of-the-megaservice-cloud-or-enterprise-external-destination-entity-state-not-applicable-priority-low","title":"84. Spoofing of the Megaservice - Cloud or Enterprise External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Megaservice - Cloud or Enterprise may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Megaservice - Cloud or Enterprise. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. In the case of a megacloud or enterprise, most communication is from EdgeX to that system vs sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#85-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"85. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the cloud). If the data is deemed critical and if by some means the data flow was interrupted, then store and forward mechanisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. 
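To illustrate the QoS point above, a hedged Go sketch using the Eclipse Paho MQTT client follows. The broker address, topic, and credentials are assumptions, and an adopter's actual export is normally configured in the application service rather than hand-coded like this.

```go
// Sketch: publishing exported data with MQTT QoS 1 so the broker redelivers
// the message if the first attempt is not acknowledged.
package main

import (
	"fmt"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tls://broker.example.com:8883"). // assumed broker endpoint
		SetUsername("edgex-export").                // assumed credentials
		SetPassword("replace-me")

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		fmt.Println("connect failed:", token.Error())
		return
	}
	defer client.Disconnect(250)

	// QoS 1 = "at least once" delivery; the topic and payload are stand-ins.
	token := client.Publish("edgex/export/events", 1, false, `{"reading":42}`)
	token.Wait()
	if token.Error() != nil {
		fmt.Println("publish failed:", token.Error())
	}
}
```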
Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_2","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#86-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"86. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Data flow is in one direction (exporting from EdgeX to the external message bus). If the data is deemed critical and if by some means the data flow was interrupted, store and forward mechisms in EdgeX allow the data to be sent once the communications are re-established. If using MQTT, the quality of service (QoS) setting on a message broker can also be used to ensure all data is delivered or it is resent later. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#87-external-entity-message-topic-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"87. External Entity Message Topic Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Message Topic claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Application services can use elevated log level to log all exports. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#88-spoofing-of-the-message-topic-external-destination-entity-state-not-applicable-priority-low","title":"88. Spoofing of the Message Topic External Destination Entity\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Topic may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of Message Topic. Consider using a standard authentication mechanism to identify the external entity. Justification: <no mitigation provided> Possible Mitigation: Spoofing as the browser or any tool or system of EdgeX is immaterial. Any browser or API tool like Postman would need to request access using the API gateway token. With the token, they are considered a legitimate user of EdgeX. In the case of an external message bus, most communication is from EdgeX to that system vs sending requests to EdgeX (as an export). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not appilcable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_3","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#89-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"89. 
Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (physically connected authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the device would not receive properly authenticated requests and would thereby deny any query or actuation request. Mitigator: EdgeX Foundry Mitigation Status: Mitigation needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#90-spoofing-of-source-data-store-devicesensor-physically-connected-authenticated-state-mitigation-implemented-priority-high","title":"90. Spoofing of Source Data Store Device/Sensor (physically connected authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (physically connected authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With an authentication protocol in place (as exemplified by BACnet secured or ONVIF cameras with security on), the spoofing device or sensor would not be able to properly authenticate and would thereby be denied the ability to send data or be queried. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#91-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"91. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#92-weak-access-control-for-a-resource-state-not-applicable-priority-high","title":"92. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (physically connected authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Securing the data flow to/from a device or sensor is dependent on the OT protocol. In the case of something like BACnet secure (which is based on TLS - see https://www.bacnetinternational.org/page/secureconnect), the flow between EdgeX and the BACnet device can be encrypted. The Device Service would need to be written to use that secure communication. 
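Where a protocol does ride on TLS (as the BACnet Secure Connect case above does), the device service side amounts to standard TLS client configuration. The sketch below shows the general shape using Go's crypto/tls; the CA file path and device endpoint are purely illustrative placeholders.

```go
// Sketch: building a TLS client that only trusts a specific CA, as a device
// service speaking a TLS-based protocol (e.g., BACnet/SC or ONVIF over HTTPS)
// might. The CA path and device address are placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/path/to/device-ca.pem") // placeholder path
	if err != nil {
		fmt.Println("cannot read CA:", err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	// Dial the device over TLS, trusting only the configured CA.
	conn, err := tls.Dial("tcp", "device.local:443", &tls.Config{RootCAs: pool})
	if err != nil {
		fmt.Println("TLS handshake failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("negotiated TLS version:", conn.ConnectionState().Version)
}
```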
In cases where there is no way to secure the communications with the device/sensor under that protocol, mitigation is via physical security of the device/sensor (along with its connection to the EdgeX host). Mitigator: No mitigation or not applicable Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#93-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"93. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#94-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"94. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#95-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"95. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or remove a device/sensor causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#96-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-not-applicable-priority-low","title":"96. 
EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (physically connected authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack, sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#97-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-high","title":"97. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold). EdgeX has no means to protect the \"wire\" to a physically connected device/sensor. Physical security is required to protect the wire and device/sensor and mitigate this threat. Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles; see the range-check sketch below). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_4","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#98-spoofing-of-source-data-store-devicesensor-rest-authenticated-state-mitigation-implemented-priority-low","title":"98. Spoofing of Source Data Store Device/Sensor (REST authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Device/Sensor (REST authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to Kong. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place, a spoofing REST caller would not be properly authenticated by Kong and would thereby be denied any query request. 
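Threat 97 above points to checking readings against the min/max attributes declared in the device profile. A minimal Go sketch of such a range check follows; the struct, field names, and limits are assumptions for illustration, not the actual EdgeX device profile schema.

```go
// Sketch: discard (or flag) readings outside the range declared for a resource.
// rangeLimit stands in for whatever min/max attributes the device profile
// carries; it is not the real EdgeX profile type.
package main

import "fmt"

type rangeLimit struct {
	Min, Max float64
}

// valid reports whether a reading falls inside the declared range.
func valid(value float64, r rangeLimit) bool {
	return value >= r.Min && value <= r.Max
}

func main() {
	tempRange := rangeLimit{Min: -40, Max: 125} // assumed limits for a temperature sensor

	for _, reading := range []float64{22.5, 900.0} {
		if !valid(reading, tempRange) {
			fmt.Printf("rejecting out-of-range reading %.1f\n", reading)
			continue
		}
		fmt.Printf("accepting reading %.1f\n", reading)
	}
}
```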
Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#99-external-entity-kong-potentially-denies-receiving-data-state-mitigation-implemented-priority-low","title":"99. External Entity Kong Potentially Denies Receiving Data\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Repudiation Description: Kong claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#100-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"100. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (REST authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: REST requests and responses to/through Kong are encrypted by default. Mitigator: Third Party Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#101-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-low","title":"101. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Kong can be configured to throttle requests to prevent a DoS attack. See https://keyvatech.com/2019/12/03/secure-your-business-critical-apps-with-kong/ Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#102-data-store-inaccessible-state-mitigation-implemented-priority-medium","title":"102. Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the network communication connection causing major disruption of service (ex: removing or cutting off comms to a critical temperature resource of a heating or cooling machine). EdgeX has no means to protect the network connection. Physical security is required to protect the wire and device/sensor and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#103-weakness-in-sso-authorization-state-mitigation-implemented-priority-high","title":"103. 
Weakness in SSO Authorization\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Common SSO implementations such as OAUTH2 and OAUTH Wrap are vulnerable to MitM attacks. Justification: <no mitigation provided> Possible Mitigation: In EdgeX, Kong is configured to use JWT token authentication. OAUTH2 and OAUTH are not allowed as of EdgeX 2.0 (Ireland release - see https://docs.edgexfoundry.org/2.3/security/Ch-APIGateway/#configuration-of-jwt-authentication-for-api-gateway). JWT token expires in one hour by default. Mitigator: Third Party Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-sensor-data_5","title":"Interaction: sensor data","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#104-elevation-by-changing-the-execution-flow-in-edgex-foundry-state-mitigation-implemented-priority-low","title":"104. Elevation by Changing the Execution Flow in EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into EdgeX Foundry in order to change the flow of program execution within EdgeX Foundry to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Access to publish data through the external MQTT broker is protected with authentication. Wrong data can also be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#105-edgex-foundry-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-mitigation-implemented-priority-low","title":"105. EdgeX Foundry May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Elevation Of Privilege Description: Device/Sensor (via external MQTT broker - authenticated) may be able to remotely execute code for EdgeX Foundry. Justification: <no mitigation provided> Possible Mitigation: EdgeX does not execute random code based on input from a device or sensor via MQTT (as if it was from a web application with something like unsanitized inputs). All data is sanitized by extracting expected data values from the sensor input data, creating an EdgeX event/reading message and sending that into the rest of EdgeX. The data coming from a sensor could be used to kill the service (ex: a buffer overflow attack, sending too much data for the service to consume - see DoS threats). The device service in EdgeX can be written to reject too large a request (for example). In some cases, a protocol may offer dual authentication, and if used, help to mitigate RCE. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#106-data-store-inaccessible-state-mitigation-implemented-priority-high","title":"106. 
Data Store Inaccessible\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or publisher to the broker. Physical and system security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#107-data-flow-sensor-data-is-potentially-interrupted-state-mitigation-implemented-priority-high","title":"107. Data Flow sensor data Is Potentially Interrupted\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: Outside influence could break the communication connection or MQTT broker causing major disruption of service (ex: removing or cutting off comms to a critical temperature sensor of a heating or cooling machine). EdgeX has no means to protect the connection to the external MQTT broker, the broker itself, or publisher to the broker. Physical and system security is required to protect these and mitigate this threat. The device service does track \"last connected\" and that timestamp could be monitored for outside of normal reporting ranges. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#108-potential-process-crash-or-stop-for-edgex-foundry-state-mitigation-implemented-priority-medium","title":"108. Potential Process Crash or Stop for EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: EdgeX Foundry crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#109-weak-access-control-for-a-resource-state-mitigation-implemented-priority-high","title":"109. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Improper data protection of Device/Sensor (via external MQTT broker - authenticated) can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: Requires encryption of the communications (on both the EdgeX and device/sensor ends) which is not in place by default. 
MQTTS could be implemented by the adopter with the appropriate MQTT broker. Mitigator: Adopter Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#110-potential-data-repudiation-by-edgex-foundry-state-mitigation-implemented-priority-high","title":"110. Potential Data Repudiation by EdgeX Foundry\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: EdgeX Foundry claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: Use of elevated log level (set writable configuration log level to DEBUG in the device service) can be used to log all data communications. Log level on the message bus may also be elevated. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#111-spoofing-of-source-data-store-devicesensor-via-external-mqtt-broker-authenticated-state-mitigation-implemented-priority-high","title":"111. Spoofing of Source Data Store Device/Sensor (via external MQTT broker - authenticated)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Device/Sensor (via external MQTT broker - authenticated) may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT publisher of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby deny any request. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#112-spoofing-the-edgex-foundry-process-state-mitigation-implemented-priority-high","title":"112. Spoofing the EdgeX Foundry Process\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Spoofing Description: EdgeX Foundry may be spoofed by an attacker and this may lead to information disclosure by Device/Sensor (via external MQTT broker - authenticated). Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: With authentication in place the spoofing MQTT receiver of sensor data (or the spoofed external message broker) would not be properly authenticated and thereby be unable to receive. The EdgeX framework has the support to store secrets to authenticate devices. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-service-registration","title":"Interaction: service registration","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#113-spoofing-of-destination-data-store-consul-registry-state-mitigation-implemented-priority-low","title":"113. 
Spoofing of Destination Data Store Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Consul (registry) may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Consul (registry). Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Consul, the service would not know that the response came from something other than Consul. However, Consul is run as a container on the EdgeX Docker network. Replacing/spoofing the Consul container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Consul (with TLS cert in place). A spoofing service (in this case Consul) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#114-potential-excessive-resource-consumption-for-edgex-foundry-or-consul-registry-state-mitigation-implemented-priority-low","title":"114. Potential Excessive Resource Consumption for EdgeX Foundry or Consul (registry)\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Denial Of Service Description: Does EdgeX Foundry or Consul (registry) take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do timeout. Justification: <no mitigation provided> Possible Mitigation: EdgeX services and Consul run as containers in a Docker network that, by default with security on, does not allow direct access to the service APIs. During the process of Consul bootstrapping, the EdgeX security bootstrapper ensures that the Consul APIs and GUI cannot be accessed without an ACL token (see https://docs.edgexfoundry.org/2.2/security/Ch-Secure-Consul/). Therefore, using the Consul APIs to cause a DoS attack would require access tokens. A rogue authorized user or someone able to illegally get the Consul token could cause excess use of resources that bring the services or Consul down. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#115-authenticated-data-flow-compromised-state-mitigation-implemented-priority-low","title":"115. Authenticated Data Flow Compromised\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Tampering Description: An attacker can read or modify data transmitted over an authenticated dataflow. Justification: <no mitigation provided> Possible Mitigation: EdgeX containers communicate via a Docker network. A hacker would need to gain access to the host and have elevated privileges on the host to access the network traffic. 
If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then TLS or overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-service-secrets","title":"Interaction: service secrets","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#116-weak-access-control-for-a-resource-state-mitigation-implemented-priority-medium","title":"116. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Information Disclosure Description: Improper data protection of Vault can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: The Vault root and service level tokens are revoked after setup and then all interactions are via the programmatic interface (with a properly authenticated token). Additional options for Vault Master Key encryption are provided here: https://docs.edgexfoundry.org/2.2/threat-models/secret-store/vault_master_key_encryption/ Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#117-spoofing-of-source-data-store-vault-state-mitigation-implemented-priority-low","title":"117. Spoofing of Source Data Store Vault\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Vault may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: If someone was able to provide a container that was spoofing as Vault, the service would not know that the response came from something other than Vault. However, Vault is run as a container on the EdgeX Docker network. Replacing/spoofing the Vault container would require privileged (root) access to the host. Additional adopter mitigation would include putting TLS in place between EdgeX and Vault (with TLS cert in place). A spoofing service (in this case Vault) would not have the appropriate cert in place to participate in the communications. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-subscribed-message","title":"Interaction: subscribed message","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#118-weak-access-control-for-a-resource-state-mitigation-implemented-priority-low","title":"118. Weak Access Control for a Resource\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Message Bus Broker can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: When running EdgeX in secure mode the Redis database service is secured with a username/password. Redis Pub/Sub utilizes the existing Redis database service so that no additional broker service is required. This in turn creates a Secure MessageBus. 
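As an illustration of the secured message bus described above, the hedged Go sketch below subscribes to a Redis Pub/Sub channel with a username/password using the go-redis client. The address, credentials, and channel pattern are assumptions, and EdgeX services themselves obtain these credentials from the secret store rather than hard-coding them as shown here.

```go
// Sketch: subscribing to the (secured) Redis Pub/Sub message bus with
// credentials. Address, username/password, and the channel pattern are
// placeholders; real services pull credentials from the EdgeX secret store.
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Username: "app-rules-engine", // placeholder ACL user
		Password: "replace-me",       // placeholder; normally from the secret store
	})

	// PSubscribe uses a glob pattern to receive all matching messages.
	sub := rdb.PSubscribe(ctx, "edgex.events.*") // placeholder pattern
	defer sub.Close()

	for msg := range sub.Channel() {
		fmt.Printf("received on %s: %s\n", msg.Channel, msg.Payload)
	}
}
```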
See https://docs.edgexfoundry.org/2.2/security/Ch-Secure-MessageBus/. MQTTS can be used for internal message bus communications but is not provided by EdgeX. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#119-spoofing-of-source-data-store-message-bus-broker-state-mitigation-implemented-priority-low","title":"119. Spoofing of Source Data Store Message Bus Broker\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Low]","text":"Category: Spoofing Description: Message Bus Broker may be spoofed by an attacker and this may lead to incorrect data delivered to EdgeX Foundry. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: The message bus when requiring a broker (MQTT broker for example) is run as a container on the EdgeX Docker network. Replacing/spoofing the broker container would require privileged access to the host. Mitigator: EdgeX Foundry Mitigation Status: Mitigation reviewed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-edgex-service-to-service-http-comms","title":"Diagram: EdgeX Service to Service HTTP comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#edgex-service-to-service-http-comms-diagram-summary","title":"EdgeX Service to Service HTTP comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 2 Mitigation Implemented 0 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-http","title":"Interaction: HTTP","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#120-edgex-service-a-process-memory-tampered-state-needs-investigation-priority-high","title":"120. EdgeX Service A Process Memory Tampered\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers. Mitigator: No mitigation or not applicable Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#121-elevation-using-impersonation-state-needs-investigation-priority-high","title":"121. Elevation Using Impersonation\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service APIs are restricted except through Kong. 
If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternately, TLS can be used to encrypt all traffic. Service-to-service calls behind Kong are unauthenticated in the current implementation. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-edgex-service-to-service-message-bus-comms","title":"Diagram: EdgeX Service to Service message bus comms","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#edgex-service-to-service-message-bus-comms-diagram-summary","title":"EdgeX Service to Service message bus comms Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 2 Total 2 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-message-bus-mqtt-redis-pubsub-nats","title":"Interaction: message bus (MQTT, Redis Pub/Sub, NATS)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#122-elevation-using-impersonation-state-mitigation-implemented-priority-medium","title":"122. Elevation Using Impersonation\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Elevation Of Privilege Description: EdgeX Service B may be able to impersonate the context of EdgeX Service A in order to gain additional privilege. Justification: <no mitigation provided> Possible Mitigation: All services are required to authorize to the message bus, but all services authorized on the message bus have equal privilege to send and receive messages. Impersonating another EdgeX service would require access to the host system and the Docker network. Ports to the service message bus are restricted to internal communications only. If extra security is needed or if an adopter is running EdgeX services in a distributed environment (multiple hosts), then overlay network encryption can be used (see example: https://github.com/edgexfoundry/edgex-examples/tree/update-custom-trigger-multiple-pipelines/security/remote_devices/docker-swarm). Alternately, secure MQTT (MQTTS) message bus communications can be used. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#123-edgex-service-a-process-memory-tampered-state-mitigation-implemented-priority-high","title":"123. EdgeX Service A Process Memory Tampered\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Tampering Description: If EdgeX Service A is given access to memory, such as shared memory or pointers, or is given the ability to control what EdgeX Service B executes (for example, passing back a function pointer.), then EdgeX Service A can tamper with EdgeX Service B. Consider if the function could work with less access to memory, such as passing data rather than pointers. Copy in data provided, and then validate it. Justification: <no mitigation provided> Possible Mitigation: Not applicable in containerized environments. Separate processes running in separate containers.
Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-access-via-vpn","title":"Diagram: Access via VPN","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#access-via-vpn-diagram-summary","title":"Access via VPN Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-host-access","title":"Diagram: Host Access","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#host-access-diagram-summary","title":"Host Access Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-open-port-protections","title":"Diagram: Open Port Protections","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#open-port-protections-diagram-summary","title":"Open Port Protections Diagram Summary:","text":"Not Started 0 Not Applicable 0 Needs Investigation 0 Mitigation Implemented 0 Total 0 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#diagram-device-protocol-threats-modbus-example","title":"Diagram: Device Protocol Threats - Modbus example","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#device-protocol-threats-modbus-example-diagram-summary","title":"Device Protocol Threats - Modbus example Diagram Summary:","text":"Not Started 0 Not Applicable 7 Needs Investigation 9 Mitigation Implemented 2 Total 18 Total Migrated 0"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-binary-rtu-get-or-set","title":"Interaction: Binary RTU (GET or SET)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#124-spoofing-of-destination-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"124. Spoofing of Destination Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination data store. Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to ensure no spoofing or unauthorized collection of data or actuation of the device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#125-potential-excessive-resource-consumption-for-modbus-device-service-or-modbus-devicesensor-state-needs-investigation-priority-high","title":"125. Potential Excessive Resource Consumption for Modbus Device Service or Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: Does Modbus Device Service or Modbus Device/Sensor take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times that it makes sense to let the OS do the job.
Be careful that your resource requests don't deadlock, and that they do time out. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DOS attack (sending too much data), or send erroneous/faulty data, or be disrupted / cut off and therefore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help avoid onboarding an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#126-spoofing-the-modbus-device-service-process-state-needs-investigation-priority-high","title":"126. Spoofing the Modbus Device Service Process\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to unauthorized access to Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the source process. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#127-the-modbus-devicesensor-data-store-could-be-corrupted-state-needs-investigation-priority-high","title":"127. The Modbus Device/Sensor Data Store Could Be Corrupted\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Tampering Description: Data flowing across Binary RTU (GET or SET) may be tampered with by an attacker. This may lead to corruption of Modbus Device/Sensor. Ensure the integrity of the data flow to the data store. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#128-data-store-denies-modbus-devicesensor-potentially-writing-data-state-not-applicable-priority-high","title":"128. Data Store Denies Modbus Device/Sensor Potentially Writing Data\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device/Sensor claims that it did not write data received from an entity on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: It is unlikely that a Modbus device/sensor has a log to provide an audit of requests.
Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#129-data-flow-sniffing-state-not-applicable-priority-high","title":"129. Data Flow Sniffing\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Data flowing across Binary RTU (GET or SET) may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized nor encrypted by the Protocol, any service (any spoof) could appear to be the EdgeX device service and either get data from or (worse) actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#130-weak-credential-transit-state-needs-investigation-priority-high","title":"130. Weak Credential Transit\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Information Disclosure Description: Credentials on the wire are often subject to sniffing by an attacker. Are the credentials re-usable/re-playable? Are credentials included in a message? For example, sending a zip file with the password in the email. Use strong cryptography for the transmission of credentials. Use the OS libraries if at all possible, and consider cryptographic algorithm agility, rather than hardcoding a choice. Justification: <no mitigation provided> Possible Mitigation: Modbus does not support any type of authentication/authorization in communications. Physical security of the device and wire are the only ways to thwart information disclosure. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#131-data-flow-binary-rtu-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"131. Data Flow Binary RTU (GET or SET) Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#132-data-store-inaccessible-state-needs-investigation-priority-high","title":"132. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary.
Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DOS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#interaction-binary-rtu-response-get-or-se","title":"Interaction: Binary RTU Response (GET or SET)","text":""},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#133-spoofing-of-source-data-store-modbus-devicesensor-state-needs-investigation-priority-high","title":"133. Spoofing of Source Data Store Modbus Device/Sensor\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device/Sensor may be spoofed by an attacker and this may lead to incorrect data delivered to Modbus Device Service. Consider using a standard authentication mechanism to identify the source data store. Justification: <no mitigation provided> Possible Mitigation: As an unprotected (physically) Modbus device/sensor can be used to create a DOS attack (sending too much data), or send erroneous/faulty data, or be disrupted / cut off and therefore not send any data, the device service must be written to monitor and thwart the flow of too much data, notify when data is outside of expected ranges and notify when it appears the device/sensor is no longer connected and reporting. Provisioning of the device using known or specific ranges of MAC addresses (or IP addresses if using Modbus TCP/IP), etc. can help avoid onboarding an unauthorized device. Mitigator: Adopter Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#134-weak-access-control-for-a-resource-state-not-applicable-priority-low","title":"134. Weak Access Control for a Resource\u00a0 [State: Not Applicable]\u00a0 [Priority: Low]","text":"Category: Information Disclosure Description: Improper data protection of Modbus Device/Sensor can allow an attacker to read information not intended for disclosure. Review authorization settings. Justification: <no mitigation provided> Possible Mitigation: As Modbus is a simple protocol (reporting data or reacting to actuation requests), it is not possible for the device or sensor to gain other data from the device service (or EdgeX as a whole). Mitigator: No mitigation or not applicable Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#135-spoofing-the-modbus-device-service-process-state-not-applicable-priority-high","title":"135. Spoofing the Modbus Device Service Process\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Spoofing Description: Modbus Device Service may be spoofed by an attacker and this may lead to information disclosure by Modbus Device/Sensor. Consider using a standard authentication mechanism to identify the destination process. Justification: <no mitigation provided> Possible Mitigation: As there are no means to secure Modbus communications via the protocol exchange, the Modbus device/sensor and its wired connection must be physically secured to ensure no spoofing or unauthorized collection of data or actuation of the device.
Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#136-potential-data-repudiation-by-modbus-device-service-state-mitigation-implemented-priority-high","title":"136. Potential Data Repudiation by Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: High]","text":"Category: Repudiation Description: Modbus Device Service claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data. Justification: <no mitigation provided> Possible Mitigation: An elevated log level can be used to log all data communications from a device/sensor. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#137-potential-process-crash-or-stop-for-modbus-device-service-state-mitigation-implemented-priority-medium","title":"137. Potential Process Crash or Stop for Modbus Device Service\u00a0 [State: Mitigation Implemented]\u00a0 [Priority: Medium]","text":"Category: Denial Of Service Description: Modbus Device Service crashes, halts, stops or runs slowly; in all cases violating an availability metric. Justification: <no mitigation provided> Possible Mitigation: Stopping EdgeX services requires host access (and access to the Docker engine, Docker containers and Docker network) with elevated privileges or access to the EdgeX system management APIs (requiring the Kong JWT token). The system management service can be removed for extra security. Mitigator: EdgeX Foundry Mitigation Status: Mitigation written"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#138-data-flow-binary-rtu-response-get-or-set-is-potentially-interrupted-state-not-applicable-priority-high","title":"138. Data Flow Binary RTU Response (GET or SET) Is Potentially Interrupted\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent interrupts data flowing across a trust boundary in either direction. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with or shut off to cause DOS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire). Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#139-data-store-inaccessible-state-needs-investigation-priority-high","title":"139. Data Store Inaccessible\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Denial Of Service Description: An external agent prevents access to a data store on the other side of the trust boundary. Justification: <no mitigation provided> Possible Mitigation: As the communication to a Modbus device / sensor is not authenticated/authorized by the protocol, the communication across the wire could be tampered with to cause DOS attacks or actuate the device illegally. Given the nature of Modbus, the only way to protect against this threat is to physically secure the device and connectivity (wire).
Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#140-modbus-device-service-may-be-subject-to-elevation-of-privilege-using-remote-code-execution-state-needs-investigation-priority-high","title":"140. Modbus Device Service May be Subject to Elevation of Privilege Using Remote Code Execution\u00a0 [State: Needs Investigation]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: Modbus Device/Sensor may be able to remotely execute code for Modbus Device Service. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Mitigation Research needed"},{"location":"threat-models/stride-model/EdgeXFoundryThreatReportV2.2/#141-elevation-by-changing-the-execution-flow-in-modbus-device-service-state-not-applicable-priority-high","title":"141. Elevation by Changing the Execution Flow in Modbus Device Service\u00a0 [State: Not Applicable]\u00a0 [Priority: High]","text":"Category: Elevation Of Privilege Description: An attacker may pass data into Modbus Device Service in order to change the flow of program execution within Modbus Device Service to the attacker's choosing. Justification: <no mitigation provided> Possible Mitigation: Outside influence on a sensor or device is one of the biggest threats to an edge system and one of the hardest to mitigate. If tampered with, a sensor or device could be used to send the wrong data (e.g., force a temp sensor to send a signal that it is too hot when it is really too cold), too much data (overwhelming the edge system by causing the sensor to send data too often), or not enough data (e.g., disconnecting a critical monitor sensor that would cause a system to stop). The device service can be constructed to filter data to avoid the \u201ctoo much\u201d data DoS. The device service can be constructed to report and alert when there is not enough data coming from the device or sensor or the sensor/device appears to be offline (provided by the last connected tracking in EdgeX). Wrong data can be mitigated by having the device service look for expected ranges of values (as supported by min/max attributes on device profiles). Physical security of the sensor and communications (wire) offer the best hope to mitigate this threat.
Commercial 3rd party software or extensions to EdgeX (see, for example, RSA\u2019s Netwitness IoT: https://www.netwitness.com/en-us/products/iot/) could be used to detect anomalous sensor/device communications and isolate the sensor from the system. Mitigator: Adopter Mitigation Status: Cannot mitigate or not applicable"},{"location":"walk-through/Ch-Walkthrough/","title":"EdgeX Demonstration API Walk Through","text":"In order to better appreciate the EdgeX Foundry micro services (what they do and how they work), how they inter-operate with each other, and some of the more important API calls that each micro service has to offer, this demonstration API walk through shows how a device service and device are established in EdgeX, how data flows through the various services, and how data is then shipped out of EdgeX to the cloud or enterprise system.
Through this demonstration, you will play the part of various EdgeX micro services by manually making REST calls in a way that mimics EdgeX system behavior. After exploring this demonstration, and hopefully exercising the APIs yourself, you should have a much better understanding of how EdgeX Foundry works.
To be clear, this walkthrough is not the way you set up all your device services, devices, etc. In this walkthrough, you manually call EdgeX APIs to perform the work that a device service would do to get a new device set up and to send data to/through EdgeX. In other words, you are simulating the work a device service does automatically by manually executing EdgeX APIs. You will also exercise APIs to see the results of the work accomplished by the device service and all of EdgeX.
Next>
"},{"location":"walk-through/Ch-WalkthroughCommands/","title":"Calling commands","text":"Recall that the device profile (the camera-monitor-profile
in this walkthrough) included a number of commands to get/set (read or write) information from any device of that type. Also recall that the device (the countcamera1
in this walkthrough) was associated to the device profile (again, the camera-monitor-profile
) when the device was provisioned.
See core command API for more details.
With the setup complete, you can ask the core command micro service for the list of commands associated to the device (the countcamera1
). The command micro service exposes the commands in a common, normalized way that enables simplified communications with the devices for other EdgeX micro services and for applications or external systems that need to read from or actuate those devices.
Use either the Postman or Curl tab below to walkthrough getting the list of commands.
PostmanCurlMake a GET request to http://localhost:59882/api/v3/device/name/countcamera1
.
Note
Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default.
Make a curl GET request as shown below.
curl -X GET localhost:59882/api/v3/device/name/countcamera1 | json_pp\n
Note
Please note the change in port for the command request above. We are no longer calling on core metadata in this part of the walkthrough. The command micro service is at port 59882 by default.
Explore all of the URLs returned as part of this response! These are the URLs that clients (internal or external to EdgeX) can call to trigger the various get/set (read and write) offerings on the Device. However, do take note that the host for the URLs is edgex-core-command
. This is the name of the host for core command inside Docker. To exercise the URL outside of Docker, you would have to use the name of the system host (localhost
if executing on the same box).
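For instance, the Counts device command defined by the camera-monitor-profile in this walkthrough could be exercised from the host with a request like the one below (a hedged example - your returned URLs may differ, and, like the set request later in this walkthrough, it will return an error here because no real device service is running).
curl -X GET localhost:59882/api/v3/device/name/countcamera1/Counts | json_pp\n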
While we're at it, check that no data has yet been shipped to core data from the camera device. Since the device service and device in this demonstration are wholly manually driven by you, no sensor data should yet have been collected. You can test this theory by asking for the count of events in core data.
"},{"location":"walk-through/Ch-WalkthroughCommands/#walkthrough-events","title":"Walkthrough - Events","text":"Use either the Postman or Curl tab below to walkthrough getting the list of events.
PostmanCurlMake a GET request to http://localhost:59880/api/v3/event/count/device/name/countcamera1
.
Make a curl GET request as shown below.
curl -X GET localhost:59880/api/v3/event/count/device/name/countcamera1\n
The response returned should indicate no events for the camera in core data.
{\"apiVersion\":\"v2\",\"statusCode\":200,\"Count\":0}\n
"},{"location":"walk-through/Ch-WalkthroughCommands/#execute-a-command","title":"Execute a Command","text":"While there is no real device or device service in this walkthrough, EdgeX doesn't know that. Therefore, with all the configuration and setup you have performed, you can ask EdgeX to set the scan depth or set the snapshot duration to the camera, and EdgeX will dutifully try to perform the task. Of course, since no device service or device exists, as expected EdgeX will ultimately responds with an error. However, through the log files, you can see a command made of the core command micro service, attempts to call on the appropriate command of the fictitious device service that manages our fictitious camera.
For example's sake, let's launch a command to set the scan depth of countcamera1
(the name of the single human/dog counting camera device in EdgeX right now). The first task to launch a request to set the scan depth is to get the URL for the command to set
or write a new scan depth on the device. Return to the results of the request to get a list of the commands by the device name above.
Locate and copy the URL and path for the set
depth command. Below is a picture containing a slice of the JSON returned by the GET request above with the desired set
Command URL highlighted - yours will vary based on IDs.
Use either the Postman or Curl tab below to walkthrough actuating the device.
PostmanCurlMake a PUT request to http://localhost:59882/api/v3/device/name/countcamera1/ScanDepth
with the following body.
{\"depth\":\"9\"}\n
Warning
Notice that the URL above is a combination of both the command URL and path you found from your command list.
Make a curl PUT request as shown below.
curl -X PUT -d '{\"depth\":\"9\"}' localhost:59882/api/v3/device/name/countcamera1/ScanDepth\n
Warning
Notice that the URL above is a combination of both the command URL and path you found from your command list.
"},{"location":"walk-through/Ch-WalkthroughCommands/#check-command-service-log","title":"Check Command Service Log","text":"Again, because no device service (or device) actually exists, core command will respond with a Failed to send a http request
error. However, checking the logging output will prove that the core command micro service did receive the request and attempted to call on the non-existent device service (at the address provided for the device service - defined earlier in this walkthrough) to issue the actuating command. To see the core command service log, issue the following Docker command:
docker logs edgex-core-command\n
The last lines of the log entries should highlight the attempt to contact the non-existent device. level=ERROR ts=2021-09-16T20:50:09.965368572Z app=core-command source=http.go:47 X-Correlation-ID=49cc97f5-1e84-4a46-9eb5-543ae8bd5284 msg=\"failed to send a http request -> Put \\\"camera-device-service:59990/api/v3/device/name/countcamera1/ScanDepth?\\\": unsupported protocol scheme \\\"camera-device-service\\\"\"\n...\n
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/","title":"Defining your device","text":"A device profile can be thought of as a template or as a type or classification of device. General characteristics about the type of device, the data theses devices provide, and how to command them is all provided in a device profile. Other pages within this document set provide more details about a device profile and its purpose (see core metadata to start). It is typical that as part of the reference information setup sequence, the device service provides the device profiles for the types of devices it manages.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#device-profile","title":"Device Profile","text":"See core metadata API for more details.
Our fictitious device service will manage only the human/dog counting camera, so it only needs to make one POST
request to create the monitoring camera device profile. Since device profiles are often represented in YAML, you make a multi-part form-data POST
with the device profile file (find the example profile here) to create the Camera Monitor profile.
If you explore the sample profile, you will see that the profile begins with some general information.
name: \"camera-monitor-profile\"\nmanufacturer: \"IOTech\"\nmodel: \"Cam12345\"\nlabels: - \"camera\"\ndescription: \"Human and canine camera monitor profile\"\n
Each profile has a unique name along with a description, manufacturer, model and collection of labels to assist in queries for particular profiles. These are relatively straightforward attributes of a profile.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#resources-and-commands","title":"Resources and Commands","text":"The device profile defines how to communicate with any device that abides by the profile. In particular, it defines the deviceResources
and deviceCommands
used to send requests to the device (via the device service). See the Device Profile documentation for more background on each of these.
The device profile describes the elements of data that can be obtained from the device or sensor and how to change a setting on a device or sensor. The data that can be obtained or the setting that can be changed are called resources or more precisely they are referred to as device resources in Edgex. Learn more about deviceReources
in the Device Profile documentation.
In this walkthrough example, there are two pieces of data we want to be able to get or read from the camera: dog and human counts. Therefore, both are represented as device resources in the device profile. Additionally, we want to be able to set two settings on the camera: the scan depth and snapshot duration. These are also represented as device resources in the device profile.
deviceResources:\n-\nname: \"HumanCount\"\nisHidden: false #is hidden is false by default so this is just making it explicit for purpose of the walkthrough demonstration\ndescription: \"Number of people on camera\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"R\" #designates that this property can only be read and not set\ndefaultValue: \"0\"\n-\nname: \"CanineCount\"\nisHidden: false\ndescription: \"Number of dogs on camera\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"R\" #designates that this property can only be read and not set\ndefaultValue: \"0\"\n-\nname: \"ScanDepth\"\nisHidden: false\ndescription: \"Get/set the scan depth\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"RW\" #designates that this property can be read or set\ndefaultValue: \"0\"\n\n-\nname: \"SnapshotDuration\"\nisHidden: false\ndescription: \"Get the snaphot duration\"\nproperties:\nvalueType: \"Int16\"\nreadWrite: \"RW\" #designates that this property can be read or set\ndefaultValue: \"0\"\n
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#understanding-device-commands","title":"Understanding Device Commands","text":"Command or more precisely device commands specify access to reads and writes for multiple simultaneous device resources. In other words, device commands allow you to ask for multiple pieces of data from a sensor at one time (or set multiple settings at one time). In this example, we can request both human and dog counts in one request by establishing a device command that specifies the request for both. Get more details on deviceCommands
in the Device Profile documentation.
deviceCommands:\n-\nname: \"Counts\"\nreadWrite: \"R\"\nisHidden: false\nresourceOperations:\n- { deviceResource: \"HumanCount\" }\n- { deviceResource: \"CanineCount\" }\n
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#walkthrough-device-profile","title":"Walkthrough - Device Profile","text":"Use either the Postman or Curl tab below to walkthrough uploading the device profile.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#download-the-device-profile","title":"Download the Device Profile","text":"Click on the link below to download and save the device profile (YAML) to your system.
EdgeX_CameraMonitorProfile.yml
Note
Device profiles are stored in core metadata. Therefore, note that the calls in the walkthrough are to the metadata service, which defaults to port 59881.
"},{"location":"walk-through/Ch-WalkthroughDeviceProfile/#upload-the-device-profile-to-edgex","title":"Upload the Device Profile to EdgeX","text":"PostmanCurlMake a POST request to http://localhost:59881/api/v3/deviceprofile/uploadfile
. The request should not include any additional headers (leave the defaults). In the Body, make sure \"form-data\" is selected and set the Key to file
and then select the device profile file where you saved it (as shown below).
If your API call is successful, you will get a generated id for your new DeviceProfile
in the response area.
Make a curl POST request as shown below.
curl -X POST -F 'file=@/path/to/your/profile/here/EdgeX_CameraMonitorProfile.yml' http://localhost:59881/api/v3/deviceprofile/uploadfile\n
If your API call is successful, you will get a generated id for your new DeviceProfile
in the response area.
Warning
Note that the file location in the curl command above needs to be replaced with your actual file location path. Also, if you do not save the device profile file to EdgeX_CameraMonitorProfile.yml
, then you will need to replace the file name as well.
If you make a GET call to the http://localhost:59881/api/v3/deviceprofile/all
URL (with Postman or curl) you will get a listing (in JSON) of all the device profiles (and all of its associated deviceResource
and deviceCommand
) currently defined in your instance of EdgeX, including the one you just added.
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughDeviceService/","title":"Register your device service","text":"Our next task in this walkthrough is to have the device service register or define itself in EdgeX. That is, it can proclaim to EdgeX that \"I have arrived and am functional.\"
"},{"location":"walk-through/Ch-WalkthroughDeviceService/#register-with-core-configuration-and-registration","title":"Register with Core Configuration and Registration","text":"Part of that registration process of the device service, indeed any EdgeX micro service, is to register itself with the core configuration & registration. In this process, the micro service provides its location to the Config/Reg micro service and picks up any new/latest configuration information from this central service. Since there is no real device service in this walkthrough demonstration, this part of the inter-micro service exchange is not explored here.
"},{"location":"walk-through/Ch-WalkthroughDeviceService/#device-service","title":"Device Service","text":"See core metadata API for more details.
At this point in your walkthrough, the device service must create a representative instance of itself in core metadata. It is in this registration that the device service is given an address that allows core command or any EdgeX service to communicate with it.
The name of the device service must be unique across all of EdgeX. When registering a device service, the initial admin state can be provided. The administrative state (aka admin state) provides control of the device service by man or other systems. It can be set to LOCKED
or UNLOCKED
. When a device service is set to LOCKED
, it is not suppose to respond to any command requests nor send data from the devices. See Admin State documentation for more details.
Use either the Postman or Curl tab below to walkthrough creating the DeviceService
.
Make a POST request to http://localhost:59881/api/v3/deviceservice
with the following body:
{\n\"apiVersion\" : \"v3\",\n\"service\": {\n\"name\": \"camera-control-device-service\",\n\"description\": \"Manage human and dog counting cameras\",\n\"adminState\": \"UNLOCKED\",\n\"labels\": [\n\"camera\",\n\"counter\"\n],\n\"baseAddress\": \"camera-device-service:59990\"\n}\n}\n
Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new DeviceService
in the response area.
Make a curl POST request as shown below.
curl -X 'POST' 'http://localhost:59881/api/v3/deviceservice' -d '[{\"apiVersion\" : \"v3\",\"service\": {\"name\": \"camera-control-device-service\",\"description\": \"Manage human and dog counting cameras\", \"adminState\": \"UNLOCKED\", \"labels\": [\"camera\",\"counter\"], \"baseAddress\": \"camera-device-service:59990\"}}]'\n
If your API call is successful, you will get a generated ID for your new DeviceService
.
If you make a GET call to the http://localhost:59881/api/v3/deviceservice/all
URL (with Postman or curl) you will get a listing (in JSON) of all the device services currently defined in your instance of EdgeX, including the one you just added.
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughExporting/","title":"Exporting your device data","text":"Great, so the data sent by the camera device makes its way to core data. How can that data be sent to an enterprise system or the Cloud? How can that data be used by an edge analytics system (like a rules engine) to actuate on a device?
"},{"location":"walk-through/Ch-WalkthroughExporting/#getting-data-to-the-rules-engine","title":"Getting data to the rules engine","text":"By default, data is already passed from the core data service to application services (app services) via Redis Pub/Sub messaging. Alternately, the data can be supplied between the two via MQTT. A preconfigured application service is provided with the EdgeX default Docker Compose files that gets this data and routes it to the eKuiper rules engine. The application service is called app-service-rules
(see below). More specifically, it is an app service configurable.
app-service-rules:\ncontainer_name: edgex-app-rules-engine\ndepends_on:\n- consul\n- data\nenvironment:\nCLIENTS_CORE_COMMAND_HOST: edgex-core-command\nCLIENTS_CORE_DATA_HOST: edgex-core-data\nCLIENTS_CORE_METADATA_HOST: edgex-core-metadata\nCLIENTS_SUPPORT_NOTIFICATIONS_HOST: edgex-support-notifications\nCLIENTS_SUPPORT_SCHEDULER_HOST: edgex-support-scheduler\nDATABASE_HOST: edgex-redis\nEDGEX_PROFILE: rules-engine\nEDGEX_SECURITY_SECRET_STORE: \"false\"\nMESSAGEQUEUE_HOST: edgex-redis\nREGISTRY_HOST: edgex-core-consul\nSERVICE_HOST: edgex-app-rules-engine\nTRIGGER_EDGEXMESSAGEBUS_PUBLISHHOST_HOST: edgex-redis\nTRIGGER_EDGEXMESSAGEBUS_SUBSCRIBEHOST_HOST: edgex-redis\nhostname: edgex-app-rules-engine\nimage: edgexfoundry/app-service-configurable:2.0.1\nnetworks:\nedgex-network: {}\nports:\n- 127.0.0.1:59701:59701/tcp\nread_only: true\nsecurity_opt:\n- no-new-privileges:true\nuser: 2002:2001\n
"},{"location":"walk-through/Ch-WalkthroughExporting/#seeing-the-data-export","title":"Seeing the data export","text":"The log level of any EdgeX micro service is set to INFO
by default. If you tune the log level of the app-service-rules micro service to DEBUG
, you can see Event
s pass through the app service on the way to the rules engine.
To set the log level of any service, open the Consul UI in a browser by visiting http://[host]:8500
. When the Consul UI opens, click on the Key/Value tab on the top of the screen.
On the Key/Value display page, click on edgex
> appservices
> 2.0
> app-rules-engine
> Writable
> LogLevel
. In the Value entry field that presents itself, replace INFO
with DEBUG
and hit the Save
button.
The log level change will be picked up by the application service. In a terminal window, execute the Docker command below to view the service log.
docker logs -f edgex-app-rules-engine\n
Now push another event/reading into core data as you did earlier (see Send Event). You should see each new event/reading created by acknowledged by the app service. With the right application service and rules engine configuration, the event/reading data is published to the rules engine topic where it can then be picked up and used by the rules engine service to trigger commands just as you did manually in this walkthrough.
"},{"location":"walk-through/Ch-WalkthroughExporting/#exporting-data-to-anywhere","title":"Exporting data to anywhere","text":"You can create an additional application service to get the data to another application or service, REST endpoint, MQTT topic, cloud provider, and more. See the Getting Started guide on exporting data for more information on how to use another app service configurable to get EdgeX data to any client.
"},{"location":"walk-through/Ch-WalkthroughExporting/#building-your-own-solutions","title":"Building your own solutions","text":"Congratulations, you've made it all the way through the Walkthrough tutorial!
<Back
"},{"location":"walk-through/Ch-WalkthroughProvision/","title":"Provision a device","text":"In the last act of setup, a device service often discovers and provisions devices (either statically or dynamically) and that it is going to manage on the part of EdgeX. Note the word \"often\" in the last sentence. Not all device services will discover new devices or provision them right away. Depending on the type of device and how the devices communicate, it is up to the device service to determine how/when to provision a device. In some cases, the provisioning may be triggered by a human request of the device service once everything is in place and once the human can provide the information the device service needs to physically connected to the device.
"},{"location":"walk-through/Ch-WalkthroughProvision/#device","title":"Device","text":"See core metadata API for more details.
For the sake of this demonstration, the call to core metadata will provision the human/dog counting monitor camera as if the device service discovered it (by some unknown means) and provisioned the device as part of some startup process. To create a Device
, it must be associated to a DeviceProfile
, a DeviceService
, and contain one or more Protocols
that define how and where to communicate with the device (possibly providing its address).
When creating a device, you specify both the admin state (just as you did for a device service) and an operating state. The operating state (aka op state) provides an indication on the part of EdgeX about the internal operating status of the device. The operating state is not set externally (as by another system or man), it is a signal from within EdgeX (and potentially the device service itself) about the condition of the device. The operating state of the device may be either UP
or DOWN
(it may alsy be UNKNOWN
if the state cannot be determined). When the operating state of the device is DOWN
, it is either experiencing some difficulty or going through some process (for example an upgrade) which does not allow it to function in its normal capacity.
Use either the Postman or Curl tab below to walkthrough creating the Device
.
Make a POST request to http://localhost:59881/api/v3/device
with the following body:
[\n{\n\"apiVersion\" : \"v3\",\n\"device\": {\n\"name\": \"countcamera1\",\n\"description\": \"human and dog counting camera #1\",\n\"adminState\": \"UNLOCKED\",\n\"operatingState\": \"UP\",\n\"labels\": [\n\"camera\",\"counter\"\n],\n\"location\": \"{lat:45.45,long:47.80}\",\n\"serviceName\": \"camera-control-device-service\",\n\"profileName\": \"camera-monitor-profile\",\n\"protocols\": {\n\"camera-protocol\": {\n\"camera-address\": \"localhost\",\n\"port\": \"1234\",\n\"unitID\": \"1\"\n}\n}\n}\n}\n]\n
Be sure that you are POSTing raw data, not form-encoded data. If your API call is successful, you will get a generated ID for your new Device
in the response area.
Note
The camera-monitor-profile
was created by the device profile uploaded in a previous walkthrough step. The camera-control-device-service
was created in the last walkthough step. These names must match the previously created EdgeX objects in order to successfully provision your device.
Make a curl POST request as shown below.
curl -X 'POST' 'http://localhost:59881/api/v3/device' -d '[{\"apiVersion\" : \"v3\", \"device\": {\"name\": \"countcamera1\",\"description\": \"human and dog counting camera #1\",\"adminState\": \"UNLOCKED\",\"operatingState\": \"UP\",\"labels\": [\"camera\",\"counter\"],\"location\": \"{lat:45.45,long:47.80}\",\"serviceName\": \"camera-control-device-service\",\"profileName\": \"camera-monitor-profile\",\"protocols\": {\"camera-protocol\": {\"camera-address\": \"localhost\",\"port\": \"1234\",\"unitID\": \"1\"}}}}]'\n
If your API call is successful, you will get a generated ID (a UUID) for your new Device
.
Note
The camera-monitor-profile
was created by the device profile uploaded in a previous walkthrough step. The camera-control-device-service
was created in the last walkthough step. These names must match the previously created EdgeX objects in order to successfully provision your device.
Ensure the monitor camera is among the devices known to core metadata. If you make a GET call to the http://localhost:59881/api/v3/device/all
URL (with Postman or curl) you will get a listing (in JSON) of all the devices currently defined in your instance of EdgeX that should include the one you just added.
There are many additional APIs on core metadata to retrieve a DeviceProfile
, Device
, DeviceService
, etc. As an example, here is one to find all devices associated to a given DeviceProfile
.
curl -X GET http://localhost:59881/api/v3/device/profile/name/camera-monitor-profile | json_pp\n
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughReading/","title":"Sending events and reading data","text":"In the real world, the human/dog counting camera would start to take pictures, count beings, and send that data to EdgeX. To simulate this activity in this section of the walkthrough, you will make core data API calls as if you were the camera's device and device service. That is, you will report human and dog counts to core data in the form of event/reading objects.
"},{"location":"walk-through/Ch-WalkthroughReading/#send-an-eventreading","title":"Send an Event/Reading","text":"See core data API for more details.
Data is submitted to core data as an Event
object. An event is a collection of sensor readings from a device (associated to a device by its name) at a particular point in time. A Reading
object in an Event
object is a particular value sensed by the device and associated to a Device Resource (by name) to provide context to the reading.
So, the human/dog counting camera might determine that there are 5 people and 3 dogs in the space it is monitoring. In the EdgeX vernacular, the device service upon receiving these sensed values from the camera device would create an Event
with two Reading
s - one Reading
would contain the key/value pair of HumanCount:5 and the other Reading
would contain the key/value pair of CanineCount:3.
The device service, on creating the Event
and associated Reading
objects would transmit this information to core data via REST call.
Use either the Postman or Curl tab below to walkthrough sending an Event
with Reading
s to core data.
Make a POST request to `http://localhost:59880/api/v3/event/camera-monitor-profile/countcamera1/HumanCount with the body below.
{\n\"apiVersion\" : \"v3\",\n\"event\": {\n\"apiVersion\" : \"v3\",\n\"deviceName\": \"countcamera1\",\n\"profileName\": \"camera-monitor-profile\",\n\"sourceName\": \"HumanCount\",\n\"id\": \"d5471d59-2810-419a-8744-18eb8fa03465\",\n\"origin\": 1602168089665565200,\n\"readings\": [\n{\n\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abd\",\n\"origin\": 1602168089665565200,\n\"deviceName\": \"countcamera1\",\n\"resourceName\": \"HumanCount\",\n\"profileName\": \"camera-monitor-profile\",\n\"valueType\": \"Int16\",\n\"value\": \"5\"\n},\n{\n\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abe\",\n\"origin\": 1602168089665565200,\n\"deviceName\": \"countcamera1\",\n\"resourceName\": \"CanineCount\",\n\"profileName\": \"camera-monitor-profile\",\n\"valueType\": \"Int16\",\n\"value\": \"3\"\n} ]\n}\n}\n
If your API call is successful, you will get a generated ID for your new Event
as shown in the image below.
Note
Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event.
Make a curl POST request as shown below.
curl -X POST -d '{\"apiVersion\" : \"v3\",\"event\": {\"apiVersion\" : \"v3\",\"deviceName\": \"countcamera1\",\"profileName\": \"camera-monitor-profile\",\"sourceName\": \"HumanCount\",\"id\":\"d5471d59-2810-419a-8744-18eb8fa03464\",\"origin\": 1602168089665565200,\"readings\": [{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abc\",\"origin\": 1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"HumanCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"5\"},{\"id\": \"7003cacc-0e00-4676-977c-4e58b9612abf\",\"origin\":1602168089665565200,\"deviceName\": \"countcamera1\",\"resourceName\": \"CanineCount\",\"profileName\": \"camera-monitor-profile\",\"valueType\": \"Int16\",\"value\": \"3\"}]}}' localhost:59880/api/v3/event/camera-monitor-profile/countcamera1/HumanCount\n
Note
Notice that the POST request URL contains the device profile name, the device name and the device resource (or device command) associated with the device that is providing the event.
"},{"location":"walk-through/Ch-WalkthroughReading/#origin-timestamp","title":"Origin Timestamp","text":"The device service will supply an origin property in the Event
and Reading
object to suggest the time (in Epoch timestamp/nanoseconds format) at which the data was sensed/collected.
EdgeX uses nanosecond because some devices and use cases may provide and need that degree of accuracy. Also, Collisions at nanosecond accuracy are unlikely.
The Event
origin is always set by device service SDK and it is intended to be unique for that device service instance. The Reading
origin should be set by the device service's ProtocolDriver implementation, SDK copies the Event
origin into it if it was not set.
Note
Smart devices will often timestamp sensor data and this timestamp can be used as the origin timestamp. In cases where the sensor/device is unable to provide a timestamp (\"dumb\" or brownfield sensors), it is the device service that creates a timestamp for the sensor data that it be applied as the origin timestamp for the device.
"},{"location":"walk-through/Ch-WalkthroughReading/#exploring-eventsreadings","title":"Exploring Events/Readings","text":"Now that an Event
and associated Readings
have been sent to core data, you can use the core data API to explore that data that is now stored in the database.
Recall from a previous walkthrough step, you checked that no data was yet stored in core data. Make a similar call to see event records have now been sent into core data..
"},{"location":"walk-through/Ch-WalkthroughReading/#walkthrough-query-eventsreadings","title":"Walkthrough - Query Events/Readings","text":"Use either the Postman or Curl tab below to walkthrough getting the list of events.
PostmanCurlMake a GET request to retrieve the Event
s associated to the countcamera1
device: http://localhost:59880/api/v3/event/device/name/countcamera1
.
Make a GET request to retrieve the Reading
s associated to the countcamera1
device: http://localhost:59880/api/v3/reading/device/name/countcamera1
.
Make a curl GET requests to retrieve 10 of the last Event
s associated to the countcamera1
device and to retrieve 10 of the human count readings associated to countcamera1
curl -X GET localhost:59880/api/v3/event/device/name/countcamera1 | json_pp\ncurl -X GET localhost:59880/api/v3/reading/device/name/countcamera1 | json_pp\n
There are many additional APIs on core data to retrieve Event
and Reading
data. As an example, here is one to find all events inside of a start and end time range.
curl -X GET localhost:59880/api/v3/event/start/1602168089665560000/end/1602168089665570000 | json_pp\n
<Back Next>
"},{"location":"walk-through/Ch-WalkthroughSetup/","title":"Setup up your environment","text":""},{"location":"walk-through/Ch-WalkthroughSetup/#install-docker-docker-compose-edgex-foundry","title":"Install Docker, Docker Compose & EdgeX Foundry","text":"To explore EdgeX and walk through it's APIs and how it works, you will need:
If you have not already done so, proceed to Getting Started using Docker for how to get these tools and run EdgeX Foundry. If you have the tools and EdgeX already installed and running, you can proceed to the Walkthrough Use Case.
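Before proceeding, it can help to confirm that the tools and the EdgeX containers are actually in place. A quick check from the shell, assuming the reference Docker Compose files that prefix container names with edgex:
docker --version
docker compose version
# list the running EdgeX containers (assumes the edgex name prefix used by the reference compose files)
docker ps --filter name=edgex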
"},{"location":"walk-through/Ch-WalkthroughSetup/#install-postman-optional","title":"Install Postman (optional)","text":"You can follow this walkthrough making HTTP calls from the command-line with a tool like curl
, but it's easier if you use a graphical user interface tool designed for exercising REST APIs. For that we like to use Postman. You can download the native Postman app for your operating system.
Note
Example curl
commands will be provided throughout the walkthrough so that you can run it without Postman.
Alert
It is assumed that, for the purposes of this walkthrough demonstration, all EdgeX services are running on localhost. If this is not the case, substitute your hostname for localhost.
"},{"location":"walk-through/Ch-WalkthroughUseCase/","title":"Example Use Case","text":"In order to explore EdgeX, its services and APIs and to generally understand how it works, it helps to see EdgeX under the context of a real use case. While you exercise the APIs under a hypothetical situation in order to demonstrate how EdgeX works, the use case is very much a valid example of how EdgeX can be used to collect data from devices and actuate control of the sensed environment it monitors. People (and animal) counting camera technology as highlighted in this walk through does exist and has been connected to EdgeX before.
"},{"location":"walk-through/Ch-WalkthroughUseCase/#object-counting-camera","title":"Object Counting Camera","text":"Suppose you had a new device that you wanted to connect to EdgeX. The device was a camera that took a picture and then had an on-board chip that analyzed the picture and reported the number of humans and canines (dogs) it saw.
How often the camera takes a picture and reports its findings can be configured. In fact, the camera device could be sent two actuation commands - that is, sent two requests to which it must respond and do something. You could send a request to set the time, in seconds, between picture snapshots (and thus between calculations of the number of humans and dogs found in the resulting image). You could also request it to set the scan depth, in feet, of the camera - that is, set how far out the camera looks. The farther out it looks, the less accurate the count of humans and dogs becomes, so this is something the manufacturer wants to allow the user to set based on use case needs.
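Once such a camera is provisioned in EdgeX, those two actuation requests would normally be issued through the core command service rather than sent to the camera directly. A sketch of what that could look like, assuming the default core command port (59882) and hypothetical device resources named SnapshotDuration and ScanDepth that are not part of this walkthrough's device profile:
# hypothetical write commands; the resource names are illustrative only
curl -X PUT -d '{"SnapshotDuration": "30"}' localhost:59882/api/v3/device/name/countcamera1/SnapshotDuration
curl -X PUT -d '{"ScanDepth": "10"}' localhost:59882/api/v3/device/name/countcamera1/ScanDepth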
"},{"location":"walk-through/Ch-WalkthroughUseCase/#edgex-device-representation","title":"EdgeX Device Representation","text":"In EdgeX, the camera must be represented by a Device
. Each Device
is managed by a device service. The device service communicates with the underlying hardware - in this case the camera - in the protocol of choice for that Device
. The device service collects the data from the devices it manages and passes that data into the rest of EdgeX.
Note
A device service will, by default, publish data onto a message bus, which can be subscribed to by core data and/or application services. You'll learn more about these later in this walkthrough. Alternatively, a device service can send data directly to core data.
In this case, the device service would be collecting the count of humans and dogs that the camera sees. The device service also translates requests for actuation from EdgeX and the rest of the world into protocol requests that the physical device understands. So in this example, the device service would take requests to set the duration between snapshots and to set the scan depth and translate those requests into protocol commands that the camera understands.
Exactly how this camera physically connects to the host machine running EdgeX and how the device service works under the covers to communicate with the camera Device is immaterial for the point of this demonstration.
<Back Next>
"}]} \ No newline at end of file diff --git a/3.1/security/Ch-APIGateway/index.html b/3.1/security/Ch-APIGateway/index.html index ea20ba9b59..50f1f5f153 100644 --- a/3.1/security/Ch-APIGateway/index.html +++ b/3.1/security/Ch-APIGateway/index.html @@ -365,14 +365,39 @@ Available Servicesgp#BaJ7#fZnL=~hn>mRXCP!*C>%gG>jI>lt|
zts#2d4TN#RPw=r+gym6ggj@Nl_*_{`L&J-DC