diff --git a/README.md b/README.md index 58044ecceba..62715d70d81 100644 --- a/README.md +++ b/README.md @@ -130,7 +130,7 @@ please head to company's [careers page][careers] or shoot us an e-mail at CRT_FILE_NAME= -``` -These commands use the [OpenSSL](https://www.openssl.org/) tool, so please make sure that you have it installed and set up before running them. - - - Command `make ca` will generate a self-signed certificate that will later be used as a CA to sign the other generated certificates. The CA will expire in 3 years. - - Command `make server_cert` will generate a server cert and sign it with the previously created CA; it will expire after 1000 days. This cert is used as a Mainflux server-side certificate in the usual TLS flow to establish an HTTPS, WSS, or MQTTS connection. - - Command `make thing_cert` will finally generate and sign a client-side certificate and private key for the thing. - -In this example `` represents the key of the thing, and `` represents the name of the certificate and key file which will be saved in the `docker/ssl/certs` directory. The generated certificate will expire after 2 years. The key must be stored in the X.509 certificate `CN` field. This script is created for testing purposes and is not meant to be used in production. We strongly recommend avoiding self-signed certificates and using a certificate management tool such as [Vault](https://www.vaultproject.io/) in production. - -Once you have created the CA and the server-side cert, you can spin up the composition using: - -```bash -AUTH=x509 docker-compose -f docker/docker-compose.yml up -d -``` - -Then, you can create a user and provision things and channels. Now, in order to send a message from a specific thing to the channel, you need to connect the thing to the channel and generate the corresponding client certificate using the aforementioned commands.
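The same mutual-TLS material can also be loaded programmatically. Below is a minimal Go sketch of building a TLS config from the generated CA, client certificate, and key; the `newMutualTLSConfig` helper and the `thing.crt`/`thing.key` file names are illustrative, not part of Mainflux.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
)

// newMutualTLSConfig builds a TLS config that trusts the self-signed CA
// and presents the thing's client certificate, mirroring the curl flags
// --cacert, --cert, and --key used in the examples.
func newMutualTLSConfig(caFile, certFile, keyFile string) (*tls.Config, error) {
	caPEM, err := ioutil.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no valid CA certificates in %s", caFile)
	}
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &tls.Config{
		RootCAs:      pool,                    // verify the server against our CA
		Certificates: []tls.Certificate{cert}, // present the client cert
	}, nil
}

func main() {
	cfg, err := newMutualTLSConfig(
		"docker/ssl/certs/ca.crt",
		"docker/ssl/certs/thing.crt",
		"docker/ssl/certs/thing.key",
	)
	fmt.Println(cfg != nil, err)
}
```

The resulting `*tls.Config` can be plugged into an `http.Transport` or a `tls.Dial` call.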
To publish a message to the channel, the thing should send the following request: - -### HTTPS -```bash -curl -s -S -i --cacert docker/ssl/certs/ca.crt --cert docker/ssl/certs/.crt --key docker/ssl/certs/.key --insecure -X POST -H "Content-Type: application/senml+json" https://localhost/http/channels//messages -d '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]' -``` - -### MQTTS - -#### Publish -```bash -mosquitto_pub -u -P -t channels//messages -h localhost -p 8883 --cafile docker/ssl/certs/ca.crt --cert docker/ssl/certs/.crt --key docker/ssl/certs/.key -m '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]' -``` - -#### Subscribe -``` -mosquitto_sub -u -P --cafile docker/ssl/certs/ca.crt --cert docker/ssl/certs/.crt --key docker/ssl/certs/.key -t channels//messages -h localhost -p 8883 -``` - -### WSS -```javascript -const WebSocket = require('ws'); - -// Do not verify self-signed certificates if you are using one. -process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0' - -// Replace and with real values. -const ws = new WebSocket('wss://localhost/ws/channels//messages?authorization=', -// This is the ClientOptions object that contains the client cert and client key as strings. You can easily load these strings from cert and key files. -{ - cert: `-----BEGIN CERTIFICATE-----....`, - key: `-----BEGIN RSA PRIVATE KEY-----.....` -}) - -ws.on('open', () => { - ws.send('something') -}) - -ws.on('message', (data) => { - console.log(data) -}) -ws.on('error', (e) => { - console.log(e) -}) -``` - -As you can see, the `Authorization` header does not have to be present in the HTTP request, since the key is present in the certificate. However, if you pass the `Authorization` header, it _must be the same as the key in the cert_.
In the case of MQTTS, the `password` field in the CONNECT message _must match the key from the certificate_. In the case of WSS, the `Authorization` header or the `authorization` query parameter _must match the cert key_. diff --git a/docs/bootstrap.md b/docs/bootstrap.md deleted file mode 100644 index e7ace5d68b8..00000000000 --- a/docs/bootstrap.md +++ /dev/null @@ -1,124 +0,0 @@ -## Bootstrap - -`Bootstrapping` refers to a self-starting process that is supposed to proceed without external input. -The Mainflux platform supports the bootstrapping process, but some preconditions need to be fulfilled in advance. The device can trigger a bootstrap when: - -- the device contains only bootstrap credentials and no Mainflux credentials -- the device, for any reason, fails to start communication with the configured Mainflux services (server not responding, authentication failure, etc.) -- the device, for any reason, wants to update its configuration - -> Bootstrapping and provisioning are two different procedures. Provisioning refers to entity management, while bootstrapping is related to entity configuration.
- -The bootstrapping procedure is the following: - -![Configure device](img/bootstrap/1.png) -*1) Configure the device with the Bootstrap service URL, an external key and an external ID* - -> ![Provision Mainflux channels](img/bootstrap/2.png) -*Optionally create Mainflux channels if they don't exist* - -> ![Provision Mainflux things](img/bootstrap/3.png) -*Optionally create a Mainflux thing if it doesn't exist* - -![Upload configuration](img/bootstrap/4.png) -*2) Upload the configuration for the Mainflux thing* - -![Bootstrap](img/bootstrap/5.png) -*3) Bootstrap - send a request for the configuration* - -![Update, enable/disable, remove](img/bootstrap/6.png) -*4) Connect/disconnect the thing from channels, update or remove the configuration* - -### Configuration - -The configuration of a Mainflux thing consists of three major parts: - -- The list of Mainflux channels the thing is connected to -- Custom configuration related to the specific thing -- Thing key and certificate data related to that thing - -Also, the configuration contains an external ID and an external key, which will be explained later. -In order to enable the thing to start the bootstrapping process, the user needs to upload a valid configuration for that specific thing. This can be done using the following HTTP request: - -```bash -curl -s -S -i -X POST -H "Authorization: " -H "Content-Type: application/json" http://localhost:8200/things/configs -d '{ - "external_id":"09:6:0:sb:sa", - "thing_id": "1b9b8fae-9035-4969-a240-7fe5bdc0ed28", - "external_key":"key", - "name":"some", - "channels":[ - "c3642289-501d-4974-82f2-ecccc71b2d83", - "cd4ce940-9173-43e3-86f7-f788e055eb14", - "ff13ca9c-7322-4c28-a25c-4fe5c7b753fc", - "c3642289-501d-4974-82f2-ecccc71b2d82" -], - "content": "config...", - "client_cert": "PEM cert", - "client_key": "PEM client cert key", - "ca_cert": "PEM CA cert" -}' -``` - -In this example, the `channels` field represents the list of Mainflux channel IDs the thing is connected to.
These channels need to be provisioned before the configuration is uploaded. The `content` field represents custom configuration. This custom configuration contains parameters that can be used to set up the thing; it can also be empty if no additional setup is needed. The `name` field is a human-readable name, and `thing_id` is the ID of the Mainflux thing. The `thing_id` field is not required: if it is empty, the corresponding Mainflux thing will be created implicitly and its ID will be sent as part of the `Location` header of the response. The `client_cert`, `client_key`, and `ca_cert` fields represent the PEM or base64-encoded DER client certificate, client certificate key, and trusted CA, respectively. - -There are two more fields: `external_id` and `external_key`. The external ID represents an ID of the device that corresponds to the given thing. For example, this can be a MAC address or the serial number of the device. The external key represents the device key. This is the secret key that's safely stored on the device, and it is used to authorize the thing during the bootstrapping process. Please note that the external ID and external key on the one hand and the Mainflux ID and Mainflux key on the other are _completely different concepts_. The external ID and key are only used to authenticate the device that corresponds to the specific Mainflux thing during the bootstrapping procedure. As the configuration only optionally contains the client certificate and issuing CA, the device may be unable to establish TLS-encrypted communication with Mainflux before bootstrapping. For that purpose, the Bootstrap service exposes an endpoint for secure bootstrapping that can be used regardless of the protocol (HTTP or HTTPS). Both the device and the Bootstrap service use a secret key to encrypt the content.
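The document deliberately leaves the exact encryption mechanism to the implementation. Purely to illustrate the general shape of such symmetric encryption, here is a Go sketch using AES-CFB with a random IV prepended to the ciphertext; the `encrypt` helper and the choice of cipher mode are assumptions for illustration, not Mainflux's actual code.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt is an illustrative sketch: AES-CFB with a random IV prepended,
// one common way to symmetrically encrypt a small payload such as the
// external key. The key must be 16, 24, or 32 bytes long.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	out := make([]byte, aes.BlockSize+len(plaintext))
	iv := out[:aes.BlockSize]
	if _, err := io.ReadFull(rand.Reader, iv); err != nil {
		return nil, err
	}
	cipher.NewCFBEncrypter(block, iv).XORKeyStream(out[aes.BlockSize:], plaintext)
	return out, nil
}

func main() {
	ct, err := encrypt([]byte("0123456789abcdef"), []byte("external-key"))
	fmt.Println(err, len(ct))
}
```

Decryption is symmetric: split off the IV, then apply `cipher.NewCFBDecrypter` with the same key.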
Encryption is done as follows: - - 1) The device uses the secret encryption key to encrypt the value of its external key - 2) The device sends a bootstrap request using the value from step 1 as the Authorization header - 3) The Bootstrap service fetches the config by its external ID - 4) The Bootstrap service uses the secret encryption key to decrypt the Authorization header - 5) The Bootstrap service compares the value from step 4 with the external key of the config from step 3 and proceeds to step 6 if they're equal - 6) The Bootstrap service uses the secret encryption key to encrypt the content of the bootstrap response - -> Please bear in mind that the secret key is passed to the Bootstrap service as an environment variable. As a security measure, the Bootstrap service removes this variable once it reads it on startup. However, depending on your deployment, this variable can still be visible as a part of your configuration or terminal emulator environment. - -For more details on which encryption mechanisms are used, please take a look at the implementation. - -### Bootstrapping - -Currently, the bootstrapping procedure is executed over the HTTP protocol. Bootstrapping is nothing more than fetching and applying the configuration that corresponds to the given Mainflux thing. In order to fetch the configuration, _the thing_ needs to send a bootstrapping request: - -```bash -curl -s -S -i -H "Authorization: " http://localhost:8200/things/bootstrap/ -``` - -The response body should look something like: - -```json -{ - "mainflux_id":"7c9df5eb-d06b-4402-8c1a-df476e4394c8", - "mainflux_key":"86a4f870-eba4-46a0-bef9-d94db2b64392", - "mainflux_channels":[ - { - "id":"ff13ca9c-7322-4c28-a25c-4fe5c7b753fc", - "name":"some channel", - "metadata":{ - "operation":"someop", - "type":"metadata" - } - }, - { - "id":"925461e6-edfb-4755-9242-8a57199b90a5", - "name":"channel1", - "metadata":{ - "type":"control" - } - } - ], - "content":"config..."
-} -``` - -The response consists of the ID and key of the Mainflux thing, the list of channels, and the custom configuration (`content` field). The list of channels contains not just channel IDs, but also additional Mainflux channel data (`name` and `metadata` fields). - -### Enabling and disabling things - -Uploading a configuration does not automatically connect the thing to the given list of channels. In order to connect the thing to the channels, the user needs to send the following HTTP request: - -```bash -curl -s -S -i -X PUT -H "Authorization: " -H "Content-Type: application/json" http://localhost:8200/things/state/ -d '{"state": 1}' -``` - -In order to disconnect, the same request should be sent with the value of `state` set to 0. - -For more information about the Bootstrap API, please check out the [API documentation](https://github.com/mainflux/mainflux/blob/master/bootstrap/swagger.yml). diff --git a/docs/cli.md b/docs/cli.md deleted file mode 100644 index eaf3bb9a08f..00000000000 --- a/docs/cli.md +++ /dev/null @@ -1,211 +0,0 @@ -## CLI - -The Mainflux CLI makes it easy to manage users, things, channels and messages. - -The CLI can be downloaded as a separate asset from the [project releases](https://github.com/mainflux/mainflux/releases) or it can be built with the `GNU Make` tool: - -``` -make cli -``` - -which will build `mainflux-cli` in the `/build` folder.
- -Executing `build/mainflux-cli` without any arguments will output help with all available commands and flags: - -``` -Usage: - mainflux-cli [command] - -Available Commands: - channels Channels management - help Help about any command - messages Send or read messages - provision Bulk create things and channels from a config file - things Things management - users Users management - version Mainflux system version - -Flags: - -c, --content-type string Mainflux message content type (default "application/senml+json") - -h, --help help for mainflux-cli - -a, --http-prefix string Mainflux http adapter prefix (default "http") - -i, --insecure Do not check for TLS cert - -l, --limit uint limit query parameter (default 100) - -m, --mainflux-url string Mainflux host URL (default "http://localhost") - -o, --offset uint offset query parameter - -t, --things-prefix string Mainflux things service prefix - -u, --users-prefix string Mainflux users service prefix - -Use "mainflux-cli [command] --help" for more information about a command. -``` - -You can execute each command with `-h` flag for more information about that command, e.g. 
- -``` -./mainflux-cli channels -h -``` - -will get you usage info: - -``` -Channels management: create, get, update or delete Channels and get list of Things connected to Channels - -Usage: - mainflux-cli channels [flags] - mainflux-cli channels [command] - -Available Commands: - connections connections - create create - delete delete - get get - update update - -``` - -## Service -#### Get the version of Mainflux services -``` -mainflux-cli version -``` - -### Users management -#### Create User -``` -mainflux-cli users create john.doe@email.com password -``` - -#### Login User -``` -mainflux-cli users token john.doe@email.com password -``` - -### System Provisioning -#### Create Thing -``` -mainflux-cli things create '{"name":"myThing"}' -``` - -#### Bulk Provision Things - -```bash -mainflux-cli provision things -``` - -* `file` - A CSV or JSON file containing things -* `user_auth_token` - A valid user auth token for the current system - -#### Update Thing -``` -mainflux-cli things update '{"id":"", "name":"myNewName"}' -``` - -#### Remove Thing -``` -mainflux-cli things delete -``` - -#### Retrieve a subset list of provisioned Things -``` -mainflux-cli things get all --offset=1 --limit=5 -``` - -#### Retrieve Thing By ID -``` -mainflux-cli things get -``` - -#### Create Channel -``` -mainflux-cli channels create '{"name":"myChannel"}' -``` - -#### Bulk Provision Channels - -```bash -mainflux-cli provision channels -``` - -* `file` - A CSV or JSON file containing channels -* `user_auth_token` - A valid user auth token for the current system - -#### Update Channel -``` -mainflux-cli channels update '{"id":"","name":"myNewName"}' - -``` -#### Remove Channel -``` -mainflux-cli channels delete -``` - -#### Retrieve a subset list of provisioned Channels -``` -mainflux-cli channels get all --offset=1 --limit=5 -``` - -#### Retrieve Channel By ID -``` -mainflux-cli channels get -``` - -### Access control -#### Connect Thing to Channel -``` -mainflux-cli things connect 
-``` - -#### Bulk Connect Things to Channels - -```bash -mainflux-cli provision connect -``` - -* `file` - A CSV or JSON file containing thing and channel ids -* `user_auth_token` - A valid user auth token for the current system - -An example CSV file might be - -```csv -, -, -``` - -in which the first column is thing IDs and the second column is channel IDs. A connection will be created for each thing to each channel. This example would result in 4 connections being created. - -A comparable JSON file would be - -```json -{ - "thing_ids": [ - "", - "" - ], - "channel_ids": [ - "", - "" - ] -} -``` - -#### Disconnect Thing from Channel -``` -mainflux-cli things disconnect - -``` - -#### Retrieve a subset list of Channels connected to Thing -``` -mainflux-cli things connections -``` - -#### Retrieve a subset list of Things connected to Channel -``` -mainflux-cli channels connections -``` - -### Messaging -#### Send a message over HTTP -``` -mainflux-cli msg send '[{"bn":"Dev1","n":"temp","v":20}, {"n":"hum","v":40}, {"bn":"Dev2", "n":"temp","v":20}, {"n":"hum","v":40}]' -``` diff --git a/docs/dev-guide.md b/docs/dev-guide.md deleted file mode 100644 index 336bdeef360..00000000000 --- a/docs/dev-guide.md +++ /dev/null @@ -1,577 +0,0 @@ -## Getting Mainflux - -Mainflux source can be found in the official [Mainflux GitHub repository](https://github.com/Mainflux/mainflux). You should fork this repository in order to make changes to the project. The forked version of the repository should be cloned using the following: - -```bash -git clone $SOMEPATH/mainflux -cd $SOMEPATH/mainflux -``` - -**Note:** If your `$SOMEPATH` is equal to `$GOPATH/src/github.com/mainflux/mainflux`, -make sure that your `$GOROOT` and `$GOPATH` do not overlap (otherwise, go -modules won't work). - -## Building - -### Prerequisites - -Make sure that you have [Protocol Buffers](https://developers.google.com/protocol-buffers/) (version 3.6.1) compiler (`protoc`) installed. 
- -[Go Protobuf](https://github.com/golang/protobuf) installation instructions are [here](https://github.com/golang/protobuf#installation). -Go Protobuf uses C bindings, so you will need to install [C++ protobuf](https://github.com/google/protobuf) as a prerequisite. -Mainflux uses `Protocol Buffers for Go with Gadgets` to generate faster marshaling and unmarshaling Go code. Protocol Buffers for Go with Gadgets installation instructions can be found [here](https://github.com/gogo/protobuf). - -A copy of [Go](https://golang.org/doc/install) (version 1.13.3) and a Docker Compose template (version 3.7) will also need to be installed on your system. - -If any of these versions seem outdated, the latest can always be found in our [CI script](https://github.com/mainflux/mainflux/blob/master/scripts/ci.sh). - -### Build All Services - -Use the *GNU Make* tool to build all Mainflux services: - -```bash -make -``` - -Build artifacts will be put in the `build` directory. - -> N.B. All Mainflux services are built as statically linked binaries. This way they can be portable (transferred to any platform just by placing them there and running them), as they contain all needed libraries and do not rely on shared system libraries. This helps in creating [FROM scratch](https://hub.docker.com/_/scratch/) dockers. - -### Build Individual Microservice - -Individual microservices can be built with: - -```bash -make -``` - -For example: - -```bash -make http -``` - -will build the HTTP Adapter microservice. - -### Building Dockers - -Dockers can be built with: - -```bash -make dockers -``` - -or individually with: - -```bash -make docker_ -``` - -For example: - -```bash -make docker_http -``` - -> N.B. Mainflux creates `FROM scratch` docker containers which are compact and small in size. - -> N.B. The `things-db` and `users-db` containers are built from a vanilla PostgreSQL docker image downloaded from Docker Hub, which does not persist the data when these containers are rebuilt.
Thus, __rebuilding all docker containers with `make dockers`, or rebuilding the `things-db` and `users-db` containers separately with `make docker_things-db` and `make docker_users-db` respectively, will cause data loss. All your users, things, channels and connections between them will be lost!__ As we use this setup only for development, we don't guarantee any permanent data persistence. However, in order to enable data retention, we have configured persistent volumes for each container that stores some data. If you want to update your Mainflux dockerized installation and want to keep your data, use `make cleandocker` to clean the containers and images while keeping the data (stored in docker persistent volumes), and then `make run` to update the images and the containers. Check the [Cleaning up your dockerized Mainflux setup](#cleaning-up-your-dockerized-mainflux-setup) section for details. Please note that this kind of updating might not work if there are database changes. - -#### Building Docker images for development - -In order to speed up the build process, you can use commands such as: - -```bash -make dockers_dev -``` - -or individually with - -```bash -make docker_dev_ -``` - -Commands `make dockers` and `make dockers_dev` are similar. The main difference is that building images in the development mode is done on the local machine, rather than in an intermediate image, which makes building images much faster. Before running this command, the corresponding binary needs to be built in order to make changes visible. This can be done using the `make` or `make ` command. Commands `make dockers_dev` and `make docker_dev_` should be used only for development, to speed up the process of image building. **For deployment images, the commands from the section above should be used.** - -### Suggested workflow - -When the project is first cloned to your system, you will need to build all of the Mainflux services.
- -```bash -make -make dockers_dev -``` - -As you develop and test changes, only the services related to your changes will need to be rebuilt. This reduces compile time and creates a much more enjoyable development experience. - -```bash -make -make docker_dev_ -make run -``` - -### Overriding the default docker-compose configuration -Sometimes, depending on the use case and the user's needs, it might be useful to override or add some extra parameters to the docker-compose configuration. These configuration changes can be done by specifying multiple compose files with the [docker-compose command line option -f](https://docs.docker.com/compose/reference/overview/) as described [here](https://docs.docker.com/compose/extends/). -The following format of the `docker-compose` command can be used to extend or override the configuration: -``` -docker-compose -f docker/docker-compose.yml -f docker/docker-compose.custom1.yml -f docker/docker-compose.custom2.yml up [-d] -``` -In the command above, each successive file overrides the previous parameters. - -A practical example in our case would be to enable debugging and tracing in NATS so that we can see better how the messages are moving around. - -`docker-compose.nats-debugging.yml` -```yaml -version: "3" - -services: - nats: - command: --debug -DV -``` - -When we have the override files in place, to compose the whole infrastructure including the persistent volumes we can execute: -``` -docker-compose -f docker/docker-compose.yml -f docker/docker-compose.nats-debugging.yml up -d -``` - -__Note:__ Please store your customizations in a folder outside Mainflux's source folder, and consider keeping them in a separate git repository. You can always apply your customizations by pointing to the right file using `docker-compose -f ...`. - - -### Cleaning up your dockerized Mainflux setup -If you want to clean your whole dockerized Mainflux installation you can use the `make pv=true cleandocker` command.
Please note that __by default the `make cleandocker` command will stop and delete all of the containers and images, but NOT DELETE persistent volumes__. If you want to delete the gathered data in the system (the persistent volumes), please use the following command: `make pv=true cleandocker` (pv = persistent volumes). This form of the command will stop and delete the containers and the images, and will also delete the persistent volumes. - - -### MQTT Microservice -The MQTT Microservice in Mainflux is special, as it is currently the only microservice written in NodeJS. It is not compiled, but node modules need to be downloaded in order to start the service: - -``` -cd mqtt -npm install -``` - -Note that there is a shorthand for doing these commands with the `make` tool: - -``` -make mqtt -``` - -After that, the MQTT Adapter can be started from the top directory (as it needs to find `*.proto` files) with: -``` -node mqtt/mqtt.js -``` - -#### Troubleshooting -Depending on your use case, MQTT topics, message size, the number of clients and the frequency with which the messages are sent, you may experience some problems. - -Up until now it has been noticed that in cases of high load, big messages and many clients, the MQTT microservice can crash with the following error: -``` -mainflux-mqtt | FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory -mainflux-mqtt exited with code 137 -``` -This problem is caused by the default memory limit in Node (V8). [V8 gives the user 1.7GB by default](https://medium.com/tomincode/increasing-nodes-memory-337dfb1a60dd). To fix the problem you should add the environment variable `NODE_OPTIONS:--max-old-space-size=SPACE_IN_MB` in the [environment section](https://github.com/mainflux/mainflux/blob/master/docker/aedes.yml#L31) of the aedes.yml configuration. To find the right value for the `--max-old-space-size` parameter you'll have to experiment a bit depending on your needs.
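For instance, assuming the MQTT service entry in `aedes.yml` is named `mqtt`, granting Node 4 GB of old space might look like the following fragment (service name and value are placeholders; adjust them to your deployment):

```yaml
version: "3"

services:
  mqtt:
    environment:
      NODE_OPTIONS: --max-old-space-size=4096
```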
- -The Mainflux MQTT service uses the [Aedes MQTT Broker](https://github.com/mcollina/aedes) for the implementation of MQTT-related functionality. Therefore, for some questions or problems you can also check out the Aedes documentation or reach out to its contributors. - -### Protobuf -If you've made any changes to `.proto` files, you should call the `protoc` command prior to compiling individual microservices. - -To do this by hand, execute: - -``` -protoc --gofast_out=plugins=grpc:. *.proto -``` - -A shorthand to do this via the `make` tool is: - -``` -make proto -``` - -> N.B. This must be done once at the beginning in order to generate the protobuf Go structures needed for the build. However, if you don't change any of the `.proto` files, this step is not mandatory, since all generated files are included in the repository (those are the files with the `.pb.go` extension). - -### Cross-compiling for ARM -Mainflux can be compiled for the ARM platform and run on a Raspberry Pi or other similar IoT gateways by following the instructions [here](https://dave.cheney.net/2015/08/22/cross-compilation-with-go-1-5) or [here](https://www.alexruf.net/golang/arm/raspberrypi/2016/01/16/cross-compile-with-go-1-5-for-raspberry-pi.html), as well as the information found [here](https://github.com/golang/go/wiki/GoArm). The environment variables `GOARCH=arm` and `GOARM=7` must be set for the compilation. - -Cross-compilation for ARM with Mainflux make: - -``` -GOOS=linux GOARCH=arm GOARM=7 make -``` - -## Running tests -To run all of the tests you can execute: -``` -make test -``` -Dockertest is used for the tests, so to run them, you will need the Docker daemon/service running. - -## Installing -Installing Go binaries is simple: just move them from `build` to `$GOBIN` (do not forget to add `$GOBIN` to your `$PATH`). - -You can execute: - -``` -make install -``` - -which will copy the binaries. - -> N.B. Only Go binaries will be installed this way.
The MQTT adapter is a NodeJS script and will stay in the `mqtt` dir. - -## Deployment - -### Prerequisites -Mainflux depends on several infrastructural services, notably the [NATS](https://www.nats.io/) broker and the [PostgreSQL](https://www.postgresql.org/) database. - -#### NATS -Mainflux uses NATS as its central message bus. For development purposes (when not run via Docker), it expects that NATS is installed on the local system. - -To do this execute: - -``` -go get github.com/nats-io/gnatsd -``` - -This will install the `gnatsd` binary, which can be run simply by executing: - -``` -gnatsd -``` - -#### PostgreSQL -Mainflux uses PostgreSQL to store metadata (`users`, `things` and `channels` entities alongside authorization tokens). -It expects that a PostgreSQL DB is installed, set up and running on the local system. - -Information on how to set up (prepare) a PostgreSQL database can be found [here](https://support.rackspace.com/how-to/postgresql-creating-and-dropping-roles/), -and it is done by executing the following commands: - -``` -# Create `users` and `things` databases -sudo -u postgres createdb users -sudo -u postgres createdb things - -# Set-up Postgres roles -sudo su - postgres -psql -U postgres -postgres=# CREATE ROLE mainflux WITH LOGIN ENCRYPTED PASSWORD 'mainflux'; -postgres=# ALTER USER mainflux WITH LOGIN ENCRYPTED PASSWORD 'mainflux'; -``` - -### Mainflux Services -Running the Mainflux microservices can be tricky, as there are a lot of them and each demands configuration in the form of environment variables. - -The whole system (set of microservices) can be run with one command: - -``` -make rundev -``` - -which will properly configure and run all microservices. - -Please ensure that the MQTT microservice has `node_modules` installed, as explained in the _MQTT Microservice_ chapter. - -> N.B. `make rundev` actually calls the helper script `scripts/run.sh`, so you can inspect this script for the details.
- -## Events -In order to be an easily integrable system, Mainflux uses [Redis Streams](https://redis.io/topics/streams-intro) -as an event log for event sourcing. The services that publish events to Redis Streams -are the `things` service, the `bootstrap` service, and the `mqtt` adapter. - -### Things Service -For every operation that has side effects (that is, that changes service state), the `things` -service will generate a new event and publish it to the Redis Stream called `mainflux.things`. -Every event has its own event ID that is automatically generated, and an `operation` -field that can have one of the following values: -- `thing.create` for thing creation, -- `thing.update` for thing update, -- `thing.remove` for thing removal, -- `thing.connect` for connecting a thing to a channel, -- `thing.disconnect` for disconnecting a thing from a channel, -- `channel.create` for channel creation, -- `channel.update` for channel update, -- `channel.remove` for channel removal. - -By fetching and processing these events you can reconstruct the `things` service state. -If you store some of your custom data in the `metadata` field, this is the perfect -way to fetch and process it. If you want to integrate through -[docker-compose.yml](https://github.com/mainflux/mainflux/blob/master/docker/docker-compose.yml), -you can use the `mainflux-es-redis` service. Just connect to it and consume events -from the Redis Stream named `mainflux.things`. - -#### Thing create event - -Whenever a thing is created, the `things` service will generate a new `create` event. This -event will have the following format: -``` -1) "1555334740911-0" -2) 1) "operation" - 2) "thing.create" - 3) "name" - 4) "d0" - 5) "id" - 6) "3c36273a-94ea-4802-84d6-a51de140112e" - 7) "owner" - 8) "john.doe@email.com" - 9) "metadata" - 10) "{}" -``` - -As you can see from this example, every odd field represents a field name, while every even field represents a field value. This is the standard event format for Redis Streams.
-If you want to extract the `metadata` field from this event, you'll have to read it as a -string first, and then you can deserialize it to some structured format. - -#### Thing update event -Whenever a thing instance is updated, the `things` service will generate a new `update` event. -This event will have the following format: -``` -1) "1555336161544-0" -2) 1) "operation" - 2) "thing.update" - 3) "name" - 4) "weio" - 5) "id" - 6) "3c36273a-94ea-4802-84d6-a51de140112e" -``` -Note that the thing update event will contain only those fields that were updated using the -update endpoint. - -#### Thing remove event -Whenever a thing instance is removed from the system, the `things` service will generate and -publish a new `remove` event. This event will have the following format: -``` -1) 1) "1555339313003-0" -2) 1) "id" - 2) "3c36273a-94ea-4802-84d6-a51de140112e" - 3) "operation" - 4) "thing.remove" -``` - -#### Channel create event -Whenever a channel instance is created, the `things` service will generate and publish a new -`create` event. This event will have the following format: -``` -1) "1555334740918-0" -2) 1) "id" - 2) "16fb2748-8d3b-4783-b272-bb5f4ad4d661" - 3) "owner" - 4) "john.doe@email.com" - 5) "operation" - 6) "channel.create" - 7) "name" - 8) "c1" -``` - -#### Channel update event -Whenever a channel instance is updated, the `things` service will generate and publish a new -`update` event. This event will have the following format: -``` -1) "1555338870341-0" -2) 1) "name" - 2) "chan" - 3) "id" - 4) "d9d8f31b-f8d4-49c5-b943-6db10d8e2949" - 5) "operation" - 6) "channel.update" -``` -Note that the channel update event will contain only those fields that were updated using the -update channel endpoint. - -#### Channel remove event -Whenever a channel instance is removed from the system, the `things` service will generate and -publish a new `remove` event.
This event has the following format:
-```
-1) 1) "1555339429661-0"
-2) 1) "id"
-   2) "d9d8f31b-f8d4-49c5-b943-6db10d8e2949"
-   3) "operation"
-   4) "channel.remove"
-```
-
-#### Connect thing to a channel event
-Whenever a thing is connected to a channel, the `things` service
-generates and publishes a new `connect` event. This event has the following format:
-```
-1) "1555334740920-0"
-2) 1) "chan_id"
-   2) "d9d8f31b-f8d4-49c5-b943-6db10d8e2949"
-   3) "thing_id"
-   4) "3c36273a-94ea-4802-84d6-a51de140112e"
-   5) "operation"
-   6) "thing.connect"
-```
-
-#### Disconnect thing from a channel event
-Whenever a thing is disconnected from a channel, the `things` service
-generates and publishes a new `disconnect` event. This event has the following
-format:
-```
-1) "1555334740920-0"
-2) 1) "chan_id"
-   2) "d9d8f31b-f8d4-49c5-b943-6db10d8e2949"
-   3) "thing_id"
-   4) "3c36273a-94ea-4802-84d6-a51de140112e"
-   5) "operation"
-   6) "thing.disconnect"
-```
-
-> **Note:** Every one of these events omits fields that were not used or are not
-relevant for the specific operation. Also, field ordering is not guaranteed, so DO NOT
-rely on it.
-
-### Bootstrap Service
-The Bootstrap service publishes events to the Redis Stream called `mainflux.bootstrap`.
-Every event from this service contains an `operation` field which indicates one of
-the following event types:
-- `config.create` for configuration creation,
-- `config.update` for configuration update,
-- `config.remove` for configuration removal,
-- `thing.bootstrap` for device bootstrap,
-- `thing.state_change` for device state change,
-- `thing.update_connections` for device connection update.
-
-If you want to integrate through
-[docker-compose.yml](https://github.com/mainflux/mainflux/blob/master/docker/addons/bootstrap/docker-compose.yml)
-you can use the `mainflux-es-redis` service. Just connect to it and consume events
-from the Redis Stream named `mainflux.bootstrap`.
- 
-#### Configuration create event
-Whenever a configuration is created, the `bootstrap` service generates and publishes
-a new `create` event. This event has the following format:
-```
-1) "1555404899581-0"
-2) 1) "owner"
-   2) "john.doe@email.com"
-   3) "name"
-   4) "some"
-   5) "channels"
-   6) "ff13ca9c-7322-4c28-a25c-4fe5c7b753fc, c3642289-501d-4974-82f2-ecccc71b2d82, c3642289-501d-4974-82f2-ecccc71b2d83, cd4ce940-9173-43e3-86f7-f788e055eb14"
-   7) "externalID"
-   8) "9c:b6:d:eb:9f:fd"
-   9) "content"
-   10) "{}"
-   11) "timestamp"
-   12) "1555404899"
-   13) "operation"
-   14) "config.create"
-   15) "thing_id"
-   16) "63a110d4-2b77-48d2-aa46-2582681eeb82"
-```
-
-#### Configuration update event
-Whenever a configuration is updated, the `bootstrap` service generates and publishes
-a new `update` event. This event has the following format:
-```
-1) "1555405104368-0"
-2) 1) "content"
-   2) "NOV_MGT_HOST: http://127.0.0.1:7000\nDOCKER_MGT_HOST: http://127.0.0.1:2375\nAGENT_MGT_HOST: https://127.0.0.1:7003\nMF_MQTT_HOST: tcp://104.248.142.133:8443"
-   3) "timestamp"
-   4) "1555405104"
-   5) "operation"
-   6) "config.update"
-   7) "thing_id"
-   8) "63a110d4-2b77-48d2-aa46-2582681eeb82"
-   9) "name"
-   10) "weio"
-```
-
-#### Configuration remove event
-Whenever a configuration is removed, the `bootstrap` service generates and publishes
-a new `remove` event. This event has the following format:
-```
-1) "1555405464328-0"
-2) 1) "thing_id"
-   2) "63a110d4-2b77-48d2-aa46-2582681eeb82"
-   3) "timestamp"
-   4) "1555405464"
-   5) "operation"
-   6) "config.remove"
-```
-
-#### Thing bootstrap event
-Whenever a thing is bootstrapped, the `bootstrap` service generates and publishes
-a new `bootstrap` event.
This event has the following format:
-```
-1) "1555405173785-0"
-2) 1) "externalID"
-   2) "9c:b6:d:eb:9f:fd"
-   3) "success"
-   4) "1"
-   5) "timestamp"
-   6) "1555405173"
-   7) "operation"
-   8) "thing.bootstrap"
-```
-
-#### Thing change state event
-Whenever a thing's state changes, the `bootstrap` service generates and publishes
-a new `change state` event. This event has the following format:
-```
-1) "1555405294806-0"
-2) 1) "thing_id"
-   2) "63a110d4-2b77-48d2-aa46-2582681eeb82"
-   3) "state"
-   4) "0"
-   5) "timestamp"
-   6) "1555405294"
-   7) "operation"
-   8) "thing.state_change"
-```
-
-#### Thing update connections event
-Whenever a thing's list of connections is updated, the `bootstrap` service generates
-and publishes a new `update connections` event. This event has the following format:
-```
-1) "1555405373360-0"
-2) 1) "operation"
-   2) "thing.update_connections"
-   3) "thing_id"
-   4) "63a110d4-2b77-48d2-aa46-2582681eeb82"
-   5) "channels"
-   6) "ff13ca9c-7322-4c28-a25c-4fe5c7b753fc, 925461e6-edfb-4755-9242-8a57199b90a5, c3642289-501d-4974-82f2-ecccc71b2d82"
-   7) "timestamp"
-   8) "1555405373"
-```
-
-### MQTT Adapter
-Instead of using a heartbeat to know when a client is connected through the MQTT adapter, one
-can fetch events from the Redis Stream that the MQTT adapter publishes. The MQTT adapter
-publishes an event to the stream named `mainflux.mqtt` every time a client connects or disconnects.
-
-Events coming from the MQTT adapter have the following fields:
-- `thing_id` is the ID of the thing that has connected to the MQTT adapter,
-- `timestamp` is in Epoch UNIX Time Stamp format,
-- `event_type` can have two possible values, `connect` and `disconnect`,
-- `instance` represents the MQTT adapter instance.
-
-If you want to integrate through
-[docker-compose.yml](https://github.com/mainflux/mainflux/blob/master/docker/docker-compose.yml)
-you can use the `mainflux-es-redis` service. Just connect to it and consume events
-from the Redis Stream named `mainflux.mqtt`.
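As a sketch of how these fields can be consumed, the following Node.js snippet folds a sequence of such events into the set of currently connected things. The event objects and the `applyEvent` helper are illustrative, not Mainflux code:

```javascript
// Illustrative sketch: track which things are currently connected by
// folding connect/disconnect events (field shape as listed above).
function applyEvent(connected, event) {
  const next = new Set(connected);
  if (event.event_type === 'connect') next.add(event.thing_id);
  if (event.event_type === 'disconnect') next.delete(event.thing_id);
  return next;
}

const events = [
  { thing_id: '1c597a85-b68e-42ff-8ed8-a3a761884bc4', timestamp: '1555351214', event_type: 'connect', instance: 'mqtt-adapter-1' },
  { thing_id: '1c597a85-b68e-42ff-8ed8-a3a761884bc4', timestamp: '1555351214', event_type: 'disconnect', instance: 'mqtt-adapter-1' },
];

const connected = events.reduce(applyEvent, new Set());
console.log(connected.size); // 0 - the thing connected and then disconnected
```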
- 
-Example of a connect event:
-```
-1) 1) "1555351214144-0"
-2) 1) "thing_id"
-   2) "1c597a85-b68e-42ff-8ed8-a3a761884bc4"
-   3) "timestamp"
-   4) "1555351214"
-   5) "event_type"
-   6) "connect"
-   7) "instance"
-   8) "mqtt-adapter-1"
-```
-
-Example of a disconnect event:
-```
-1) 1) "1555351214188-0"
-2) 1) "thing_id"
-   2) "1c597a85-b68e-42ff-8ed8-a3a761884bc4"
-   3) "timestamp"
-   4) "1555351214"
-   5) "event_type"
-   6) "disconnect"
-   7) "instance"
-   8) "mqtt-adapter-1"
-```
diff --git a/docs/getting-started.md b/docs/getting-started.md
deleted file mode 100644
index e33fd7dce72..00000000000
--- a/docs/getting-started.md
+++ /dev/null
@@ -1,106 +0,0 @@
-## Step 1 - Run the System
-Before proceeding, install the following prerequisites:
-
-- [Docker](https://docs.docker.com/install/) (version 18.09)
-- [Docker compose](https://docs.docker.com/compose/install/) (version 1.24.1)
-
-Once everything is installed, execute the following command from the project root:
-
-```bash
-make run
-```
-
-This will start the Mainflux docker composition, which outputs the logs from the containers.
-
-## Step 2 - Install the CLI
-Open a new terminal from which you can interact with the running Mainflux system. The easiest way to do this is by using the Mainflux CLI,
-which can be downloaded as a tarball from GitHub (here we use release `0.9.0`, but be sure to use the latest release):
-
-```bash
-wget -O- https://github.com/mainflux/mainflux/releases/download/0.9.0/mainflux-cli_v0.9.0_linux-amd64.tar.gz | tar xvz -C $GOBIN
-```
-
-> Make sure that `$GOBIN` is added to your `$PATH` so that the `mainflux-cli` command is accessible system-wide
-
-## Step 3 - Provision the System
-Once installed, you can use the CLI to quick-provision the system for testing:
-```bash
-mainflux-cli provision test
-```
-
-This command creates a temporary testing user, logs it in, then creates two things and two channels on behalf of this user.
-This quickly provisions a Mainflux system with one simple testing scenario.
-
-You can read more about system provisioning in the dedicated [Provisioning](./provisioning.md) chapter.
-
-The output of the command follows this pattern:
-
-```json
-{
-  "email": "friendly_beaver@email.com",
-  "password": "123"
-}
-
-
-"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NDcwMjE3ODAsImlhdCI6MTU0Njk4NTc4MCwiaXNzIjoibWFpbmZsdXgiLCJzdWIiOiJmcmllbmRseV9iZWF2ZXJAZW1haWwuY29tIn0.Tyk31Ae680KqMrDqP895PRZg_GUytLE0IMIR_o3oO7o"
-
-
-[
-  {
-    "id": "513d02d2-16c1-4f23-98be-9e12f8fee898",
-    "key": "69590b3a-9d76-4baa-adae-9b5fec0ea14f",
-    "name": "d0"
-  },
-  {
-    "id": "bf78ca98-2fef-4cfc-9f26-e02da5ecdf67",
-    "key": "840c1ea1-2e8d-4809-a6d3-3433a5c489d2",
-    "name": "d1"
-  }
-]
-
-
-[
-  {
-    "id": "b7bfc4b6-c18d-47c5-b343-98235c5acc19",
-    "name": "c0"
-  },
-  {
-    "id": "378678cd-891b-4a39-b026-869938783f54",
-    "name": "c1"
-  }
-]
-```
-
-In the Mainflux system terminal (where docker compose is running) you should see the following logs:
-```bash
-mainflux-users | {"level":"info","message":"Method register for user friendly_beaver@email.com took 97.573974ms to complete without errors.","ts":"2019-01-08T22:16:20.745989495Z"}
-mainflux-users | {"level":"info","message":"Method login for user friendly_beaver@email.com took 69.308406ms to complete without errors.","ts":"2019-01-08T22:16:20.820610461Z"}
-mainflux-users | {"level":"info","message":"Method identity for client friendly_beaver@email.com took 50.903µs to complete without errors.","ts":"2019-01-08T22:16:20.822208948Z"}
-mainflux-things | {"level":"info","message":"Method add_thing for token eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NDcwMjE3ODAsImlhdCI6MTU0Njk4NTc4MCwiaXNzIjoibWFpbmZsdXgiLCJzdWIiOiJmcmllbmRseV9iZWF2ZXJAZW1haWwuY29tIn0.Tyk31Ae680KqMrDqP895PRZg_GUytLE0IMIR_o3oO7o and thing 513d02d2-16c1-4f23-98be-9e12f8fee898 took 4.865299ms to complete without errors.","ts":"2019-01-08T22:16:20.826786175Z"}
-
-...
- 
-```
-
-This proves that these provisioning commands were sent from the CLI to the Mainflux system.
-
-## Step 4 - Send Messages
-Once the system is provisioned, a `thing` can start sending messages on a `channel`:
-
-```bash
-mainflux-cli messages send <channel_id> '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]' <thing_auth_key>
-```
-
-For example:
-```bash
-mainflux-cli messages send b7bfc4b6-c18d-47c5-b343-98235c5acc19 '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]' 69590b3a-9d76-4baa-adae-9b5fec0ea14f
-```
-
-In the Mainflux system terminal you should see the following logs:
-
-```bash
-mainflux-things | {"level":"info","message":"Method can_access for channel b7bfc4b6-c18d-47c5-b343-98235c5acc19 and thing 513d02d2-16c1-4f23-98be-9e12f8fee898 took 1.410194ms to complete without errors.","ts":"2019-01-08T22:19:30.148097648Z"}
-mainflux-http | {"level":"info","message":"Method publish took 336.685µs to complete without errors.","ts":"2019-01-08T22:19:30.148689601Z"}
-```
-
-This proves that messages have been correctly sent through the system via the protocol adapter (`mainflux-http`).
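The SenML payload above relies on base fields (`bn`, `bt`, `bu`) that apply to the whole array. Here is a hedged sketch of how a consumer could resolve them into absolute records following the SenML rules (name = `bn` + `n`, time = `bt` + `t`, unit defaults to `bu`); the `resolve` helper is illustrative, not a Mainflux function:

```javascript
// Sketch of SenML base-field resolution: bn prefixes each name, bt
// offsets each time, bu is the default unit. Not Mainflux code.
function resolve(records) {
  let bn = '', bt = 0, bu;
  return records.map((r) => {
    if (r.bn !== undefined) bn = r.bn;
    if (r.bt !== undefined) bt = r.bt;
    if (r.bu !== undefined) bu = r.bu;
    return {
      n: bn + (r.n || ''),
      t: bt + (r.t || 0),
      u: r.u !== undefined ? r.u : bu,
      v: r.v,
    };
  });
}

const records = [
  { bn: 'some-base-name:', bt: 1.276020076001e9, bu: 'A', bver: 5, n: 'voltage', u: 'V', v: 120.1 },
  { n: 'current', t: -5, v: 1.2 },
  { n: 'current', t: -4, v: 1.3 },
];

// Second record resolves to name "some-base-name:current", unit "A",
// and an absolute time of bt - 5.
console.log(resolve(records)[1]);
```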
diff --git a/docs/img/architecture.jpg b/docs/img/architecture.jpg deleted file mode 100644 index 079349cf4b6..00000000000 Binary files a/docs/img/architecture.jpg and /dev/null differ diff --git a/docs/img/architecture.xml b/docs/img/architecture.xml deleted file mode 100644 index d3e4e4f595a..00000000000 --- a/docs/img/architecture.xml +++ /dev/null @@ -1 +0,0 @@ -7Vpbc6M2FP41flwPd+PHtTeb7UyacZt0uvuUkUEGdQVihBzb++srgYQB4YQm+JImyWQCB13gO993dHQZ2fNke01BFv9OQohHlhFuR/aXkWWZjm3wf8KyKy1TyywNEUWhLLQ33KFfUBplvWiNQpg3CjJCMENZ0xiQNIUBa9gApWTTLLYiuNlrBiKoGe4CgHXr3yhkcWn1XWNv/wZRFKueTUM+WYLgZ0TJOpX9jSx7VfyUjxOg2pLl8xiEZFMz2Vcje04JYeVVsp1DLLBVsJX1vh54Wr03hSnrU2FSVngEeC0//fb6t9vv8uXYTgHyCClDHJ8bsIR4QXLEEEn5oyVhjCQjewbyrHTECm0hb3wWswTzW5NfqsqfMYpEJUYybs0ZJT/hnGBCuS0lKe9qRtYMo5RblV8NUTIGmXiPZBsJuo0DxCjajjNKtruHHFLePi+lf7pEQ3QPtzWThOIakgQyuuNF5NOKf5K2lueW95s9CSbeWBrjGgOmsiKQxIuqtvfg8wuJ/wHneZoz/uJfl2vOwMIJ/Is3MWLwLgOBsG84NE3YUVKyHCUR7zrAiGPD+NU1BPTBtPwt/xtnacSLrhDGyhOcsiGA/irQfMSfeIEPl6th0LanTbRNJa8a2qYSSh1sbwCwbVMD+/rPxVzDGoY8MMhbQllMIpICfLW3zgq1C8oXXK3hD7eIfRdmTpjy7ocsxDGju9ojcftD1soZ99JnEcO4IcAgz1GgzF8RVo3/AxnbycAJ1owI7VSvd0OEwopyB/2UkzUNoOK9jLGARlAWs6alTQDwpDcpxIChx2bkfI1rLN01C0oYCQgv5WH+frMl5VeRuAIhyNg7EoljnFAkrn0cPfiThiIqfSwgRfwlRTgvA38/LcA01Atx49HkYutymZxLLa6jqWXkzjIlGPeL5kLOPNb0DIU5+gWWRQGBe0ZQyop3cmeiBS4zOXQHHJditG2P6QkKw8L9he5mVRpUk4VMhLpAVkTTdFGlePLtGmlSl14+GWPDNOQI3Rts2dxCfHetCFmtcu7gtjeqXvuFM1tz0BwjKADWxIV5/MlhLeUJMFmHz0exVmiCZujCSVdomnoTG3jDhKZ2sjTpGZn8ASJTR+LKs1IWUXj3x42Ga4XmjieXYcHfZwBdlkHsZnkAYY6vHzpdCPvW0vYGQtj1WhD7OsTTYyVI1keCtI/4Uz3iq3T9DAnSVHPNfYzS6N0kQV06ON5MwdUHWE0EXRnIAaLWBVDRvE7ykvO9eDnRaamKPUvLGnZuB3TK9tIxVE2p7Vb+2nZJ+TmyVn2lot2Q32rIbDVUgqA19IIR29b19fZSqoq2r06peEZlGX4D/E/m68gxaIJl6+smFy3QvhPr0wjUbAnUm7xQoGZLoJ51PIH6mse/3d8v7t6ELhVbB5nquPYrpzrbzgrD6FLP0bu9dEHi7J3UfYjzkNsdffnuFB5Xkwk1sfiPXOjK8Hsv6nyQ4SAZ9MWPdzBP79rJONY83elY/7t8uak4W5eb4sqFyM21mk612yrpKzd38kxDA8pNn6vefr7XR9xSDWpr22q6vb3dusJkE8ScLOMQUZ6fPISAgQPSbMlv5YrfzuWB4mcg+bluA2DH1Ldtqx36OlPsAfTnTXS5DbA
k1mt/pBKoqQs0BHlc9KCtmcmt9vqCmXGGXRTH10OAd7Y1NUefUrxD5bimPnAdTTlOxzJLr5GrQy/6MPTk4CVu22I6LJjDi8y9iN61u64C9YWMdW8xtVSffnr6NJjSFZf7xb/LokA73XGcF1JgYrQaaseKASngH2XsbWxHNRLhjnG2M5aceLxtELIX+dyOtQ81BJ7hDEPHMURCE4D5p1PNxf+TTS1NJ6c82ePpa+bzdS6OcmonrAKS5uvkLEesVn4Ag05HLH3XcZ8Mta84YjXtmD4M5Ah+uz/cW0a8/Qlq++pf \ No newline at end of file diff --git a/docs/img/bootstrap/1.png b/docs/img/bootstrap/1.png deleted file mode 100644 index c6741c6539d..00000000000 Binary files a/docs/img/bootstrap/1.png and /dev/null differ diff --git a/docs/img/bootstrap/2.png b/docs/img/bootstrap/2.png deleted file mode 100644 index f404fd454fd..00000000000 Binary files a/docs/img/bootstrap/2.png and /dev/null differ diff --git a/docs/img/bootstrap/3.png b/docs/img/bootstrap/3.png deleted file mode 100644 index 4e8492e6b4a..00000000000 Binary files a/docs/img/bootstrap/3.png and /dev/null differ diff --git a/docs/img/bootstrap/4.png b/docs/img/bootstrap/4.png deleted file mode 100644 index a23bac56db6..00000000000 Binary files a/docs/img/bootstrap/4.png and /dev/null differ diff --git a/docs/img/bootstrap/5.png b/docs/img/bootstrap/5.png deleted file mode 100644 index 077232b8f0f..00000000000 Binary files a/docs/img/bootstrap/5.png and /dev/null differ diff --git a/docs/img/bootstrap/6.png b/docs/img/bootstrap/6.png deleted file mode 100644 index 11f7f7dd742..00000000000 Binary files a/docs/img/bootstrap/6.png and /dev/null differ diff --git a/docs/img/gopherBanner.jpg b/docs/img/gopherBanner.jpg deleted file mode 100644 index b7b84fa68ae..00000000000 Binary files a/docs/img/gopherBanner.jpg and /dev/null differ diff --git a/docs/img/logo.png b/docs/img/logo.png deleted file mode 100644 index 87f99352844..00000000000 Binary files a/docs/img/logo.png and /dev/null differ diff --git a/docs/img/tracing/search.png b/docs/img/tracing/search.png deleted file mode 100644 index 10f8e1f1736..00000000000 Binary files a/docs/img/tracing/search.png and /dev/null differ diff --git a/docs/img/tracing/trace.png 
b/docs/img/tracing/trace.png deleted file mode 100644 index a41bb1dbde1..00000000000 Binary files a/docs/img/tracing/trace.png and /dev/null differ
diff --git a/docs/index.md b/docs/index.md
deleted file mode 100644
index 3291f01b941..00000000000
--- a/docs/index.md
+++ /dev/null
@@ -1,17 +0,0 @@
-## What is Mainflux?
-
-Mainflux is a modern, scalable, secure, open-source and patent-free IoT cloud platform written in Go.
-
-It accepts user and thing connections over various network protocols (i.e. HTTP,
-MQTT, WebSocket, CoAP), thus making a seamless bridge between them. It is used as the IoT middleware
-for building complex IoT solutions.
-
-![banner](img/gopherBanner.jpg)
-
-## Features
-
-- Protocol bridging (i.e. HTTP, MQTT, WebSocket, CoAP)
-- Device management and provisioning
-- Fine-grained access control
-- Platform logging and instrumentation support
-- Container-based deployment using Docker
diff --git a/docs/load-test.md b/docs/load-test.md
deleted file mode 100644
index 23ba6e700e3..00000000000
--- a/docs/load-test.md
+++ /dev/null
@@ -1,23 +0,0 @@
-## Test scenarios
-
-Testing environment to be determined.
-
-### Message publishing
-
-In this scenario, a large number of requests is sent to the HTTP adapter service
-every second. This test checks how much time the HTTP adapter needs to respond
-to each request.
-
-#### Results
-
-TBD
-
-### Create and get client
-
-In this scenario, a large number of requests is sent to the things service to create
-things, and then to retrieve their data. This test checks how much time the things
-service needs to respond to each request.
-
-#### Results
-
-TBD
diff --git a/docs/lora.md b/docs/lora.md
deleted file mode 100644
index f5ca441a602..00000000000
--- a/docs/lora.md
+++ /dev/null
@@ -1,85 +0,0 @@
-Bridging with LoRaWAN Networks can be done over the [lora-adapter](https://github.com/mainflux/mainflux/tree/master/lora).
This service sits between Mainflux and [LoRa Server](https://www.loraserver.io) and forwards messages from one system to the other over the MQTT protocol, using the appropriate MQTT topics and message formats (JSON and SenML), i.e. respecting the APIs of both systems.
-
-LoRa Server provides the connectivity layer. In particular, the [LoRa Gateway Bridge](https://www.loraserver.io/lora-gateway-bridge/overview/) service abstracts the [SemTech packet-forwarder UDP protocol](https://github.com/Lora-net/packet_forwarder/blob/master/PROTOCOL.TXT) into JSON over MQTT, while the [LoRa Server](https://www.loraserver.io/loraserver/overview) service is responsible for de-duplication and handling of uplink frames received by the gateway(s), handling of the LoRaWAN MAC layer, and scheduling of downlink data transmissions. Finally, the [LoRa App Server](https://www.loraserver.io/lora-app-server/overview/) service is used to interact with the system.
-
-## Run LoRa Server
-
-Before running the `lora-adapter` you must install and run LoRa Server. First, execute the following command:
-
-```bash
-go get github.com/brocaar/loraserver-docker
-```
-
-Once everything is installed, execute the following command from the LoRa Server project root:
-
-```bash
-docker-compose up
-```
-
-**Troubleshooting:** Mainflux and LoRa Server use their own MQTT brokers. By default, both use the standard MQTT port `1883`. If you are running both systems on the same machine you must use different ports. You can fix this on the Mainflux side by configuring the environment variable `MF_MQTT_ADAPTER_PORT`.
-
-
-## Setup LoRa Server
-
-Now that both systems are running, you must provision LoRa Server, which offers RESTful and gRPC APIs for integration with external services. You can also do it over the [LoRa App Server](https://www.loraserver.io/lora-app-server/overview), which is a good example of such an integration.
- 
-- **Create an Organization:** To add your own gateways to the network you must have an Organization.
-- **Create a Network:** Set the address of your Network Server API that is used by LoRa App Server or other custom components interacting with LoRa Server (by default `loraserver:8000`).
-- **Create a Gateway Profile:** In this profile you can select the LoRa radio channels and the LoRa Network Server to use.
-- **Create a Service Profile:** A service profile connects an organization to a network server and defines the features that the organization can use on this Network Server.
-- **Create a Gateway:** You must set the proper ID in order for it to be discovered by LoRa Server.
-- **Create an Application:** This allows you to create Devices by connecting them to this application. This is equivalent to Devices connected to channels in Mainflux.
-- **Create a Device Profile:** Before creating a Device you must create a Device Profile, where you define parameters such as the LoRaWAN MAC version (format of the device address) and the LoRaWAN regional parameters (frequency band). This allows you to create many devices using this profile.
-- **Create a Device:** Finally, you can create a Device. You must configure the `network session key` and `application session key` of your Device. You can generate and copy them from your device configuration, or you can use your own pre-generated keys and set them using the LoRa App Server UI.
-Devices connect through OTAA. Make sure that the LoRa Server device profile uses the same release as the device. If the MAC version is 1.0.X, `application key = app_key` and `app_eui = deviceEUI`. If the MAC version is 1.1, or if ABP is used, both parameters are needed: the application key and the network key.
- 
-
-## Mainflux and LoRa Server
-
-
-Once everything is running and LoRa Server is provisioned, execute the following command from the Mainflux project root to run the lora-adapter:
-
-```bash
-docker-compose -f docker/addons/lora-adapter/docker-compose.yml up -d
-```
-
-**Troubleshooting:** The lora-adapter subscribes to the LoRa Server MQTT broker and will fail if the connection is not established. You must ensure that the environment variable `MF_LORA_ADAPTER_MESSAGES_URL` is properly configured.
-
-**Remark:** By default, `MF_LORA_ADAPTER_MESSAGES_URL` is set to `tcp://lora.mqtt.mainflux.io:1883` in the [docker-compose.yml](https://github.com/mainflux/mainflux/blob/master/docker/addons/lora-adapter/docker-compose.yml) file of the adapter. If you run the composition without configuring this variable, you will start receiving messages from our demo server.
-
-### Route Map
-
-The lora-adapter uses a [Redis](https://redis.io/) database to create a route map between the two systems. Where Mainflux uses Channels to connect Things, LoRa Server uses Applications to connect Devices.
-
-The lora-adapter uses the metadata of provision events emitted by the Mainflux system to update its route map. For that, you must provision Mainflux Channels and Things with an extra metadata key in the JSON body of the HTTP request. It must be a JSON object with the key `lora`, whose value is another JSON object. This nested JSON object should contain an `appID` or `devEUI` field, where `appID` or `devEUI` must be an existing LoRa application ID or device EUI:
-
-**Channel structure:**
-
-```
-{
-  "name": "<channel name>",
-  "metadata": {
-    "lora": {
-      "appID": "<application ID>"
-    }
-  }
-}
-```
-
-**Thing structure:**
-
-```
-{
-  "type": "device",
-  "name": "<thing name>",
-  "metadata": {
-    "lora": {
-      "devEUI": "<device EUI>"
    }
-  }
-}
-```
-
-#### Messaging
-
-To forward LoRa messages, the lora-adapter subscribes to the topics `applications/+/devices/+` of the LoRa Server MQTT broker. It verifies the `appID` and `devEUI` of published messages.
If the mapping exists, it uses the corresponding `channelID` and `thingID` to sign and forward the content of the LoRa message to the Mainflux message broker.
diff --git a/docs/messaging.md b/docs/messaging.md
deleted file mode 100644
index a691386d243..00000000000
--- a/docs/messaging.md
+++ /dev/null
@@ -1,120 +0,0 @@
-Once a channel is provisioned and a thing is connected to it, the thing can start
-publishing messages on the channel. The following sections provide an example
-of message publishing for each of the supported protocols.
-
-## HTTP
-
-To publish a message over a channel, the thing should send the following request:
-
-```
-curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/senml+json" -H "Authorization: <thing_auth_key>" https://localhost/http/channels/<channel_id>/messages -d '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
-```
-
-Note that if you're going to use the SenML message format, you should always send
-messages as an array.
-
-## WebSocket
-
-To publish and receive messages over a channel using WebSocket, you should first
-send a handshake request to the `/channels/<channel_id>/messages` path. Don't forget
-to send the `Authorization` header with the thing authorization key. In order to pass
-the message content type to the WS adapter you can use the `Content-Type` header.
-
-If you are not able to send custom headers in your handshake request, send them as the
-query parameters `authorization` and `content-type`. Then your path should look like
-this: `/channels/<channel_id>/messages?authorization=<thing_auth_key>&content-type=<content_type>`.
-
-If you are using the docker environment, prepend the URL with `ws`. So for example:
-`/ws/channels/<channel_id>/messages?authorization=<thing_auth_key>&content-type=<content_type>`.
- 
-### Basic nodejs example
-
-```javascript
-const WebSocket = require('ws');
-
-// do not verify self-signed certificates if you are using one
-process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'
-
-// cbf02d60-72f2-4180-9f82-2c957db929d1 is an example of a thing_auth_key
-const ws = new WebSocket('wss://localhost/ws/channels/1/messages?authorization=cbf02d60-72f2-4180-9f82-2c957db929d1&content-type=application%2Fsenml%2Bjson')
-
-ws.on('open', () => {
-    ws.send('something')
-})
-
-ws.on('message', (data) => {
-    console.log(data)
-})
-ws.on('error', (e) => {
-    console.log(e)
-})
-```
-
-## MQTT
-
-To send and receive messages over MQTT you can use the [Mosquitto tools](https://mosquitto.org),
-or [Paho](https://www.eclipse.org/paho/) if you want to use MQTT over WebSocket.
-
-To publish a message over a channel, the thing should run the following command:
-
-```
-mosquitto_pub -u <thing_id> -P <thing_key> -t channels/<channel_id>/messages -h localhost -m '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
-```
-
-To subscribe to a channel, the thing should run the following command:
-
-```
-mosquitto_sub -u <thing_id> -P <thing_key> -t channels/<channel_id>/messages -h localhost
-```
-
-In order to pass the content type as part of the topic, append it to the end
-of the existing topic. The content type value should always be prefixed with `/ct/`.
-If you want to use a standard topic such as `channels/<channel_id>/messages`
-with the SenML content type, you should use the following topic: `channels/<channel_id>/messages/ct/application_senml-json`. Characters like `/` and `+` in the content type are
-replaced with `_` and `-` respectively.
-
-If you are using TLS to secure the MQTT connection, add `--cafile docker/ssl/certs/ca.crt`
-to every command.
-
-## CoAP
-
-The CoAP adapter implements the CoAP protocol over UDP according to [RFC 7252](https://tools.ietf.org/html/rfc7252).
To send and receive messages over CoAP, you can use the [Copper](https://github.com/mkovatsc/Copper) CoAP user-agent. To set up the add-on, please follow the installation instructions provided [here](https://github.com/mkovatsc/Copper#how-to-integrate-the-copper-sources-into-firefox). Once Mozilla Firefox and Copper are ready, and the CoAP adapter is running locally on the default port (5683), you can navigate to the appropriate URL and start using CoAP. The URL should look like this:
-
-```
-coap://localhost/channels/<channel_id>/messages?authorization=<thing_auth_key>
-```
-
-To send a message, use a `POST` request. When posting a message you can pass the content type in the `Content-Format` option.
-To subscribe, send a `GET` request with the Observe option set to 0. There are two ways to unsubscribe:
- 1) Send a `GET` request with the Observe option set to 1.
- 2) Forget the token and send an `RST` message as a response to a `CON` message received from the server.
-
-Most of the notifications received from the adapter are non-confirmable. Per [RFC 7641](https://tools.ietf.org/html/rfc7641#page-18):
-
-> Server must send a notification in a confirmable message instead of a non-confirmable message at least every 24 hours. This prevents a client that went away or is no longer interested from remaining in the list of observers indefinitely.
-
-The CoAP adapter sends these notifications every 12 hours. To configure this period, please check the [adapter documentation](https://www.github.com/mainflux/mainflux/tree/master/coap/README.md). If the client is no longer interested in receiving notifications, the second scenario described above can be used to unsubscribe.
-
-## Subtopics
-
-In order to use subtopics and give more meaning to your pub/sub channel, you can simply add any suffix to the base `/channels/<channel_id>/messages` topic.
-
-An example subtopic publish/subscribe for bedroom temperature would be `channels/<channel_id>/messages/bedroom/temperature`.
-
-Subtopics are generic and multilevel. You can use almost any suffix with any depth.
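As a sketch of the subtopic rules used by the platform (empty parts are dropped, `/` becomes `.`, and the NATS wildcards `*` and `>` are allowed only as whole parts), here is an illustrative Node.js helper; `normalizeSubtopic` is not the adapter's actual code:

```javascript
// Illustrative sketch (not the adapter's code): normalize a subtopic.
// Empty parts are dropped, '/' becomes '.', and a part may contain
// '*' or '>' only if it IS exactly that wildcard.
function normalizeSubtopic(subtopic) {
  const parts = subtopic.split(/[./]/).filter((p) => p !== '');
  for (const p of parts) {
    if ((p.includes('*') || p.includes('>')) && p !== '*' && p !== '>') {
      throw new Error(`invalid subtopic part: ${p}`);
    }
  }
  return parts.join('.');
}

console.log(normalizeSubtopic('a///b'));               // a.b
console.log(normalizeSubtopic('bedroom/temperature')); // bedroom.temperature
```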
- 
-Topics with subtopics are propagated to the NATS broker in the following format: `channel.<channel_id>.<subtopic>`.
-
-Our example topic `channels/<channel_id>/messages/bedroom/temperature` will be translated to the corresponding NATS topic `channel.<channel_id>.bedroom.temperature`.
-
-You can use multilevel subtopics that have multiple parts. These parts are separated by the `.` or `/` separator.
-When you use a combination of the two, keep in mind that behind the scenes the `/` separator is replaced with `.`.
-Every empty part of the subtopic is removed, which means that the subtopic `a///b` is equivalent to `a/b`.
-When you want to subscribe, you can use the NATS wildcards `*` and `>`. Every subtopic part can have `*` or `>` as its value, but if there is any other character besides these wildcards, the subtopic will be invalid. This means that subtopics such as `a.b*c.d` are invalid, while `a.b.*.c.d` is valid.
-
-Authorization is done at the channel level, so you only need access to a channel in order to access
-its subtopics.
-
-**Note:** When using MQTT, it's recommended that you use the standard MQTT wildcards `+` and `#`.
-
-For more information and examples check out the [official nats.io documentation](https://nats.io/documentation/writing_applications/subscribing/)
\ No newline at end of file
diff --git a/docs/opcua.md b/docs/opcua.md
deleted file mode 100644
index a1cc8048ea7..00000000000
--- a/docs/opcua.md
+++ /dev/null
@@ -1,53 +0,0 @@
-Bridging with an OPC-UA Server can be done over the [opcua-adapter](https://github.com/mainflux/mainflux/tree/master/opcua). This service sits between Mainflux and an [OPC-UA Server](https://en.wikipedia.org/wiki/OPC_Unified_Architecture) and forwards messages from one system to the other.
-
-## Run OPC-UA Server
-
-The OPC-UA Server provides the connectivity layer. It allows various methods to read information from the OPC-UA server and its nodes.
The current version of the opcua-adapter is still experimental and only the `Read` and `Subscribe` methods are implemented.
-[Public OPC-UA test servers](https://github.com/node-opcua/node-opcua/wiki/publicly-available-OPC-UA-Servers-and-Clients) are available for testing OPC-UA clients and can be used for development and test purposes.
-
-## Mainflux and OPC-UA Server
-
-Once the OPC-UA Server you want to connect to is running, execute the following command from the Mainflux project root to run the opcua-adapter:
-
-```bash
-docker-compose -f docker/addons/opcua-adapter/docker-compose.yml up -d
-```
-
-**Remark:** By default, `MF_OPCUA_ADAPTER_SERVER_URI` is set to `opc.tcp://opcua.rocks:4840` in the [docker-compose.yml](https://github.com/mainflux/mainflux/blob/master/docker/addons/opcua-adapter/docker-compose.yml) file of the adapter. If you run the composition without configuring this variable, you will start receiving messages from the public test server [OPC UA rocks](https://opcua.rocks/open62541-online-test-server).
-
-### Route Map
-
-The opcua-adapter uses a [Redis](https://redis.io/) database to create a route map between Mainflux and an OPC-UA Server. While Mainflux uses Thing and Channel IDs to sign messages, OPC-UA servers use Node Namespaces and Node Identifiers (the combination is called a NodeID). The adapter's route map associates a `ThingID` with a `Node Identifier` and a `ChannelID` with a `Node Namespace`.
-
-The opcua-adapter uses the metadata of provision events emitted by the Mainflux system to update its route map. For that, you must provision Mainflux `Channels` and `Things` with an extra metadata key in the JSON body of the HTTP request. It must be a JSON object with the key `opcua`, whose value is another JSON object. This nested JSON object should contain a `namespace` or `id` field.
In this case, `namespace` or `id` must be an existing OPC-UA `Node Namespace` or `Node Identifier`:
-
-**Channel structure:**
-
-```
-{
-  "name": "<channel name>",
-  "metadata": {
-    "opcua": {
-      "namespace": "<node namespace>"
-    }
-  }
-}
-```
-
-**Thing structure:**
-
-```
-{
-  "type": "device",
-  "name": "<thing name>",
-  "metadata": {
-    "opcua": {
-      "id": "<node identifier>"
-    }
-  }
-}
-```
-
-#### Messaging
-
-To forward OPC-UA messages, the opcua-adapter subscribes to the NodeID `ns=<namespace>;i=<identifier>` of the OPC-UA Server. It verifies the `namespace` and `id` of published messages. If the mapping exists, it uses the corresponding `ChannelID` and `ThingID` to sign and forward the content of the OPC-UA message to the Mainflux message broker.
diff --git a/docs/provisioning.md b/docs/provisioning.md
deleted file mode 100644
index f4c23b40df4..00000000000
--- a/docs/provisioning.md
+++ /dev/null
@@ -1,324 +0,0 @@
-Provisioning is the process of configuring an IoT platform, in which a system operator creates and sets up the different entities
-used in the platform - users, channels and things.
-
-## Users management
-
-### Account creation
-
-Use the Mainflux API to create a user account:
-
-```
-curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" https://localhost/users -d '{"email":"john.doe@email.com", "password":"123"}'
-```
-
-Note that when using the official `docker-compose`, all services are behind an `nginx`
-proxy and all traffic is `TLS` encrypted.
- -### Obtaining an authorization key - -In order for this user to be able to authenticate to the system, you will have -to create an authorization token for them: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" https://localhost/tokens -d '{"email":"john.doe@email.com", "password":"123"}' -``` - -The response should look like this: -``` -{ - "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1MjMzODg0NzcsImlhdCI6MTUyMzM1MjQ3NywiaXNzIjoibWFpbmZsdXgiLCJzdWIiOiJqb2huLmRvZUBlbWFpbC5jb20ifQ.cygz9zoqD7Rd8f88hpQNilTCAS1DrLLgLg4PRcH-iAI" -} -``` - -## System provisioning - -Before proceeding, make sure that you have created a new account and obtained -an authorization key. - -### Provisioning things - -> This endpoint will be deprecated in 0.11.0. It will be replaced with the bulk endpoint currently found at /things/bulk. - -Things are created by executing a `POST /things` request with a JSON payload. -Note that you will also need `user_auth_token` in order to create things -that belong to this particular user. - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" -H "Authorization: " https://localhost/things -d '{"name":"weio"}' -``` - -The response will contain a `Location` header whose value represents the path to the newly -created thing: - -``` -HTTP/1.1 201 Created -Content-Type: application/json -Location: /things/81380742-7116-4f6f-9800-14fe464f6773 -Date: Tue, 10 Apr 2018 10:02:59 GMT -Content-Length: 0 -``` - -### Bulk provisioning things - -Multiple things can be created by executing a `POST /things/bulk` request with a JSON payload. The payload should contain a JSON array of the things to be created. If there is an error with any of the things, none of the things will be created.
- -```bash -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" -H "Authorization: " https://localhost/things/bulk -d '[{"name":"weio"},{"name":"bob"}]' -``` - -The response's body will contain a list of the created things. - -``` -HTTP/2 201 -server: nginx/1.16.0 -date: Tue, 22 Oct 2019 02:19:15 GMT -content-type: application/json -content-length: 222 -access-control-expose-headers: Location - -{"things":[{"id":"8909adbf-312f-41eb-8cfc-ccc8c4e3655e","name":"weio","key":"4ef103cc-964a-41b5-b75b-b7415c3a3619"},{"id":"2fcd2349-38f7-4b5c-8a29-9607b2ca8ff5","name":"bob","key":"ff0d1490-355c-4dcf-b322-a4c536c8c3bf"}]} -``` - -### Retrieving provisioned things - -To retrieve the data of provisioned things stored in the database, you -can send the following request: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/things -``` - -Notice that you will receive only those things that were provisioned by -the `user_auth_token` owner. - -``` -HTTP/1.1 200 OK -Content-Type: application/json -Date: Tue, 10 Apr 2018 10:50:12 GMT -Content-Length: 1105 - -{ - "total": 2, - "offset": 0, - "limit": 10, - "things": [ - { - "id": "81380742-7116-4f6f-9800-14fe464f6773", - "name": "weio", - "key": "7aa91f7a-cbea-4fed-b427-07e029577590" - }, - { - "id": "cb63f852-2d48-44f0-a0cf-e450496c6c92", - "name": "myapp", - "key": "cbf02d60-72f2-4180-9f82-2c957db929d1" - } - ] -} -``` - -You can specify `offset` and `limit` parameters in order to fetch a specific -group of things. In that case, your request should look like: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/things?offset=0&limit=5 -``` - -You can specify `name` and/or `metadata` parameters in order to fetch a specific -group of things.
When specifying metadata, you can provide just the part of the metadata JSON you want to match: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/things?offset=0&limit=5&metadata={"serial":"123456"} -``` - -If you don't provide them, default values will be used instead: 0 for `offset`, -and 10 for `limit`. Note that `limit` cannot be set to values greater than 100. Providing -invalid values will be considered a malformed request. - -### Removing things - -To remove your own thing, send the following request: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X DELETE -H "Authorization: " https://localhost/things/ -``` - -### Provisioning channels - -> This endpoint will be deprecated in 0.11.0. It will be replaced with the bulk endpoint currently found at /channels/bulk. - -Channels are created by executing a `POST /channels` request: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" -H "Authorization: " https://localhost/channels -d '{"name":"mychan"}' -``` - -After sending the request, you should receive a response with a `Location` header that -contains the path to the newly created channel: - -``` -HTTP/1.1 201 Created -Content-Type: application/json -Location: /channels/19daa7a8-a489-4571-8714-ef1a214ed914 -Date: Tue, 10 Apr 2018 11:30:07 GMT -Content-Length: 0 -``` - -### Bulk provisioning channels - -Multiple channels can be created by executing a `POST /channels/bulk` request with a JSON payload. The payload should contain a JSON array of the channels to be created. If there is an error with any of the channels, none of the channels will be created.
- -```bash -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" -H "Authorization: " https://localhost/channels/bulk -d '[{"name":"joe"},{"name":"betty"}]' -``` - -The response's body will contain a list of the created channels. - -``` -HTTP/2 201 -server: nginx/1.16.0 -date: Tue, 22 Oct 2019 02:14:41 GMT -content-type: application/json -content-length: 135 -access-control-expose-headers: Location - -{"channels":[{"id":"5a21bbcb-4c9a-4bb4-af31-9982d00f7a6e","name":"joe"},{"id":"d74b119b-2eea-4285-a999-9f747869bb45","name":"betty"}]} -``` - -### Retrieving provisioned channels - -To retrieve provisioned channels, send a request to `/channels` with -the authorization token in the `Authorization` header: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/channels -``` - -Note that you will receive only those channels that were created by the authorization -token's owner. - -``` -HTTP/1.1 200 OK -Content-Type: application/json -Date: Tue, 10 Apr 2018 11:38:06 GMT -Content-Length: 139 - -{ - "total": 1, - "offset": 0, - "limit": 10, - "channels": [ - { - "id": "19daa7a8-a489-4571-8714-ef1a214ed914", - "name": "mychan" - } - ] -} -``` - -You can specify `offset` and `limit` parameters in order to fetch a specific -group of channels. In that case, your request should look like: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/channels?offset=0&limit=5 -``` - -If you don't provide them, default values will be used instead: 0 for `offset`, -and 10 for `limit`. Note that `limit` cannot be set to values greater than 100. Providing -invalid values will be considered a malformed request.
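The pagination rules above (defaults of 0 and 10, a hard cap of 100 on `limit`, malformed otherwise) can be sketched in Python. The `parse_pagination` helper is illustrative only, not Mainflux's actual implementation:

```python
def parse_pagination(params: dict) -> tuple:
    """Apply the pagination rules described above:
    default offset=0, default limit=10, limit capped at 100."""
    offset = params.get("offset", "0")
    limit = params.get("limit", "10")
    # Non-numeric (including negative) values are rejected as malformed.
    if not (str(offset).isdigit() and str(limit).isdigit()):
        raise ValueError("malformed request")
    offset, limit = int(offset), int(limit)
    if limit > 100:
        raise ValueError("malformed request")  # limit cannot exceed 100
    return offset, limit

print(parse_pagination({}))              # defaults apply: (0, 10)
print(parse_pagination({"limit": "5"}))  # (0, 5)
```

The same defaults and cap apply to the things, channels, and messages listing endpoints.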
- -### Removing channels - -To remove a specific channel, send the following request: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X DELETE -H "Authorization: " https://localhost/channels/ -``` - -## Access control - -A channel can be viewed as a communication group of things. Only things that -are connected to the channel can send and receive messages from other things -in this channel. Things that are not connected to this channel are not allowed -to communicate over it. - -Only the user who owns both the channel and the things can connect the -things to the channel (which is the equivalent of granting these things permission -to communicate over the given communication group). - -To connect a thing to a channel, send the following request: - -> This endpoint will be deprecated in 0.11.0. It will be replaced with the bulk endpoint found at /connect. - -```bash -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X PUT -H "Authorization: " https://localhost/channels//things/ -``` - -To connect multiple things to a channel, you can send the following request: - -```bash -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X POST -H "Content-Type: application/json" -H "Authorization: " https://localhost/connect -d '{"channel_ids":["", ""],"thing_ids":["", ""]}' -``` - -You can observe which things are connected to a specific channel: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/channels//things -``` - -The response should look like this: - -``` -{ - "total": 2, - "offset": 0, - "limit": 10, - "things": [ - { - "id": "3ffb3880-d1e6-4edd-acd9-4294d013f35b", - "name": "d0", - "key": "b1996995-237a-4552-94b2-83ec2e92a040", - "metadata": "{}" - }, - { - "id": "94d166d6-6477-43dc-93b7-5c3707dbef1e", - "name": "d1", - "key": "e4588a68-6028-4740-9f12-c356796aebe8", - "metadata": "{}" - } - ] -}
-``` - -You can also observe to which channels a specific thing is connected: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -H "Authorization: " https://localhost/things//channels -``` - -The response should look like this: - -``` -{ - "total": 2, - "offset": 0, - "limit": 10, - "channels": [ - { - "id": "5e62eb13-2695-4860-8d87-85b8a2f80fd4", - "name": "c1", - "metadata": "{}" - }, - { - "id": "c4b5e19a-7ffe-4172-b2c5-c8b9d570a165", - "name": "c0", - "metadata":"{}" - } - ] -} -``` - -If you want to disconnect your thing from the channel, send the following request: - -``` -curl -s -S -i --cacert docker/ssl/certs/mainflux-server.crt --insecure -X DELETE -H "Authorization: " https://localhost/channels//things/ -``` diff --git a/docs/security.md b/docs/security.md deleted file mode 100644 index 30c5050832b..00000000000 --- a/docs/security.md +++ /dev/null @@ -1,51 +0,0 @@ -## Server configuration - -### Users - -If either the cert or key is not set, the server will use insecure transport. - -`MF_USERS_SERVER_CERT` the path to the server certificate in PEM format. - -`MF_USERS_SERVER_KEY` the path to the server key in PEM format. - -### Things - -If either the cert or key is not set, the server will use insecure transport. - -`MF_THINGS_SERVER_CERT` the path to the server certificate in PEM format. - -`MF_THINGS_SERVER_KEY` the path to the server key in PEM format. - -## Client configuration - -If you wish to secure the gRPC connection to the `things` and `users` services, you must define the CAs that you trust. This does not support mutual certificate authentication. - -### Adapter configuration - -`MF_HTTP_ADAPTER_CA_CERTS`, `MF_MQTT_ADAPTER_CA_CERTS`, `MF_WS_ADAPTER_CA_CERTS`, `MF_COAP_ADAPTER_CA_CERTS` - the path to a file that contains the CAs in PEM format. If not set, the default connection will be insecure. If it fails to read the file, the adapter will fail to start up.
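The adapter startup behavior just described (an unset variable means insecure transport, while a set but unreadable path is a fatal startup error) can be sketched in Python. The `load_ca_certs` helper is our own illustration of the rule, not Mainflux code (the adapters themselves are written in Go):

```python
import os
import sys

def load_ca_certs(env_var: str):
    """Mirror the CA-cert rule described above: unset means insecure,
    unreadable means refuse to start."""
    path = os.environ.get(env_var, "")
    if not path:
        return None  # no CA configured - fall back to an insecure connection
    try:
        with open(path, "rb") as f:
            return f.read()  # PEM bundle to hand to the TLS credentials
    except OSError:
        sys.exit(f"failed to read CA certs from {path}")

# With the variable unset, the adapter proceeds without TLS.
print(load_ca_certs("MF_HTTP_ADAPTER_CA_CERTS"))
```

The same rule applies to `MF_THINGS_CA_CERTS` below.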
- -### Things - -`MF_THINGS_CA_CERTS` - the path to a file that contains the CAs in PEM format. If not set, the default connection will be insecure. If it fails to read the file, the service will fail to start up. - -## Securing PostgreSQL connections - -By default, Mainflux will connect to Postgres using insecure transport. -If a secured connection is required, you can select the SSL mode and set paths to any extra certificates and keys needed. - -`MF_USERS_DB_SSL_MODE` the SSL connection mode for Users. -`MF_USERS_DB_SSL_CERT` the path to the certificate file for Users. -`MF_USERS_DB_SSL_KEY` the path to the key file for Users. -`MF_USERS_DB_SSL_ROOT_CERT` the path to the root certificate file for Users. - -`MF_THINGS_DB_SSL_MODE` the SSL connection mode for Things. -`MF_THINGS_DB_SSL_CERT` the path to the certificate file for Things. -`MF_THINGS_DB_SSL_KEY` the path to the key file for Things. -`MF_THINGS_DB_SSL_ROOT_CERT` the path to the root certificate file for Things. - -Supported database connection modes are: `disable` (default), `require`, `verify-ca` and `verify-full`. - -## Securing gRPC -By default, gRPC communication is not secure, as the Mainflux system is most often run in a private network behind a reverse proxy. - -However, TLS can be activated and configured. diff --git a/docs/storage.md b/docs/storage.md deleted file mode 100644 index 4be26a07e4e..00000000000 --- a/docs/storage.md +++ /dev/null @@ -1,154 +0,0 @@ -Mainflux supports various storage databases in which messages are stored: -- CassandraDB -- MongoDB -- InfluxDB -- PostgreSQL - -These storages are activated via docker-compose add-ons. - -The `/docker` folder contains an `addons` directory. This directory is used for various services that are not core to the Mainflux platform but could be used for providing additional features. - -In order to run these services, the core services, as well as the network from the core composition, should already be running.
- -## Writers - -Writers provide an implementation of various `message writers`. Message writers are services that consume Mainflux messages, transform them to `SenML` format, and store them in a specific data store. - -Each writer can filter messages based on the channel list that is set in the -`channels.toml` configuration file. If you want to listen on all channels, -just pass the single element `["*"]`; otherwise, pass the list of channels. Here is -an example: - -```toml -[channels] -filter = ["*"] -``` - -### InfluxDB, InfluxDB-writer and Grafana - -From the project root execute the following command: - -```bash -docker-compose -f docker/addons/influxdb-writer/docker-compose.yml up -d -``` - -This will install and start: - -- [InfluxDB](https://docs.influxdata.com/influxdb) - time series database -- InfluxDB writer - message repository implementation for InfluxDB -- [Grafana](https://grafana.com) - a tool for database exploration, data visualization, and analytics - -These new services will take up some additional ports: - -- 8086 by InfluxDB -- 8900 by InfluxDB writer service -- 3001 by Grafana - -To access Grafana, navigate to `http://localhost:3001` and log in with username `admin` and password `admin`. - -### Cassandra and Cassandra-writer - -```bash -./docker/addons/cassandra-writer/init.sh -``` - -_Please note that Cassandra may not be suitable for your testing environment because of its high system requirements._ - -### MongoDB and MongoDB-writer - -```bash -docker-compose -f docker/addons/mongodb-writer/docker-compose.yml up -d -``` - -MongoDB's default port (27017) is exposed, so you can use various tools for database inspection and data visualization. - -### PostgreSQL and PostgreSQL-writer - -```bash -docker-compose -f docker/addons/postgres-writer/docker-compose.yml up -d -``` - -Postgres' default port (5432) is exposed, so you can use various tools for database inspection and data visualization. - -## Readers - -Readers provide an implementation of various `message readers`.
-Message readers are services that consume normalized (`SenML`-formatted) Mainflux messages from data storage and open an HTTP API for message consumption. -The corresponding writer must be installed before the reader. - -Each of the Reader services exposes the same [HTTP API](https://github.com/mainflux/mainflux/blob/master/readers/swagger.yml) for fetching messages on its default port. - -To read messages sent on a channel with ID `channel_id`, send a `GET` request to `/channels//messages` with a thing access token in the `Authorization` header. That thing must be connected to the channel with `channel_id`. - -The response should look like this: - -```http -HTTP/1.1 200 OK -Content-Type: application/json -Date: Tue, 18 Sep 2018 18:56:19 GMT -Content-Length: 228 - -{ - "messages": [ - { - "Channel": 1, - "Publisher": 2, - "Protocol": "mqtt", - "Name": "name:voltage", - "Unit": "V", - "Value": 5.6, - "Time": 48.56 - }, - { - "Channel": 1, - "Publisher": 2, - "Protocol": "mqtt", - "Name": "name:temperature", - "Unit": "C", - "Value": 24.3, - "Time": 48.56 - } - ] -} -``` - -Note that you will receive only those messages that were sent by the authorization token's owner. -You can specify `offset` and `limit` parameters in order to fetch a specific group of messages. An example HTTP request looks like: - -```bash -curl -s -S -i -H "Authorization: " http://localhost:/channels//messages?offset=0&limit=5 -``` - -If you don't provide these parameters, default values will be used instead: 0 for `offset`, and 10 for `limit`.
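A reader query with explicit pagination, plus a minimal parse of the response shape shown above, can be sketched in Python. The helper names are ours, and the base URL and reader port are whatever your deployment exposes:

```python
import json
from urllib.parse import urlencode

def messages_url(base: str, channel_id: str, offset: int = 0, limit: int = 10) -> str:
    """Build the reader endpoint URL with the pagination defaults described above."""
    query = urlencode({"offset": offset, "limit": limit})
    return f"{base}/channels/{channel_id}/messages?{query}"

def values_by_name(response_body: str) -> dict:
    """Index the reader's JSON response (shape shown above) by message name."""
    payload = json.loads(response_body)
    return {m["Name"]: m["Value"] for m in payload["messages"]}

# A trimmed-down sample of the response body above.
sample = ('{"messages":[{"Name":"name:voltage","Value":5.6},'
          '{"Name":"name:temperature","Value":24.3}]}')
print(messages_url("http://localhost", "1", limit=5))
print(values_by_name(sample))
```

Fetching the URL would still require the thing access token in the `Authorization` header, as in the curl example above.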
- -### InfluxDB-reader - -To start the InfluxDB reader, execute the following command: - -```bash -docker-compose -f docker/addons/influxdb-reader/docker-compose.yml up -d -``` - -### Cassandra-reader - -To start the Cassandra reader, execute the following command: - -```bash -docker-compose -f docker/addons/cassandra-reader/docker-compose.yml up -d -``` - -### MongoDB-reader - -To start the MongoDB reader, execute the following command: - -```bash -docker-compose -f docker/addons/mongodb-reader/docker-compose.yml up -d -``` - -### PostgreSQL-reader - -To start the PostgreSQL reader, execute the following command: - -```bash -docker-compose -f docker/addons/postgres-reader/docker-compose.yml up -d -``` diff --git a/docs/tracing.md b/docs/tracing.md deleted file mode 100644 index d90431e092b..00000000000 --- a/docs/tracing.md +++ /dev/null @@ -1,54 +0,0 @@ -# Tracing - -Distributed tracing is a method of profiling and monitoring applications. It can provide valuable insight when optimizing and debugging an application. Mainflux includes the [Jaeger](https://www.jaegertracing.io) open tracing framework as a service with its stack by default. - -## Launch - -The Jaeger service will launch with the rest of the Mainflux services. All services can be launched using: - -```bash -make run -``` - -The Jaeger UI can then be accessed at ```http://localhost:16686``` from a browser. Details about the UI can be found in [Jaeger's official documentation](https://www.jaegertracing.io/docs/1.14/frontend-ui/). - -## Configure - -The Jaeger service can be disabled by using the `scale` flag with ```docker-compose up``` and setting the jaeger container to 0. - -```bash ---scale jaeger=0 -``` - -```make rungw``` will run all of the Mainflux services except for the Jaeger service. This is currently the only difference from ```make run```. -> The ```make rungw``` command runs Mainflux for gateway devices. There could potentially be more differences running with this command in the future.
- -Jaeger uses 5 ports within the Mainflux framework. These ports can be edited in the `.env` file. - -| Variable | Description | Default | - | ------------------- | ------------------------------------------------- | ----------- | - | MF_JAEGER_PORT | Agent port for compact jaeger.thrift protocol | 6831 | - | MF_JAEGER_FRONTEND | UI port | 16686 | - | MF_JAEGER_COLLECTOR | Collector for jaeger.thrift directly from clients | 14268 | - | MF_JAEGER_CONFIGS | Configuration server | 5778 | - | MF_JAEGER_URL | Jaeger access from within Mainflux | jaeger:6831 | - -## Example - -As an example of using Jaeger, we can look at the traces generated after provisioning the system. Make sure you have run the provisioning script that is part of the [Getting Started](./getting-started.md) step. - -Before getting started with Jaeger, there are a few terms that are important to define. A `trace` can be thought of as one transaction within the system. A trace is made up of one or more `spans`. These are the individual steps that must be taken for a trace to perform its action. A span has `tags` and `logs` associated with it. Tags are key-value pairs that provide information such as a database type or HTTP method. Tags are useful when filtering traces in the Jaeger UI. Logs are structured messages used at specific points in the trace's transaction. These are typically used to indicate an error. - -When first navigating to the Jaeger UI, it will present a search page with an empty results section. There are multiple fields to search from including service, operation, tags, and time frames. Clicking `Find Traces` will fill the results section with traces containing the selected fields. - -![Search page with results](img/tracing/search.png) - -The top of the results page includes a scatter plot of the traces and their durations. This can be very useful for finding a trace with a prolonged runtime. Clicking on one of the points will open the trace page of that trace.
- -Below the graph is a list of all the traces with a summary of their information. Each trace shows a unique identifier, the overall runtime, the spans it is composed of, and when it was run. Clicking on one of the traces will open the trace page of that trace. - -![Trace page with expanded spans](img/tracing/trace.png) - -The trace page provides a more detailed breakdown of the individual span calls. The top of the page shows a chart breaking down what spans the trace is spending its time in. Below the chart are the individual spans and their details. Expanding the spans shows any tags associated with that span and process information. This is also where any errors or logs seen while running the span will be reported. - -This is just a brief overview of the possibilities of Jaeger and its UI. For more information, check out [Jaeger's official documentation](https://www.jaegertracing.io/docs/1.14/frontend-ui/). diff --git a/mkdocs.yml b/mkdocs.yml deleted file mode 100644 index 50d3be3de91..00000000000 --- a/mkdocs.yml +++ /dev/null @@ -1,41 +0,0 @@ -# -# Copyright (c) Mainflux -# SPDX-License-Identifier: Apache-2.0 -# - -copyright: Copyright (c) Mainflux -repo_url: https://github.com/mainflux/mainflux -site_description: Mainflux IoT System -site_name: Mainflux -theme: readthedocs - -extra: - logo: docs/img/logo.png - author: - github: mainflux/mainflux - twitter: mainflux - -markdown_extensions: - - admonition - - toc: - permalink: "#" - -pages: - - Overview: - - About: index.md - - Contributing: CONTRIBUTING.md - - License: LICENSE.txt - - Architecture: architecture.md - - Getting Started: getting-started.md - - Provisioning: provisioning.md - - Messaging: messaging.md - - Storage: storage.md - - LoRa: lora.md - - OPC-UA: opcua.md - - Security: security.md - - Authentication: authentication.md - - CLI: cli.md - - Bootstrap: bootstrap.md - - Tracing: tracing.md - - Developer's Guide: dev-guide.md - - Load Test: load-test.md