THIS PROJECT IS UNDER CONSTRUCTION

The BlackOps library makes it easy to manage your DevOps. Once installed, BlackOps gives you an `ops` tool to perform your deployments and monitor your nodes from the comfort of your command line.

The `ops` command provides the following features:

- Continuous Deployment,
- Logs Monitoring,
- Processes Monitoring; and
- Infrastructure Monitoring.

**Note:** The `ops` command has been tested on Ubuntu 20.04.
## Table of Contents

- Getting Started
- Remote Operations
- Configuration Files
- Environment Variable `$OPSLIB`
- Remote `.op` Files
- Repositories
- Custom Parameters
- Connecting
- Installing
- Deploying
- Starting and Stopping Nodes
- Configuration Templates
- Monitoring
- Infrastructure Managing
- Custom Alerts
- Processes Watching
- Logs Watching
- Further Work
## Getting Started

- Install the `ops` command.

```bash
sudo apt-get update
sudo apt-get install blackops
```

- Run an operation.

The code below will download and execute an `.op` script that sets the hostname of your computer.

```bash
wget https://raw.githubusercontent.com/leandrosardi/blackops/refs/heads/main/ops/hostname.op
ops source ./hostname.op --local --name=dev1
```
Notes:

Here are some other considerations about the `ops` command.

- You can write `./hostname` instead of `./hostname.op`. The `source` command will look for the `./hostname` file, and if `./hostname` doesn't exist, then the `source` command will try `./hostname.op`.
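The lookup fallback described above can be sketched in Ruby (a hypothetical `resolve_op_path` helper for illustration; it is not part of the blackops API):

```ruby
# Hypothetical helper illustrating the lookup: try the literal path
# first, then the same path with the `.op` extension appended.
def resolve_op_path(path)
  return path if File.exist?(path)
  with_ext = "#{path}.op"
  return with_ext if File.exist?(with_ext)
  raise "Operation not found: #{path}"
end
```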
- If you are writing Ruby code, you can install the `blackops` gem. Such a gem allows you to perform all the same operations from Ruby code.

First, install the gem.

```bash
gem install blackops
```

Then, execute your ops from a Ruby script using the `source_local` method:

```ruby
require 'simple_cloud_logging'
require 'blackops'

l = BlackStack::LocalLogger.new('./example.log')

BlackOps.source_local(
    op: './hostname.op',
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```
- The content of `hostname.op` looks like this:

**hostname.op**

```bash
# Description:
# - Very simple script that shows how to use an `.op` file to change the hostname of a node.
# - Run this op as root.
#
# Change hostname
RUN hostnamectl set-hostname "$$name"
```

- The argument `--name` in the `ops` command replaces the `$$name` variable in the `.op` file.

- You can define any variable in your `.op` file, and you can set its value with a command argument.

E.g.:

**set-rubylib.op**

```bash
RUN export RUBYLIB=$$rubylib
```

- All the variables defined in the `.op` file must be present in the list of arguments of the `ops` command. Or, if you are using the `blackops` gem, all the variables must be present in the `parameters` hash.
## Remote Operations

You can also run operations on a remote node through SSH. Use the `--ssh` argument instead of `--local`.

```bash
ops source ./hostname.op --ssh=username:password@ip:port --name=prod1
```

If you are coding with Ruby, call the `source_remote` method.

```ruby
require 'simple_cloud_logging'
require 'blackstack-nodes'
require 'blackops'

l = BlackStack::LocalLogger.new('./example.log')

n = BlackStack::Infrastructure::Node.new({
    :ip => '81.28.96.103',
    :ssh_username => 'root',
    :ssh_port => 22,
    :ssh_password => '****',
})

BlackOps.source_remote(
    node: n,
    op: './hostname.op',
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```
## Configuration Files

You can define nodes in a configuration file. Such a configuration file is written with Ruby syntax. The `ops` command has a Ruby interpreter embedded, so you don't need to have Ruby installed on your computer.

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 'prod1',
    :ip => '55.55.55.55',
    :ssh_username => 'blackstack',
    :ssh_port => 22,
    :ssh_password => 'blackstack-password',
    :ssh_root_password => 'root-password',
})
```

Then you can run the `ops` command referencing:

- such a configuration file;
- the node defined in such a configuration file; and
- the `--root` flag to use the `root` user for this operation.

```bash
ops source ./hostname.op --config=./BlackOpsFile --node=prod1 --root --name=prod1
```
You can do the same from Ruby code by loading the `BlackOpsFile` and calling the `source_remote` method:

```ruby
require 'simple_cloud_logging'
require 'blackstack-nodes'
require 'blackops'

l = BlackStack::LocalLogger.new('./example.log')

load './BlackOpsFile' # <===

BlackOps.source_remote(
    'prod1', # name of the node defined in `BlackOpsFile`
    op: './hostname.op',
    parameters: {
        'name' => 'dev1',
    },
    connect_as_root: true,
    logger: l
)
```

Note:

- In the example above, if `connect_as_root` is disabled, then BlackOps will access the node with the `blackstack` user. Otherwise, it will access with the `root` user.
## Environment Variable `$OPSLIB`

Additionally, you can store one or more paths in the environment variable `$OPSLIB`. The `ops source` command will look for `BlackOpsFile` there. Using `$OPSLIB`, you don't need to write the `--config` argument every time you call the `ops source` command.

E.g.:

```bash
export OPSLIB=~/
ops source ./hostname.op --node=prod1 --name=prod1
```

The environment variable `$OPSLIB` can include a list of folders separated by `:`.

E.g.:

```bash
export OPSLIB=~/:/home/leandro/code1:/home/leandro/code2
ops source ./hostname.op --node=prod1 --name=prod1
```

Notes:

There are some considerations about the `$OPSLIB` variable:

- If the `BlackOpsFile` file is present in more than one path, then the `ops` command will show an error message: `Configuration file is present in more than one path: <list of paths.>`.
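The search described above could work roughly like this (a hypothetical sketch of the lookup, not the actual blackops source):

```ruby
# Hypothetical sketch: locate BlackOpsFile across the colon-separated
# paths in $OPSLIB, raising when the file is found in more than one path.
def find_config(opslib, filename = 'BlackOpsFile')
  hits = opslib.split(':')
               .map { |p| File.join(File.expand_path(p), filename) }
               .select { |f| File.exist?(f) }
               .uniq
  raise "Configuration file is present in more than one path: #{hits.join(', ')}" if hits.size > 1
  raise 'Configuration file not found.' if hits.empty?
  hits.first
end
```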
## Remote `.op` Files

You can refer to `.op` files hosted on the web.

E.g.:

```bash
ops source https://raw.githubusercontent.com/leandrosardi/blackops/refs/heads/main/ops/hostname.op --node=prod1 --name=prod1
```

## Repositories

In your configuration file, you can define the locations where to find the `.op` files. Such locations must be either:

- folders in your local computer, or
- URLs on the web.

**BlackOpsFile**

```ruby
...
BlackOps.set(
    repositories: [
        # private operations defined in my local computer.
        '/home/leandro/code1/blackops/ops',
        # public operations defined in the blackops repository.
        'https://raw.githubusercontent.com/leandrosardi/blackops/refs/heads/main/ops',
    ],
)
...
```

Any call to the `ops` command gets simplified, because you don't need to write the full path to the `.op` file.

```bash
ops source hostname.op --node=prod1 --name=prod1
```

Notes:

There are some considerations about the repositories.

- If the file `hostname.op` is present in more than one repository, then the `ops` command will show an error message: `Operation hostname.op is present in more than one repository: <list of repositories.>`.
## Custom Parameters

The argument `--name` is not really necessary in the command line,

```bash
ops source hostname.op --node=prod1 --name=prod1
```

because it is already defined in the hash descriptor of the node (`:name`).

**BlackOpsFile**

```ruby
...
BlackOps.add_node({
    :name => 'prod1', # <=====
    :ip => '55.55.55.55',
    :ssh_username => 'blackstack',
    :ssh_port => 22,
    :ssh_password => 'blackops-password',
    :ssh_root_password => 'root-password',
})
...
```
You can define any custom parameter in the hash descriptor of your node.

E.g.: You can define the value for the `--rubylib` argument,

```ruby
...
BlackOps.add_node({
    :name => 'prod1',
    :rubylib => '/home/blackstack/code', # <=====
    :ip => '55.55.55.55',
    :ssh_username => 'blackstack',
    :ssh_port => 22,
    :ssh_password => 'blackops-password',
    :ssh_root_password => 'root-password',
})
...
```

so the execution of any operation gets simplified even more.

E.g.: The `--rubylib` argument in the command line is no longer needed:

```bash
ops source set-rubylib.op --node=prod1
```
## Connecting

You can access any node via SSH using the `ops ssh` command and the credentials defined in `BlackOpsFile`. The goal of the `ops ssh` command is that you can access any node easily, writing short commands.

```bash
ops ssh prod1
```

Notes:

- You can also require to connect as `root`.

E.g.:

```bash
ops ssh prod1 --root
```

- You can do the same from Ruby code.

E.g.:

```ruby
BlackOps.ssh( :prod1,
    connect_as_root: true,
    logger: l
)
```
## Installing

The `ops install` command executes one or more `.op` scripts, like `ops source` does.

E.g.:

```bash
ops install worker*
```

Notes:

- The command above will run installations on all the nodes defined in your `BlackOpsFile` with a name matching `worker*`.

- The list of `.op` scripts to execute is defined in the key `install_ops` of the node descriptor.

E.g.:

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    # installation operations
    :install_ops => [ # <===
        'mysaas.install.ubuntu_20_04.base',
        'mysaas.install.ubuntu_20_04.postgresql',
        'mysaas.install.ubuntu_20_04.nginx',
        'mysaas.install.ubuntu_20_04.adspower',
    ]
})
```

- You can also require to connect as `root`.

E.g.:

```bash
ops install worker* --root
```

- You can do the same from Ruby code.

E.g.:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.install_remote(
    node: n,
    connect_as_root: true,
    logger: l
)
```
- Internally, the `BlackOps.install_remote` method calls `BlackOps.source_remote`.

- The `ops install` command supports all the same arguments as `ops source`, except the `op` argument:
  - `--local`,
  - `--foo=xx`, where `foo` is a parameter to be replaced in the `.op` file,
  - `--root`,
  - `--config`,
  - `--ssh`.

- The `BlackOps.install_remote` method also supports all the same parameters as `BlackOps.source_remote`, except the `op` parameter:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.install_remote(
    node: n,
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```
- There is a `BlackOps.install_local` method too.

```ruby
BlackOps.install_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```

- When running `ops install` on your local computer, use the `--local` argument, and don't forget the `--install_ops` argument too.

```bash
ops install --local \
    --install_ops "mysaas.install.ubuntu_20_04.base,mysaas.install.ubuntu_20_04.postgresql,mysaas.install.ubuntu_20_04.nginx,mysaas.install.ubuntu_20_04.adspower"
```

You can do the same from Ruby code:

```ruby
BlackOps.install_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
        ...
        'install_ops' => [ # <===
            'mysaas.install.ubuntu_20_04.base',
            'mysaas.install.ubuntu_20_04.postgresql',
            'mysaas.install.ubuntu_20_04.nginx',
            'mysaas.install.ubuntu_20_04.adspower',
        ],
    },
    logger: l
)
```
**Pre-Built Install Operations:**

There are some pre-built install operations that you can use:

- Install base required packages on Ubuntu 20.04.
- Install PostgreSQL on Ubuntu 20.04.
- Install Nginx on Ubuntu 20.04.
- Install AdsPower on Ubuntu 20.04.
## Deploying

The `ops deploy` command executes one or more `.op` scripts (like `ops source` does), and it also connects to a PostgreSQL database for running SQL migrations.

E.g.:

```bash
ops deploy worker*
```

Notes:

- The command above will run the deployment on all the nodes defined in your `BlackOpsFile` with a name matching `worker*`.

- The list of `.op` scripts to execute is defined in the key `deploy_ops` of the node descriptor.

E.g.:

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    # deployment operations
    :deploy_ops => [ # <===
        'mass.slave.deploy',
        'mass.sdk.deploy',
    ]
})
```

- You can also require to connect as `root`.

E.g.:

```bash
ops deploy worker* --root
```

- You can do the same from Ruby code.

E.g.:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.deploy_remote(
    node: n,
    connect_as_root: true,
    logger: l
)
```
- To execute migrations, your node must define both the connection parameters and the migration folders:

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    :migrations => {
        # db connection parameters
        'postgres_port' => 5432, # <===
        'postgres_database' => 'blackstack',
        'postgres_username' => 'blackstack',
        'postgres_password' => 'MyFooPassword123',
        ...
        # migration folders
        'migration_folders' => [ # <===
            '/home/leandro/code1/sql',
            '/home/leandro/code2/sql',
        ],
    },
    ...
    # deployment operations
    :deploy_ops => [
        'mass.slave.deploy',
        'mass.sdk.deploy',
    ]
})
```
- When running migrations, BlackOps will execute every `.sql` file in the migration folders. BlackOps will iterate the folders in the same order they are listed. In each folder, BlackOps will execute the `.sql` scripts sorted by their filenames. For each `.sql` file, BlackOps will execute sentence by sentence, where each sentence finishes with a semicolon (`;`).
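The ordering rules above can be sketched as follows (a hypothetical helper; the real implementation may differ, e.g. in how it handles semicolons inside string literals):

```ruby
# Hypothetical sketch: iterate migration folders in the listed order,
# take the .sql files of each folder sorted by filename, and split
# each file into sentences on semicolons.
def each_migration_sentence(migration_folders)
  migration_folders.each do |folder|
    Dir.glob(File.join(folder, '*.sql')).sort.each do |file|
      File.read(file).split(';').map(&:strip).reject(&:empty?).each do |sentence|
        yield sentence # e.g. pass it to the PostgreSQL connection
      end
    end
  end
end
```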
- You can execute a deployment from Ruby code too:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.deploy_remote(
    node: n,
    logger: l
)
```

- Internally, the `BlackOps.deploy_remote` method calls `BlackOps.source_remote`.

- The `ops deploy` command supports all the same arguments as `ops source`, except the `op` argument:
  - `--local`,
  - `--foo=xx`, where `foo` is a parameter to be replaced in the `.op` file,
  - `--root`,
  - `--config`,
  - `--ssh`.

- The `BlackOps.deploy_remote` method also supports all the same parameters as `BlackOps.source_remote`, except the `op` parameter:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.deploy_remote(
    node: n,
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```

- There is a `BlackOps.deploy_local` method too.

```ruby
BlackOps.deploy_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```
- When running `ops deploy` on your local computer, don't forget to define the `--local` argument, the list of operations, the connection parameters and the migration folders in your command line:

```bash
ops deploy --local \
    --deploy_ops "./hostname.op,./rubylib.op" \
    --postgres_port 5432 \
    --postgres_database blackstack \
    --postgres_username blackstack \
    --postgres_password MyFooPassword123 \
    --migration_folders "/home/leandro/code1/sql,/home/leandro/code2/sql"
```

You can do the same from Ruby code:

```ruby
BlackOps.deploy_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
        ...
        'deploy_ops' => [ # <===
            'mass.slave.deploy',
            'mass.sdk.deploy',
        ],
        ...
        # db connection parameters
        'postgres_port' => 5432, # <===
        'postgres_database' => 'blackstack',
        'postgres_username' => 'blackstack',
        'postgres_password' => 'MyFooPassword123',
        ...
        # migration folders
        'migration_folders' => [ # <===
            '/home/leandro/code1/sql',
        ],
    },
    logger: l
)
```
- The parameters `postgres_port`, `postgres_database`, `postgres_username`, `postgres_password` and `migration_folders` are not mandatory, but if one of them is defined, all the others must be defined too. Otherwise, `BlackOps.deploy` will raise an exception.
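That all-or-nothing rule could be enforced with a check like this (a hypothetical validation, not the actual blackops source):

```ruby
MIGRATION_KEYS = %w[
  postgres_port postgres_database postgres_username
  postgres_password migration_folders
].freeze

# Hypothetical check: either none of the migration keys are present,
# or all of them are; anything in between raises.
def validate_migration_params!(params)
  present = MIGRATION_KEYS.select { |k| params.key?(k) }
  return if present.empty? || present.size == MIGRATION_KEYS.size
  missing = MIGRATION_KEYS - present
  raise ArgumentError, "Missing migration parameters: #{missing.join(', ')}"
end
```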
**Pre-Built Deploy Operations:**

There are some pre-built deploy operations that you can use:

- Deploy source code of the master node of MassProspecting.
- Deploy source code of a slave node of MassProspecting.
- Deploy source code of the MassProspecting SDK.
## Starting and Stopping Nodes

You can define a list of operations for:

- starting (running) your software, and
- stopping your software

on any node.

E.g.:

```bash
ops start worker*
```

and

```bash
ops stop worker*
```

Both `ops start` and `ops stop` execute one or more `.op` scripts, like `ops source` does.

Notes:

- The commands above will run operations on all the nodes defined in your `BlackOpsFile` with a name matching `worker*`.

- The list of `.op` scripts to execute is defined in the keys `start_ops` and `stop_ops` of the node descriptor.

E.g.:

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    # starting operations
    :start_ops => [ # <===
        'mass.worker.start',
    ],
    # stopping operations
    :stop_ops => [ # <===
        'mass.worker.stop',
    ],
})
```
- You can also require to connect as `root`.

E.g.:

```bash
ops start worker* --root
```

or

```bash
ops stop worker* --root
```

- You can do the same from Ruby code.

E.g.:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.start_remote(
    node: n,
    connect_as_root: true,
    logger: l
)
```

or

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.stop_remote(
    node: n,
    connect_as_root: true,
    logger: l
)
```
- Internally, the `BlackOps.start_remote` and `BlackOps.stop_remote` methods call `BlackOps.source_remote`.

- The `ops start` and `ops stop` commands support all the same arguments as `ops source`, except the `op` argument:
  - `--local`,
  - `--foo=xx`, where `foo` is a parameter to be replaced in the `.op` file,
  - `--root`,
  - `--config`,
  - `--ssh`.

- The `BlackOps.start_remote` and `BlackOps.stop_remote` methods also support all the same parameters as `BlackOps.source_remote`, except the `op` parameter:

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.start_remote(
    node: n,
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```

or

```ruby
# Get the hash descriptor of the node.
h = BlackOps.get_node(:worker06)

# Create an instance of the node.
n = BlackStack::Infrastructure::Node.new(h)

BlackOps.stop_remote(
    node: n,
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```
- There are `BlackOps.start_local` and `BlackOps.stop_local` methods too.

```ruby
BlackOps.start_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```

and

```ruby
BlackOps.stop_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
    },
    logger: l
)
```

- When running `ops start` or `ops stop` on your local computer, use the `--local` argument, and don't forget the `--start_ops` or `--stop_ops` arguments too.

```bash
ops start --local \
    --start_ops "./start.worker.op"
```

or

```bash
ops stop --local \
    --stop_ops "./stop.worker.op"
```

You can do the same from Ruby code:

```ruby
BlackOps.start_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
        ...
        'start_ops' => [ # <===
            'mass.worker.start',
        ],
    },
    logger: l
)
```

or

```ruby
BlackOps.stop_local(
    #op: './hostname.op', <== Ignore. Operations are defined in the hash descriptor of the node.
    parameters: {
        'name' => 'dev1',
        ...
        'stop_ops' => [ # <===
            'mass.worker.stop',
        ],
    },
    logger: l
)
```
**Pre-Built Start/Stop Operations:**

There are some pre-built operations for starting or stopping your software:

- Start processes on a MassProspecting Master Node.
- Stop processes on a MassProspecting Master Node.
- Start processes on MassProspecting Slave Nodes.
- Stop processes on MassProspecting Slave Nodes.
- Start processes on MassProspecting Worker Nodes.
- Stop processes on MassProspecting Worker Nodes.
## Configuration Templates

pending
## Monitoring

You can list your nodes and monitor the usage of CPU, RAM and disk space.

```bash
ops list
```

The `ops list` command will:

- show all the nodes defined in your configuration file; and
- connect to the nodes one by one via SSH and fetch RAM usage, CPU usage, disk usage and custom alerts (custom alerts will be introduced further below).

Notes:

- Once connected to a node, the values shown in the row of the node will be updated every 5 seconds by default.

- You can define a custom number of seconds to update each row:

```bash
ops list --interval 15
```

- The SSH connection to a node may fail.

- By default, the usage of RAM, CPU or disk must be under 50%, or it will be shown in red.

- You can define custom thresholds for RAM, CPU and disk usage.

```bash
ops list --cpu-threshold 75 --ram-threshold 80 --disk-threshold 40
```

- You can define the thresholds of each node in your configuration file, so you don't need to write them in the command line:

```ruby
...
BlackOps.add_node({
    :name => 'prod1',
    :ip => '55.55.55.55',
    :cpu_threshold => 75, # <=====
    :ram_threshold => 80, # <=====
    :disk_threshold => 40, # <=====
    ...
})
...
```

- The number of custom alerts must be 0, or it will be shown in red. This threshold is always `0` and cannot be modified.

- You can use a wildcard to choose the list of nodes you want to see.

```bash
ops list worker*
```

- If you press `CTRL+C`, the `ops list` command will terminate.
## Infrastructure Managing

You can connect BlackOps with Contabo using our Contabo Client library.

**BlackOpsFile**

```ruby
...
BlackOps.set(
    contabo: ContaboClient.new(
        client_id: 'INT-11833581',
        client_secret: '******',
        api_user: 'leandro@massprospecting.com',
        api_password: '********'
    ),
)
...
```

The `ops list` command will merge the nodes defined in your configuration file with the list of instances in your Contabo account. Such a merge is performed using the public IPv4 address of the Contabo instances and of the nodes defined in the configuration file.

```bash
ops list
```

Notes:

- The rows with no value in the Contabo ID column are nodes defined in the configuration file, but not existing in the list of Contabo instances. E.g.: in the picture above, the node `slave01`.

- The rows with `unknown` in the status column are Contabo instances that are not defined in your configuration file. The `unknown` situation happens when you have software that creates instances on Contabo dynamically, using Contabo Client's `create` feature. E.g.: You developed a scalable SaaS that creates a dedicated instance on Contabo for each user who signs up. To avoid the `unknown` situation, your software should store instances created dynamically in its database, and add them to BlackOps dynamically too, by editing your `BlackOpsFile`.
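The merge by public IPv4 can be sketched like this (hypothetical data shapes for illustration; the real Contabo Client objects differ):

```ruby
# Hypothetical sketch: join configured nodes with Contabo instances by
# public IPv4. A node without a matching instance gets no Contabo ID;
# an instance without a matching node is listed with `unknown` status.
def merge_by_ip(nodes, instances)
  by_ip = instances.map { |i| [i[:ip], i] }.to_h
  rows = nodes.map do |n|
    inst = by_ip.delete(n[:ip])
    { name: n[:name], ip: n[:ip], contabo_id: inst && inst[:id], status: 'defined' }
  end
  rows + by_ip.values.map do |i|
    { name: nil, ip: i[:ip], contabo_id: i[:id], status: 'unknown' }
  end
end
```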
## Custom Alerts

You can write code snippets for monitoring your nodes:

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 's01',
    :ip => '195.179.229.20',
    ...
    :alerts => { # <===
        # this function calls the REST-API of a MassProspecting Slave Node,
        # and returns true if there are one or more `job` records with failed status.
        #
        # Arguments:
        # - node: Instance of a node object.
        # - ssh: Already opened SSH connection with the node.
        :massprospecting_failed_jobs => Proc.new do |node, ssh, *args|
            # ...
            # source code to call the REST-API of the slave node
            # ...
        end,
        ...
    },
    ...
    # to call the REST-API of the slave node, you will need an API key for sure.
    :api_key => 'foo-api-key',
})
```

Using the `ops alerts` command, you can get a report of the alerts raised by each node.

```bash
ops alerts s*
```
## Processes Watching

When you define a node, you can specify the processes that will be running there.

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    :procs => [
        '/home/blackstack/code1/master/ipn.rb',
        '/home/blackstack/code1/master/dispatch.rb',
        '/home/blackstack/code1/master/allocate.rb',
    ]
})
```

Then, call the `ops proc` command to watch:

- whether they are running or not,
- the RAM consumed by each one of the processes; and
- the CPU consumed by each one of the processes.

```bash
ops proc
```

picture pending

You can also use wildcards to specify the nodes you want to watch:

```bash
ops proc worker*
```

Notes:

- The `proc` command simply connects to the nodes via SSH and performs a `grep` command to find the processes you specified.

- If one of the processes listed in the `procs` array is not found when running the `grep`, then such a process is shown as `offline` in the list.
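The grep-based check could look like this (a hypothetical sketch that parses the output of `ps aux` fetched over SSH):

```ruby
# Hypothetical sketch: given the output of `ps aux` on a node, report
# each configured process as online or offline.
def proc_status(ps_output, procs)
  procs.map do |p|
    running = ps_output.lines.any? do |line|
      line.include?(p) && !line.include?('grep')
    end
    [p, running ? 'online' : 'offline']
  end.to_h
end
```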
## Logs Watching

When you define a node, you can specify the log files that you may want to watch.

E.g.:

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    :logs => [
        '/home/blackstack/code1/master/ipn.log',
        '/home/blackstack/code1/master/dispatch.log',
        '/home/blackstack/code1/master/allocate.log',
    ]
})
```

Then, you can run the `ops logs` command, which is a kind of `ls` of all the files in a node that match the list of files defined in its hash descriptor.

```bash
ops logs worker*
```

Notes:

- In the list of logfiles shown by the `ops logs` command, you can choose one of them and start watching it online. This feature simply does a `tail -f` of such a logfile.

picture pending

- You can also define a pattern of log files using wildcards.

E.g.:

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    :logs => [
        '/home/blackstack/code1/master/*.log',
    ]
})
```

- You can define a list of keywords in log files that can indicate that an error happened.

E.g.:

**BlackOpsFile**

```ruby
BlackOps.add_node({
    :name => 'worker06',
    :ip => '195.179.229.21',
    ...
    :logs => [
        '/home/blackstack/code1/master/*.log',
    ],
    :keywords => [
        'error', 'failure', 'failed',
    ]
})
```

- You can run the `ops keywords` command to list the lines with error keywords in some logfiles, on some nodes.

E.g.:

```bash
ops keywords worker* --filename=*dispatch.log
```

- The `keywords` command simply connects to the node via SSH and performs a `cat <logfilename> | grep "keyword"` command.
You can define an SMTP relay and a list of email addresses to notify when any value in the table above goes red.

```ruby
...
BlackOps.set({
    :alerts => {
        'smtp_ip' => '...',
        'smtp_port' => '...',
        'smtp_username' => '...',
        'smtp_password' => '...',
        'smtp_sending_name' => 'BlackOps',
        'smtp_sending_email' => 'blackops@massprospecting.com',
        'receivers' => [
            'leandro@massprospecting.com',
            ...
            'cto@massprospecting.com',
        ]
    }
    ...
})
...
```

Notes:

- You can run the `ops list` command in the background, keep it monitoring 24/7, and get notified when an error happens.

```bash
ops list --background
```

- When CPU or RAM usage runs over its threshold, no email will be delivered. This is because CPU and RAM usage may fluctuate a lot.

- An email notification will be delivered the first time the disk usage rises over its threshold after having been under it.

- An email notification will be delivered the first time the number of alerts rises over `0` after having been `0`.

- Log error keywords.

- Processes not running.

- Any email notification includes the name and public IP of the node; the values of CPU usage, RAM usage, disk usage and alerts; and the threshold of each one.
## Further Work

- Scalable Monitoring
- Scalable Processes Watching
- Scalable Log Watching
- Scalable Deployment

E.g.:

**mysaas.ubuntu_20_04.full.op**

```bash
# This directive validates that you are connecting to the node as root.
#!root
```

E.g.:

**mysaas.ubuntu_20_04.full.op**

```bash
# This directive requires the execution of another op at this point.
require mysaas.ubuntu_20_04.base.op
```