ℹ️
|
Originally developed under the H2020 project MSO4SC: https://github.com/MSO4SC/cloudify-hpc-plugin |
An HPC plugin for Cloudify that enables it to manage HPC resources in one or more infrastructures. The currently supported resource types are described below.
This plugin is part of the MSO4SC H2020 European Project.
💡
|
Example blueprints can be found at the MSO4SC resources repository. |
The plugin aims to enable Cloudify to manage HPC resources so that, combined with other plugins, it can orchestrate a hybrid cloud+HPC environment with one or more cloud and HPC providers at the same time.
It adds a new resource type, hpc.nodes.WorkloadManager, that represents an infrastructure interface (Slurm, Torque, or even Bash), as well as hpc.nodes.Job and hpc.nodes.SingularityJob, which represent a job in the HPC and a job using a Singularity container, respectively.
For Cloudify to orchestrate the HPC resources properly, the help of an external monitoring system is needed, from which the job status is retrieved. In the next release the plugin will also use the monitor to predict the overall state of the HPC and take better decisions about which partition and infrastructure to use for each job.
❗
|
Only Slurm and Torque based HPCs are supported for now (Bash for virtual machines), and only Prometheus is supported as the external monitoring system. |
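As an orientation, the sketch below shows how these types are typically combined in a single blueprint. It reuses only properties and relationship names documented in the sections that follow; node names and property values are illustrative placeholders.
hpc_wm:
    type: hpc.nodes.WorkloadManager
    properties:
        credentials:
            host: "[HPC-HOST]"
            user: "[HPC-SSH-USER]"
            password: "[HPC-SSH-PASS]"
        config:
            country_tz: "Europe/Madrid"
            workload_manager: "SLURM"
        workdir_prefix: sketch

hpc_job:
    type: hpc.nodes.Job
    properties:
        job_options:
            type: 'SRUN'
            command: 'touch example.test'
            max_time: '00:01:00'
    relationships:
        - type: job_managed_by_wm
          target: hpc_wm
...
The full property reference for each type, together with complete examples, is given below.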
- Python version 2.7.x
- Access to at least one Slurm based HPC by SSH user & password.
- Access to a Moab/Torque based HPC by SSH user & password.
- The Monitor Orchestrator can be deployed in the same host as the monitor to allow the plugin to dynamically use new HPC infrastructures defined in TOSCA.
- Grafana can be used to visualize the status of the HPCs.
- Prometheus monitoring the infrastructures to be used (a Slurm exporter has been developed for this purpose).
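Purely as an illustrative sketch, a Prometheus scrape configuration pointing at a Slurm exporter could look roughly like the following; the job name, host and port are placeholders and the actual configuration depends on how the exporter is deployed:
scrape_configs:
    - job_name: 'slurm_exporter'                           # illustrative name
      scrape_interval: 30s
      static_configs:
          - targets: ['[EXPORTER-HOST]:[EXPORTER-PORT]']   # placeholder address of the Slurm exporter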
The plugin is installed like any other plugin. Check the Cloudify Docs for general information about how to install and use Cloudify, and this section for concrete information about using plugins.
Additionally, MSO4SC provides Vagrant and Docker images at Docker Hub to install everything. Check MSOOrchestrator-CLI to start using the Cloudify CLI and bootstrap the Cloudify Manager. Use the Docker Compose file to deploy all the external components. A Grafana dashboard can be found here.
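As with any other Cloudify plugin, a blueprint then makes the plugin available through its imports section. A hedged sketch follows; the exact URLs depend on your Cloudify version and on where this plugin's plugin.yaml is hosted, so both entries below are placeholders:
imports:
    - http://www.getcloudify.org/spec/cloudify/3.4.2/types.yaml   # placeholder: match your Cloudify version
    - [URL-OR-PATH-TO-THE-HPC-PLUGIN-plugin.yaml]                 # placeholder: this plugin's plugin.yaml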
The HPC plugin requires credentials, endpoint, and other setup information in order to authenticate against and interact with the HPC infrastructures.
These configuration properties are defined in the credentials and config properties of the hpc.nodes.WorkloadManager node type.
credentials:
    host: "[HPC-HOST]"
    user: "[HPC-SSH-USER]"
    private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        ......
        -----END RSA PRIVATE KEY-----
    private_key_password: "[PRIVATE-KEY-PASSWORD]"
    password: "[HPC-SSH-PASS]"
    login_shell: {true|false}
    tunnel:
        host: ...
        ...
- HPC and SSH credentials. At least private_key or password must be provided.
- tunnel: Follows the same structure as its parent (credentials), to connect to the HPC through a tunneled SSH connection (see the sketch below).
- config:
    country_tz: "Europe/Madrid"
    workload_manager: {"SLURM"|"TORQUE"}
- country_tz: Country time zone configured in the HPC.
- workload_manager: Workload manager used by the HPC.
❗
|
Only Slurm and Torque are currently accepted as workload managers. |
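For illustration only, a credentials block using a tunnel through a gateway node might be sketched as follows (all values are placeholders):
credentials:
    host: "[HPC-HOST]"
    user: "[HPC-SSH-USER]"
    password: "[HPC-SSH-PASS]"
    tunnel:
        host: "[GATEWAY-HOST]"
        user: "[GATEWAY-SSH-USER]"
        password: "[GATEWAY-SSH-PASS]"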
This section describes the node type definitions. Nodes describe resources in your HPC infrastructures. For more information, see node type.
Derived From: cloudify.nodes.Compute
Use this type to describe an HPC infrastructure.
Properties:
- credentials: Connection credentials, as described in the configuration properties above.
- config: HPC configuration, as described in the configuration properties above.
- external_monitor_entrypoint: Entrypoint of the external monitor that Cloudify will use instead of the internal one.
- external_monitor_port: Port of the monitor. Default 9090.
- external_monitor_type: Specific monitor tool. Default PROMETHEUS.
- external_monitor_orchestrator_port: Monitor orchestrator port. Default 8079.
- job_prefix: Job name prefix for the jobs created in this HPC. Default cfyhpc.
- base_dir: Root directory in which to run the executions in this infrastructure. Default $HOME.
- workdir_prefix: Prefix name of the working directory that will be created for this infrastructure.
- monitor_period: Seconds between job status checks. This is necessary because workload managers can be overloaded if queried too many times in a short period. Default 60.
- skip_cleanup: True to not clean all files when destroying the deployment. Default False.
- simulate: If true, jobs are not sent to the HPC and are simulated as finishing immediately. Useful for testing new TOSCA files. Default False.
Example
This example demonstrates how to add a new HPC.
hpc_wm:
    type: hpc.nodes.WorkloadManager
    properties:
        credentials:
            host: "[HPC-HOST]"
            user: "[HPC-SSH-USER]"
            password: "[HPC-SSH-PASS]"
            login_shell: false
        config:
            country_tz: "Europe/Madrid"
            workload_manager: "SLURM"
        job_prefix: wm_
        workdir_prefix: test
...
Mapped Operations:
- cloudify.interfaces.lifecycle.configure: Checks that there is connection between Cloudify and the HPC, and creates a new working directory.
- cloudify.interfaces.lifecycle.delete: Cleans up all data generated by the execution.
- cloudify.interfaces.monitoring.start: If the external monitor orchestrator is available, sends a notification to start monitoring the HPC.
- cloudify.interfaces.monitoring.stop: If the external monitor orchestrator is available, sends a notification to end monitoring the HPC.
Derived From: cloudify.nodes.Root
Use this type to describe an HPC job.
Properties:
- job_options: Job parameters and needed resources.
    - type: SRUN or SBATCH (job executed using a command or using a script). TORQUE supports only the SBATCH mode.
    - pre: List of commands to be executed before running the job. Optional.
    - post: List of commands to be executed after running the job. Optional.
    - partition: Partition in which the job will be executed. If not provided, the HPC default will be used.
    - command: Job executable command, with arguments if necessary. Since TORQUE does NOT accept extra arguments in the job submission command qsub, for TORQUE this field must contain only the name of the batch script to run. Mandatory.
    - nodes: Number of nodes needed by the job. Default 1.
    - tasks: Number of tasks of the job. Default 1.
    - tasks_per_node: Number of tasks per node. Default 1.
    - max_time: Limit on the total run time of the job allocation. Mandatory for the SRUN type.
    - scale: Execute the job N times in parallel according to this property. Only works with SBATCH jobs. Default 1 (no scale).
    - scale_max_in_parallel: Maximum number of scaled job instances that can run in parallel. Only works with scale > 1. Default: same as scale.
    - memory: Real memory required per node. Different units can be specified using the suffixes [K|M|G|T]. The default value "" lets the workload manager assign the default memory to the job.
    - stdout_file: File in which to gather the standard output of the job. The default value "" sets the <job-name>.err filename.
    - stderr_file: File in which to gather the standard error output. The default value "" sets the <job-name>.out filename.
    - mail-user: Email address to receive notifications of job state changes. The default value "" does not send any mail.
    - mail-type: Type of events to be notified by mail; several events can be defined separated by commas. Valid values: NONE, BEGIN, END, FAIL, TIME_LIMIT, REQUEUE, ALL. The default value "" does not send any mail.
    - reservation: Allocate resources for the job from the named reservation. The default value "" does not allocate from any named reservation.
    - qos: Request a quality of service for the job. The default value "" lets the workload manager assign the default user qos.
- deployment: Scripts to perform deployment operations. Optional.
    - bootstrap: Path, relative to the blueprint, of the script that will be executed in the HPC during the install workflow to bootstrap the job (data movements, binary downloads, etc.).
    - revert: Path, relative to the blueprint, of the script that will be executed in the HPC during the uninstall workflow, reverting the bootstrap or performing other clean up operations.
    - inputs: List of inputs that will be passed to the scripts when executed in the HPC.
- publish: A list of outputs to be published after job execution (see the sketch after this list). Each list item is a dictionary containing:
    - type: Type of the external repository to publish to. Only CKAN is supported for now. The rest of the parameters depend on the type.
    - For type CKAN:
        - entrypoint: CKAN entrypoint.
        - api_key: Individual user CKAN API key.
        - dataset: Id of the dataset in which the file will be published.
        - file_path: Local path of the output file in the computation node.
        - name: Name used to publish the file in the repository.
        - description: Text describing the data file.
- skip_cleanup: Set to true to not clean up the orchestrator auxiliary files. Default False.
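As a hypothetical illustration of the publish property described above, a CKAN entry might be declared like this (entrypoint, API key, dataset id and file names are placeholders):
publish:
    - type: "CKAN"
      entrypoint: "[CKAN-ENTRYPOINT-URL]"
      api_key: "[CKAN-API-KEY]"
      dataset: "[DATASET-ID]"
      file_path: "output.dat"
      name: "output.dat"
      description: "Output of the example job"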
ℹ️
|
The variable $CURRENT_WORKDIR is available in all operations and scripts. It points to the working directory of the execution in the HPC, under the HOME directory: /home/user/$CURRENT_WORKDIR/.
|
ℹ️
|
The variables $SCALE_INDEX, $SCALE_COUNT and $SCALE_MAX will be available in the batch script if the line # DYNAMIC VARIABLES exists (they will be dynamically loaded after this line). They hold, for each job instance, the index, the total number of instances, and the maximum in parallel, respectively.
|
Example
This example demonstrates how to describe a new job for a non-batched run (in Slurm).
hpc_job:
    type: hpc.nodes.Job
    properties:
        job_options:
            type: 'SRUN'
            pre:
                - module load gcc/5.3.0
            partition: 'thin-shared'
            command: 'touch example.test'
            nodes: 1
            tasks: 1
            tasks_per_node: 1
            max_time: '00:01:00'
        deployment:
            bootstrap: 'scripts/bootstrap_example.sh'
            revert: 'scripts/revert_example.sh'
            inputs:
                - 'example_job'
...
This example demonstrates how to describe a new batch job (works with both Slurm and Torque).
hpc_batch_job:
    type: hpc.nodes.Job
    properties:
        job_options:
            type: 'SBATCH'
            command: "touch.script"
        deployment:
            bootstrap: 'scripts/bootstrap_sbatch_example.sh'
            revert: 'scripts/revert_sbatch_example.sh'
            inputs:
                - 'single'
        skip_cleanup: True
    relationships:
        - type: job_managed_by_wm
          target: hpc_wm
...
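The scale-related options described above fit the same pattern; a hypothetical variant of the previous batch job that runs four instances, at most two at a time, could be sketched as:
hpc_scaled_job:
    type: hpc.nodes.Job
    properties:
        job_options:
            type: 'SBATCH'
            command: "scaled.script"        # batch script, may contain the line "# DYNAMIC VARIABLES"
            scale: 4                        # run four instances of the job
            scale_max_in_parallel: 2        # at most two instances in parallel
    relationships:
        - type: job_managed_by_wm
          target: hpc_wm
...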
Mapped Operations:
- cloudify.interfaces.lifecycle.start: Sends and executes the bootstrap script.
- cloudify.interfaces.lifecycle.stop: Sends and executes the revert script.
- hpc.interfaces.lifecycle.queue: Queues the job in the HPC.
- hpc.interfaces.lifecycle.publish: Publishes the outputs outside the HPC.
- hpc.interfaces.lifecycle.cleanup: Clean up operations after the job is finished.
- hpc.interfaces.lifecycle.cancel: Cancels a queued job.
Derived From: hpc.nodes.Job
Use this type to describe an HPC job executed from a Singularity image. Note that in this version TORQUE does not support Singularity jobs yet.
Properties:
- job_options: Job parameters and needed resources.
    - pre: List of commands to be executed before running the Singularity container. Optional.
    - post: List of commands to be executed after running the Singularity container. Optional.
    - image: Singularity image file.
    - home: Home volume that will be bound to the image instance. Optional.
    - volumes: List of volumes that will be bound to the image instance.
    - partition: Partition in which the job will be executed. If not provided, the HPC default will be used.
    - nodes: Number of nodes needed by the job. Default 1.
    - tasks: Number of tasks of the job. Default 1.
    - tasks_per_node: Number of tasks per node. Default 1.
    - max_time: Limit on the total run time of the job allocation. Mandatory for the SRUN type.
    - scale: Execute the job N times in parallel according to this property. Default 1 (no scale).
    - scale_max_in_parallel: Maximum number of scaled job instances that can run in parallel. Only works with scale > 1. Default: same as scale.
    - memory: Real memory required per node. Different units can be specified using the suffixes [K|M|G|T]. The default value "" lets the workload manager assign the default memory to the job.
    - stdout_file: File in which to gather the standard output of the job. The default value "" sets the <job-name>.err filename.
    - stderr_file: File in which to gather the standard error output. The default value "" sets the <job-name>.out filename.
    - mail-user: Email address to receive notifications of job state changes. The default value "" does not send any mail.
    - mail-type: Type of events to be notified by mail; several events can be defined separated by commas. Valid values: NONE, BEGIN, END, FAIL, TIME_LIMIT, REQUEUE, ALL. The default value "" does not send any mail.
    - reservation: Allocate resources for the job from the named reservation. The default value "" does not allocate from any named reservation.
    - qos: Request a quality of service for the job. The default value "" lets the workload manager assign the default user qos.
- deployment: Optional scripts to perform deployment operations (bootstrap and revert).
    - bootstrap: Path, relative to the blueprint, of the script that will be executed in the HPC during the install workflow to bootstrap the job (image download, data movements, etc.).
    - revert: Path, relative to the blueprint, of the script that will be executed in the HPC during the uninstall workflow, reverting the bootstrap or performing other clean up operations (like removing the image).
    - inputs: List of inputs that will be passed to the scripts when executed in the HPC.
- skip_cleanup: Set to true to not clean up the orchestrator auxiliary files. Default False.
ℹ️
|
The variable $CURRENT_WORKDIR is available in all operations and scripts. It points to the working directory of the execution in the HPC, under the HOME directory: /home/user/$CURRENT_WORKDIR/.
|
ℹ️
|
The variables $SCALE_INDEX, $SCALE_COUNT and $SCALE_MAX are available when scaling, holding for each job instance the index, the total number of instances, and the maximum in parallel, respectively. |
Example
This example demonstrates how to describe a new job executed in a Singularity instance.
singularity_job:
    type: hpc.nodes.SingularityJob
    properties:
        job_options:
            pre:
                - module load gcc/5.3.0 openmpi/1.10.2
                - module load singularity/2.3.1
                - touch pre.output
            partition: 'thin-shared'
            post:
                - touch post.output
            image: '$LUSTRE/openmpi_1.10.7_ring.img'
            home: '$HOME:/home/$USER'
            volumes:
                - '/scratch'
            command: 'ring > fourth_example_3.test'
            nodes: 1
            tasks: 1
            tasks_per_node: 1
            max_time: '00:01:00'
        deployment:
            bootstrap: 'scripts/singularity_bootstrap_example.sh'
            revert: 'scripts/singularity_revert_example.sh'
            inputs:
                - 'singularity_job'
...
Mapped Operations:
- cloudify.interfaces.lifecycle.start: Sends and executes the bootstrap script.
- cloudify.interfaces.lifecycle.stop: Sends and executes the revert script.
- hpc.interfaces.lifecycle.queue: Queues the job in the HPC.
- hpc.interfaces.lifecycle.publish: Publishes the outputs outside the HPC.
- hpc.interfaces.lifecycle.cleanup: Clean up operations after the job is finished.
- hpc.interfaces.lifecycle.cancel: Cancels a queued job.
See the relationships section.
The following plugin relationship operations are defined in the HPC plugin:
- job_managed_by_wm: Sets a hpc.nodes.Job to be executed inside the target hpc.nodes.WorkloadManager.
- job_depends_on: Makes a hpc.nodes.Job depend on the target (another hpc.nodes.Job), so the target job needs to finish before the source can start.
- wm_contained_in: Sets a hpc.nodes.WorkloadManager to be contained in the specific target (a computing node).
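For example, chaining two jobs so that the second one starts only after the first finishes could be sketched as follows (node names are illustrative, and first_job is assumed to be another hpc.nodes.Job defined in the same blueprint):
second_job:
    type: hpc.nodes.Job
    properties:
        job_options:
            type: 'SBATCH'
            command: "second.script"
    relationships:
        - type: job_managed_by_wm
          target: hpc_wm
        - type: job_depends_on
          target: first_job
...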
To run the tests, the Cloudify CLI has to be installed locally. Example blueprints can be found in the tests/blueprint folder and have the simulate
option active by default. The blueprint to be tested can be changed in workflows_tests.py in the tests folder.
To run the tests against a real HPC / monitor system, copy the file blueprint-inputs.yaml to local-blueprint-inputs.yaml and edit it with your credentials. Then edit the blueprint, commenting out the simulate option and adjusting other parameters as you wish (e.g. change the name ft2_node to your own HPC name). To use the OpenStack integration, your private key must be put in the folder inputs/keys.
ℹ️
|
dev-requirements.txt needs to be installed (windev-requirements.txt for Windows): pip install -r dev-requirements.txt. To run the tests, run tox in the root folder: tox -e flake8,py27 |