Commit

Merge pull request #172 from heyealex/runnable-gromacs
Add slurm cluster to spack-gromacs
heyealex authored Mar 28, 2022
2 parents a98bd61 + f185e2a commit 19030e8
Showing 6 changed files with 99 additions and 48 deletions.
45 changes: 37 additions & 8 deletions examples/README.md
@@ -103,14 +103,43 @@ Quota required for this example:

### spack-gromacs.yaml

Spack is a HPC software package manager. This example creates a
[Spack](../resources/scripts/spack-install/README.md) build VM and a
workstation for testing and validating a spack build. The build VM will install
and configure spack, and install gromacs with spack (as configured in the
spack-install resource). This happens in a shared location (/apps). Then the
build VM will shutdown. This build leverages the startup-script resource and
can be applied in any cluster by using the output of spack-install or
startup-script resources.
Spack is an HPC software package manager. This example creates a small Slurm
cluster with software installed via
[Spack](../resources/scripts/spack-install/README.md). The controller will
install and configure Spack, and install [gromacs](https://www.gromacs.org/)
using Spack. Spack is installed in a shared location (/apps) via Filestore. This
build leverages the startup-script resource and can be applied to any cluster by
using the output of the spack-install or startup-script resources.
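
The blueprint is deployed like the other examples in this directory. A minimal
sketch follows; the exact `ghpc` flags and the name of the generated deployment
folder are assumptions based on the general toolkit workflow, not taken from
this commit, so check the top-level README for the authoritative commands:

```shell
# Generate the deployment folder from the blueprint (syntax is an assumption;
# see `./ghpc --help` for the version you have built).
./ghpc create examples/spack-gromacs.yaml --vars project_id=<your-project-id>

# Apply the generated Terraform group. The folder name follows
# deployment_name (spack-gromacs); the group name "primary" is assumed.
terraform -chdir=spack-gromacs/primary init
terraform -chdir=spack-gromacs/primary apply
```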

The installation occurs as part of the Slurm startup script. A warning
message is displayed upon SSHing into the login node while configuration
is still in progress. To track the status of the overall startup script,
run the following command on the login node:

```shell
sudo tail -f /var/log/messages
```

Spack-specific installation logs are written to the log file configured in
your YAML (`log_file`), by default /var/log/spack.log on the login node.

```shell
sudo tail -f /var/log/spack.log
```

Once the Slurm and Spack installation is complete, Spack will be available on
the login node. To use Spack on the controller or compute nodes, run the
following command first:

```shell
source /apps/spack/share/spack/setup-env.sh
```

To load the gromacs module, use spack:

```shell
spack load gromacs
```
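
Once the module is loaded, a quick sanity check can be run from the login node.
This is only a sketch: the `gmx_mpi` binary name depends on how the gromacs
package was built (it may be `gmx` for a non-MPI build), while the `compute`
partition name matches the partition defined in this example:

```shell
# Verify that the loaded module placed gromacs on the PATH
# (binary name is assumed to be gmx_mpi for an MPI build).
gmx_mpi --version

# Run the same check on a compute node through Slurm, using the
# "compute" partition defined in this example.
srun -N 1 -n 1 -p compute gmx_mpi --version
```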

Note: Installing Spack compilers and libraries in this example can take 1-2
hours on startup. To decrease this time in future deployments, consider
77 changes: 37 additions & 40 deletions examples/spack-gromacs.yaml
@@ -14,11 +14,11 @@

---

blueprint_name: spack-build
blueprint_name: spack-gromacs

vars:
project_id: ## Set GCP Project ID Here ##
deployment_name: spack
deployment_name: spack-gromacs
region: us-central1
zone: us-central1-a

@@ -29,13 +29,22 @@ resource_groups:
kind: terraform
id: network1

## Filesystems
- source: resources/file-system/filestore
kind: terraform
id: appsfs
use: [network1]
settings:
local_mount: /apps

- source: resources/file-system/filestore
kind: terraform
id: homefs
use: [network1]
settings:
local_mount: /home

## Install Scripts
- source: resources/scripts/spack-install
kind: terraform
id: spack
@@ -45,8 +54,11 @@ resource_groups:
spack_ref: v0.17.1
log_file: /var/log/spack.log
configs:
- type: single-config
scope: defaults
value: "config:build_stage:/apps/spack/spack-stage"
- type: file
scope: site
scope: defaults
value: |
modules:
tcl:
@@ -84,55 +96,40 @@
- type: shell
source: modules/startup-script/examples/install_ansible.sh
destination: install_ansible.sh
- type: shell
content: $(appsfs.install_nfs_client)
destination: install-nfs.sh
- type: ansible-local
source: modules/startup-script/examples/mount.yaml
destination: "mount.yaml"
- type: ansible-local
source: modules/spack-install/scripts/install_spack_deps.yml
destination: install_spack_deps.yml
- type: shell
content: $(spack.startup_script)
destination: install_spack.sh
- type: shell
destination: shutdown.sh
content: shutdown -h
- $(appsfs.install_nfs_client_runner)
- $(appsfs.mount_runner)
- $(spack.install_spack_deps_runner)
- $(spack.install_spack_runner)

- source: resources/compute/simple-instance
- source: resources/third-party/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: spack-build
id: compute_partition
use:
- network1
- homefs
- appsfs
- spack-startup
settings:
name_prefix: spack-builder
machine_type: n2-standard-8
partition_name: compute
max_node_count: 20

- source: resources/scripts/startup-script
- source: resources/third-party/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: mount-startup
id: slurm_controller
use:
- network1
- homefs
- appsfs
- compute_partition
settings:
runners:
- type: shell
source: modules/startup-script/examples/install_ansible.sh
destination: install_ansible.sh
- type: shell
content: $(appsfs.install_nfs_client)
destination: install-nfs.sh
- type: ansible-local
source: modules/startup-script/examples/mount.yaml
destination: "mount.yaml"
login_node_count: 1

- source: resources/compute/simple-instance
- source: resources/third-party/scheduler/SchedMD-slurm-on-gcp-login-node
kind: terraform
id: workstation
id: slurm_login
use:
- network1
- homefs
- appsfs
- mount-startup
- slurm_controller
settings:
name_prefix: workstation
machine_type: n2-standard-8
login_startup_script: $(spack-startup.startup_script)
2 changes: 2 additions & 0 deletions resources/scripts/spack-install/README.md
@@ -176,5 +176,7 @@ No resources.
| Name | Description |
|------|-------------|
| <a name="output_controller_startup_script"></a> [controller\_startup\_script](#output\_controller\_startup\_script) | Path to the Spack installation script, duplicate for SLURM controller. |
| <a name="output_install_spack_deps_runner"></a> [install\_spack\_deps\_runner](#output\_install\_spack\_deps\_runner) | Runner to install dependencies for spack using startup-scripts, requires ansible. |
| <a name="output_install_spack_runner"></a> [install\_spack\_runner](#output\_install\_spack\_runner) | Runner to install Spack using startup-scripts |
| <a name="output_startup_script"></a> [startup\_script](#output\_startup\_script) | Path to the Spack installation script. |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
10 changes: 10 additions & 0 deletions resources/scripts/spack-install/main.tf
@@ -34,4 +34,14 @@ locals {
LOG_FILE = var.log_file == null ? "/dev/null" : var.log_file
}
)
install_spack_deps_runner = {
"type" = "ansible-local"
"source" = "${path.module}/scripts/install_spack_deps.yml"
"destination" = "install_spack_deps.yml"
}
install_spack_runner = {
"type" = "shell"
"content" = local.script_content
"destination" = "install_spack.sh"
}
}
10 changes: 10 additions & 0 deletions resources/scripts/spack-install/outputs.tf
@@ -23,3 +23,13 @@ output "controller_startup_script" {
description = "Path to the Spack installation script, duplicate for SLURM controller."
value = local.script_content
}

output "install_spack_deps_runner" {
description = "Runner to install dependencies for spack using startup-scripts, requires ansible."
value = local.install_spack_deps_runner
}

output "install_spack_runner" {
description = "Runner to install Spack using startup-scripts"
value = local.install_spack_runner
}
3 changes: 3 additions & 0 deletions resources/scripts/spack-install/templates/install_spack.tpl
@@ -126,5 +126,8 @@ echo "$PREFIX Populating defined buildcaches"
%{endif ~}
%{endfor ~}

echo "source /apps/spack/share/spack/setup-env.sh" >> /etc/profile.d/spack.sh
chmod a+rx /etc/profile.d/spack.sh

echo "$PREFIX Setup complete..."
exit 0
