Merge pull request #2131 from harshthakkar01/spack-openfoam-issue
Update spack openfoam example to use /opt/apps directory
harshthakkar01 authored Jan 12, 2024
2 parents 360f03a + 9b91f43 commit 8e6d6b7
Showing 2 changed files with 12 additions and 13 deletions.
14 changes: 7 additions & 7 deletions docs/tutorials/openfoam/spack-openfoam.md
@@ -75,7 +75,7 @@ which should be open in the Cloud Shell Editor (on the left).

This file describes the cluster you will deploy. It defines:

* the existing default network from your project
* a vpc network
* a monitoring dashboard with metrics on your cluster
* a definition of a custom Spack installation
* a startup script that
@@ -135,16 +135,16 @@ controller. This command can be used to view progress and check for completion
of the startup script:

```bash
gcloud compute instances get-serial-port-output --port 1 --zone us-central1-c --project <walkthrough-project-id/> slurm-spack-openfoam-controller | grep google_metadata_script_runner
gcloud compute instances get-serial-port-output --port 1 --zone us-central1-c --project <walkthrough-project-id/> spackopenf-controller | grep google_metadata_script_runner
```

When the startup script has finished running you will see the following line as
the final output from the above command:
> _`slurm-spack-openfoam-controller google_metadata_script_runner: Finished running startup scripts.`_
> _`spackopenf-controller google_metadata_script_runner: Finished running startup scripts.`_
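
As a convenience, here is a minimal sketch that polls the same serial output until the completion message appears, assuming the controller name and zone shown above:

```bash
# Check the controller's serial output every 30 seconds until the startup
# script reports that it has finished (names and zone assumed from above).
until gcloud compute instances get-serial-port-output --port 1 \
    --zone us-central1-c --project <walkthrough-project-id/> \
    spackopenf-controller | grep -q "Finished running startup scripts"; do
  sleep 30
done
```
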
Optionally while you wait, you can see your deployed VMs on Google Cloud
Console. Open the link below in a new window. Look for
`slurm-spack-openfoam-controller`. If you don't
`spackopenf-controller`. If you don't
see your VMs make sure you have the correct project selected (top left).

```text
@@ -204,7 +204,7 @@ OpenFOAM job.
2. Submit the job to Slurm to be scheduled:

```bash
sbatch /apps/openfoam/submit_openfoam.sh
sbatch /opt/apps/openfoam/submit_openfoam.sh
```

3. Once submitted, you can watch the job progress by repeatedly calling the
@@ -218,7 +218,7 @@ The `sbatch` command triggers Slurm to auto-scale up several nodes to run the job

You can refresh the `Compute Engine` > `VM instances` page and see that
additional VMs are being/have been created. These will be named something like
`slurm-spack-openfoam-compute-0-0`.
`spackopenf-comput-0`.
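
A quick way to confirm this from the cloud shell, assuming the deployment prefix `spackopenf` used in this tutorial, is a filtered instance listing along these lines:

```bash
# List the controller and any autoscaled compute nodes whose names
# start with the assumed deployment prefix.
gcloud compute instances list \
    --project <walkthrough-project-id/> \
    --filter="name~'^spackopenf'"
```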

When running `squeue`, observe the job status start as `CF` (configuring),
change to `R` (running) once the compute VMs have been created, and finally `CG`
@@ -271,7 +271,7 @@ exit
Run the following command in the cloud shell terminal to destroy the cluster:

```bash
./ghpc deploy spack-openfoam
./ghpc destroy spack-openfoam
```

When complete you should see something like:
11 changes: 5 additions & 6 deletions docs/tutorials/openfoam/spack-openfoam.yaml
@@ -35,7 +35,7 @@ deployment_groups:
- id: spack-setup
source: community/modules/scripts/spack-setup
settings:
install_dir: /apps/spack
install_dir: /opt/apps/spack
spack_ref: v0.20.0

- id: spack-execute
@@ -95,7 +95,7 @@ deployment_groups:
# fi
# spack buildcache keys --install --trust
spack config --scope defaults add config:build_stage:/apps/spack/spack-stage
spack config --scope defaults add config:build_stage:/opt/apps/spack/spack-stage
spack config --scope defaults add -f /tmp/projections-config.yaml
spack config --scope site add -f /tmp/slurm-external-config.yaml
@@ -124,17 +124,16 @@
destination: setup_openfoam.sh
content: |
#!/bin/bash
source /apps/spack/share/spack/setup-env.sh
source /opt/apps/spack/share/spack/setup-env.sh
spack env activate openfoam
chmod -R a+rwX /apps/spack/var/spack/environments/openfoam
- type: data
destination: /apps/openfoam/submit_openfoam.sh
destination: /opt/apps/openfoam/submit_openfoam.sh
content: |
#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node 30
source /apps/spack/share/spack/setup-env.sh
source /opt/apps/spack/share/spack/setup-env.sh
spack env activate openfoam
cd $SLURM_SUBMIT_DIR
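
Taken together, the paths a user touches on the cluster all move under `/opt/apps`. Roughly, assuming the locations shown in the diff above:

```bash
# On the controller, load the relocated Spack installation interactively
# and activate the tutorial's environment.
source /opt/apps/spack/share/spack/setup-env.sh
spack env activate openfoam

# Submit the OpenFOAM job script from its new location.
sbatch /opt/apps/openfoam/submit_openfoam.sh
```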
