Merge branch 'develop' into refactor/rename-insternal-schema
cboneti authored May 3, 2022
2 parents 8ece1de + 208f659 · commit e2bf034
Showing 36 changed files with 75 additions and 104 deletions.
2 changes: 1 addition & 1 deletion cmd/root.go
@@ -34,7 +34,7 @@ HPC deployments on the Google Cloud Platform.`,
log.Fatalf("cmd.Help function failed: %s", err)
}
},
Version: "v0.6.0-alpha (private preview)",
Version: "v0.7.0-alpha (private preview)",
}
)

4 changes: 2 additions & 2 deletions community/examples/omnia-cluster.yaml
@@ -40,7 +40,7 @@ deployment_groups:
local_mount: "/home"

## Installation Scripts
-- source: ./community/modules/scripts/omnia-install
+- source: community/modules/scripts/omnia-install
kind: terraform
id: omnia
outputs: [inventory_file, omnia_user_warning]
@@ -98,7 +98,7 @@ deployment_groups:
instance_count: 2

# This module simply makes terraform wait until the startup script is complete
-- source: ./community/modules/scripts/wait-for-startup
+- source: community/modules/scripts/wait-for-startup
kind: terraform
id: wait
use:
8 changes: 4 additions & 4 deletions community/examples/spack-gromacs.yaml
@@ -45,7 +45,7 @@ deployment_groups:
local_mount: /home

## Install Scripts
-- source: ./community/modules/scripts/spack-install
+- source: community/modules/scripts/spack-install
kind: terraform
id: spack
settings:
@@ -90,7 +90,7 @@ deployment_groups:
- $(spack.install_spack_deps_runner)
- $(spack.install_spack_runner)

-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: compute_partition
use:
@@ -101,7 +101,7 @@ deployment_groups:
partition_name: compute
max_node_count: 20

-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-controller
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: slurm_controller
use:
@@ -112,7 +112,7 @@ deployment_groups:
settings:
login_node_count: 1

-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
kind: terraform
id: slurm_login
use:
2 changes: 1 addition & 1 deletion community/modules/README.md
@@ -23,7 +23,7 @@ module documentation](../../modules/README.md).
configures an NFS server that can be mounted by other VM instances.

* [**DDN-EXAScaler**](third-party/file-system/DDN-EXAScaler/README.md): Creates
-a DDN Exascaler lustre](<https://www.ddn.com/partners/google-cloud-platform/>)
+a [DDN Exascaler lustre](<https://www.ddn.com/partners/google-cloud-platform/>)
file system. This module has
[license costs](https://console.developers.google.com/marketplace/product/ddnstorage/exascaler-cloud).

2 changes: 1 addition & 1 deletion community/modules/compute/SchedMD-slurm-on-gcp-partition/README.md
@@ -13,7 +13,7 @@ Create a partition module with a max node count of 200, named "compute",
connected to a module subnetwork and with homefs mounted.

```yaml
-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: compute_partition
settings:
2 changes: 1 addition & 1 deletion community/modules/database/cloudsql-federation/README.md
@@ -8,7 +8,7 @@ used to integrate with the slurm cluster to enable accounting data storage.
### Example

```yaml
-- source: ./community/modules/database/cloudsql-federation
+- source: community/modules/database/cloudsql-federation
kind: terraform
id: project
settings:
2 changes: 2 additions & 0 deletions community/modules/file-system/DDN-EXAScaler/README.md
@@ -18,6 +18,8 @@ is enabled.
**WARNING**: This file system has a license cost as described in the pricing
section of the [DDN EXAScaler Cloud Marketplace Solution](https://console.developers.google.com/marketplace/product/ddnstorage/exascaler-cloud).

+More information about the architecture can be found at [Architecture: Lustre file system in Google Cloud using DDN EXAScaler](https://cloud.google.com/architecture/lustre-architecture).

## License

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
2 changes: 1 addition & 1 deletion community/modules/file-system/nfs-server/README.md
@@ -10,7 +10,7 @@ files with other clients over a network via the
### Example

```yaml
-- source: ./community/modules/file-system/nfs-server
+- source: community/modules/file-system/nfs-server
kind: terraform
id: homefs
settings:
2 changes: 1 addition & 1 deletion community/modules/project/new-project/README.md
@@ -9,7 +9,7 @@ This module is meant for use with Terraform 0.13.
### Example

```yaml
-- source: ./community/modules/project/new-project
+- source: community/modules/project/new-project
kind: terraform
id: project
settings:
2 changes: 1 addition & 1 deletion community/modules/project/service-account/README.md
@@ -5,7 +5,7 @@ Allows creation of service accounts for a Google Cloud Platform project.
### Example

```yaml
-- source: ./community/modules/project/service-account
+- source: community/modules/project/service-account
kind: terraform
id: service_acct
settings:
2 changes: 1 addition & 1 deletion community/modules/project/service-enablement/README.md
@@ -5,7 +5,7 @@ Allows management of multiple API services for a Google Cloud Platform project.
### Example

```yaml
-- source: ./community/modules/project/service-enablement
+- source: community/modules/project/service-enablement
kind: terraform
id: services-api
settings:
2 changes: 1 addition & 1 deletion community/modules/scheduler/SchedMD-slurm-on-gcp-controller/README.md
@@ -9,7 +9,7 @@ More information about Slurm On GCP can be found at the [project's GitHub page](
### Example

```yaml
-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-controller
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: slurm_controller
settings:
2 changes: 1 addition & 1 deletion community/modules/scheduler/SchedMD-slurm-on-gcp-login-node/README.md
@@ -11,7 +11,7 @@ module.
### Example

```yaml
-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
kind: terraform
id: slurm_login
settings:
4 changes: 2 additions & 2 deletions community/modules/scripts/spack-install/README.md
@@ -35,7 +35,7 @@ https://www.googleapis.com/auth/devstorage.read_write
As an example, the below is a possible definition of a spack installation.

```yaml
-- source: ./community/modules/scripts/spack-install
+- source: community/modules/scripts/spack-install
kind: terraform
id: spack
settings:
@@ -91,7 +91,7 @@ Following the above description of this module, it can be added to a Slurm
deployment via the following:
```yaml
-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-controller
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: slurm_controller
use: [spack]
2 changes: 1 addition & 1 deletion community/modules/scripts/wait-for-startup/README.md
@@ -15,7 +15,7 @@ up a node.
### Example

```yaml
-- source: ./community/modules/scripts/wait-for-startup
+- source: community/modules/scripts/wait-for-startup
kind: terraform
id: wait
settings:
4 changes: 2 additions & 2 deletions examples/README.md
@@ -160,9 +160,9 @@ terraform -chdir=image-builder-001/cluster init
terraform -chdir=image-builder-001/cluster validate
terraform -chdir=image-builder-001/cluster apply

-# When you are done you can clean up the resources
-terraform -chdir=image-builder-001/builder-env destroy --auto-approve
+# When you are done you can clean up the resources in reverse order of creation
terraform -chdir=image-builder-001/cluster destroy --auto-approve
+terraform -chdir=image-builder-001/builder-env destroy --auto-approve
```

Using a custom VM image can be more scalable than installing software using
10 changes: 5 additions & 5 deletions examples/hpc-cluster-high-io.yaml
@@ -48,14 +48,14 @@ deployment_groups:
size_gb: 10240
local_mount: /projects

-- source: ./community/modules/file-system/DDN-EXAScaler
+- source: community/modules/file-system/DDN-EXAScaler
kind: terraform
id: scratchfs
use: [network1]
settings:
local_mount: /scratch

-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: low_cost_partition
use:
@@ -71,7 +71,7 @@ deployment_groups:
machine_type: n2-standard-4

# This compute_partition is far more performant than low_cost_partition.
-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: compute_partition
use:
@@ -83,7 +83,7 @@ deployment_groups:
max_node_count: 200
partition_name: compute

-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-controller
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: slurm_controller
use:
@@ -94,7 +94,7 @@ deployment_groups:
- low_cost_partition # low cost partition will be default as it is listed first
- compute_partition

-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
kind: terraform
id: slurm_login
use:
8 changes: 4 additions & 4 deletions examples/hpc-cluster-small.yaml
@@ -40,7 +40,7 @@ deployment_groups:
local_mount: /home

# This debug_partition will work out of the box without requesting additional GCP quota.
-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: debug_partition
use:
@@ -54,7 +54,7 @@ deployment_groups:
machine_type: n2-standard-2

# This compute_partition is far more performant than debug_partition but may require requesting GCP quotas first.
-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: compute_partition
use:
@@ -64,7 +64,7 @@ deployment_groups:
partition_name: compute
max_node_count: 20

-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-controller
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: slurm_controller
use:
@@ -75,7 +75,7 @@ deployment_groups:
settings:
login_node_count: 1

-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
kind: terraform
id: slurm_login
use:
6 changes: 3 additions & 3 deletions examples/image-builder.yaml
@@ -56,7 +56,7 @@ deployment_groups:
- source: modules/network/pre-existing-vpc
kind: terraform
id: cluster-network
-- source: ./community/modules/compute/SchedMD-slurm-on-gcp-partition
+- source: community/modules/compute/SchedMD-slurm-on-gcp-partition
kind: terraform
id: compute_partition
use: [cluster-network]
@@ -65,7 +65,7 @@ deployment_groups:
instance_image:
family: $(vars.new_image_family)
project: $(vars.project_id)
-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-controller
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-controller
kind: terraform
id: slurm_controller
use: [cluster-network, compute_partition]
@@ -74,7 +74,7 @@ deployment_groups:
instance_image:
family: $(vars.new_image_family)
project: $(vars.project_id)
-- source: ./community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
+- source: community/modules/scheduler/SchedMD-slurm-on-gcp-login-node
kind: terraform
id: slurm_login
use: [cluster-network, slurm_controller]
2 changes: 1 addition & 1 deletion ghpc.go
@@ -22,7 +22,7 @@ import (
"os"
)

-//go:embed modules
+//go:embed modules community/modules
var moduleFS embed.FS

func main() {
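With this change the compiled binary embeds the community module tree alongside the core one: a single `//go:embed` directive may name several directories, and files under both prefixes become readable through the same `embed.FS` value. Below is a minimal standalone sketch of that pattern, using the directory names from this repository; the walk is illustrative and not code from this commit.

```go
package main

import (
	"embed"
	"fmt"
	"io/fs"
)

// Embed both module trees, mirroring the directive added in ghpc.go.
// Paths are resolved relative to this source file at build time.
//
//go:embed modules community/modules
var moduleFS embed.FS

func main() {
	// Entries under both "modules/..." and "community/modules/..."
	// now appear in the embedded filesystem.
	err := fs.WalkDir(moduleFS, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		fmt.Println(path)
		return nil
	})
	if err != nil {
		fmt.Println("walk failed:", err)
	}
}
```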
2 changes: 1 addition & 1 deletion modules/file-system/filestore/README.md
@@ -93,7 +93,7 @@ No modules.
| <a name="input_network_name"></a> [network\_name](#input\_network\_name) | The name of the GCE VPC network to which the instance is connected. | `string` | n/a | yes |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | ID of project in which Filestore instance will be created. | `string` | n/a | yes |
| <a name="input_region"></a> [region](#input\_region) | Location for Filestore instances at Enterprise tier. | `string` | n/a | yes |
| <a name="input_size_gb"></a> [size\_gb](#input\_size\_gb) | Storage size of the filestore instance in GB. | `number` | `2660` | no |
| <a name="input_size_gb"></a> [size\_gb](#input\_size\_gb) | Storage size of the filestore instance in GB. | `number` | `2560` | no |
| <a name="input_zone"></a> [zone](#input\_zone) | Location for Filestore instances below Enterprise tier. | `string` | n/a | yes |

## Outputs
2 changes: 1 addition & 1 deletion modules/file-system/filestore/variables.tf
@@ -60,7 +60,7 @@ variable "local_mount" {
variable "size_gb" {
description = "Storage size of the filestore instance in GB."
type = number
-default = 2660
+default = 2560
}

variable "filestore_tier" {
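The default Filestore capacity drops from 2660 GB to 2560 GB, matching the README table above. A blueprint can still set the size explicitly; a sketch, reusing the `homefs` and `network1` ids that appear in the examples in this commit:

```yaml
- source: modules/file-system/filestore
  kind: terraform
  id: homefs
  use: [network1]
  settings:
    local_mount: /home
    size_gb: 2560  # omit this line to accept the module default
```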
2 changes: 1 addition & 1 deletion modules/network/pre-existing-vpc/README.md
@@ -69,7 +69,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_network_name"></a> [network\_name](#input\_network\_name) | The name of the network to be created | `string` | `"default"` | no |
| <a name="input_network_name"></a> [network\_name](#input\_network\_name) | The name of the network whose attributes will be found | `string` | `"default"` | no |
| <a name="input_project_id"></a> [project\_id](#input\_project\_id) | Project in which the HPC deployment will be created | `string` | n/a | yes |
| <a name="input_region"></a> [region](#input\_region) | The region where Cloud NAT and Cloud Router will be configured | `string` | n/a | yes |
| <a name="input_subnetwork_name"></a> [subnetwork\_name](#input\_subnetwork\_name) | The name of the subnetwork to returned, will use network name if null. | `string` | `null` | no |
2 changes: 1 addition & 1 deletion modules/network/pre-existing-vpc/variables.tf
@@ -20,7 +20,7 @@ variable "project_id" {
}

variable "network_name" {
description = "The name of the network to be created"
description = "The name of the network whose attributes will be found"
type = string
default = "default"
}
12 changes: 12 additions & 0 deletions pkg/config/config.go
@@ -20,6 +20,7 @@ import (
"fmt"
"io/ioutil"
"log"
"os"
"regexp"
"strings"

@@ -219,6 +220,16 @@ func NewDeploymentConfig(configFilename string) DeploymentConfig {
return newDeploymentConfig
}

+func deprecatedSchema070a() {
+	os.Stderr.WriteString("*****************************************************************************************\n\n")
+	os.Stderr.WriteString("Our schemas have recently changed. Key changes:\n")
+	os.Stderr.WriteString("  'resource_groups' becomes 'deployment_groups'\n")
+	os.Stderr.WriteString("  'resources' becomes 'modules'\n")
+	os.Stderr.WriteString("  'source: resources/...' becomes 'source: modules/...'\n")
+	os.Stderr.WriteString("https://github.com/GoogleCloudPlatform/hpc-toolkit/tree/develop/examples#blueprint-schema\n")
+	os.Stderr.WriteString("*****************************************************************************************\n\n")
+}

// ImportBlueprint imports the blueprint configuration provided.
func importBlueprint(blueprintFilename string) Blueprint {
blueprintText, err := ioutil.ReadFile(blueprintFilename)
@@ -231,6 +242,7 @@ func importBlueprint(blueprintFilename string) Blueprint {
err = yaml.UnmarshalStrict(blueprintText, &blueprint)

if err != nil {
+deprecatedSchema070a()
log.Fatalf("%s filename=%s: %v",
errorMessages["yamlUnmarshalError"], blueprintFilename, err)
}
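The hint is printed just before the fatal unmarshal error, so blueprints written against the pre-v0.7.0 schema fail with a migration pointer rather than a bare YAML error. Put side by side, the renames listed in the message amount to the following; this sketch is assembled from the message text, and the `group: primary` nesting is an assumption, not a file from this commit.

```yaml
# Before (old schema):
resource_groups:
- group: primary
  resources:
  - source: resources/network/vpc
    kind: terraform
    id: network1

# After (new schema):
deployment_groups:
- group: primary
  modules:
  - source: modules/network/vpc
    kind: terraform
    id: network1
```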
2 changes: 1 addition & 1 deletion pkg/sourcereader/sourcereader.go
@@ -53,7 +53,7 @@ func IsLocalPath(source string) bool {

// IsEmbeddedPath checks if a source path points to an embedded modules
func IsEmbeddedPath(source string) bool {
-return strings.HasPrefix(source, "modules/")
+return strings.HasPrefix(source, "modules/") || strings.HasPrefix(source, "community/")
}

// IsGitHubPath checks if a source path points to GitHub
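Broadening `IsEmbeddedPath` is what lets the examples above drop the leading `./`: a `community/...` source is now resolved from the embedded filesystem, while a `./`-prefixed source still falls through to the local-path handling. Here is a test-style sketch of the intended behavior; the test itself is an illustration, not part of this commit.

```go
package sourcereader

import "testing"

// Sketch of the broadened predicate; not a test from this commit.
func TestIsEmbeddedPathCommunity(t *testing.T) {
	if !IsEmbeddedPath("modules/network/vpc") {
		t.Error("modules/ sources should still be embedded")
	}
	if !IsEmbeddedPath("community/modules/scripts/spack-install") {
		t.Error("community/ sources should now be embedded")
	}
	// A leading "./" matches neither prefix, so such sources are handled
	// as local paths instead; that is why the blueprints in this commit
	// drop it from community module sources.
	if IsEmbeddedPath("./community/modules/scripts/spack-install") {
		t.Error("./-prefixed sources should not be treated as embedded")
	}
}
```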