Commit 46234f9

Merge branch 'master' into feature/create-folders-for-business-units

daniel-cit authored Dec 13, 2023
2 parents 6da390d + 25c61c4
Showing 57 changed files with 600 additions and 319 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/go-lint.yaml
@@ -37,7 +37,7 @@ jobs:
folder: [helpers/foundation-deployer]
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0
- uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0
with:
go-version-file: ${{ matrix.folder }}/go.mod
cache-dependency-path: ${{ matrix.folder }}/go.sum
1 change: 1 addition & 0 deletions 0-bootstrap/README.md
@@ -288,6 +288,7 @@ Each step has instructions for this change.
| billing\_account | The ID of the billing account to associate projects with. | `string` | n/a | yes |
| bucket\_force\_destroy | When deleting a bucket, this boolean option will delete all contained objects. If false, Terraform will fail to delete buckets which contain objects. | `bool` | `false` | no |
| bucket\_prefix | Name prefix to use for state bucket created. | `string` | `"bkt"` | no |
| bucket\_tfstate\_kms\_force\_destroy | When deleting a bucket, this boolean option will delete the KMS keys used for the Terraform state bucket. | `bool` | `false` | no |
| default\_region | Default region to create resources where applicable. | `string` | `"us-central1"` | no |
| folder\_prefix | Name prefix to use for folders created. Should be the same in all steps. | `string` | `"fldr"` | no |
| group\_billing\_admins | Google Group for GCP Billing Administrators | `string` | n/a | yes |
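As a rough illustration of how the new input is wired in, a minimal terraform.tfvars sketch for 0-bootstrap might look like this (all values are placeholders, and only a subset of the required variables is shown):

# Hypothetical terraform.tfvars for 0-bootstrap; values are placeholders.
billing_account                  = "000000-000000-000000"
default_region                   = "us-central1"
bucket_force_destroy             = false
# New in this change: also allow deleting the KMS keys that protect the
# Terraform state bucket when destroying the bootstrap environment.
bucket_tfstate_kms_force_destroy = true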
8 changes: 8 additions & 0 deletions 0-bootstrap/cb.tf
@@ -22,6 +22,8 @@ locals {

cicd_project_id = module.tf_source.cloudbuild_project_id

state_bucket_kms_key = "projects/${module.seed_bootstrap.seed_project_id}/locations/${var.default_region}/keyRings/${var.project_prefix}-keyring/cryptoKeys/${var.project_prefix}-key"

bucket_self_link_prefix = "https://www.googleapis.com/storage/v1/b/"
default_state_bucket_self_link = "${local.bucket_self_link_prefix}${module.seed_bootstrap.gcs_bucket_tfstate}"
gcp_projects_state_bucket_self_link = module.gcp_projects_state_bucket.bucket.self_link
@@ -74,6 +76,12 @@ module "gcp_projects_state_bucket" {
project_id = module.seed_bootstrap.seed_project_id
location = var.default_region
force_destroy = var.bucket_force_destroy

encryption = {
default_kms_key_name = local.state_bucket_kms_key
}

depends_on = [module.seed_bootstrap.gcs_bucket_tfstate]
}

module "tf_source" {
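The encryption block above attaches a customer-managed KMS key (CMEK) to the projects state bucket. As a sketch of the same idea on a raw bucket resource (standard google provider; names are placeholders, not the module's actual internals):

resource "google_storage_bucket" "tfstate" {
  name     = "bkt-example-gcp-projects-tfstate" # placeholder
  project  = "example-seed-project"             # placeholder
  location = "us-central1"

  encryption {
    # Full resource ID of the CMEK key, mirroring local.state_bucket_kms_key.
    default_kms_key_name = "projects/example-seed-project/locations/us-central1/keyRings/example-keyring/cryptoKeys/example-key"
  }
}

# Note: the GCS service agent of the bucket's project must hold
# roles/cloudkms.cryptoKeyEncrypterDecrypter on the key, or writes will fail.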
3 changes: 3 additions & 0 deletions 0-bootstrap/main.tf
@@ -61,6 +61,9 @@ module "seed_bootstrap" {
parent_folder = var.parent_folder == "" ? "" : local.parent
org_admins_org_iam_permissions = local.org_admins_org_iam_permissions
project_prefix = var.project_prefix
encrypt_gcs_bucket_tfstate = true
key_rotation_period = "7776000s"
kms_prevent_destroy = !var.bucket_tfstate_kms_force_destroy

project_labels = {
environment = "bootstrap"
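The rotation period is expressed in seconds: 90 days × 24 h × 3,600 s = 7,776,000 s, so "7776000s" means 90-day automatic key rotation. A hedged sketch of the equivalent raw KMS resources (names are placeholders):

resource "google_kms_key_ring" "tfstate" {
  name     = "example-keyring" # placeholder
  location = "us-central1"
}

resource "google_kms_crypto_key" "tfstate" {
  name            = "example-key" # placeholder
  key_ring        = google_kms_key_ring.tfstate.id
  rotation_period = "7776000s" # 90 days, in seconds

  lifecycle {
    # Mirrors kms_prevent_destroy; the module flips this off only when
    # bucket_tfstate_kms_force_destroy is true (prevent_destroy itself
    # must be a literal in plain Terraform).
    prevent_destroy = true
  }
}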
1 change: 1 addition & 0 deletions 0-bootstrap/modules/cb-private-pool/README.md
@@ -5,6 +5,7 @@
|------|-------------|------|---------|:--------:|
| private\_worker\_pool | name: Name of the worker pool. A name with a random suffix is generated if not set.<br> region: The private worker pool region. See https://cloud.google.com/build/docs/locations for available locations.<br> disk\_size\_gb: Size of the disk attached to the worker, in GB.<br> machine\_type: Machine type of a worker.<br> no\_external\_ip: If true, workers are created without any public address, which prevents network egress to public IPs.<br> enable\_network\_peering: Set to true to enable configuration of networking peering for the private worker pool.<br> create\_peered\_network: If true, a network will be created to establish the network peering.<br> peered\_network\_id: The ID of the existing network to configure peering for the private worker pool if create\_peered\_network is false. The project containing the network must have Service Networking API (`servicenetworking.googleapis.com`) enabled.<br> peered\_network\_subnet\_ip: The IP range to be used for the subnet that will be created in the peered network if create\_peered\_network is true.<br> peering\_address: The IP address or beginning of the peering address range. This can be supplied as an input to reserve a specific address or omitted to allow GCP to choose a valid one.<br> peering\_prefix\_length: The prefix length of the IP peering range. If not present, it means the address field is a single IP address. | <pre>object({<br> name = optional(string, "")<br> region = optional(string, "us-central1")<br> disk_size_gb = optional(number, 100)<br> machine_type = optional(string, "e2-medium")<br> no_external_ip = optional(bool, false)<br> enable_network_peering = optional(bool, false)<br> create_peered_network = optional(bool, false)<br> peered_network_id = optional(string, "")<br> peered_network_subnet_ip = optional(string, "")<br> peering_address = optional(string, null)<br> peering_prefix_length = optional(number, 24)<br> })</pre> | `{}` | no |
| project\_id | ID of the project where the private pool will be created | `string` | n/a | yes |
| vpc\_flow\_logs | aggregation\_interval: Toggles the aggregation interval for collecting flow logs. Increasing the interval time will reduce the amount of generated flow logs for long lasting connections. Possible values are: INTERVAL\_5\_SEC, INTERVAL\_30\_SEC, INTERVAL\_1\_MIN, INTERVAL\_5\_MIN, INTERVAL\_10\_MIN, INTERVAL\_15\_MIN.<br> flow\_sampling: Set the sampling rate of VPC flow logs within the subnetwork where 1.0 means all collected logs are reported and 0.0 means no logs are reported. The value of the field must be in [0, 1].<br> metadata: Configures whether metadata fields should be added to the reported VPC flow logs. Possible values are: EXCLUDE\_ALL\_METADATA, INCLUDE\_ALL\_METADATA, CUSTOM\_METADATA.<br> metadata\_fields: List of metadata fields that should be added to reported logs. Can only be specified if VPC flow logs for this subnetwork is enabled and "metadata" is set to CUSTOM\_METADATA.<br> filter\_expr: Export filter used to define which VPC flow logs should be logged, as a CEL expression. See https://cloud.google.com/vpc/docs/flow-logs#filtering for details on how to format this field. | <pre>object({<br> aggregation_interval = optional(string, "INTERVAL_5_SEC")<br> flow_sampling = optional(string, "0.5")<br> metadata = optional(string, "INCLUDE_ALL_METADATA")<br> metadata_fields = optional(list(string), [])<br> filter_expr = optional(string, "true")<br> })</pre> | `{}` | no |
| vpn\_configuration | enable\_vpn: set to true to create VPN connection to on prem. If true, the following values must be valid.<br> on\_prem\_public\_ip\_address0: The first public IP address for on prem VPN connection.<br> on\_prem\_public\_ip\_address1: The second public IP address for on prem VPN connection.<br> router\_asn: Border Gateway Protocol (BGP) Autonomous System Number (ASN) for cloud routes.<br> bgp\_peer\_asn: Border Gateway Protocol (BGP) Autonomous System Number (ASN) for peer cloud routes.<br> shared\_secret: The shared secret used in the VPN.<br> psk\_secret\_project\_id: The ID of the project that contains the secret from secret manager that holds the VPN pre-shared key.<br> psk\_secret\_name: The name of the secret to retrieve from secret manager that holds the VPN pre-shared key.<br> tunnel0\_bgp\_peer\_address: BGP peer address for tunnel 0.<br> tunnel0\_bgp\_session\_range: BGP session range for tunnel 0.<br> tunnel1\_bgp\_peer\_address: BGP peer address for tunnel 1.<br> tunnel1\_bgp\_session\_range: BGP session range for tunnel 1. | <pre>object({<br> enable_vpn = optional(bool, false)<br> on_prem_public_ip_address0 = optional(string, "")<br> on_prem_public_ip_address1 = optional(string, "")<br> router_asn = optional(number, 64515)<br> bgp_peer_asn = optional(number, 64513)<br> psk_secret_project_id = optional(string, "")<br> psk_secret_name = optional(string, "")<br> tunnel0_bgp_peer_address = optional(string, "")<br> tunnel0_bgp_session_range = optional(string, "")<br> tunnel1_bgp_peer_address = optional(string, "")<br> tunnel1_bgp_session_range = optional(string, "")<br> })</pre> | `{}` | no |

## Outputs
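For context, a hypothetical call of this module with the new vpc_flow_logs input might look like the following (module path and all values are assumptions, not taken from the repository):

module "private_pool" {
  source     = "./modules/cb-private-pool" # path is an assumption
  project_id = "example-cicd-project"      # placeholder

  vpc_flow_logs = {
    aggregation_interval = "INTERVAL_10_MIN"
    flow_sampling        = "0.5"
    metadata             = "CUSTOM_METADATA"
    # metadata_fields is only honored when metadata = "CUSTOM_METADATA".
    metadata_fields      = ["src_instance", "dest_instance"]
    filter_expr          = "true"
  }
}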
19 changes: 12 additions & 7 deletions 0-bootstrap/modules/cb-private-pool/network.tf
@@ -20,7 +20,7 @@ locals {

module "peered_network" {
source = "terraform-google-modules/network/google"
version = "~> 7.0"
version = "~> 8.0"
count = var.private_worker_pool.create_peered_network ? 1 : 0

project_id = var.project_id
@@ -29,12 +29,17 @@ module "peered_network" {

subnets = [
{
subnet_name = "sb-b-cbpools-${var.private_worker_pool.region}"
subnet_ip = var.private_worker_pool.peered_network_subnet_ip
subnet_region = var.private_worker_pool.region
subnet_private_access = "true"
subnet_flow_logs = "true"
description = "Peered subnet for Cloud Build private pool"
subnet_name = "sb-b-cbpools-${var.private_worker_pool.region}"
subnet_ip = var.private_worker_pool.peered_network_subnet_ip
subnet_region = var.private_worker_pool.region
subnet_private_access = "true"
subnet_flow_logs = "true"
subnet_flow_logs_interval = var.vpc_flow_logs.aggregation_interval
subnet_flow_logs_sampling = var.vpc_flow_logs.flow_sampling
subnet_flow_logs_metadata = var.vpc_flow_logs.metadata
subnet_flow_logs_metadata_fields = var.vpc_flow_logs.metadata_fields
subnet_flow_logs_filter = var.vpc_flow_logs.filter_expr
description = "Peered subnet for Cloud Build private pool"
}
]

18 changes: 18 additions & 0 deletions 0-bootstrap/modules/cb-private-pool/variables.tf
@@ -106,3 +106,21 @@ variable "vpn_configuration" {
error_message = "If VPN configuration is enabled, all values are required."
}
}

variable "vpc_flow_logs" {
description = <<EOT
aggregation_interval: Toggles the aggregation interval for collecting flow logs. Increasing the interval time will reduce the amount of generated flow logs for long lasting connections. Possible values are: INTERVAL_5_SEC, INTERVAL_30_SEC, INTERVAL_1_MIN, INTERVAL_5_MIN, INTERVAL_10_MIN, INTERVAL_15_MIN.
flow_sampling: Set the sampling rate of VPC flow logs within the subnetwork where 1.0 means all collected logs are reported and 0.0 means no logs are reported. The value of the field must be in [0, 1].
metadata: Configures whether metadata fields should be added to the reported VPC flow logs. Possible values are: EXCLUDE_ALL_METADATA, INCLUDE_ALL_METADATA, CUSTOM_METADATA.
metadata_fields: List of metadata fields that should be added to reported logs. Can only be specified if VPC flow logs for this subnetwork is enabled and "metadata" is set to CUSTOM_METADATA.
filter_expr: Export filter used to define which VPC flow logs should be logged, as a CEL expression. See https://cloud.google.com/vpc/docs/flow-logs#filtering for details on how to format this field.
EOT
type = object({
aggregation_interval = optional(string, "INTERVAL_5_SEC")
flow_sampling = optional(string, "0.5")
metadata = optional(string, "INCLUDE_ALL_METADATA")
metadata_fields = optional(list(string), [])
filter_expr = optional(string, "true")
})
default = {}
}
1 change: 1 addition & 0 deletions 0-bootstrap/modules/jenkins-agent/main.tf
@@ -143,6 +143,7 @@ resource "google_compute_subnetwork" "jenkins_agents_subnet" {
aggregation_interval = "INTERVAL_5_SEC"
flow_sampling = 0.5
metadata = "INCLUDE_ALL_METADATA"
metadata_fields = null
filter_expr = true
}
}
1 change: 1 addition & 0 deletions 0-bootstrap/sa.tf
@@ -97,6 +97,7 @@ locals {
"roles/storage.admin",
"roles/iam.serviceAccountAdmin",
"roles/resourcemanager.projectDeleter",
"roles/cloudkms.admin",
],
"org" = [
"roles/storage.objectAdmin",
6 changes: 6 additions & 0 deletions 0-bootstrap/variables.tf
@@ -82,6 +82,12 @@ variable "bucket_force_destroy" {
default = false
}

variable "bucket_tfstate_kms_force_destroy" {
description = "When deleting a bucket, this boolean option will delete the KMS keys used for the Terraform state bucket."
type = bool
default = false
}

/* ----------------------------------------
Specific to Groups creation
---------------------------------------- */
2 changes: 1 addition & 1 deletion 1-org/README.md
@@ -81,7 +81,7 @@ to Bigquery and Pub/Sub. This will result in additional charges for those copies

- This module implements but does not enable [bucket policy retention](https://cloud.google.com/storage/docs/bucket-lock) for organization logs. If needed, enable a retention policy by configuring the `log_export_storage_retention_policy` variable.

- This module implements but does not enable [object versioning](https://cloud.google.com/storage/docs/object-versioning) for organization logs. If needed, enable object versioning by setting the `audit_logs_table_delete_contents_on_destroy` variable to true.
- This module implements but does not enable [object versioning](https://cloud.google.com/storage/docs/object-versioning) for organization logs. If needed, enable object versioning by setting the `log_export_storage_versioning` variable to true.

- Bucket policy retention and object versioning are **mutually exclusive**.

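In tfvars terms, that means setting at most one of the two options, along these lines (a sketch; the retention object shape is an assumption based on the variable name):

# Option A: keep prior object versions of exported logs.
log_export_storage_versioning = true

# Option B (mutually exclusive with A): lock exported logs for a fixed period.
# log_export_storage_retention_policy = {
#   is_locked             = false
#   retention_period_days = 365
# }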
6 changes: 2 additions & 4 deletions 1-org/envs/shared/README.md
@@ -4,8 +4,6 @@
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| audit\_data\_users | Google Workspace or Cloud Identity group that have access to audit logs. | `string` | n/a | yes |
| audit\_logs\_table\_delete\_contents\_on\_destroy | (Optional) If set to true, delete all the tables in the dataset when destroying the resource; otherwise, destroying the resource will fail if tables are present. | `bool` | `false` | no |
| audit\_logs\_table\_expiration\_days | Period before tables expire for all audit logs in milliseconds. Default is 30 days. | `number` | `30` | no |
| billing\_data\_users | Google Workspace or Cloud Identity group that have access to billing data set. | `string` | n/a | yes |
| billing\_export\_dataset\_location | The location of the dataset for billing data export. | `string` | `"US"` | no |
| cai\_monitoring\_kms\_force\_destroy | If set to true, delete KMS keyring and keys when destroying the module; otherwise, destroying the module will fail if KMS keys are present. | `bool` | `false` | no |
@@ -42,8 +40,8 @@
| domains\_to\_allow | The list of domains to allow users from in IAM. |
| interconnect\_project\_id | The Dedicated Interconnect project ID |
| interconnect\_project\_number | The Dedicated Interconnect project number |
| logs\_export\_bigquery\_dataset\_name | The log bucket for destination of log exports. See https://cloud.google.com/logging/docs/routing/overview#buckets |
| logs\_export\_logbucket\_name | The log bucket for destination of log exports. See https://cloud.google.com/logging/docs/routing/overview#buckets |
| logs\_export\_logbucket\_linked\_dataset\_name | The resource name of the Log Bucket linked BigQuery dataset created for Log Analytics. See https://cloud.google.com/logging/docs/log-analytics . |
| logs\_export\_logbucket\_name | The log bucket for destination of log exports. See https://cloud.google.com/logging/docs/routing/overview#buckets . |
| logs\_export\_pubsub\_topic | The Pub/Sub topic for destination of log exports |
| logs\_export\_storage\_bucket\_name | The storage bucket for destination of log exports |
| network\_folder\_name | The network folder name. |
34 changes: 13 additions & 21 deletions 1-org/envs/shared/log_sinks.tf
@@ -18,15 +18,16 @@ locals {
parent_resource_id = local.parent_folder != "" ? local.parent_folder : local.org_id
parent_resource_type = local.parent_folder != "" ? "folder" : "organization"
parent_resources = { resource = local.parent_resource_id }
main_logs_filter = <<EOF
logs_filter = <<EOF
logName: /logs/cloudaudit.googleapis.com%2Factivity OR
logName: /logs/cloudaudit.googleapis.com%2Fsystem_event OR
logName: /logs/cloudaudit.googleapis.com%2Fdata_access OR
logName: /logs/cloudaudit.googleapis.com%2Faccess_transparency OR
logName: /logs/cloudaudit.googleapis.com%2Fpolicy OR
logName: /logs/compute.googleapis.com%2Fvpc_flows OR
logName: /logs/compute.googleapis.com%2Ffirewall OR
logName: /logs/cloudaudit.googleapis.com%2Faccess_transparency
logName: /logs/dns.googleapis.com%2Fdns_queries
EOF
all_logs_filter = ""
}

resource "random_string" "suffix" {
@@ -42,22 +42,11 @@ module "logs_export" {
resource_type = local.parent_resource_type
logging_destination_project_id = module.org_audit_logs.project_id

/******************************************
Send logs to BigQuery
*****************************************/
bigquery_options = {
logging_sink_name = "sk-c-logging-bq"
logging_sink_filter = local.main_logs_filter
dataset_name = "audit_logs"
expiration_days = var.audit_logs_table_expiration_days
delete_contents_on_destroy = var.audit_logs_table_delete_contents_on_destroy
}

/******************************************
Send logs to Storage
*****************************************/
storage_options = {
logging_sink_filter = local.all_logs_filter
logging_sink_filter = local.logs_filter
logging_sink_name = "sk-c-logging-bkt"
storage_bucket_name = "bkt-${module.org_audit_logs.project_id}-org-logs-${random_string.suffix.result}"
location = var.log_export_storage_location
@@ -72,7 +62,7 @@ module "logs_export" {
Send logs to Pub\Sub
*****************************************/
pubsub_options = {
logging_sink_filter = local.main_logs_filter
logging_sink_filter = local.logs_filter
logging_sink_name = "sk-c-logging-pub"
topic_name = "tp-org-logs-${random_string.suffix.result}"
create_subscriber = true
@@ -82,14 +72,16 @@
Send logs to Logbucket
*****************************************/
logbucket_options = {
logging_sink_name = "sk-c-logging-logbkt"
logging_sink_filter = local.all_logs_filter
name = "logbkt-org-logs-${random_string.suffix.result}"
location = local.default_region
logging_sink_name = "sk-c-logging-logbkt"
logging_sink_filter = local.logs_filter
name = "logbkt-org-logs-${random_string.suffix.result}"
location = local.default_region
enable_analytics = true
linked_dataset_id = "ds_c_logbkt_analytics"
linked_dataset_description = "BigQuery Dataset for Logbucket analytics"
}
}


/******************************************
Billing logs (Export configured manually)
*****************************************/
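With enable_analytics and linked_dataset_id set, the submodule presumably provisions a Log Analytics-enabled bucket plus a linked BigQuery dataset. Roughly equivalent raw resources would be (a sketch using the google provider's logging resources; project, location, and bucket IDs are placeholders):

resource "google_logging_project_bucket_config" "org_logs" {
  project          = "example-audit-project" # placeholder
  location         = "us-central1"
  bucket_id        = "logbkt-org-logs-abcd"  # placeholder
  enable_analytics = true
}

resource "google_logging_linked_dataset" "analytics" {
  link_id     = "ds_c_logbkt_analytics"
  parent      = "projects/example-audit-project" # placeholder
  location    = "us-central1"
  bucket      = google_logging_project_bucket_config.org_logs.bucket_id
  description = "BigQuery Dataset for Logbucket analytics"
}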
8 changes: 4 additions & 4 deletions 1-org/envs/shared/outputs.tf
@@ -111,12 +111,12 @@ output "logs_export_storage_bucket_name" {

output "logs_export_logbucket_name" {
value = module.logs_export.logbucket_destination_name
description = "The log bucket for destination of log exports. See https://cloud.google.com/logging/docs/routing/overview#buckets"
description = "The log bucket for destination of log exports. See https://cloud.google.com/logging/docs/routing/overview#buckets ."
}

output "logs_export_bigquery_dataset_name" {
value = module.logs_export.bigquery_destination_name
description = "The log bucket for destination of log exports. See https://cloud.google.com/logging/docs/routing/overview#buckets"
output "logs_export_logbucket_linked_dataset_name" {
value = module.logs_export.logbucket_linked_dataset_name
description = "The resource name of the Log Bucket linked BigQuery dataset created for Log Analytics. See https://cloud.google.com/logging/docs/log-analytics ."
}

output "tags" {
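Downstream steps could then read the renamed output through remote state, roughly as follows (backend bucket and prefix are hypothetical):

data "terraform_remote_state" "org" {
  backend = "gcs"
  config = {
    bucket = "example-tfstate-bucket"    # placeholder
    prefix = "terraform/org/envs/shared" # placeholder
  }
}

locals {
  # The renamed output added in this change.
  logbucket_linked_dataset = data.terraform_remote_state.org.outputs.logs_export_logbucket_linked_dataset_name
}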