feat: add storage bucket resource #417

Merged: 10 commits, Jun 28, 2023
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,11 @@ Canonical reference for changes, improvements, and bugfixes for the Boundary Ter

## Next

### New and Improved

* Add support for the new storage bucket resource
([PR](https://github.com/hashicorp/terraform-provider-boundary/pull/417))

## 1.1.8 (June 13, 2023)

### New and Improved
75 changes: 75 additions & 0 deletions docs/resources/storage_bucket.md
@@ -0,0 +1,75 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "boundary_storage_bucket Resource - terraform-provider-boundary"
subcategory: ""
description: |-
  The storage bucket resource allows you to configure a Boundary storage bucket. A storage bucket can only belong to the Global scope or an Org scope. At this time, AWS S3 is the only supported storage for storage buckets. This feature requires Boundary Enterprise or HCP Boundary.
---

# boundary_storage_bucket (Resource)

The storage bucket resource allows you to configure a Boundary storage bucket. A storage bucket can only belong to the Global scope or an Org scope. At this time, AWS S3 is the only supported storage for storage buckets. This feature requires Boundary Enterprise or HCP Boundary.

## Example Usage

```terraform
resource "boundary_scope" "org" {
name = "organization_one"
description = "My first scope!"
scope_id = boundary_scope.global.id
auto_create_admin_role = true
auto_create_default_role = true
}

resource "boundary_storage_bucket" "aws_example" {
name = "My aws storage bucket"
description = "My first storage bucket!"
scope_id = boundary_scope.org.id
plugin_name = "aws"
bucket_name = "mybucket"
attributes_json = jsonencode({ "region" = "us-east-1" })

# recommended to pass in aws secrets using a file() or using environment variables
# the secrets below must be generated in aws by creating a aws iam user with programmatic access
secrets_json = jsonencode({
"access_key_id" = "aws_access_key_id_value",
"secret_access_key" = "aws_secret_access_key_value"
})
worker_filter = "\"pki\" in \"/tags/type\""
}
```
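
As the comments in the example note, hardcoding AWS secrets is best avoided. Below is a minimal sketch of the file() approach, assuming a local secrets.json file that holds the same two keys; the file name and path are illustrative, not provider requirements.

```terraform
resource "boundary_storage_bucket" "aws_file_example" {
  name            = "My aws storage bucket"
  description     = "Secrets loaded from a local file"
  scope_id        = boundary_scope.org.id
  plugin_name     = "aws"
  bucket_name     = "mybucket"
  attributes_json = jsonencode({ "region" = "us-east-1" })

  # secrets.json is expected to hold a pre-escaped JSON object such as
  # {"access_key_id": "...", "secret_access_key": "..."} (illustrative file).
  secrets_json  = file("${path.module}/secrets.json")
  worker_filter = "\"pki\" in \"/tags/type\""
}
```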

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `bucket_name` (String) The name of the bucket within the external object store service.
- `scope_id` (String) The scope for this storage bucket.
- `secrets_json` (String, Sensitive) The secrets for the storage bucket. The value is either encoded with the "jsonencode" function, a pre-escaped JSON string, or a file:// or env:// path. Set to the string "null" to clear any existing values. NOTE: Unlike "attributes_json", removing this attribute will NOT clear secrets from the storage bucket; this allows injecting secrets for one call and then removing them from the configuration without clearing the stored values. A sketch of the env:// form follows this list.
- `worker_filter` (String) Filters to the worker(s) that can handle requests for this storage bucket. The filter must match an existing worker in order to create a storage bucket.
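
As noted in the `secrets_json` description, an env:// path keeps credentials out of the configuration entirely. A minimal sketch, assuming the JSON payload is exported in an environment variable named BOUNDARY_BUCKET_SECRETS (the variable name is an assumption, not a provider convention):

```terraform
resource "boundary_storage_bucket" "aws_env_example" {
  bucket_name     = "mybucket"
  scope_id        = boundary_scope.org.id
  plugin_name     = "aws"
  attributes_json = jsonencode({ "region" = "us-east-1" })

  # The provider reads the secrets JSON from this environment variable at
  # plan/apply time (assumed variable name), so no secret lands in the config.
  secrets_json  = "env://BOUNDARY_BUCKET_SECRETS"
  worker_filter = "\"pki\" in \"/tags/type\""
}
```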

### Optional

- `attributes_json` (String) The attributes for the storage bucket. The "region" attribute is required when creating AWS storage buckets. The value is either encoded with the "jsonencode" function, a pre-escaped JSON string, or a file:// or env:// path. Set to the string "null" or remove the attribute to clear all attributes on the storage bucket.
- `bucket_prefix` (String) The prefix used to organize the data held within the external object store.
- `description` (String) The storage bucket description.
- `name` (String) The storage bucket name. Defaults to the resource name.
- `plugin_id` (String) The ID of the plugin that should back the resource. This or plugin_name must be defined.
- `plugin_name` (String) The name of the plugin that should back the resource. This or plugin_id must be defined.

### Read-Only

- `id` (String) The ID of the storage bucket.
- `internal_force_update` (String) Internal only. Used to force update so that we can always check the value of secrets.
- `internal_hmac_used_for_secrets_config_hmac` (String) Internal only. The Boundary-provided HMAC used to calculate the current value of the HMAC'd config. Used for drift detection.
- `internal_secrets_config_hmac` (String) Internal only. HMAC of (serverSecretsHmac + config secrets). Used for proper secrets handling.
- `secrets_hmac` (String) The HMAC'd secrets value returned from the server.

## Import

Import is supported using the following syntax:

```shell
terraform import boundary_storage_bucket.foo <my-id>
```
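
On Terraform 1.5 and later, the same import can also be written declaratively with an import block; this sketch assumes a storage bucket whose Boundary-generated ID is sb_1234567890 (the ID value is illustrative).

```terraform
# Declarative alternative to the CLI command above (Terraform >= 1.5).
import {
  to = boundary_storage_bucket.foo
  id = "sb_1234567890" # illustrative ID; use the real storage bucket ID
}
```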
1 change: 1 addition & 0 deletions examples/resources/boundary_storage_bucket/import.sh
@@ -0,0 +1 @@
terraform import boundary_storage_bucket.foo <my-id>
24 changes: 24 additions & 0 deletions examples/resources/boundary_storage_bucket/resource.tf
@@ -0,0 +1,24 @@
resource "boundary_scope" "org" {
elimt marked this conversation as resolved.
Show resolved Hide resolved
name = "organization_one"
description = "My first scope!"
scope_id = boundary_scope.global.id
auto_create_admin_role = true
auto_create_default_role = true
}

resource "boundary_storage_bucket" "aws_example" {
name = "My aws storage bucket"
description = "My first storage bucket!"
scope_id = boundary_scope.org.id
plugin_name = "aws"
bucket_name = "mybucket"
attributes_json = jsonencode({ "region" = "us-east-1" })

# recommended to pass in aws secrets using a file() or using environment variables
# the secrets below must be generated in aws by creating a aws iam user with programmatic access
secrets_json = jsonencode({
"access_key_id" = "aws_access_key_id_value",
"secret_access_key" = "aws_secret_access_key_value"
})
worker_filter = "\"pki\" in \"/tags/type\""
}
2 changes: 2 additions & 0 deletions internal/provider/const.go
@@ -44,4 +44,6 @@ const (
	// internalForceUpdateKey is used to force updates so we can always check
	// the value of secrets
	internalForceUpdateKey = "internal_force_update"
	// WorkerFilterKey is used for the common "worker_filter" resource attribute
	WorkerFilterKey = "worker_filter"
)
1 change: 1 addition & 0 deletions internal/provider/provider.go
@@ -122,6 +122,7 @@ func New() *schema.Provider {
"boundary_host_set_plugin": resourceHostSetPlugin(),
"boundary_role": resourceRole(),
"boundary_scope": resourceScope(),
"boundary_storage_bucket": resourceStorageBucket(),
"boundary_target": resourceTarget(),
"boundary_user": resourceUser(),
"boundary_worker": resourceWorker(),