
Add support for custom S3-compatible services using BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT #51

Merged
7 changes: 4 additions & 3 deletions README.md
@@ -48,7 +48,7 @@ Defines how the cache is stored and restored. Can be any string (see [Customizab

Very basic local filesystem backend.

The `BUILDKITE_PLUGIN_FS_CACHE_FOLDER` environment variable defines where the copies are stored (default: `/var/cache/buildkite`). If you don't change it, you will need to make sure that the folder exists and that `buildkite-agent` has the proper permissions; otherwise the plugin will fail.
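If the agent user cannot write to the default location, one option is to point the plugin at a folder the agent already owns. A minimal sketch (the `$HOME/.cache` path is an assumption for illustration, not a plugin default):

```shell
# Sketch: point the fs backend at a folder the agent user owns.
# The $HOME/.cache path here is an assumption, not a plugin default.
export BUILDKITE_PLUGIN_FS_CACHE_FOLDER="${HOME}/.cache/buildkite"
mkdir -p "${BUILDKITE_PLUGIN_FS_CACHE_FOLDER}"
echo "using cache folder: ${BUILDKITE_PLUGIN_FS_CACHE_FOLDER}"
```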

**IMPORTANT**: the `fs` backend just copies files to a different location on the current agent. Since that location is not a shared or external resource, its caching possibilities are quite limited.

@@ -59,6 +59,7 @@ Store things in an S3 bucket. You need to make sure that the `aws` command is av
You also need the agent to have access to the following defined environment variables:
* `BUILDKITE_PLUGIN_S3_CACHE_BUCKET`: the bucket to use (backend will fail if not defined)
* `BUILDKITE_PLUGIN_S3_CACHE_PREFIX`: optional prefix to use for the cache within the bucket
* `BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT`: optional S3 custom endpoint to use

Setting the `BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS` environment variable will reduce logging of file operations towards S3.
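Putting the variables above together, an agent environment for an S3-compatible store could look like the following sketch (bucket name and endpoint URL are placeholders):

```shell
# Placeholder values; only BUCKET is required, the rest are optional.
export BUILDKITE_PLUGIN_S3_CACHE_BUCKET="my-cache-bucket"
export BUILDKITE_PLUGIN_S3_CACHE_PREFIX="ci"                           # optional key prefix
export BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT="https://minio.example.com"  # optional, e.g. MinIO
export BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS=true                 # optional, quieter logs
```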

@@ -91,7 +92,7 @@ When restoring from cache, **all levels, in the described order, up to the one s

One of the greatest strengths of this plugin is its flexible backend architecture. You can provide whatever value you want for the `backend` option of this plugin (`X`, for example) as long as there is an executable script accessible to the agent named `cache_X` that respects the following execution protocol:

* `cache_X exists $KEY`

Should exit successfully (return code 0) if any previous call to this very same backend was made with `cache_X save $KEY`. Any other exit code means there is no valid cache entry, and the entry will be ignored.
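As a hedged illustration of the protocol (the backend name, the storage location, and the exact shape of the `save` opcode are assumptions for this sketch), a minimal directory-backed backend could look like:

```shell
# Minimal sketch of a custom backend; only "exists" and "save" are shown,
# and the storage location is a throwaway temp dir for the demo.
MYBACKEND_ROOT="$(mktemp -d)"

cache_mybackend() {
  local opcode="$1" key="$2"
  case "$opcode" in
    save)   touch "${MYBACKEND_ROOT}/${key}" ;;   # record that the key was saved
    exists) [ -e "${MYBACKEND_ROOT}/${key}" ] ;;  # exit 0 only after a prior save
    *)      return 255 ;;
  esac
}

cache_mybackend exists demo-key || echo "no cache yet"
cache_mybackend save demo-key
cache_mybackend exists demo-key && echo "cache hit"
```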

@@ -122,7 +123,7 @@ You can always have more complicated logic by using the plugin multiple times wi
```yaml
steps:
- label: ':nodejs: Install dependencies'
command: npm ci
plugins:
- cache#v0.5.0:
manifest: package-lock.json
19 changes: 18 additions & 1 deletion backends/cache_s3
@@ -18,13 +18,30 @@ s3_sync() {
local to="$2"

aws_cmd=(aws s3 sync)

if [ -n "${BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS}" ]; then
aws_cmd+=(--only-show-errors)
fi

if [ -n "${BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT}" ]; then
aws_cmd+=(--endpoint-url "${BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT}")
fi

"${aws_cmd[@]}" "${from}" "${to}"
}

s3_listobjects() {
local prefix="$1"

aws_cmd=(aws s3api list-objects-v2 --bucket "${BUILDKITE_PLUGIN_S3_CACHE_BUCKET}" --prefix "$(build_key "${prefix}")" --max-items 1)

if [ -n "${BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT}" ]; then
aws_cmd+=(--endpoint-url "${BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT}")
fi

"${aws_cmd[@]}"
}
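For illustration, rebuilding the array the same way `s3_sync` above does shows the command that results when both optional variables are set (endpoint and bucket values are placeholders):

```shell
# Rebuild the command array the same way s3_sync does (placeholder values).
BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS=true
BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT="https://minio.example.com"

aws_cmd=(aws s3 sync)
if [ -n "${BUILDKITE_PLUGIN_S3_CACHE_ONLY_SHOW_ERRORS}" ]; then
  aws_cmd+=(--only-show-errors)
fi
if [ -n "${BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT}" ]; then
  aws_cmd+=(--endpoint-url "${BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT}")
fi

# Prints: aws s3 sync --only-show-errors --endpoint-url https://minio.example.com ...
echo "${aws_cmd[@]}" s3://my-bucket/cache-key ./local-dir
```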

restore_cache() {
local from="$1"
local to="$2"
@@ -39,7 +56,7 @@ save_cache() {

exists_cache() {
if [ -z "$1" ]; then exit 1; fi
[ -n "$(aws s3api list-objects-v2 --bucket "${BUILDKITE_PLUGIN_S3_CACHE_BUCKET}" --prefix "$(build_key "$1")" --max-items 1)" ]
[ -n "$(s3_listobjects "$1")" ]
}

OPCODE="$1"
46 changes: 46 additions & 0 deletions tests/cache_s3.bats
@@ -91,6 +91,52 @@ setup() {
unstub aws
}

@test 'Endpoint URL flag passed when environment is set' {
export BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT=https://s3.somewhere.com

stub aws \
's3 sync --endpoint-url https://s3.somewhere.com \* \* : echo ' \
's3 sync --endpoint-url https://s3.somewhere.com \* \* : echo ' \
's3api list-objects-v2 --bucket \* --prefix \* --max-items 1 --endpoint-url https://s3.somewhere.com : echo exists' \
's3 sync \* \* : echo ' \
's3 sync \* \* : echo ' \
's3api list-objects-v2 --bucket \* --prefix \* --max-items 1 : echo exists'

run "${PWD}/backends/cache_s3" save from to

assert_success
assert_output ''

run "${PWD}/backends/cache_s3" get from to

assert_success
assert_output ''

run "${PWD}/backends/cache_s3" exists to

assert_success
assert_output ''

unset BUILDKITE_PLUGIN_S3_CACHE_ENDPOINT

run "${PWD}/backends/cache_s3" save from to

assert_success
assert_output ''

run "${PWD}/backends/cache_s3" get from to

assert_success
assert_output ''

run "${PWD}/backends/cache_s3" exists to

assert_success
assert_output ''

unstub aws
}

@test 'File exists and can be restored after save' {
touch "${BATS_TEST_TMPDIR}/new-file"
mkdir "${BATS_TEST_TMPDIR}/s3-cache"