Squashed 'release-tools/' changes from 4cf843f..a0f195c
a0f195c Merge pull request kubernetes-csi#106 from msau42/fix-canary
7100c12 Only set staging registry when running canary job
b3c65f9 Merge pull request kubernetes-csi#99 from msau42/add-release-process
e53f3e8 Merge pull request kubernetes-csi#103 from msau42/fix-canary
d129462 Document new method for adding CI jobs are new K8s versions
e73c2ce Use staging registry for canary tests
2c09846 Add cleanup instructions to release-notes generation
60e1cd3 Merge pull request kubernetes-csi#98 from pohly/kubernetes-1-19-fixes
0979c09 prow.sh: fix E2E suite for Kubernetes >= 1.18
3b4a2f1 prow.sh: fix installing Go for Kubernetes 1.19.0
1fbb636 Merge pull request kubernetes-csi#97 from pohly/go-1.15
82d108a switch to Go 1.15
d8a2530 Merge pull request kubernetes-csi#95 from msau42/add-release-process
843bddc Add steps on promoting release images
0345a83 Merge pull request kubernetes-csi#94 from linux-on-ibm-z/bump-timeout
1fdf2d5 cloud build: bump timeout in Prow job
41ec6d1 Merge pull request kubernetes-csi#93 from animeshk08/patch-1
5a54e67 filter-junit: Fix gofmt error
0676fcb Merge pull request kubernetes-csi#92 from animeshk08/patch-1
36ea4ff filter-junit: Fix golint error
f5a4203 Merge pull request kubernetes-csi#91 from cyb70289/arm64
43e50d6 prow.sh: enable building arm64 image
0d5bd84 Merge pull request kubernetes-csi#90 from pohly/k8s-staging-sig-storage
3df86b7 cloud build: k8s-staging-sig-storage
c5fd961 Merge pull request kubernetes-csi#89 from pohly/cloud-build-binfmt
db0c2a7 cloud build: initialize support for running commands in Dockerfile
be902f4 Merge pull request kubernetes-csi#88 from pohly/multiarch-windows-fix
340e082 build.make: optional inclusion of Windows in multiarch images
5231f05 build.make: properly declare push-multiarch
4569f27 build.make: fix push-multiarch ambiguity
17dde9e Merge pull request kubernetes-csi#87 from pohly/cloud-build
bd41690 cloud build: initial set of shared files
9084fec Merge pull request kubernetes-csi#81 from msau42/add-release-process
6f2322e Update patch release notes generation command
0fcc3b1 Merge pull request kubernetes-csi#78 from ggriffiths/fix_csi_snapshotter_rbac_version_set
d8c76fe Support local snapshot RBAC for pull jobs
c1bdf5b Merge pull request kubernetes-csi#80 from msau42/add-release-process
ea1f94a update release tools instructions
152396e Merge pull request kubernetes-csi#77 from ggriffiths/snapshotter201_update
7edc146 Update snapshotter to version 2.0.1

git-subtree-dir: release-tools
git-subtree-split: a0f195cc2ddc2a1f07d4d3e46fc08187db358f94
pohly committed Oct 12, 2020
1 parent 4cf843f commit 63ae26b
Showing 7 changed files with 221 additions and 43 deletions.
40 changes: 28 additions & 12 deletions SIDECAR_RELEASE_PROCESS.md
@@ -39,29 +39,39 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
1. Changes can then be updated in all the sidecar repos and hostpath driver repo
by following the [update
instructions](https://github.com/kubernetes-csi/csi-release-tools/blob/master/README.md#sharing-and-updating).
1. New pull and CI jobs are configured by
1. New pull and CI jobs are configured by adding new K8s versions to the top of
[gen-jobs.sh](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-csi/gen-jobs.sh).
New pull jobs that have been unverified should be initially made optional.
[Example](https://github.com/kubernetes/test-infra/pull/15055)
New pull jobs that have been unverified should be initially made optional by
setting the new K8s version as
[experimental](https://github.com/kubernetes/test-infra/blob/a1858f46d6014480b130789df58b230a49203a64/config/jobs/kubernetes-csi/gen-jobs.sh#L40).
1. Once new pull and CI jobs have been verified, and the new Kubernetes version
is released, we can make the optional jobs required, and also remove the
Kubernetes versions that are no longer supported.

## Release Process
1. Identify all issues and ongoing PRs that should go into the release, and
drive them to resolution.
1. Download [K8s release notes
1. Download v2.8+ [K8s release notes
generator](https://github.com/kubernetes/release/tree/master/cmd/release-notes)
1. Generate release notes for the release. Replace arguments with the relevant
information.
```
GITHUB_TOKEN=<token> ./release-notes --start-sha=0ed6978fd199e3ca10326b82b4b8b8e916211c9b --end-sha=3cb3d2f18ed8cb40371c6d8886edcabd1f27e7b9 \
--github-org=kubernetes-csi --github-repo=external-attacher -branch=master -output out.md
```
* `--start-sha` should point to the last release from the same branch. For
example:
* `1.X-1.0` tag when releasing `1.X.0`
* `1.X.Y-1` tag when releasing `1.X.Y`
* Clean up old cached information (also needed if you are generating release
notes for multiple repos)
```bash
rm -rf /tmp/k8s-repo
```
* For new minor releases on master:
```bash
GITHUB_TOKEN=<token> release-notes --discover=mergebase-to-latest
--github-org=kubernetes-csi --github-repo=external-provisioner
--required-author="" --output out.md
```
* For new patch releases on a release branch:
```bash
GITHUB_TOKEN=<token> release-notes --discover=patch-to-latest --branch=release-1.1
--github-org=kubernetes-csi --github-repo=external-provisioner
--required-author="" --output out.md
```
1. Compare the generated output to the new commits for the release to check if
any notable change missed a release note.
1. Reword release notes as needed. Make sure to check notes for breaking
@@ -82,6 +92,12 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
[external-provisioner example](https://github.com/kubernetes-csi/external-provisioner/releases/new)
1. If release was a new major/minor version, create a new `release-<minor>`
branch at that commit.
1. Check [image build status](https://k8s-testgrid.appspot.com/sig-storage-image-build).
1. Promote images from k8s-staging-sig-storage to k8s.gcr.io/sig-storage. From
the [k8s image
repo](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images/k8s-staging-sig-storage),
run `./generate.sh > images.yaml`, and send a PR with the updated images.
Once merged, the image promoter will copy the images from staging to prod.
1. Update [kubernetes-csi/docs](https://github.com/kubernetes-csi/docs) sidecar
and feature pages with the new released version.
1. After all the sidecars have been released, update
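
For context, the image-promotion step added above roughly amounts to the following shell session (a sketch: only `./generate.sh > images.yaml` and the repo path come from the process description, the branch and commit message are illustrative):

```bash
# Sketch of the promotion step described above; layout per kubernetes/k8s.io.
git clone https://github.com/kubernetes/k8s.io.git
cd k8s.io/k8s.gcr.io/images/k8s-staging-sig-storage
./generate.sh > images.yaml        # regenerate the promotion list from the staging registry
git checkout -b promote-sig-storage-images
git commit -am "Promote new sig-storage image tags"
# Open a PR against kubernetes/k8s.io; once merged, the image promoter
# copies the listed images from staging to k8s.gcr.io/sig-storage.
```
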
77 changes: 74 additions & 3 deletions build.make
@@ -71,7 +71,7 @@ BUILD_PLATFORMS =

# This builds each command (= the sub-directories of ./cmd) for the target platform(s)
# defined by BUILD_PLATFORMS.
build-%: check-go-version-go
$(CMDS:%=build-%): build-%: check-go-version-go
mkdir -p bin
echo '$(BUILD_PLATFORMS)' | tr ';' '\n' | while read -r os arch suffix; do \
if ! (set -x; CGO_ENABLED=0 GOOS="$$os" GOARCH="$$arch" go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o "./bin/$*$$suffix" ./cmd/$*); then \
@@ -80,10 +80,10 @@ build-%: check-go-version-go
fi; \
done

container-%: build-%
$(CMDS:%=container-%): container-%: build-%
docker build -t $*:latest -f $(shell if [ -e ./cmd/$*/Dockerfile ]; then echo ./cmd/$*/Dockerfile; else echo Dockerfile; fi) --label revision=$(REV) .

push-%: container-%
$(CMDS:%=push-%): push-%: container-%
set -ex; \
push_image () { \
docker tag $*:latest $(IMAGE_NAME):$$tag; \
@@ -105,6 +105,77 @@ build: $(CMDS:%=build-%)
container: $(CMDS:%=container-%)
push: $(CMDS:%=push-%)

# Additional parameters are needed when pushing to a local registry,
# see https://github.com/docker/buildx/issues/94.
# However, that then runs into https://github.com/docker/cli/issues/2396.
#
# What works for local testing is:
# make push-multiarch PULL_BASE_REF=master REGISTRY_NAME=<your account on dockerhub.io> BUILD_PLATFORMS="linux amd64; windows amd64 .exe; linux ppc64le -ppc64le; linux s390x -s390x"
DOCKER_BUILDX_CREATE_ARGS ?=

# This target builds a multiarch image for one command using Moby BuildKit builder toolkit.
# Docker Buildx is included in Docker 19.03.
#
# ./cmd/<command>/Dockerfile[.Windows] is used if found, otherwise Dockerfile[.Windows].
# It is currently optional: if no such file exists, Windows images are not included,
# even when Windows is listed in BUILD_PLATFORMS. That way, projects can test that
# Windows binaries can be built before adding a Dockerfile for it.
#
# BUILD_PLATFORMS determines which individual images are included in the multiarch image.
# PULL_BASE_REF must be set to 'master', 'release-x.y', or a tag name, and determines
# the tag for the resulting multiarch image.
$(CMDS:%=push-multiarch-%): push-multiarch-%: check-pull-base-ref build-%
set -ex; \
DOCKER_CLI_EXPERIMENTAL=enabled; \
export DOCKER_CLI_EXPERIMENTAL; \
docker buildx create $(DOCKER_BUILDX_CREATE_ARGS) --use --name multiarchimage-buildertest; \
trap "docker buildx rm multiarchimage-buildertest" EXIT; \
dockerfile_linux=$$(if [ -e ./cmd/$*/Dockerfile ]; then echo ./cmd/$*/Dockerfile; else echo Dockerfile; fi); \
dockerfile_windows=$$(if [ -e ./cmd/$*/Dockerfile.Windows ]; then echo ./cmd/$*/Dockerfile.Windows; else echo Dockerfile.Windows; fi); \
if [ '$(BUILD_PLATFORMS)' ]; then build_platforms='$(BUILD_PLATFORMS)'; else build_platforms="linux amd64"; fi; \
if ! [ -f "$$dockerfile_windows" ]; then \
build_platforms="$$(echo "$$build_platforms" | sed -e 's/windows *[^ ]* *.exe//g' -e 's/; *;/;/g')"; \
fi; \
pushMultiArch () { \
tag=$$1; \
echo "$$build_platforms" | tr ';' '\n' | while read -r os arch suffix; do \
docker buildx build --push \
--tag $(IMAGE_NAME):$$arch-$$os-$$tag \
--platform=$$os/$$arch \
--file $$(eval echo \$${dockerfile_$$os}) \
--build-arg binary=./bin/$*$$suffix \
--label revision=$(REV) \
.; \
done; \
images=$$(echo "$$build_platforms" | tr ';' '\n' | while read -r os arch suffix; do echo $(IMAGE_NAME):$$arch-$$os-$$tag; done); \
docker manifest create --amend $(IMAGE_NAME):$$tag $$images; \
docker manifest push -p $(IMAGE_NAME):$$tag; \
}; \
if [ $(PULL_BASE_REF) = "master" ]; then \
: "creating or overwriting canary image"; \
pushMultiArch canary; \
elif echo $(PULL_BASE_REF) | grep -q -e 'release-*' ; then \
: "creating or overwriting canary image for release branch"; \
release_canary_tag=$$(echo $(PULL_BASE_REF) | cut -f2 -d '-')-canary; \
pushMultiArch $$release_canary_tag; \
elif docker pull $(IMAGE_NAME):$(PULL_BASE_REF) 2>&1 | tee /dev/stderr | grep -q "manifest for $(IMAGE_NAME):$(PULL_BASE_REF) not found"; then \
: "creating release image"; \
pushMultiArch $(PULL_BASE_REF); \
else \
: "ERROR: release image $(IMAGE_NAME):$(PULL_BASE_REF) already exists: a new tag is required!"; \
exit 1; \
fi

.PHONY: check-pull-base-ref
check-pull-base-ref:
if ! [ "$(PULL_BASE_REF)" ]; then \
echo >&2 "ERROR: PULL_BASE_REF must be set to 'master', 'release-x.y', or a tag name."; \
exit 1; \
fi

.PHONY: push-multiarch
push-multiarch: $(CMDS:%=push-multiarch-%)

clean:
-rm -rf bin

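To make the tagging logic in the new `push-multiarch-%` rule easier to follow, here is a hedged summary of how `PULL_BASE_REF` maps to the multiarch image tag (values are illustrative):

```bash
# How push-multiarch derives the image tag from PULL_BASE_REF (per the rule above):
#   PULL_BASE_REF=master       -> pushMultiArch canary        (tag "canary")
#   PULL_BASE_REF=release-2.1  -> pushMultiArch 2.1-canary    (branch suffix + "-canary")
#   PULL_BASE_REF=v2.1.0 (tag) -> pushMultiArch v2.1.0, but only if that tag is not yet published
# The release-branch suffix is extracted like this:
PULL_BASE_REF=release-2.1
echo "$(echo "$PULL_BASE_REF" | cut -f2 -d '-')-canary"   # prints "2.1-canary"
```
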
6 changes: 6 additions & 0 deletions cloudbuild.sh
@@ -0,0 +1,6 @@
#! /bin/bash

# shellcheck disable=SC1091
. release-tools/prow.sh

gcr_cloud_build
46 changes: 46 additions & 0 deletions cloudbuild.yaml
@@ -0,0 +1,46 @@
# A configuration file for multi-arch image building with the Google cloud build service.
#
# Repos using this file must:
# - import csi-release-tools
# - add a symlink cloudbuild.yaml -> release-tools/cloudbuild.yaml
# - add a .cloudbuild.sh which can be a custom file or a symlink
# to release-tools/cloudbuild.sh
# - accept "binary" as build argument in their Dockerfile(s) (see
# https://github.com/pohly/node-driver-registrar/blob/3018101987b0bb6da2a2657de607174d6e3728f7/Dockerfile#L4-L6)
# because binaries will get built for different architectures and then
# get copied from the built host into the container image
#
# See https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/README.md
# for more details on image pushing process in Kubernetes.
#
# To promote release images, see https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images/k8s-staging-sig-storage.

# This must be specified in seconds. If omitted, defaults to 600s (10 mins).
timeout: 1800s
# This prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
# or any new substitutions added in the future.
options:
substitution_option: ALLOW_LOOSE
steps:
# The image must contain bash and curl. Ideally it should also contain
# the desired version of Go (currently defined in release-tools/travis.yml),
# but that just speeds up the build and is not required.
- name: 'gcr.io/k8s-testimages/gcb-docker-gcloud:v20200421-a2bf5f8'
entrypoint: ./.cloudbuild.sh
env:
- GIT_TAG=${_GIT_TAG}
- PULL_BASE_REF=${_PULL_BASE_REF}
- REGISTRY_NAME=gcr.io/${_STAGING_PROJECT}
- HOME=/root
substitutions:
# _GIT_TAG will be filled with a git-based tag for the image, of the form vYYYYMMDD-hash, and
# can be used as a substitution.
_GIT_TAG: '12345'
# _PULL_BASE_REF will contain the ref that was pushed to trigger this build -
# a branch like 'master' or 'release-0.2', or a tag like 'v0.2'.
_PULL_BASE_REF: 'master'
# The default gcr.io staging project for Kubernetes-CSI
# (=> https://console.cloud.google.com/gcr/images/k8s-staging-sig-storage/GLOBAL).
# Might be overridden in the Prow build job for a repo which wants
# images elsewhere.
_STAGING_PROJECT: 'k8s-staging-sig-storage'
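
The comments at the top of cloudbuild.yaml describe how a consuming repo wires this in; as a sketch (assuming csi-release-tools is vendored as the `release-tools/` subtree), that looks like:

```bash
# Wiring described in the cloudbuild.yaml comments above (sketch).
ln -s release-tools/cloudbuild.yaml cloudbuild.yaml
# .cloudbuild.sh may be a custom script or simply a symlink to the shared one:
ln -s release-tools/cloudbuild.sh .cloudbuild.sh
git add cloudbuild.yaml .cloudbuild.sh
```
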
8 changes: 4 additions & 4 deletions filter-junit.go
@@ -15,10 +15,10 @@ limitations under the License.
*/

/*
* This command filters a JUnit file such that only tests with a name
* matching a regular expression are passed through. By concatenating
* multiple input files it is possible to merge them into a single file.
*/
This command filters a JUnit file such that only tests with a name
matching a regular expression are passed through. By concatenating
multiple input files it is possible to merge them into a single file.
*/
package main

import (
85 changes: 62 additions & 23 deletions prow.sh
@@ -85,7 +85,7 @@ get_versioned_variable () {
echo "$value"
}

configvar CSI_PROW_BUILD_PLATFORMS "linux amd64; windows amd64 .exe; linux ppc64le -ppc64le; linux s390x -s390x" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"
configvar CSI_PROW_BUILD_PLATFORMS "linux amd64; windows amd64 .exe; linux ppc64le -ppc64le; linux s390x -s390x; linux arm64 -arm64" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"

# If we have a vendor directory, then use it. We must be careful to only
# use this for "make" invocations inside the project's repo itself because
@@ -211,24 +211,29 @@ configvar CSI_PROW_DRIVER_INSTALL "install_csi_driver" "name of the shell functi
# still use that name.
configvar CSI_PROW_DRIVER_CANARY "${CSI_PROW_HOSTPATH_CANARY}" "driver image override for canary images"

# Image registry to use for canary images.
# Only valid if CSI_PROW_DRIVER_CANARY == "canary".
configvar CSI_PROW_DRIVER_CANARY_REGISTRY "gcr.io/k8s-staging-sig-storage" "registry for canary images"

# The E2E testing can come from an arbitrary repo. The expectation is that
# the repo supports "go test ./test/e2e -args --storage.testdriver" (https://github.com/kubernetes/kubernetes/pull/72836)
# after setting KUBECONFIG. As a special case, if the repository is Kubernetes,
# then `make WHAT=test/e2e/e2e.test` is called first to ensure that
# all generated files are present.
#
# CSI_PROW_E2E_REPO=none disables E2E testing.
# TODO: remove versioned variables and make e2e version match k8s version
configvar CSI_PROW_E2E_VERSION_1_15 v1.15.0 "E2E version for Kubernetes 1.15.x"
configvar CSI_PROW_E2E_VERSION_1_16 v1.16.0 "E2E version for Kubernetes 1.16.x"
configvar CSI_PROW_E2E_VERSION_1_17 v1.17.0 "E2E version for Kubernetes 1.17.x"
# TODO: add new CSI_PROW_E2E_VERSION entry for future Kubernetes releases
configvar CSI_PROW_E2E_VERSION_LATEST master "E2E version for Kubernetes master" # testing against Kubernetes master is already tracking a moving target, so we might as well use a moving E2E version
configvar CSI_PROW_E2E_REPO_LATEST https://github.com/kubernetes/kubernetes "E2E repo for Kubernetes >= 1.13.x" # currently the same for all versions
configvar CSI_PROW_E2E_IMPORT_PATH_LATEST k8s.io/kubernetes "E2E package for Kubernetes >= 1.13.x" # currently the same for all versions
configvar CSI_PROW_E2E_VERSION "$(get_versioned_variable CSI_PROW_E2E_VERSION "${csi_prow_kubernetes_version_suffix}")" "E2E version"
configvar CSI_PROW_E2E_REPO "$(get_versioned_variable CSI_PROW_E2E_REPO "${csi_prow_kubernetes_version_suffix}")" "E2E repo"
configvar CSI_PROW_E2E_IMPORT_PATH "$(get_versioned_variable CSI_PROW_E2E_IMPORT_PATH "${csi_prow_kubernetes_version_suffix}")" "E2E package"
tag_from_version () {
version="$1"
shift
case "$version" in
latest) echo "master";;
release-*) echo "$version";;
*) echo "v$version";;
esac
}
configvar CSI_PROW_E2E_VERSION "$(tag_from_version "${CSI_PROW_KUBERNETES_VERSION}")" "E2E version"
configvar CSI_PROW_E2E_REPO "https://github.com/kubernetes/kubernetes" "E2E repo"
configvar CSI_PROW_E2E_IMPORT_PATH "k8s.io/kubernetes" "E2E package"

# csi-sanity testing from the csi-test repo can be run against the installed
# CSI driver. For this to work, deploying the driver must expose the Unix domain
@@ -342,7 +347,7 @@ configvar CSI_PROW_E2E_ALPHA_GATES_LATEST '' "alpha feature gates for latest Kub
configvar CSI_PROW_E2E_ALPHA_GATES "$(get_versioned_variable CSI_PROW_E2E_ALPHA_GATES "${csi_prow_kubernetes_version_suffix}")" "alpha E2E feature gates"

# Which external-snapshotter tag to use for the snapshotter CRD and snapshot-controller deployment
configvar CSI_SNAPSHOTTER_VERSION 'v2.0.0' "external-snapshotter version tag"
configvar CSI_SNAPSHOTTER_VERSION 'v2.0.1' "external-snapshotter version tag"

# Some tests are known to be unusable in a KinD cluster. For example,
# stopping kubelet with "ssh <node IP> systemctl stop kubelet" simply
@@ -513,6 +518,10 @@ go_version_for_kubernetes () (
if ! [ "$go_version" ]; then
die "Unable to determine Go version for Kubernetes $version from hack/lib/golang.sh."
fi
# Strip the trailing .0. Kubernetes includes it, Go itself doesn't.
# Ignore: See if you can use ${variable//search/replace} instead.
# shellcheck disable=SC2001
go_version="$(echo "$go_version" | sed -e 's/\.0$//')"
echo "$go_version"
)

@@ -688,7 +697,11 @@ install_csi_driver () {
fi

if [ "${CSI_PROW_DRIVER_CANARY}" != "stable" ]; then
if [ "${CSI_PROW_DRIVER_CANARY}" == "canary" ]; then
images="$images IMAGE_TAG=${CSI_PROW_DRIVER_CANARY} IMAGE_REGISTRY=${CSI_PROW_DRIVER_CANARY_REGISTRY}"
else
images="$images IMAGE_TAG=${CSI_PROW_DRIVER_CANARY}"
fi
fi
# Ignore: Double quote to prevent globbing and word splitting.
# It's intentional here for $images.
@@ -1064,18 +1077,24 @@ main () {
# always pulling the image
# (https://github.com/kubernetes-sigs/kind/issues/328).
docker tag "$i:latest" "$i:csiprow" || die "tagging the locally built container image for $i failed"
done

if [ -e deploy/kubernetes/rbac.yaml ]; then
# This is one of those components which has its own RBAC rules (like external-provisioner).
# We are testing a locally built image and also want to test with the the current,
# potentially modified RBAC rules.
if [ "$(echo "$cmds" | wc -w)" != 1 ]; then
die "ambiguous deploy/kubernetes/rbac.yaml: need exactly one command, got: $cmds"
# For components with multiple cmds, the RBAC file should be in the following format:
# rbac-$cmd.yaml
# If this file cannot be found, we can default to the standard location:
# deploy/kubernetes/rbac.yaml
rbac_file_path=$(find . -type f -name "rbac-$i.yaml")
if [ "$rbac_file_path" == "" ]; then
rbac_file_path="$(pwd)/deploy/kubernetes/rbac.yaml"
fi
e=$(echo "$cmds" | tr '[:lower:]' '[:upper:]' | tr - _)
images="$images ${e}_RBAC=$(pwd)/deploy/kubernetes/rbac.yaml"
fi

if [ -e "$rbac_file_path" ]; then
# This is one of those components which has its own RBAC rules (like external-provisioner).
# We are testing a locally built image and also want to test with the the current,
# potentially modified RBAC rules.
e=$(echo "$i" | tr '[:lower:]' '[:upper:]' | tr - _)
images="$images ${e}_RBAC=$rbac_file_path"
fi
done
fi

if tests_need_non_alpha_cluster; then
@@ -1183,3 +1202,23 @@

return "$ret"
}

# This function can be called by a repo's top-level cloudbuild.sh:
# it handles environment set up in the GCR cloud build and then
# invokes "make push-multiarch" to do the actual image building.
gcr_cloud_build () {
# Register gcloud as a Docker credential helper.
# Required for "docker buildx build --push".
gcloud auth configure-docker

if find . -name Dockerfile | grep -v ^./vendor | xargs --no-run-if-empty cat | grep -q ^RUN; then
# Needed for "RUN" steps on non-linux/amd64 platforms.
# See https://github.com/multiarch/qemu-user-static#getting-started
(set -x; docker run --rm --privileged multiarch/qemu-user-static --reset -p yes)
fi

# Extract tag-n-hash value from GIT_TAG (form vYYYYMMDD-tag-n-hash) for REV value.
REV=v$(echo "$GIT_TAG" | cut -f3- -d 'v')

run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make push-multiarch REV="${REV}" REGISTRY_NAME="${REGISTRY_NAME}" BUILD_PLATFORMS="${CSI_PROW_BUILD_PLATFORMS}"
}
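
As a rough illustration of the prow.sh changes above, the new `tag_from_version` helper and the Go-version normalization behave as follows (expected outputs shown as comments; assumes prow.sh has been sourced):

```bash
# Expected tag_from_version mapping (from the case statement above):
#   tag_from_version latest        -> master
#   tag_from_version release-1.19  -> release-1.19
#   tag_from_version 1.16.0        -> v1.16.0

# go_version_for_kubernetes now drops the trailing ".0" that Kubernetes
# includes but Go itself does not; the sed step is standalone:
echo "1.15.0" | sed -e 's/\.0$//'   # prints 1.15
```
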
2 changes: 1 addition & 1 deletion travis.yml
@@ -6,7 +6,7 @@ git:
depth: false
matrix:
include:
- go: 1.13.3
- go: 1.15
before_script:
- mkdir -p bin
- wget https://github.com/golang/dep/releases/download/v0.5.1/dep-linux-amd64 -O bin/dep
