---
title: Developer Guide
layout: docwithnav
---

Overview

Thank you for deciding to contribute to our project! 💖 We welcome contributors from all backgrounds and experience levels.

If you are interested in going beyond a single PR, take a look at our contribution ladder and learn how to become a reviewer, or even a maintainer!

Working on Issues

GitHub does not allow non-maintainers to assign, or be assigned to, issues. As such, non-maintainers can indicate their desire to work on (own) a particular issue by adding a comment to it of the form:

#dibs

However, it is a good idea to discuss the issue, and your intent to work on it, with the other members via the Slack channel to make sure there isn't already other work going on with respect to that issue.

When you create a pull request (PR) that completely addresses an open issue, please include a line in the initial comment that looks like:

Closes: #1234

where 1234 is the issue number. This allows GitHub to automatically close the issue when the PR is merged.

Also, before you start working on your issue, please read our Code Standards document.

Prerequisites

At a minimum you will need:

  • Docker 17.05+ installed locally
  • GNU Make
  • git

These will allow you to build and test service catalog components within a Docker container.

If you want to deploy service catalog components built from source, you will also need:

  • A working Kubernetes cluster and kubectl installed in your local PATH, properly configured to access that cluster. The version of Kubernetes and kubectl must be >= 1.9. See below for instructions on how to download these versions of kubectl.
  • Helm (Tiller) installed in your Kubernetes cluster and the helm binary in your PATH
  • To be pre-authenticated to a Docker registry (if using a remote cluster)
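
As a quick sanity check before proceeding, you can confirm the versions of these tools from your shell (standard commands; output formats vary by version):

$ docker version
$ kubectl version   # client and server versions should both be >= 1.9
$ helm version      # reports both the helm client and Tiller server versions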

Note: It is not generally useful to run service catalog components outside a Kubernetes cluster. As such, our build process only supports compilation of linux/amd64 binaries suitable for execution within a Docker container.

Workflow

You can set up the repo by following a process similar to the Kubernetes development guide.

1 Fork in the Cloud

  1. Visit https://github.com/kubernetes-incubator/service-catalog
  2. Click Fork button (top right) to establish a cloud-based fork.

2 Clone fork to local storage

Per Go's workspace instructions, place Service Catalog's code on your GOPATH using the following cloning procedure.

Define a local working directory:

If your GOPATH has multiple paths, pick just one and use it instead of $GOPATH.

You must follow this pattern exactly; neither $GOPATH/src/github.com/${your github profile name}/ nor any other pattern will work.

From your shell:

# Run the following only if `echo $GOPATH` shows nothing.
export GOPATH=$(go env GOPATH)

# Set your working directory
working_dir=$GOPATH/src/github.com/kubernetes-incubator

# Set user to match your github profile name
user={your github profile name}

# Create your clone:
mkdir -p $working_dir
cd $working_dir
git clone https://github.com/$user/service-catalog.git
# or: git clone git@github.com:$user/service-catalog.git

cd service-catalog
git remote add upstream https://github.com/kubernetes-incubator/service-catalog.git
# or: git remote add upstream git@github.com:kubernetes-incubator/service-catalog.git

# Never push to upstream master
git remote set-url --push upstream no_push

# Confirm that your remotes make sense:
git remote -v
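
If everything is set up correctly, the output should look something like the following (with your GitHub username in place of $user):

origin    https://github.com/$user/service-catalog.git (fetch)
origin    https://github.com/$user/service-catalog.git (push)
upstream  https://github.com/kubernetes-incubator/service-catalog.git (fetch)
upstream  no_push (push)

Later, a standard git flow for picking up new upstream changes is:

$ git fetch upstream
$ git checkout master
$ git rebase upstream/master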

Code Layout

This repository is organized as similarly to Kubernetes itself as the developers have found practical. Below is a summary of the repository's layout:

.
├── bin                     # Destination for binaries compiled for linux/amd64 (untracked)
├── build                   # Contains build-related scripts and subdirectories containing Dockerfiles
├── charts                  # Helm charts for deployment
│   ├── catalog             # Helm chart for deploying the service catalog
│   └── ups-broker          # Helm chart for deploying the user-provided service broker
├── cmd                     # Contains "main" Go packages for each service catalog component binary
│   ├── apiserver           # The service catalog API server service-catalog command
│   ├── controller-manager  # The service catalog controller manager service-catalog command
│   ├── service-catalog     # The service catalog binary, which is used to run commands
│   └── svcat               # The command-line interface for interacting with kubernetes service-catalog resources
├── contrib                 # Contains examples, non-essential golang source, CI configurations, etc.
│   ├── build               # Dockerfiles for contrib images (example: ups-broker)
│   ├── cmd                 # Entrypoints for contrib binaries
│   ├── examples            # Example API resources
│   ├── hack                # Non-build related scripts
│   ├── jenkins             # Jenkins configuration
│   ├── pkg                 # Contrib golang code
│   └── travis              # Travis configuration
├── docs                    # Documentation
├── pkg                     # Contains all non-"main" Go packages
├── plugin                  # Plugins for API server
├── test                    # Integration and e2e tests
├── vendor                  # dep-managed dependencies
├── Gopkg.toml              # dep manifest
└── Gopkg.lock              # dep lock file (autogenerated; do not edit)

Building

First cd to the root of the cloned repository tree. To build the service-catalog server components:

$ make build

The above will build all executables and place them in the bin directory. This is done within a Docker container, meaning you do not need to have all of the necessary tooling installed on your host (such as a Go compiler or dep). Building outside the container is possible, but not officially supported.

To build the service-catalog client, svcat:

$ make svcat

The svcat CLI binary is located at bin/svcat/svcat.

To install svcat to your $GOPATH/bin directory:

$ make svcat-install

Note, this will do the basic build of the service catalog; there are more advanced build steps below as well.

To deploy to Kubernetes, see the Deploying to Kubernetes section.

Notes Concerning the Build Process/Makefile

  • The Makefile assumes you're running make from the root of the repo.

  • There are some source files that are generated during the build process. These are:

    • pkg/client/*_generated
    • pkg/apis/servicecatalog/zz_*
    • pkg/apis/servicecatalog/v1beta1/zz_*
    • pkg/apis/servicecatalog/v1beta1/types.generated.go
    • pkg/openapi/openapi_generated.go
  • Running make clean or make clean-generated will roll back (via git checkout --) the state of any generated files in the repo.

  • Running make purge-generated will remove those generated files from the repo.

  • A Docker image called "scbuildimage" is used for builds. The image isn't pre-built and pulled from a public registry; instead, it is built from source contained within the service catalog repository.

  • While many people have utilities, such as editor hooks, that auto-format their Go source files with gofmt, there is a Makefile target called format which can be used to do this task for you (see the shell summary after this list).

  • make build will build binaries for linux/amd64 only.
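
In shell form, the housekeeping targets mentioned in this list are:

$ make clean            # rolls back generated files via git checkout -- (as does make clean-generated)
$ make purge-generated  # removes the generated files from the repo
$ make format           # gofmt-formats the Go source files for you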

Testing

There are three types of tests: unit, integration and e2e.

Unit Tests

The unit testcases can be run via the test-unit Makefile target, e.g.:

$ make test-unit

These will execute any *_test.go files within the source tree.

Integration Tests

The integration tests can be run via the test-integration Makefile target, e.g.:

$ make test-integration

The integration tests require the Kubernetes client (kubectl), so there is a script called contrib/hack/kubectl that will run it from within a Docker container. This avoids the need for you to download or install it yourself. You may find it useful to add contrib/hack to your PATH.
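
For example, a minimal sketch of using the wrapper script (run from the root of the repo; prepending to PATH makes the wrapper take precedence):

$ export PATH=$PWD/contrib/hack:$PATH
$ kubectl version   # now runs kubectl from within a Docker container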

e2e Tests

The e2e tests require an existing Kubernetes cluster with service-catalog deployed into it. The test runner needs configuration to talk to both the cluster and the service-catalog server; since service-catalog can run aggregated with the main API server, this is done by giving the same kubeconfig for both:

$ KUBECONFIG=~/.kube/config SERVICECATALOGCONFIG=~/.kube/config make test-e2e

Once built, the binary can also be run directly. Some example output is included below.

$ e2e.test
I0529 13:37:15.942348   21610 e2e.go:45] Starting e2e run "12ee92dc-6380-11e8-8a97-54e1ad543ebd" on Ginkgo node 1
Running Suite: Service Catalog e2e suite
========================================
Random Seed: 1527626235 - Will randomize all specs
Will run 3 of 3 specs

< ... Test Output ... >


Ran 3 of 3 Specs in 47.271 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS

Test Running Tips

The test Makefile target will run both the unit and integration tests, e.g.:

$ make test

If you want to run just a subset of the unit testcases then you can specify the source directories of the tests:

$ TEST_DIRS="path1 path2" make test

or you can specify a regular expression for the test name:

$ UNIT_TESTS=TestBar* make test

A regular expression also works for integration test names:

$ INT_TESTS=TestIntegrateBar* make test

You can also set the log level for the tests using the TEST_LOG_LEVEL environment variable, which is useful for debugging. For example, to run at log level 5:

$ TEST_LOG_LEVEL=5 make test-integration

Test Code Coverage

To see how well these tests cover the source code, you can use:

$ make coverage

These will execute the tests and perform an analysis of how well they cover all code paths. The results are written to a file called coverage.html at the root of the repo.
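
For example, to generate and then view the report (the open command is macOS; use xdg-open on most Linux desktops):

$ make coverage
$ open coverage.html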

As mentioned above, integration tests require a running Catalog API server & etcd image and a properly configured kubeconfig. When developing, or drilling in on a specific test failure, you may find it helpful to run Catalog in your "normal" environment; as long as your KUBECONFIG environment variable is properly configured, you can run integration tests much more quickly with a couple of commands:

$ make build-integration
$ ./integration.test -test.v -v 5 -logtostderr -test.run  TestPollServiceInstanceLastOperationSuccess/async_provisioning_with_error_on_second_poll

The first command ensures the test integration executable is up-to-date. The second command runs one specific test case with verbose logging and can be re-run over and over without having to wait for the start and stop of API and ETCD. This example will execute the test case "async provisioning with error on second poll" within the integration test TestPollServiceInstanceLastOperationSuccess.

Golden Files

The svcat tests rely on "golden files", a pattern used in the Go standard library, for testing command output. The expected output is stored in a file in the testdata directory, cmd/svcat/testdata, and the test's output is compared against the "golden output" stored in that file. This helps avoid putting hard-coded strings in the tests themselves.

You do not edit the golden files by hand. When you need to update the golden files, run make test-update-goldenfiles or go test ./cmd/svcat/... -update, and the golden files are updated automatically with the results of the test run.

For new tests, you first need to manually create an empty golden file in the destination directory specified by your test (e.g. touch cmd/svcat/testdata/mygoldenfile.txt) before updating the golden files; the -update flag only manages the contents of golden files, but doesn't create or delete them.

Keep in mind that golden files help catch errors when the output unexpectedly changes. It's up to you to judge when you should run the tests with -update, and to diff the changes in the golden file to ensure that the new output is correct.
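
A typical golden-file update flow, then, looks like this (the final diff is a suggested review step, not a project requirement):

$ touch cmd/svcat/testdata/mygoldenfile.txt   # only needed for a brand-new test
$ make test-update-goldenfiles
$ git diff cmd/svcat/testdata                 # verify that the new output is correct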

Counterfeiter

Certain tests use fakes generated with Counterfeiter. If you add a method to an interface (such as SvcatClient in pkg/svcat/service-catalog) you may need to regenerate the fake. You can install Counterfeiter by running go get github.com/maxbrunsfeld/counterfeiter. Then regenerate the fake with counterfeiter ./pkg/svcat/service-catalog SvcatClient and manually paste the boilerplate copyright comment into the generated file.
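
As a shell transcript, the regeneration steps described above are:

$ go get github.com/maxbrunsfeld/counterfeiter
$ counterfeiter ./pkg/svcat/service-catalog SvcatClient
# then manually paste the boilerplate copyright comment into the generated file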

FeatureGates

Feature gates are a set of key=value pairs that describe experimental features and can be turned on or off by specifying the value when launching the Service Catalog executable (typically done in the Helm chart). A new feature gate should be created when introducing new features that may break existing functionality or introduce instability. See FeatureGates for more details.

When adding a FeatureGate to Helm charts, define the variable fooEnabled with a value of false in values.yaml. In the API Server and Controller templates, add the new FeatureGate: {% raw %}

    - --feature-gates
    - Foo={{.Values.fooEnabled}}

{% endraw %}

When the feature has had enough testing and the community agrees to change the default to true, update features.go and values.yaml, changing the default for feature Foo to true. Lastly, update the appropriate information in the FeatureGates doc.
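
For example, a user could turn on the hypothetical Foo gate at deploy time, before its default changes, by overriding the chart value introduced above:

$ helm install charts/catalog \
    --name catalog --namespace catalog \
    --set fooEnabled=true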

Documentation

Our documentation site is located at svc-cat.io. The content files are located in the docs/ directory, and the website framework in docsite/.

To preview your changes, run make docs-preview and then open http://localhost:4000 in your web browser. When you create a pull request, you can preview documentation changes by clicking on the deploy/netlify build check in your PR.

Making a Contribution

Once you have compiled and tested your code locally, make a pull request. Create a branch on your local repo with a short, descriptive name for the work you are doing. Make a commit with the work in it, and push it up to your remote fork on GitHub. Then come back to the Code tab of the repository, where there should be a box suggesting that you make a pull request.

Before asking people to review a PR, you are expected to have done the following:

  • Build the code with make build (for server-side changes) or make svcat (for CLI changes).
  • Run the tests with make test.
  • Run the build checks with make verify. This helps catch compilation errors and code formatting/linting problems.
  • Add new tests or update existing tests to verify your changes. If this is a svcat-related change, you may need to update the golden files.
  • Make any associated documentation changes. You can preview documentation changes by clicking on the deploy/netlify build check on your pull request.

Once the pull request has been created, it will automatically be built and the tests run. The unit and integration tests will run in Travis, and Jenkins will run the e2e tests.

You can use the Prow /cc command to request reviews from the maintainers of the project. This works even if you do not have status in the service-catalog project.
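
For example, a PR comment like the following (the handle here is hypothetical) requests a review:

/cc @some-maintainer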

On Travis, a build is made up of two jobs: one builds with our chosen Go version, and the other builds with future release candidates (rc). It is okay for the rc build to fail; it will not fail the overall build, and exists to give us an early warning about changes we will have to make to support future versions of Go.

Advanced Build Steps

You can build the service catalog executables into Docker images yourself. By default, image names are quay.io/kubernetes-service-catalog/<component>. Since most contributors who hack on service catalog components will wish to produce custom-built images but will be unable to push to this location, the image name can be overridden through use of the REGISTRY environment variable.

Examples of service-catalog image names:

| REGISTRY | Fully Qualified Image Name | Notes |
|----------|----------------------------|-------|
| Unset (default) | quay.io/kubernetes-service-catalog/service-catalog | You probably don't have permissions to push here |
| Dockerhub username + trailing slash, e.g. krancour/ | krancour/service-catalog | Missing hostname == Dockerhub |
| Dockerhub username + slash + some prefix, e.g. krancour/sc- | krancour/sc-service-catalog | The prefix is useful for disambiguating similarly named images within a single namespace |
| 192.168.99.102:5000/ | 192.168.99.102:5000/service-catalog | A local registry |

With REGISTRY set appropriately:

$ make images push

This will build Docker images for all service catalog components. The images are also pushed to the registry specified by the REGISTRY environment variable, so they can be accessed by your Kubernetes cluster.

The images are tagged with the current Git commit SHA:

$ docker images

svcat targets

These are targets for the service-catalog client, svcat:

  • make svcat-all builds all supported client platforms (darwin, linux, windows).
  • make svcat-for-X builds a specific platform.
  • make svcat builds for the current dev's platform.
  • make svcat-publish compiles everything and uploads the binaries.

The same tags are used for both the client and the server. The CLI uses a format that always includes a tag, so that it's clear which release you are "closest" to, e.g. v1.2.3 for official releases and v1.2.3-2-gabc123 for untagged commits.

Deploying Releases

The idea behind the "latest" link is that we can provide a permanent link to the most recent stable release of svcat. If someone wants to install an unreleased version, they must build it locally.


Deploying to Kubernetes

Use the catalog chart to deploy the service catalog into your cluster. The easiest way to get started is to deploy into a cluster you regularly use and are familiar with. One of the choices you can make when deploying the catalog is whether to make the API server store its resources in an external etcd server, or in third party resources.

If you have recently merged changes that haven't yet made it into a release, you probably want to deploy the canary images. Always use the canary images when testing local changes.

For more information, see the installation instructions. The following helm install example shows the canary image being installed along with the other standard installation options.

From the root of this repository:

helm install charts/catalog \
    --name catalog --namespace catalog \
    --set image=quay.io/kubernetes-service-catalog/service-catalog:canary

If you choose etcd storage, the helm chart will launch an etcd server for you in the same pod as the service-catalog API server. You will be responsible for the data in the etcd server container.

If you choose third party resources storage, the helm chart will not launch an etcd server, but will instead instruct the API server to store all resources in the Kubernetes cluster as third party resources.
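
Once the chart is installed, a couple of standard kubectl commands can confirm that the catalog is up; the namespace below matches the helm install example above, and servicecatalog matches the service catalog API group:

$ kubectl get pods -n catalog
$ kubectl api-versions | grep servicecatalog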

Deploy local canary

For your convenience, you can use the following script to quickly rebuild, push, and deploy the canary image. The script makes a few assumptions about your environment and configuration (for example, that you have persistent storage set up for etcd so that you don't lose data between pushes). If the assumptions do not match your needs, we suggest copying the contents of the script and using it as a starting point for your own custom deployment script.

# The registry defaults to DockerHub with the same user name as the current user
# Examples: quay.io/myuser/service-catalog/, another-user/
$ export REGISTRY="myuser/"
$ ./contrib/hack/deploy-local-canary.sh

Dependency Management

We use dep to manage our dependencies. We commit the resulting vendor directory to ensure repeatable builds and isolation from upstream source disruptions. Because vendor is committed, you do not need to interact with dep unless you are changing dependencies.

  • Gopkg.toml - the dep manifest, this is intended to be hand-edited and contains a set of constraints and other rules for dep to apply when selecting appropriate versions of dependencies.
  • Gopkg.lock - the dep lockfile, do not edit because it is a generated file.
  • vendor/ - the source of all of our dependencies. Commit changes to this directory in a separate commit from any other changes (including to the Gopkg files) so that it's easier to review your pull request.

If you use VS Code, we recommend installing the dep extension. It provides snippets and improved highlighting that makes it easier to work with dep.

Selecting the version for a dependency

  • Use released versions of a dependency, for example v1.2.3.
  • Use the master branch when a dependency does not tag releases, or we require an unreleased change.
  • Include an explanatory comment with a link to any relevant issues anytime a dependency is pinned to a specific revision in Gopkg.toml.

Add a new dependency

  1. Run dep ensure -add github.com/example/project/pkg/foo. This adds a constraint to Gopkg.toml, and downloads the dependency to vendor/.
  2. Import the package in the code and use it.
  3. Run dep ensure -v to sync Gopkg.lock and vendor/ with your changes.
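
As a shell transcript (using the same placeholder import path), the flow above is:

$ dep ensure -add github.com/example/project/pkg/foo
# edit your code to import and use github.com/example/project/pkg/foo
$ dep ensure -v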

Change the version of a dependency

  1. Edit Gopkg.toml and update the version for the project. If the project is not in Gopkg.toml already, add a constraint for it and set the version.
  2. Run dep ensure -v to sync Gopkg.lock and vendor/ with the updated version.
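
For illustration, a hand-edited constraint stanza in Gopkg.toml looks like the following (the project name and version here are placeholders, not real pins from this repo):

[[constraint]]
  name = "github.com/example/project"
  version = "v1.2.3"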

Watch a screencast

Demo walkthrough

Check out the walkthrough to get started with installation and a self-guided demo.