Raising issues is encouraged. We have some templates to help you get started.
If you found an error in our docs, or simply want to improve them, contributions are always appreciated!
You can use Docker and docker-compose to test shuttle locally during development. See the Docker install and docker-compose install instructions if you do not have them installed already.
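If you are not sure whether they are already installed, a quick check from a terminal (this assumes the Compose plugin; substitute docker-compose --version if you use the standalone binary):
# verify Docker and Compose are available
docker --version
docker compose version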
git clone git@github.com:shuttle-hq/shuttle.git
cd shuttle
You should now be ready to set up a local environment for testing code changes to the core shuttle packages as follows:
Build the required images with:
make images
Note: The current Makefile does not work on Windows systems. If you want to build the local environment on Windows, you can use the Windows Subsystem for Linux (WSL).
The images are built with cargo-chef and therefore support incremental builds (most of the time), so they will be much faster to rebuild after an incremental change to your code, should you wish to deploy it locally straight away.
You can now start a local deployment of shuttle and the required containers with:
make up
Note: Other useful commands can be found within the Makefile.
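If you just want a quick overview of the available targets without opening the file, a simple grep works (this only lists lines that look like Make targets; it is not a built-in Make feature):
# list the targets defined in the Makefile
grep -E '^[A-Za-z0-9_.-]+:' Makefile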
The API is now accessible on localhost:8000 (for app proxies) and localhost:8001 (for the control plane). When running cargo run --bin cargo-shuttle (in a debug build), the CLI will point itself to localhost for its API calls.
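As a rough connectivity check that the control plane is up (any HTTP response, even an error status, means the port is reachable; the exact output depends on the gateway's routes):
# hit the control plane port; expect some HTTP response once `make up` has finished
curl -i http://localhost:8001/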
In order to test local changes to the shuttle-service crate, you may want to add the following to a .cargo/config.toml file (see Overriding Dependencies for more):
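# Note: replace [base] with the absolute path to the directory containing your shuttle clone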
[patch.crates-io]
shuttle-service = { path = "[base]/shuttle/service" }
shuttle-aws-rds = { path = "[base]/shuttle/resources/aws-rds" }
shuttle-persist = { path = "[base]/shuttle/resources/persist" }
shuttle-shared-db = { path = "[base]/shuttle/resources/shared-db" }
shuttle-secrets = { path = "[base]/shuttle/resources/secrets" }
shuttle-static-folder = { path = "[base]/shuttle/resources/static-folder" }
Prime the gateway database with an admin user:
docker compose --file docker-compose.rendered.yml --project-name shuttle-dev exec gateway /usr/local/bin/service --state=/var/lib/shuttle init --name admin --key test-key
Log in to the shuttle service in a new terminal window from the main shuttle directory:
cargo run --bin cargo-shuttle -- login --api-key "test-key"
cd into one of the examples:
git submodule init
git submodule update
cd examples/rocket/hello-world/
Create a new project; this will start a deployer container:
# the --manifest-path is used to locate the root of the shuttle workspace
cargo run --manifest-path ../../../Cargo.toml --bin cargo-shuttle -- project new
Verify that the deployer is healthy and in the ready state:
cargo run --manifest-path ../../../Cargo.toml --bin cargo-shuttle -- project status
Deploy the example:
cargo run --manifest-path ../../../Cargo.toml --bin cargo-shuttle -- deploy
Test that the deployment is working:
# the Host header should match the Host from the deploy output
curl --header "Host: {app}.unstable.shuttleapp.rs" localhost:8000/hello
View logs from the current deployment:
# append `--follow` to this command for a live feed of logs
cargo run --manifest-path ../../../Cargo.toml --bin cargo-shuttle -- logs
The steps outlined above start all the services used by shuttle locally (i.e. both gateway and deployer). However, sometimes you will want to quickly test changes to deployer only. To do this, replace make up with the following:
docker-compose -f docker-compose.rendered.yml up provisioner
This prevents gateway from starting up. Now you can start deployer only using:
provisioner_address=$(docker inspect --format '{{(index .NetworkSettings.Networks "shuttle_default").IPAddress}}' shuttle_prod_hello-world-rocket-app_run)
cargo run -p shuttle-deployer -- --provisioner-address $provisioner_address --provisioner-port 8000 --proxy-fqdn local.rs --admin-secret test-key --project <project_name>
The --admin-secret can safely be changed to your api-key to make testing easier, while <project_name> needs to match the name of the project that will be deployed to this deployer. This is the name in the project's Cargo.toml or Shuttle.toml.
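If you are unsure which name to use, you can read it straight from the example's manifest (a quick illustration run from the repository root, assuming the hello-world rocket example from above and that no Shuttle.toml overrides the name):
# print the package name from the example's Cargo.toml
grep '^name' examples/rocket/hello-world/Cargo.toml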
If you are using Podman instead of Docker, expose a rootless Podman socket using the following command:
podman system service --time=0 unix:///tmp/podman.sock
Now make docker-compose use this socket by setting the following environment variable:
export DOCKER_HOST=unix:///tmp/podman.sock
shuttle can now be run locally using the steps shown earlier.
Note: Testing the gateway with rootless Podman does not work, since Podman does not allow access to the deployer containers via their IP addresses!
shuttle has reasonable test coverage - and we are working on improving this every day. We encourage PRs to come with tests. If you're not sure about what a test should look like, feel free to get in touch.
To run the unit tests for a specific crate, from the root of the repository run:
# replace <crate-name> with the name of the crate to test, e.g. `shuttle-common`
cargo test --package <crate-name> --all-features --lib -- --nocapture
To run the integration tests for a specific crate (if it has any), from the root of the repository run:
# replace <crate-name> with the name of the crate to test, e.g. `cargo-shuttle`
cargo test --package <crate-name> --all-features --test '*' -- --nocapture
To run the end-to-end tests, from the root of the repository run:
make test
Note: Running all the end-to-end tests may take a long time, so it is recommended to run individual tests shipped as part of each crate in the workspace first.
We use the Angular Commit Guidelines. We expect all commits to conform to these guidelines.
Furthermore, commits should be squashed before being merged to master.
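If your branch ends up with several commits, one common way to squash them before opening a PR is an interactive rebase (a sketch; adjust the base branch to whatever you branched from):
# interactively squash the commits on your feature branch
git rebase -i master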
Before committing, run the following checks (a combined one-liner is shown after this list):
- Make sure your commits don't trigger any warnings from Clippy by running cargo clippy --tests --all-targets. If you have a good reason to contradict Clippy, insert an #[allow(clippy::<lint>)] attribute so that it won't complain.
- Make sure your code is correctly formatted: cargo fmt --all --check.
- Make sure your Cargo.toml's are sorted: cargo sort --workspace. This command uses the cargo-sort crate to sort the Cargo.toml dependencies alphabetically.
- If you've made changes to examples, make sure the above commands are run there as well.
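As a convenience, the three checks can be chained into a single command (this assumes cargo-sort is installed, e.g. via cargo install cargo-sort):
# run Clippy, the formatting check, and the Cargo.toml sort in one go
cargo clippy --tests --all-targets && cargo fmt --all --check && cargo sort --workspace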
The folders in this repository relate to each other as follows:
graph BT
classDef default fill:#1f1f1f,stroke-width:0,color:white;
classDef binary fill:#f25100,font-weight:bolder,stroke-width:0,color:white;
classDef external fill:#343434,font-style:italic,stroke:#f25100,color:white;
deployer:::binary
cargo-shuttle:::binary
common
codegen
e2e
proto
provisioner:::binary
service
gateway:::binary
user([user service]):::external
gateway --> common
gateway -.->|starts instances| deployer
deployer --> proto
deployer -.->|calls| provisioner
service ---> common
deployer --> common
cargo-shuttle --->|"features = ['loader']"| service
deployer -->|"features = ['loader']"| service
cargo-shuttle --> common
service --> codegen
proto ---> common
provisioner --> proto
e2e -.->|starts up| gateway
e2e -.->|calls| cargo-shuttle
user -->|"features = ['codegen']"| service
First, provisioner, gateway, deployer, and cargo-shuttle are binary crates, with provisioner, gateway and deployer being backend services. The cargo-shuttle binary is the cargo shuttle command used by users.
The rest are the following libraries:
- common contains shared models and functions used by the other libraries and binaries.
- codegen contains our proc-macro code, which gets exposed to user services from service by the codegen feature flag. The redirect through service is to make it available under the prettier name of shuttle_service::main.
- service is where our special Service trait is defined. Anything implementing this Service can be loaded by the deployer and the local runner in cargo-shuttle. The codegen automatically implements the Service trait for any user service.
- proto contains the gRPC server and client definitions that allow deployer to communicate with provisioner.
- e2e just contains tests which start up the deployer in a container and then deploy services to it using cargo-shuttle.
Lastly, user service is not a folder in this repository; it represents the service written by a user that will be deployed by deployer.