Add setup for multi node/multi-machine testing #2044

Merged · 18 commits · Jan 9, 2020
74 changes: 8 additions & 66 deletions scripts/vagrant/README.md
@@ -1,71 +1,13 @@
# Vagrant

This allows you to run a Kubernetes environment on a single box, using vagrant to provision a VM either locally or in a cloud environment.
Setup scripts inside this folder are useful for testing either a cluster on a single node or multiple clusters on separate nodes.
The multi node setup is useful for testing differences between master and a feature branch.

It includes:
- kubernetes (using kind)
- etcd (single node)
- m3db operator
- m3db node (single node)
- m3coordinator dedicated (two instances)
- prometheus
- grafana (accessible localhost:3333, login admin:admin)
All single node setup scripts are inside `./single`.
This setup is generally best for performance testing and benchmarking a single version of M3DB.

This is useful for benchmarking and similar needs.
All multi node setup scripts are inside of `./multi`.
This setup is useful for comparing performance of feature branches against `latest`, as well as helping verify correctness of the feature branch.

# Local setup

Start:
```bash
./start_vagrant.sh
```

Stop:
```bash
./stop_vagrant.sh
```

Reopen tunnels:
```bash
./tunnel_vagrant.sh
```

SSH:
```bash
./ssh_vagrant.sh
```

# GCP setup

If you have authorized with `gcloud`, you can use `~/.ssh/google_compute_engine` as your SSH key.

Start:
```bash
PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ./start_vagrant.sh
```

Stop:
```bash
PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ./stop_vagrant.sh
```

Reopen tunnels:
```bash
PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ./tunnel_vagrant.sh
```

SSH:
```bash
PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ./ssh_vagrant.sh
```

# Running

Once set up, you can SSH in and turn on write load (scaling to a single replica is roughly equivalent to applying 10,000 writes/sec):
```bash
kubectl scale --replicas=1 deployment/promremotebench
```
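
To stop the load, scale the deployment back to zero; assuming throughput scales with replica count, intermediate replica counts apply proportionally more writes:
```bash
# Stop the write load entirely:
kubectl scale --replicas=0 deployment/promremotebench
```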

## Accessing Grafana

You can access Grafana by visiting `http://localhost:3333` and using username `admin` and password `admin`.
All shared scripts are inside `./shared`.
These are the same for both the single and multi node setups.
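
The resulting top-level layout can be checked with a directory listing; the four directories below all appear in this change, anything deeper is omitted:
```bash
# From the repository root:
tree -L 1 scripts/vagrant
# scripts/vagrant
# ├── docker
# ├── multi
# ├── shared
# └── single
```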
3 changes: 3 additions & 0 deletions scripts/vagrant/docker/daemon.json
@@ -0,0 +1,3 @@
{
"data-root": "/mnt/docker"
}
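
This relocates Docker's image and container storage to `/mnt/docker`, presumably a larger mounted data disk on the VM. After the daemon restarts, the setting can be verified with the standard Docker CLI:
```bash
# Should print /mnt/docker once the daemon picks up the new config:
docker info --format '{{ .DockerRootDir }}'
```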
97 changes: 97 additions & 0 deletions scripts/vagrant/multi/README.md
@@ -0,0 +1,97 @@
# Vagrant

This allows comparisons between a feature branch (specified by a `FEATURE_DOCKER_IMAGE` environment variable) and `latest`.
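
The feature image has to be pushed somewhere the VMs can pull from. A minimal sketch of producing one, assuming the `docker/m3dbnode/Dockerfile` path in an m3 checkout and push access to the quay.io repository used in the examples below:
```bash
# Build an m3dbnode image from the feature branch and push it (path and tag are assumptions):
docker build -t quay.io/m3dbtest/m3dbnode:feature -f docker/m3dbnode/Dockerfile .
docker push quay.io/m3dbtest/m3dbnode:feature
```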

It includes:
- kubernetes (using kind)
- etcd (single node)
- m3db operator
- m3db node (multi node)
- m3coordinator dedicated (two instances)
- prometheus
- grafana (accessible localhost:3333, login admin:admin)

# Requirements
Set up the vagrant azure provider via the [docs](https://github.com/Azure/vagrant-azure).
Alternatively, set up the google provider via the [docs](https://github.com/mitchellh/vagrant-google).
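
Both providers ship as Vagrant plugins; per the linked docs they are installed with:
```bash
vagrant plugin install vagrant-azure
# or, for GCP:
vagrant plugin install vagrant-google
```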

# Local setup

Start:
```bash
./start_vagrant.sh
```

Stop:
```bash
./stop_vagrant.sh
```

Reopen tunnels:
```bash
./tunnel_vagrant.sh
```

SSH:
```bash
./ssh_vagrant.sh
```

# GCP setup

After authorizing with `gcloud`, use `~/.ssh/google_compute_engine` as the SSH key.
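
If that key does not exist yet, `gcloud` can generate it; this is standard `gcloud` behavior rather than anything specific to this setup:
```bash
gcloud auth login
# Generates ~/.ssh/google_compute_engine if it is missing:
gcloud compute config-ssh
```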

Start:
```bash
FEATURE_DOCKER_IMAGE=quay.io/m3dbtest/m3dbnode:feature PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ./start_vagrant.sh
```

Stop:
```bash
PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ../shared/stop_vagrant.sh
```

Reopen tunnels (must provide `$MACHINE`):
```bash
MACHINE=primary PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ../shared/tunnel_vagrant.sh
```

SSH (must provide `$MACHINE`):
```bash
MACHINE=primary PROVIDER="google" GOOGLE_PROJECT_ID="your_google_project_id" GOOGLE_JSON_KEY_LOCATION="your_google_service_account_json_key_as_local_path" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ../shared/ssh_vagrant.sh
```

# Azure setup

After authorizing with Azure, use your preferred key as the SSH key.
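
The `AZURE_*` variables in the commands below map to an Azure service principal; a minimal sketch of creating one with the Azure CLI (the principal name is hypothetical):
```bash
az login
# Prints tenant, appId (client id), and password (client secret):
az ad sp create-for-rbac --name "vagrant-m3-bench"
# Subscription id for AZURE_SUBSCRIPTION_ID:
az account show --query id --output tsv
```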

Start:
```bash
FEATURE_DOCKER_IMAGE=quay.io/m3dbtest/m3dbnode:feature PROVIDER="azure" AZURE_TENANT_ID="tenant-id" AZURE_CLIENT_ID="client-id" AZURE_CLIENT_SECRET="client-secret" AZURE_SUBSCRIPTION_ID="subscription-id" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ./start_vagrant.sh
```

Stop:
```bash
PROVIDER="azure" AZURE_TENANT_ID="tenant-id" AZURE_CLIENT_ID="client-id" AZURE_CLIENT_SECRET="client-secret" AZURE_SUBSCRIPTION_ID="subscription-id" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ../shared/stop_vagrant.sh
```

Reopen tunnels (must provide `$MACHINE`):
```bash
MACHINE=primary PROVIDER="azure" AZURE_TENANT_ID="tenant-id" AZURE_CLIENT_ID="client-id" AZURE_CLIENT_SECRET="client-secret" AZURE_SUBSCRIPTION_ID="subscription-id" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ../shared/tunnel_vagrant.sh
```

SSH (must provide `$MACHINE`):
```bash
MACHINE=primary PROVIDER="azure" AZURE_TENANT_ID="tenant-id" AZURE_CLIENT_ID="client-id" AZURE_CLIENT_SECRET="client-secret" AZURE_SUBSCRIPTION_ID="subscription-id" USER="$(whoami)" SSH_KEY="your_ssh_key_as_local_path" ../shared/ssh_vagrant.sh
```

# Running

Once set up, you can SSH into the benchmarker VM and turn on write load (scaling to a single replica is roughly equivalent to applying 10,000 writes/sec):
```bash
kubectl scale --replicas=1 deployment/promremotebench
```
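
Plain `kubectl` can confirm the load generator came up; nothing below is specific to this setup, and the label selector is an assumption:
```bash
# Check the benchmark deployment and its pod:
kubectl get deployment promremotebench
kubectl get pods -l app=promremotebench  # label is assumed, adjust to the actual pod labels
```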

## Accessing Grafana

You can access Grafana by visiting `http://localhost:3333` and using username `admin` and password `admin`.
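
With the tunnel from `tunnel_vagrant.sh` open, the connection can be sanity-checked before opening a browser using Grafana's standard health endpoint:
```bash
# Expect a small JSON payload including "database": "ok" if Grafana is reachable:
curl -s http://localhost:3333/api/health
```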